Eyvind Niklasson

222 posts

@eyvindn

research @ google, working on self-organising systems

Zürich · Joined May 2009
3.5K Following · 1.2K Followers
Eyvind Niklasson retweeted
Sam Greydanus @samgreydanus
"The Cursive Transformer" is a project I finished about a year ago but forgot to post about. We trained a Transformer to write cursive! greydanus.github.io/2025/03/30/cur…
6 replies · 23 reposts · 277 likes · 13.4K views
Eyvind Niklasson @eyvindn
if you have something cool at the intersection of self-organisation and evolution, submit it to the Evo-Self workshop at @GeccoConf, extended deadline April 3rd! organised w/ @nisioti_eleni @miltonllera @risi1979 @MarcelloBarylli @RandazzoEttore @zzznah @mayalen_etc
Ettore Randazzo @RandazzoEttore

Excited about self-organising systems? Do you have a cool paper, either in the works or ready? Then consider applying to our Evolving Self-Organisation workshop at @GeccoConf! Submission deadline: March 27. Also check out the amazing workshop website: …-self-organisation-workshop.github.io/gecco-2026/

0 replies · 7 reposts · 19 likes · 2.6K views
Eyvind Niklasson retweeted
Fabricio Nicola @nicola_fabricio
Mammals have hundreds of joints and muscles. Controlling them individually would be nearly impossible. How does the nervous system organize such complexity into coherent actions? Our new study explores this question through a natural behavior: jumping. 1/15 🧵
6 replies · 59 reposts · 245 likes · 25.9K views
Eyvind Niklasson retweeted
François Chollet @fchollet
The next major breakthrough will branch out at a much lower level than deep learning model architecture. It will be a new approach. A better model architecture can lead to incremental data efficiency & generalization gains, but it won't fix the fundamental issues of the parametric learning paradigm.
Rohan Paul @rohanpaul_ai

Sam Altman just said in his new interview that a new AI architecture is coming that will be a massive upgrade, just like Transformers were over Long Short-Term Memory. He also said the current class of frontier models is powerful enough to have the brainpower needed to help us research these ideas. His advice is to use current AI to help you find that next giant step forward. --- From the 'TreeHacks' YT channel (link in comment)

101 replies · 54 reposts · 879 likes · 141.8K views
Eyvind Niklasson retweeted
Seungwook Han @seungwookh
Can language models learn useful priors without ever seeing language? We pre-pre-train transformers on neural cellular automata — fully synthetic, zero language. This improves language modeling by up to 6%, speeds up convergence by 40%, and strengthens downstream reasoning. Surprisingly, it even beats pre-pre-training on natural text! Blog: hanseungwook.github.io/blog/nca-pre-p… (1/n)
48 replies · 260 reposts · 1.7K likes · 243.1K views
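The "pre-pre-training" recipe is easy to sketch from the tweet: roll out cellular automata, serialize the rollouts into token streams, and run ordinary next-token prediction on them before the model ever sees text. Below is a minimal sketch of the data side only, using a classic 1D elementary CA as a stand-in for the neural CA in the actual work; `ca_rollout` and `to_tokens` are hypothetical names, not from the paper or blog post.

```python
# Hypothetical sketch: synthetic cellular-automaton token streams for
# "pre-pre-training". The real work uses *neural* CA; a 1D elementary CA
# (Rule 110) stands in here so the example stays self-contained.
import numpy as np

def ca_rollout(rule: int, width: int = 64, steps: int = 32, seed: int = 0):
    """Roll out a 1D elementary cellular automaton from a random row."""
    rng = np.random.default_rng(seed)
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = rng.integers(0, 2, size=width, dtype=np.uint8)
    rows = [state]
    for _ in range(steps - 1):
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = (left << 2) | (state << 1) | right  # 3-cell neighbourhood -> 0..7
        state = rule_bits[idx]
        rows.append(state)
    return np.stack(rows)                         # (steps, width) binary grid

def to_tokens(grid: np.ndarray) -> list[int]:
    """Flatten a rollout row by row into a stream over vocab {0, 1, 2};
    token 2 marks row boundaries so the 2D structure stays recoverable."""
    tokens: list[int] = []
    for row in grid:
        tokens.extend(int(c) for c in row)
        tokens.append(2)
    return tokens

stream = to_tokens(ca_rollout(rule=110))
print(len(stream), stream[:16])
```

Streams like this would then feed a standard transformer next-token objective; the plausible intuition is that even rule-based spatiotemporal structure instills useful sequence priors before any language is seen.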
Eyvind Niklasson retweeted
Andrej Karpathy @karpathy
@rabrg << what our Universe looks like to God
27 replies · 8 reposts · 269 likes · 23.4K views
Eyvind Niklasson retweeted
Nicolas Zucchet @NicolasZucchet
Not every research project has to end up in a conference publication, but some are still worth sharing. This is one of them! Working on this made us deeply question the fundamentals of deep learning (specifically backpropagation through time) so we wrote a blog post about it 🖥️
1 reply · 7 reposts · 84 likes · 5.3K views
Eyvind Niklasson retweeted
Alex Mordvintsev @zzznah
Working on the new simulator. I just wanted to see what Atari2600 fetching data from ROM looks like at CMOS FET level (@tinytapeout TT09 Atari circuit by @__ReJ__)
105 replies · 484 reposts · 4.3K likes · 177.8K views
Eyvind Niklasson retweeted
Alex Mordvintsev @zzznah
Growing Graphs demo is finally out! 🕸️✨ 🔗 znah.net/graphs/ Videos from a few months ago finally meet a finished implementation, thanks Gemini for doing the boring parts. Inspired by Paul Cousin's Graph-Rewriting Automata: like a Game of Life, but cells can split if they want to #GenerativeArt #WASM #SwissGL
35 replies · 220 reposts · 1.4K likes · 86K views
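For readers who haven't met Graph-Rewriting Automata: the "cells can split" line is the core mechanic. Here is a toy sketch in that spirit, with made-up rules (a neighbour-majority state update plus a random split); Paul Cousin's actual formalism is deterministic and rule-indexed, so treat this purely as an illustration.

```python
# Toy graph-rewriting automaton: Game-of-Life-style updates on a graph,
# where live cells may also split into a connected copy. All rules here are
# invented for illustration; see Graph-Rewriting Automata for the real thing.
import random

def step(adj: dict[int, set[int]], state: dict[int, int]) -> None:
    """One synchronous update, mutating adj/state in place."""
    new_state = {}
    for v, nbrs in adj.items():
        live = sum(state[u] for u in nbrs)
        new_state[v] = 1 if nbrs and live >= len(nbrs) / 2 else state[v]
    state.update(new_state)
    for v in list(adj):                      # snapshot: new nodes wait a step
        if state[v] == 1 and random.random() < 0.1:
            w = max(adj) + 1                 # fresh node id
            adj[w] = set(adj[v]) | {v}       # child inherits v's edges
            for u in adj[w]:
                adj[u].add(w)
            state[w] = 1

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}      # start from a triangle
state = {0: 1, 1: 0, 2: 0}
for _ in range(10):
    step(adj, state)
print(len(adj), "nodes after 10 steps")
```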
Eyvind Niklasson retweeted
Hans Chiu @chiu_hans
#genuary9 Crazy automaton. Cellular automata with a small language model as rule. The model is trying to recover the corrupted text. #genuary2026 #genuary
18 replies · 50 reposts · 546 likes · 42.6K views
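The "small language model as rule" idea is worth unpacking: each cell holds a character, and every step each cell is rewritten from its local neighbourhood, so repeated application drifts corrupted text back toward something plausible. A minimal sketch follows, where a hypothetical `predict_char` stands in for the trained model (the toy rule just patches corruption markers from nearby letters; a real LM would actually predict the missing characters).

```python
# Toy "CA with a model as the rule": cells are characters; each step every
# cell is recomputed from its neighbourhood. predict_char is a stand-in for
# the small LM in the original piece.
import random

def predict_char(context: str, center: str) -> str:
    if center.isalpha() or center in " .,":
        return center                        # looks like text: keep it
    letters = [c for c in context if c.isalpha()]
    return random.choice(letters) if letters else " "

def ca_step(cells: list[str], radius: int = 3) -> list[str]:
    return [
        predict_char("".join(cells[max(0, i - radius): i + radius + 1]), c)
        for i, c in enumerate(cells)
    ]

cells = list("th# qu#ck br#wn f#x")
for _ in range(4):
    cells = ca_step(cells)
print("".join(cells))
```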
Eyvind Niklasson retweeted
hardmaru @hardmaru
Survival of the fittest code. Core War (1984) is a game where programs must crash their opponents to survive. Warriors written in an assembly language called Redcode fight for control of a virtual machine.

Our new paper, "Digital Red Queen: Adversarial Program Evolution in Core War with LLMs", explores what happens when LLMs drive an adversarial evolutionary arms race in this domain. We task LLMs to write Warrior programs in Redcode that must out-compete a virtual world full of such programs. Core War is a Turing-complete environment where code and data share the same address space, which leads to some very chaotic self-modifying code dynamics.

This approach is inspired by the Red Queen hypothesis in evolutionary biology: the principle that species must continually adapt and evolve simply to survive against ever-changing competitors. In our work, programs continuously adapt to defeat a growing history of opponents rather than a static benchmark.

We find that this adversarial process leads to the emergence of increasingly general strategies, including targeted self-replication, data bombing, and massive multithreading. Most intriguingly, it reveals a form of convergent evolution: different code implementations settle into similar high-performing behaviors, mirroring how biological agents independently evolve similar traits to solve the same problems.

I think this work positions Core War as a sandbox for studying Red Queen dynamics in artificial systems. It offers a safe, controlled environment for analyzing how AI agents might evolve in real-world adversarial settings such as cybersecurity. By simulating these adversarial dynamics in an isolated sandbox, we offer a glimpse into a future where deployed LLM systems may start competing against one another for limited resources in the real world.
82 replies · 322 reposts · 2.2K likes · 172.8K views
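To see why "code and data share the same address space" makes Core War chaotic, a toy core helps: memory is one circular array of instructions, executing data kills a process, and a relative-addressed MOV is already enough to run the classic one-line IMP warrior (MOV 0, 1), which endlessly copies itself one cell ahead. This is a sketch, not real Redcode semantics.

```python
# Minimal toy "core" in the spirit of Core War: a circular memory where code
# and data are the same stuff, so programs can overwrite anything, including
# themselves. Only MOV with relative addressing is modelled.
CORE_SIZE = 20

def run_imp(steps: int = 10) -> None:
    core = [("DAT", 0, 0)] * CORE_SIZE   # empty core; DAT kills on execution
    core[0] = ("MOV", 0, 1)              # the IMP: copy self one cell ahead
    pc = 0
    for _ in range(steps):
        op, a, b = core[pc]
        if op == "DAT":                  # executing data = process death
            print("process died at", pc)
            return
        if op == "MOV":                  # copy cell pc+a into cell pc+b
            core[(pc + b) % CORE_SIZE] = core[(pc + a) % CORE_SIZE]
        pc = (pc + 1) % CORE_SIZE
    print("IMP still alive at", pc)

run_imp()
```

Real Redcode adds instructions like SPL (spawn a process) and DAT bombs plus richer addressing modes, which is presumably where the strategies the paper reports (data bombing, massive multithreading) come from.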
Eyvind Niklasson retweeted
Mithil Vakde @evilmathkid
Announcing a new Pareto frontier on ARC-AGI: 27.5% for just $2 (333x cheaper than TRM)! Beats every non-thinking LLM in existence. Cost so low, it's literally off the chart. Vanilla transformer. No special architectures. Tiny. Trained in 2 hrs. Open source. Thread:
66 replies · 143 reposts · 1.4K likes · 437.7K views