Kenneth Stanley

2K posts


@kenneth0stanley

SVP of Open-Endedness @LilaSciences. Prev: Maven CEO, Lead@OpenAI, Uber AI, prof@UCF. NEAT,HyperNEAT,novelty search, POET. Book:Why Greatness Cannot Be Planned

San Francisco, CA · Joined March 2016
1.1K Following · 16.7K Followers
Pinned Tweet
Kenneth Stanley @kenneth0stanley
Job opportunity: the Open-Endedness Team at @LilaSciences is seeking a talented research engineer who would enjoy applying their wizard-like technical skills to genuinely deep and fundamental AI research into the algorithmic basis of creativity and innovation. This role is about radical innovation at the research frontier and 10xing the capabilities of research scientists running world-changing experiments (we're not talking about “business solutions”): data pipelines, large models, distributed training, optimizing not just for current algorithms, but for exotic new approaches as well. We want our algorithms operating as efficiently as possible and scaling as seamlessly as possible as hardware resources expand. If that sounds exciting, apply below👇
Kenneth Stanley retweeted
Lila Sciences @LilaSciences
🤖What if a robot that doesn't know it's trying to walk learns faster than one explicitly trained to walk? Our SVP of Open-Endedness @kenneth0stanley joined @DuncanCJ_ to discuss why objectives are the enemy of greatness, building Scientific Superintelligence at LILA, and the future of creativity. 🎧Watch the full episode⬇️
Duncan CJ@DuncanCJ_

New episode with @kenneth0stanley on Why Objectives Are the Enemy of Greatness. What if the surest way to fail at something ambitious is to have a clear plan to achieve it? What if a robot that doesn't know it's trying to walk learns faster than one explicitly trained to walk?

Kenneth Stanley is one of the most provocative minds in artificial intelligence. Former professor and team leader at @OpenAI, he co-authored the cult classic Why Greatness Cannot Be Planned, a manifesto that challenges everything we think we know about achievement. His research discovered something shocking: in AI experiments, robots seeking novelty learned to walk better than robots explicitly trained to walk.

Over the last two years, Ken has taken these theories from the lab into the real world. As SVP of Open-Endedness at @LilaSciences, he's building scientific superintelligence: AI that autonomously discovers new biology and chemistry. His work reveals a profound truth: the path to greatness is paved with stepping stones you can't predict, and the greatest discoveries come from following what's interesting, not what's planned.

⏰ Timestamps:
00:00 - Introduction: When Objectives Become the Enemy
03:01 - How AI Experiments Led to Social Critique
08:00 - The Picbreeder Experiment: Breeding Pictures Online
15:00 - A Car from an Alien Face (Deceptive Stepping Stones)
21:50 - It's Not Random: The Sharp Compass of Interestingness
25:36 - From AI to Life Lessons: Permission to Follow Curiosity
32:55 - The Spaceship That Looked Like a Mistake
42:19 - Getting Lost in St. Petersburg: The Collection Metaphor
44:59 - Stepping Stones That Looked Like Mistakes
49:48 - Advice for the Founder Feeling Hollow Despite Hitting Metrics
53:29 - Balancing Investors, Boards, and Open-Ended Exploration
1:03:12 - Will Humans Become Curators of AI Creativity?

Look up Duncan CJ on YouTube, Apple Podcasts, etc. Enjoy!
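The result described above, robots selected for behavioral novelty outperforming robots trained directly on the walking objective, refers to novelty search (Lehman & Stanley). A minimal illustrative sketch of the selection loop, assuming a toy 1-D behavior space; the function names and setup here are hypothetical, not code from the original experiments:

```python
import random

def novelty(behavior, archive, k=3):
    """Mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(evaluate, mutate, seed, generations=30, pop_size=10):
    """Select parents by behavioral novelty, never by the objective."""
    population = [seed] * pop_size
    archive = []  # behaviors encountered so far, used to score novelty
    for _ in range(generations):
        behaviors = [evaluate(ind) for ind in population]
        ranked = sorted(zip(population, behaviors),
                        key=lambda pb: novelty(pb[1], archive),
                        reverse=True)
        archive.append(ranked[0][1])  # remember the most novel behavior
        parents = [p for p, _ in ranked[:pop_size // 2]]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return archive
```

In a walking task, `evaluate` would return a behavior characterization (e.g., the robot's final position) rather than a fitness score; walking emerges as a side effect of the search for new behaviors.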

Kenneth Stanley retweeted
Reads with Ravi @readswithravi
5) Why Greatness Cannot Be Planned by Joel Lehman and Kenneth Stanley. This book challenges readers to consider life without a destination and discovery without a compass.
Kenneth Stanley retweeted
Sam Earle @Smearle_RH
My PhD thesis defense will be here zoom.us/my/togelius tomorrow (Monday) at 9am EST. All are welcome! 🙂 Talk title: "Open-ended Learning via Procedural Content Generation in Video Games: Environment Substrates, Morphogenesis, and Designer-Player Loops". Come watch me make it make sense!
Kenneth Stanley retweeted
Uljad @uljadb99
Natural evolution's open-endedness leads to beautiful, complex emergent structures and self-organizing behavior 🌱✨. Replicating this in silico is famously hard 💻. Our paper points to a promising direction by evolving populations of competing neural cellular automata with lifelike behavior 🧬🤖 #Isambard ⚠️⚠️ Warning: flashing lights, rapid cuts, and strobe effects in this thread! 🚨🚨 1/n
Kenneth Stanley retweeted
Boris ✈️🇧🇷 ICLR 2026 @BorisMeinardus
🚨Why should one huge LLM know and solve everything? No single human does, yet our civilization innovates endlessly. Introducing AC/DC: it continually coevolves a population of small expert LLMs that collectively outperform GPT-4o. (ICLR 2026 w/ @SakanaAILabs) 👇🧵
Kenneth Stanley @kenneth0stanley
@hal9kcyon The question of how to integrate new stuff is wide open. Evolution might help discover one avenue or it could be something else.
morph @hal9kcyon
@kenneth0stanley It seems backprop is a very limited way to integrate new representations with old ones. Are there evolutionary methods that are more promising?
Kenneth Stanley @kenneth0stanley
The more you learn, the easier it should be to learn more. The key word is easier. What could be more natural? That’s the real puzzle of continual learning. Merely avoiding brain damage from accumulating additional knowledge is barely scratching the surface.
Kenneth Stanley @kenneth0stanley
Nice counterpoint! But I think the mistake in your argument is that “it is able to learn almost all we know” is misleading. You can see my point about continual learning leans on the fact that it should be “simple and straightforward.” So I never claimed that a poor continual learner cannot learn at all. On the contrary, it can learn a lot. But the key failing (which you highlight yourself) in the LLM is that it must be done through “brute force.” That is definitively not the “simple and straightforward” process that we’d expect to pair with fluid and fertile imagination.
Vladimir Fedosov @Vladimi39901452
@kenneth0stanley OK: at the beginning an LLM cannot imagine anything, but it is able to learn almost all we know! Seems it is not a very good connection :). We use pure brute force to train LLMs (all weights are affected at each step). It does not work for CL. But we don't know another way to train.
Kenneth Stanley @kenneth0stanley
Difficulty achieving continual learning is also a bad omen for creativity: what you can imagine is naturally a function of what you can learn. Both are mediated by the adjacent possible to the same internal representations! Contorted algorithms (or the absence of clean options) for what should be simple and straightforward continual learning are therefore a hint that the large models they serve are creatively barren. That explains why something that is close to “knowing everything” and often competitive with the abilities of experts can still produce fewer breakthroughs than you would expect from a human with similarly astounding knowledge and expertise.
Kenneth Stanley retweeted
deep Manifold @BetaTomorrow
@kenneth0stanley First question: why is continual learning even possible? Then training progression, then neural plasticity...
Hansel @hanselh_
@kenneth0stanley @LilaSciences open-endedness feels like one of the most underrated research directions in AI right now. systems that generate their own objectives instead of optimizing fixed benchmarks could be what separates narrow tools from genuinely autonomous agents
Kenneth Stanley retweeted
Kishen Patel @quichenpatel
Finally got around to reading Why Greatness Cannot Be Planned by @kenneth0stanley and @joelbot3000. I think the book encapsulates something I've felt intuitively for a while: that the best outcomes rarely come from intense optimization, but rather from following interesting threads. Applies to investing, career decisions, research, building companies, really anything where the path to something great is nonlinear. The most valuable bets, on yourself or others, almost never fit neatly into a thesis. Highly recommend!
Kenneth Stanley retweeted
Jeff Clune @jeffclune
Very excited to share this work with all of you! It is the next step in our line of work on the Darwin Gödel Machine and the Automated Design of Agentic Systems, and more generally in open-endedness, AI-generating Algorithms, and recursive self-improvement. Great work @jennyzhangzt! 🚀🚀🚀
Jenny Zhang @jennyzhangzt

Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents, self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving. We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution.

Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).

Kenneth Stanley retweeted
Lila Sciences @LilaSciences
We're honored to be named one of Fast Company's Most Innovative Companies and in the top ten most innovative artificial intelligence companies of 2026! Interested in joining our team? Come build the future of science with us: lila.ai/open-roles #FCMostInnovative #Hiring