Timon Willi

165 posts


@TimonWilli

RS @AIatMeta, DPhil w/ @j_foerst, @UniofOxford; Formerly: Research Intern @GoogleDeepMind / PhD @VectorInst / RS at @nnaisense / MSc w/ @SchmidhuberAI

London, United Kingdom · Joined May 2022
69 Following · 361 Followers
Pinned Tweet
Timon Willi
Timon Willi@TimonWilli·
Scaling laws for (self) supervised learning predict: Increase parameter count -> performance goes brrrr. (loosely speaking) Can we get scaling laws for Deep Reinforcement Learning? In this work, we pave the way towards scaling laws for Deep Reinforcement Learning. We show that using Mixture of Experts is a viable path towards scaling DRL whilst improving performance. I contributed to this paper as part of my internship @GoogleDeepMind
Timon Willi tweet media
Pablo Samuel Castro@pcastr

📢Mixtures of Experts unlock parameter scaling for deep RL! Adding MoEs, and in particular Soft MoEs, to value-based deep RL agents results in more parameter-scalable models. Performance keeps increasing as we increase number of experts (green line below)! 1/9
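For readers unfamiliar with the mechanism, here is a minimal NumPy sketch of a Soft MoE layer in the spirit of the thread. The dispatch/combine formulation follows the Soft MoE literature; the array shapes and the toy experts below are illustrative assumptions, not this paper's architecture:

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe(tokens, phi, experts):
    """Soft MoE layer: every token is softly routed to every expert slot.

    tokens:  [n_tokens, d] input activations
    phi:     [d, n_experts * slots_per_expert] learnable slot parameters
    experts: list of callables, each mapping [slots, d] -> [slots, d]
    """
    logits = tokens @ phi                    # [n_tokens, n_slots]
    dispatch = softmax(logits, axis=0)       # normalize over tokens per slot
    combine = softmax(logits, axis=1)        # normalize over slots per token
    slots = dispatch.T @ tokens              # each slot is a weighted token mix
    per = slots.shape[0] // len(experts)
    outs = np.concatenate(
        [f(slots[i * per:(i + 1) * per]) for i, f in enumerate(experts)])
    return combine @ outs                    # [n_tokens, d]
```

Note that scaling the number of experts only grows `phi` and the expert list without changing the layer's interface, which is what makes the parameter count easy to scale.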

3
10
39
10.8K
Timon Willi retweeted
Bill Shen
Bill Shen@williamfshen·
📣 Our recent work at Meta: Rethinking Rubric Generation for Improving LLM Judge and Reward Modeling for Open-ended Tasks Rubric-based judging and reward modeling are powerful for open-ended tasks, but scaling requires controlled generation, not just free-form lists from an LLM.
Bill Shen tweet media
13
24
135
14.6K
Timon Willi retweeted
Shashwat Goel
Shashwat Goel@ShashwatGoel7·
In case you missed it over the holidays, we @AIatMeta released: Training AI Co-Scientists using Rubric Rewards.
- RL training: we open-source the training and eval datasets!
- Human study finding a 70% win-rate
- GPT-5's lead in science? Our eval quantifies the anecdotes!
Shashwat Goel tweet media
12
51
365
25.4K
Timon Willi retweeted
Jason Weston
Jason Weston@jaseweston·
New work from colleagues at Meta & friends. A step in the co-improving AI direction.
Shashwat Goel@ShashwatGoel7

🚨New paper: Training AI Co-Scientists using Rubric Rewards

In my recent internship at Meta Superintelligence Labs, I pursued an opinionated research bet: a general, scalable training recipe to improve AI at helping scientists achieve their research goals.

Motivation
Existing work on training AI for Science optimizes pre-defined, narrow scientific objectives with execution feedback in specially constructed environments (e.g. RLVR). However, it's infeasible to learn from trial and error in many sciences. For example, medical research is hard to simulate digitally, and it is unethical to run clinical trials with suboptimal approaches proposed in early training.😬

Moreover, when pursuing a novel research goal, the primary intellectual challenge often lies in defining the experiment setup and objective itself. In the past year, I have increasingly used AI assistance for this (especially GPT-5) in my own research. Of course, models often fail to follow some explicitly stated requirements, and sometimes propose bad design choices, but that is fine! The generated plans are still useful for brainstorming, and I can implement them with further refinement.

Method
This made us wonder🤔: how can we train models to be better at this task of generating research plans, given an open-ended research goal? For training, we need to collect a large number of research goals and obtain fast verification signals. Human experts are expensive to access, and that wouldn't scale.

💡Equipped with the vast corpus of openly licensed scientific literature, and the recent success of RL, synthetic data curation, and rubrics, we propose a scalable recipe: extract research goals and goal-specific grading rubrics from existing papers with an LLM, and use them for RL training. Specifically, a frozen copy of the initial model rewards the plans generated during training using the goal-specific rubrics, checking seven general guidelines for the parts of the plan relevant to each rubric item.

🤔Won't this lead to reward hacking? It will. At some point. But until then, improvements on the training reward might generalize to better research plans for humans. We are hoping the goal-specific rubrics, provided as privileged information to the grader, create a generator-verifier gap that improves research plan generation without external supervision.

The only way to find out? Perform a human study. We ask Machine Learning experts to compare plans generated by the finetuned vs. initial Qwen3-30B model for ML research goals. This is slow and expensive (it required 45 minutes per annotation to carefully analyze plans), so we could only do this once, at the end of the project, for evaluation.

Results
Individual annotations are still noisy, as evaluating research plans is inherently subjective. But sure enough, there is non-trivial signal. The experts preferred (p < 0.01) our finetuned model's plans for 70% of research goals extracted from NeurIPS'24 / ICLR'25 Oral papers (top 1%) ✅

But only ML, and finetuned vs. initial, is boring. Remember, the goal is generality. So we also finetuned Qwen3-30B on goals extracted from medical research and new arXiv preprints spanning 8 domains. We use rubric evaluations with a jury of frontier models, which also allows us to compare many frontier models across domains. Notable findings:
1) In-domain finetuning leads to 12-22% relative improvements in scores across the three domains: arXiv, medical, and ML 📈
2) Significant cross-domain generalization, especially with the medical finetune improving on ML and new arXiv research goals. This might be evidence for our "generality" thesis 📊
3) Our 30B finetune matches much larger models like Grok-4-Thinking, but GPT-5-Thinking is a cut above the rest (consistent with my qualitative experience) 🤖

Limitations
Now of course, LLM-based evaluations, even with a jury and rubrics, are imperfect. But while the individual sample scoring is noisy, we hope for directionally correct results in aggregate, as the jury has positive alignment with the human majority vote in our human study on ML. We think the grading scheme holds promise, as optimizing against a much weaker grader (30B) led to improvements in human preference.

This work has many such limitations, so treat it more like an early proof-of-concept. We candidly acknowledge them in our paper, and encourage you to scrutinize the details: 📜 alphaxiv.org/abs/2512.23707

Released Artefacts
The paper has many ablations and analyses:
- our appendix also has sample outputs across domains for vibe-checks, making it 119 pages!
- criteria-wise breakdown of performance evolution during training, thanks to our structured grading
- SFT on long-form plans worsened model performance
- training also improves Gemma and Llama models

🤗We release our train and test data on @huggingface. At a sample level the data is noisy, and generated by Llama-4-Maverick. Still, human experts approved 84% of the rubric items in ML, so there's promise, and the same methodology will lead to better-quality data as language models improve.

Overall, we think the potential of our approach is high: the scientific method is quite general, deep learning benefits from generality (transfer learning), and language models are amazing (better every month!). We hope approaches like this make LMs better at assisting researchers across diverse problem settings and scientific disciplines.

Some cool figures from the paper, and acknowledgements in thread🧵. I'm all ears for feedback on how we could've done things better! 1/3
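The core reward computation the thread describes can be sketched in a few lines. This is my own illustrative reduction, not the paper's code: `grade_item` stands in for the frozen grader LLM applying the general guidelines to one rubric item, and the keyword grader is a toy stand-in:

```python
def rubric_reward(plan, rubric, grade_item):
    """RL reward for a generated research plan.

    rubric:     goal-specific criteria extracted from an existing paper
    grade_item: callable (plan, item) -> bool; stands in for the frozen
                copy of the initial model judging one rubric item
    Returns the fraction of rubric items the grader marks as satisfied.
    """
    hits = [bool(grade_item(plan, item)) for item in rubric]
    return sum(hits) / len(rubric)

# Toy grader: a rubric item counts as satisfied if its keyword appears.
def toy_grader(plan, item):
    return item in plan.lower()
```

During RL, this scalar would be the reward for the policy update; the rubric stays privileged information for the grader that the policy never sees directly.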

5
14
111
18K
Timon Willi retweeted
Alex Goldie
Alex Goldie@AlexDGoldie·
🪩 So excited to reveal DiscoBench: An Open-Ended Benchmark for Algorithm Discovery! 🪩 It addresses the key issues of current evals with its broad task coverage, modular file system, meta-train/meta-test split and emphasis on open-ended tasks! 🧵
GIF
1
24
108
29.5K
Timon Willi retweeted
Foerster Lab for AI Research
🚨🚨Introducing the FLAIR internship program!🚨🚨 We are looking for two talented students to join us for an internship working in FLAIR for 6 months (5th January to 4th July 2026)! For details and eligibility criteria, please check: foersterlab.com/internship/
1
19
117
34.6K
Timon Willi retweeted
abranti
abranti@joaoabrantis·
In an evolving population of models, using model merging as the crossover operation drastically reduces diversity and leads to premature convergence. To address this, we make models compete for limited resources (training datapoints) which benefit models that have unique skills worth preserving and merging into future generations. Additionally, we implement an "attraction" heuristic, so models with complementary strengths are more likely to pair up and merge, accelerating performance gains. Our method evolves MNIST classifiers from scratch, achieving SOTA accuracy among evolutionary approaches at far lower computational cost. The approach scales robustly to both large language models and image diffusion models. I had a lot of fun working on this with @yujin_tang and @RobertTLange - give it a read!
Sakana AI@SakanaAILabs

What if we could evolve AI models like organisms in nature, letting them compete, mate, and combine their strengths to produce ever-fitter offspring? Excited to share our new work: “Competition and Attraction Improve Model Fusion” presented at GECCO’25🦎 where it was a runner-up for best paper!

Paper: arxiv.org/abs/2508.16204
Code: github.com/SakanaAI/natur…

Summary of Paper
At Sakana AI, we draw inspiration from nature’s evolutionary processes to build the foundation of future AI systems. Nature doesn’t create one single, monolithic organism; it fosters a diverse ecosystem of specialized individuals that compete, cooperate, and combine their traits to adapt and thrive. We believe AI development can follow a similar path. What if instead of building one giant monolithic AI, we could evolve a whole ecosystem of specialized models that collaborate and combine their skills? Like a school of fish 🐟, where collective intelligence emerges from the group.

This new paper builds on our previous research on model merging, which follows such an evolutionary path. We started by using evolution to find the best “recipes” to merge existing models (our Nature Machine Intelligence paper: nature.com/articles/s4225…). Then, we explored how to maintain diversity to acquire new skills in LLMs (our ICLR 2025 paper: openreview.net/forum?id=Kvdh1…). Now, we're combining these ideas into a full evolutionary system.

A key limitation remained in earlier work: model merging required manually defining how models should be partitioned (e.g., by fixed layers or blocks) before they could be combined. What if we could let evolution figure that out too? Our new paper proposes M2N2 (Model Merging of Natural Niches), a more fluid method, which overcomes this with three key, nature-inspired ideas:

1/ Evolving Merging Boundaries 🌿: Instead of merging models using pre-defined, static boundaries (e.g. fixed layers), M2N2 dynamically evolves the “split-points” for merging. This allows for a far more flexible and powerful exploration of parameter combinations, like swapping variable-length segments of DNA rather than entire chromosomes.

2/ Diversity through Competition 🐠: To ensure we have a rich pool of models to merge, M2N2 makes them compete for limited resources (i.e., data points in a training set). This forces models to specialize and find their own “niche,” creating a population of diverse, high-performing specialists that are perfect for merging.

3/ Attraction and Mate Selection 💏: Merging models can be computationally expensive. M2N2 introduces an “attraction” heuristic that intelligently pairs models for fusion based on their complementary strengths, choosing partners that perform well where the other is weak. This makes the evolutionary search much more efficient.

Does it work? The results are fascinating: this is the first time model merging has been used to evolve models entirely from scratch, outperforming other evolutionary algorithms. In one experiment, starting with random networks, M2N2 evolved an MNIST classifier that achieves performance comparable to CMA-ES, but is far more computationally efficient.

Does it scale? We also showed that M2N2 can scale to large, pre-trained models: we used M2N2 to merge a math specialist LLM with an agentic specialist LLM. M2N2 produced a merged model that excelled at both math and web shopping tasks, significantly outperforming other methods. The flexible split-point was crucial here.

Does it work on multimodal models? When we applied M2N2 to text-to-image models, we merged several models by adapting them only for Japanese prompts. The resulting model not only improved on Japanese but also retained its strong English capabilities, a key advantage over fine-tuning, which can suffer from catastrophic forgetting.

This nature-inspired approach is central to Sakana AI’s mission to find new foundations for AI based on collective intelligence. Rather than scaling monolithic models, we envision a future where ecosystems of diverse, specialized models co-evolve, collaborate, and combine, leading to more adaptive, robust, and creative AI. 🐙 We hope this work sparks more interest in these under-explored ideas!

Published in ACM GECCO’25: Proceedings of the Genetic and Evolutionary Computation Conference. DOI: doi.org/10.1145/371225…
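The evolvable split-point idea in 1/ can be illustrated on flat parameter vectors. A minimal sketch (my own, under the simplifying assumption that each model is flattened into a 1-D array; the real M2N2 machinery differs):

```python
import numpy as np

def merge_at_split(parent_a, parent_b, split):
    """Crossover with an evolvable split-point: parameters before `split`
    come from parent_a, the rest from parent_b, like swapping
    variable-length DNA segments rather than whole fixed layers."""
    return np.concatenate([parent_a[:split], parent_b[split:]])

def mutate_split(split, n_params, rng, step=3):
    """The split index is itself a gene the evolutionary loop perturbs,
    so the merge boundary is discovered rather than hand-picked."""
    return int(np.clip(split + rng.integers(-step, step + 1), 0, n_params))
```

A fixed-layer merge is then just the special case where `split` is pinned to a layer boundary instead of being evolved.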

1
2
15
1.5K
Timon Willi retweeted
Jakob Foerster
Jakob Foerster@j_foerst·
I recently had a lunchtime conversation with a very senior AI researcher about how multi-agent problems differ from single-agent ones (their starting point was that they do not). One point that made them think: as compute scales, the rest of the world (i.e. the non-agentic parts) is not going to speed up or get more clever, so compute-scaling methods will succeed (think single-agent robotics). In contrast, other agents will also become smarter/faster, so finding successful methods here is not a question of compute alone. No matter how much compute I have for decision making, I will be compute-limited if I need to model other agents in the environment with the same budget as part of my inner loop. As a corollary, it follows that in the (long-term) future almost all flops will be spent on simulating other agents. Not many know this, and you are invited to consider the implications for a second.
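The compute argument can be made concrete with a toy cost model (my own illustration, not from the post): if a decision procedure recursively simulates each of `n_others` agents with its own budget, total cost explodes with recursion depth, and the share of flops spent simulating others approaches one.

```python
def decision_flops(depth, n_others):
    """Toy cost model: one unit of 'own' reasoning per level, plus a full
    recursive simulation of every other agent at the next level down."""
    if depth == 0:
        return 1
    return 1 + n_others * decision_flops(depth - 1, n_others)

def fraction_on_others(depth, n_others):
    """Share of total flops spent inside simulations of other agents
    (everything except my own single top-level reasoning unit)."""
    total = decision_flops(depth, n_others)
    return (total - 1) / total
```

With two other agents, recursion depth 10 already puts more than 99.9% of the flops into simulating others, which is the corollary in the post.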
24
17
234
31.4K
Timon Willi retweeted
Uljad
Uljad@uljadb99·
Unlock real diversity in your LLM! 🚀 LLM outputs can be boring and repetitive. Today, we release Intent Factored Generation (IFG) to: - Sample conceptually diverse outputs💡 - Improve performance on math and code reasoning tasks🤔 - Get more engaging conversational agents 🤖
GIF
1
9
37
8.2K
Timon Willi retweeted
Johan Obando-Ceron 👍🏽
Johan Obando-Ceron 👍🏽@johanobandoc·
🚨 Excited to share our #ICML2025 paper: "The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep RL" We train RL agents to know when to quit, cutting wasted effort and improving efficiency with our method LEAST. 📄Paper: arxiv.org/pdf/2506.13672 🧵Check the thread below👇🏾
Johan Obando-Ceron 👍🏽 tweet media
Pablo Samuel Castro@pcastr

Thrilled to share our #ICML2025 paper “The Courage to Stop: Overcoming Sunk Cost Fallacy in Deep RL”, led by Jiashun Liu and with other great collaborators! We teach RL agents when to quit wasting effort, boosting efficiency with our proposed method LEAST. Here's the story 🧵👇🏾

3
16
124
12.7K
Timon Willi retweeted
Jürgen Schmidhuber
Jürgen Schmidhuber@SchmidhuberAI·
Since 1990, we have worked on artificial curiosity & measuring "interestingness." Our new ICML paper uses the "Prediction of Hidden Units" loss to quantify in-context computational complexity in sequence models. It can tell boring from interesting tasks and predict correct reasoning.
Vincent Herrmann@idivinci

Excited to share our new ICML paper, with co-authors @robert_csordas and @SchmidhuberAI! How can we tell if an LLM is actually "thinking" versus just spitting out memorized or trivial text? Can we detect when a model is doing anything interesting? (Thread below👇)

11
54
362
38.8K
Timon Willi retweeted
Ola Kalisz
Ola Kalisz@OlaKalisz8·
Antiviral therapy design is myopic 🦠🙈 optimised only for the current strain. That's why you need a different Flu vaccine every year! Our #ICML2025 paper ADIOS proposes "shaper therapies" that steer viral evolution in our favour & remain effective. Work done @FLAIR_Ox 🧵👇
2
17
52
10.1K
Timon Willi
Timon Willi@TimonWilli·
congrats to the team!
Sakana AI@SakanaAILabs

The AI Scientist Generates its First Peer-Reviewed Scientific Publication

We’re proud to announce that a paper produced by The AI Scientist-v2 passed the peer-review process at a workshop at ICLR, a top AI conference. Read more about this experiment → sakana.ai/ai-scientist-f…

To our knowledge, this is the first fully AI-generated paper to pass the same peer-review process that human researchers go through. The paper was produced by an improved version of the original AI Scientist, called The AI Scientist-v2. We’ll be sharing the full details of v2 in an upcoming release.

We conducted this experiment with the full cooperation of both the ICLR leadership and the organizers of the ICLR workshop, @ICBINBWorkshop. We (@_yutaroyamada @cong_ml @shengranhu @RobertTLange) proudly collaborated with UBC (@jeffclune) and Oxford (@FLAIR_Ox) on this exciting project.

0
0
1
214
Timon Willi retweeted
akbir.
akbir.@akbirkhan·
In the spirit of making more real world evals, here is the Factorio Learning Environment (FLE). Spurred by wanting to eval if models are good paperclip maximisers, we check how well agents build factories for other things 🏗️🏭🛠️
30
95
998
114.2K
Timon Willi retweeted
Robert Lange
Robert Lange@RobertTLange·
🎉 Stoked to share The AI-Scientist 🧑‍🔬 - our end-to-end approach for conducting research with LLMs including ideation, coding, experiment execution, paper write-up & reviewing. Blog 📰: sakana.ai/ai-scientist/ Paper 📜: arxiv.org/abs/2408.06292 Code 💻: github.com/SakanaAI/AI-Sc… Work led together with @_chris_lu_, @cong_ml and jointly supervised by @j_foerst, @jeffclune, @hardmaru 🤗
GIF
Sakana AI@SakanaAILabs

Introducing The AI Scientist: The world’s first AI system for automating scientific research and open-ended discovery! sakana.ai/ai-scientist/

From ideation, writing code, running experiments and summarizing results, to writing entire papers and conducting peer-review, The AI Scientist opens a new era of AI-driven scientific research and accelerated discovery. Here are 4 example Machine Learning research papers generated by The AI Scientist.

We published our report, The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery, and open-sourced our project!
Paper: arxiv.org/abs/2408.06292
GitHub: github.com/SakanaAI/AI-Sc…

Our system leverages LLMs to propose and implement new research directions. Here, we first apply The AI Scientist to conduct Machine Learning research. Crucially, our system is capable of executing the entire ML research lifecycle: from inventing research ideas and experiments, writing code, to executing experiments on GPUs and gathering results. It can also write an entire scientific paper, explaining, visualizing and contextualizing the results.

Furthermore, while an LLM author writes entire research papers, another LLM reviewer critiques the resulting manuscripts to provide feedback to improve the work, and also to select the most promising ideas to further develop in the next iteration cycle, leading to continual, open-ended discoveries, thus emulating the human scientific community.

As a proof of concept, our system produced papers with novel contributions in ML research domains such as language modeling, diffusion, and grokking. We (@_chris_lu_, @RobertTLange, @hardmaru) proudly collaborated with the @UniOfOxford (@j_foerst, @FLAIR_Ox) and @UBC (@cong_ml, @jeffclune) on this exciting project.

13
66
364
68.7K
Simon
Simon@simongreen4·
@TimonWilli hello I think you left your bag in the back of my taxi. Please get in touch as soon as possible. Many thanks
1
0
1
13
Timon Willi retweeted
Branton DeMoss
Branton DeMoss@BrantonDeMoss·
I’m pleased to announce our work which studies complexity phase transitions in neural networks! We track the Kolmogorov complexity of networks as they “grok”, and find a characteristic rise and fall of complexity, corresponding to memorization followed by generalization. 🧵
Branton DeMoss tweet media
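As a rough intuition for how one can track network complexity over training: quantize the weights and measure their compressed size, with an off-the-shelf compressor standing in for (uncomputable) Kolmogorov complexity. This sketch uses zlib and is my own illustration; the paper's actual complexity estimate is more careful:

```python
import zlib

import numpy as np

def complexity_proxy(weights, quant=0.05):
    """Upper-bound-style proxy for Kolmogorov complexity: quantize the
    weights, then report the zlib-compressed byte length. Structured
    (generalizing) weights should compress better than memorized noise."""
    q = np.round(np.asarray(weights, dtype=np.float64) / quant).astype(np.int32)
    return len(zlib.compress(q.tobytes(), level=9))
```

Logging such a proxy each epoch would trace the rise-and-fall curve the thread describes: complexity climbs during memorization, then falls as the network compresses into a general rule.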
31
151
1.2K
198.2K