Alex Laterre
@AlexLaterre
261 posts
stealth | ex-Head of Research @ Instadeep
London, UK · Joined March 2013
766 Following · 678 Followers
Alex Laterre @AlexLaterre
Our new episode of Let’s Talk Research is live 🎙️ Alex Graves unpacks our approach to Generative AI for Biology, driving our progress in proteomics and antibody design with ProtBFN and AbBFN2 🦙 Coming up next: Atomistic Modelling...
InstaDeep @instadeepai

We’re back with the BFN story in Episode 2 of the Let’s Talk Research podcast. Hear Alex Graves dive into real-world applications of Bayesian Flow Networks, including their use in protein sequencing and antibody design. 🧵⬇️

Alex Laterre
Alex Laterre@AlexLaterre·
Rust feels built for the era of Agents. Strong typing + strict compiler = the perfect feedback loop LLMs need 🦀
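A minimal Rust sketch of that feedback loop, with illustrative names of my own (not from any real codebase): distinct newtypes turn an argument swap into a compile-time error with a precise message rather than a silent runtime bug, which is exactly the kind of machine-readable signal a code-writing LLM can iterate against.

```rust
#[derive(Debug, PartialEq)]
struct Meters(f64);
#[derive(Debug, PartialEq)]
struct Seconds(f64);

// Distinct newtypes: swapping the arguments is a compile-time error,
// not a silent unit bug.
fn speed(d: Meters, t: Seconds) -> f64 {
    d.0 / t.0
}

fn main() {
    let v = speed(Meters(100.0), Seconds(9.58));
    // speed(Seconds(9.58), Meters(100.0)) would be rejected by the
    // compiler with "expected `Meters`, found `Seconds`".
    println!("{v:.2}");
}
```

The compiler error names the exact call site and the expected type, so a model can repair the code without re-running anything.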
Edan Toledo @EdanToledo
Very proud of this work! If you're interested in AI agents and their current challenges, give this a read. Thanks to my incredible collaborators and to @Meta and @ucl for enabling me to tackle something of this scale for my first PhD paper. Excited for what's ahead!
Martin Josifoski @MartinJosifoski

Scaling AI research agents is key to tackling some of the toughest challenges in the field. But what's required to scale effectively? It turns out that simply throwing more compute at the problem isn't enough.

We break down an agent into four fundamental components that shape its behavior, regardless of specific design or implementation choices:
- Environment: the context (infrastructure) in which the agent operates
- Search Policy: how the agent allocates resources
- Operator Set and Policy: the available actions the agent can take and how it chooses among them
- Evaluation Mechanism: how the agent determines whether a particular direction is promising

We specifically focus on ML research agents tasked with real-world machine learning challenges from Kaggle competitions (MLE-bench). What we found is that factors like the environment, the agents' core capabilities (the operator set), and overfitting emerge as critical bottlenecks long before computational limitations come into play. Here are our key insights:

🔹 Environment: Agents can't scale without a robust environment that offers flexible and efficient access to computational resources. For instance, simply running the baseline agents in the (open-sourced) AIRA-dojo environment boosts performance by 10% absolute (30% relative), highlighting just how crucial the environment is.

🔹 Agent design and core capabilities: Resource allocation optimization only matters if agents can actually make good use of those resources. Our analysis shows that the agents' operator set, the core actions they perform, can limit performance gains from more advanced search methods like evolutionary search and MCTS. We achieve SoTA performance by designing an improved operator set that better manages context and encourages exploration, and coupling it with the search policies.

🔹 Evaluation: Accurate evaluation of the solution space is critical and reveals a significant challenge: overfitting. Ironically, agents that are highly effective at optimizing perceived values tend to be more vulnerable to overfitting, a problem that intensifies with increased compute resources. We observe up to 13% performance loss due to suboptimal selection of final solutions caused by this issue.

🔹 Compute: Providing agents with sufficient compute resources is essential to avoid introducing an additional limitation and bias into evaluations. We demonstrate this through experiments in which we scale the runtime from 24 to 120 hours.

In summary, successfully scaling AI research agents requires careful attention to these foundational aspects. Ignoring them risks turning scaling efforts into, at best, exercises in overfitting. These insights set the stage for exciting developments ahead!
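The four components described in the thread above can be sketched as traits wired into a toy greedy search. This is a hypothetical illustration only; the trait names and the `ToyEnv`, `Increment`, and `CloserTo` types are mine, not the paper's or AIRA-dojo's.

```rust
trait Environment { fn execute(&self, solution: i64) -> i64; }
trait Operator { fn apply(&self, solution: i64) -> i64; }
trait Evaluator { fn score(&self, outcome: i64) -> i64; }

struct ToyEnv;
impl Environment for ToyEnv {
    // The environment runs a candidate and returns an observable outcome.
    fn execute(&self, solution: i64) -> i64 { solution * solution }
}

struct Increment;
impl Operator for Increment {
    // One action from the operator set: mutate the current solution.
    fn apply(&self, solution: i64) -> i64 { solution + 1 }
}

struct CloserTo(i64);
impl Evaluator for CloserTo {
    // Higher is better: negative distance of the outcome to a target.
    fn score(&self, outcome: i64) -> i64 { -(outcome - self.0).abs() }
}

// Search policy: greedy hill-climbing over the operator set,
// under a fixed step budget (the compute allocation).
fn search(
    env: &impl Environment,
    ops: &[&dyn Operator],
    eval: &impl Evaluator,
    start: i64,
    budget: usize,
) -> i64 {
    let mut best = start;
    for _ in 0..budget {
        let mut improved = false;
        for op in ops {
            let cand = op.apply(best);
            if eval.score(env.execute(cand)) > eval.score(env.execute(best)) {
                best = cand;
                improved = true;
            }
        }
        if !improved {
            break;
        }
    }
    best
}

fn main() {
    let ops: &[&dyn Operator] = &[&Increment];
    // Find the integer whose square is closest to 49.
    let best = search(&ToyEnv, ops, &CloserTo(49), 0, 100);
    println!("{best}"); // prints 7
}
```

Separating the evaluator from the environment is what makes the thread's overfitting point visible: the search optimizes the evaluator's perceived score, which need not match held-out performance.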

Alex Laterre @AlexLaterre
We made the cover of @NatMachIntell! 🌱 ChatNT is a Conversational Agent analysing genomics sequences to answer key biological questions, assisting scientists in their work 👩‍🔬 Kudos to @deAlmeida_BP, @thomas_pierrot & the @instadeepai research team for this huge milestone!✨
Alex Laterre reposted
InstaDeep @instadeepai
Pairing speed 🪽 with near-quantum accuracy 🔍: experience both in atomic and molecular behaviour models with mlip, our open-source library for working with Machine Learning Interatomic Potentials (MLIP). 🧵 Read the thread below to learn more.
Alex Laterre @AlexLaterre
Our latest antibody foundation model, AbBFN2, is now live as an experimental workflow on DeepChain.bio 🌱 It’s one thing to contribute to the scientific community by publishing. It’s another to see your models being deployed. I'm excited to team up with industry partners to explore practical applications of this new model 💥🧬
Alex Laterre @AlexLaterre
Joining InstaDeep has its perks -- and surprises. Just ask Bora, one of our AI Scientists, who found himself on stage introducing AbBFN2, our new foundation model for antibody design 🎙️ ... and yes, he nailed it 💪
Alex Laterre @AlexLaterre
AbBFN2 translates our mission into action: modeling the joint distribution of scientific metadata across modalities -- laying the foundation for a generative, holistic view of biology.
Alex Laterre @AlexLaterre
Just weeks after releasing ProtBFN/AbBFN in @NatureComms, we're back with AbBFN2 ⚡️ Our new Antibody Foundation Model goes beyond sequence, modeling 45+ data modalities incl. genetic & biophysical properties. This creates a rich grammar for guiding antibody design.
Alex Laterre @AlexLaterre
@NandoDF Distributed training is one of the biggest blockers to making RL practical in the real world. There’s no one-size-fits-all solution. It depends on your algorithm, sim engine, network, hardware, etc. Here’s how we tackled it with Cloud TPUs cloud.google.com/blog/products/…
Alex Laterre @AlexLaterre
@NandoDF Even when you know a topic, seeing how others frame it can unlock new perspectives or just make it click in a fresh way. Great read, thanks @NandoDF! Looking forward to the next one! 👀
Alex Laterre @AlexLaterre
Always refreshing to hear Dave’s perspective on the path to superintelligence. He brings the focus back to core ideas in reinforcement learning, overshadowed lately by generative models, but still essential to move beyond human-level data. The challenge: establishing effective feedback loops. Unlike games or math Olympiads, many real-world scenarios lack clear reward signals.
Richard Sutton @RichardSSutton
David Silver really hits it out of the park in this podcast. The paper "Welcome to the Era of Experience" is here: goo.gle/3EiRKIH.
Google DeepMind @GoogleDeepMind

Human generated data has fueled incredible AI progress, but what comes next? 📈 On the latest episode of our podcast, @FryRsquared and David Silver, VP of Reinforcement Learning, talk about how we could move from the era of relying on human data to one where AI could learn for itself. Watch now →

00:00 Introduction
01:50 Era of experience
03:45 AlphaZero
10:19 Move 37
15:20 Reinforcement learning and human feedback
24:30 AlphaProof
29:50 Math Olympiads
35:00 Experience based methods
42:56 Hannah's reflections
44:00 Fan Hui joins
