Muratahan Aykol

460 posts

@draykol

AI for science @PeriodicLabs | previously at Google DeepMind, TRI, Rivian, Berkeley Lab, Northwestern

Menlo Park, CA · Joined August 2018
377 Following · 1.3K Followers

Pinned Tweet
Muratahan Aykol@draykol·
We’re building the AI scientist. @PeriodicLabs is live! Super excited to be part of this journey.
William Fedus@LiamFedus

Today, @ekindogus and I are excited to introduce @periodiclabs. Our goal is to create an AI scientist.

Science works by conjecturing how the world might be, running experiments, and learning from the results. Intelligence is necessary, but not sufficient. New knowledge is created when ideas are found to be consistent with reality. And so, at Periodic, we are building AI scientists and the autonomous laboratories for them to operate.

Until now, scientific AI advances have come from models trained on the internet. But despite its vastness, it's still finite (estimates are ~10T text tokens, where one English word may be 1-2 tokens). And in recent years the best frontier AI models have fully exhausted it. Researchers seek better use of this data, but as any scientist knows: though re-reading a textbook may give new insights, they eventually need to try their idea to see if it holds.

Autonomous labs are central to our strategy. They provide huge amounts of high-quality data (each experiment can produce GBs of data!) that exists nowhere else. They generate valuable negative results, which are seldom published. But most importantly, they give our AI scientists the tools to act.

We're starting in the physical sciences. Technological progress is limited by our ability to design the physical world. We're starting here because experiments have high signal-to-noise and are (relatively) fast, and physical simulations effectively model many systems; but more broadly, physics is a verifiable environment. AI has progressed fastest in domains with data and verifiable results, for example in math and code. Here, nature is the RL environment.

One of our goals is to discover superconductors that work at higher temperatures than today's materials. Significant advances could help us create next-generation transportation and build power grids with minimal losses. But this is just one example: if we can automate materials design, we have the potential to accelerate Moore's Law, space travel, and nuclear fusion.

We're also working to deploy our solutions with industry. As an example, we're helping a semiconductor manufacturer that is facing issues with heat dissipation on their chips. We're training custom agents for their engineers and researchers to make sense of their experimental data in order to iterate faster.

Our founding team co-created ChatGPT, DeepMind's GNoME, OpenAI's Operator (now Agent), the neural attention mechanism, and MatterGen; has scaled autonomous physics labs; and has contributed to some of the most important materials discoveries of the last decade. We've come together to scale up and reimagine how science is done.

We're fortunate to be backed by investors who share our vision, including @a16z, who led our $300M round, as well as @Felicis, DST Global, NVentures (NVIDIA's venture capital arm), @Accel, and individuals including @JeffBezos, @eladgil, @ericschmidt, and @JeffDean. Their support will help us grow our team, scale our labs, and develop the first generation of AI scientists.

Muratahan Aykol retweeted
Stephan Hoyer@shoyer·
I started (and finished) vibe coding a new simulation code yesterday, implemented in a few thousand lines of Python. All I can say is, wow. Claude, Gemini and Codex compressed weeks of research, implementation and experimentation into hours. 🚀
Muratahan Aykol retweeted
Stephan Hoyer@shoyer·
After an incredible decade at Google, it’s time for my next chapter. This week, I joined Periodic Labs, a startup building and training AI scientists with autonomous laboratories.
Muratahan Aykol retweeted
Mert Yuksekgonul@mertyuksekgonul·
How do we get AI to make discoveries on open scientific problems? Most methods just improve the prompt with more attempts, but the AI itself doesn't improve. With test-time training, AI can continue to learn on the problem it's trying to solve: test-time-training.github.io/discover.pdf
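As a rough illustration of the idea in this tweet, the sketch below adapts a toy "model" on the test problem itself before answering, instead of only retrying with frozen parameters. Everything here (the one-parameter model, the squared-error objective, the learning rate) is a hypothetical stand-in for illustration, not the method in the linked paper.

```python
# Toy illustration of test-time training (TTT): rather than retrying a
# fixed model, the model takes gradient steps on an objective derived
# from the test instance, so it improves while solving that instance.

def zero_shot_solve(target):
    """Baseline: answer with fixed pretrained parameters, no adaptation."""
    return 0.0  # the frozen "pretrained" guess


def ttt_solve(target, steps=50, lr=0.1):
    """Adapt a single parameter w on the test instance before answering."""
    w = zero_shot_solve(target)      # start from the pretrained state
    for _ in range(steps):
        grad = 2.0 * (w - target)    # gradient of the loss (w - target)**2
        w -= lr * grad               # test-time gradient update
    return w


if __name__ == "__main__":
    print(round(ttt_solve(3.0), 3))  # prints 3.0: adaptation recovers the answer
```

The contrast is the point: `zero_shot_solve` always returns its pretrained guess no matter how often it is called, while `ttt_solve` spends test-time compute on updates and lands near the correct answer.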
Rishabh Agarwal@agarwl_·
The highlight of my year happened right before it ended. Happy new year!
Muratahan Aykol retweeted
William Fedus@LiamFedus·
Periodic Labs and the U.S. Department of Energy are collaborating to accelerate scientific discovery. The 17 US national labs have driven decades of scientific advances. Now, we're rethinking science for the scaling era by connecting AI to physical experiments.
Periodic Labs@periodiclabs

We're in Washington at the Genesis Mission event today. 🇺🇸 At Periodic Labs, we see a new era of science emerging where AI systems learn and direct physical scientific experiments. We're excited to partner with @ENERGY on this important endeavor. By uniting the DOE’s deep scientific resources with private industry’s frontier AI, we will accelerate breakthroughs in materials and energy. Close public-private collaboration is essential to secure America’s strategic leadership.

Muratahan Aykol retweeted
Ekin Dogus Cubuk@ekindogus·
We are looking for a condensed matter theorist to join our team at @periodiclabs. Consider applying if you are an expert in applying formal condensed matter theory to real quantum materials.
Muratahan Aykol retweeted
Rishabh Agarwal@agarwl_·
"There are no winners on the losing side." One big cultural difference I've observed between working at big tech for several years and being at @periodiclabs is the shift from "individual impact / promo" to "we all win together". I highly recommend folks in big tech consider joining a smaller startup / company! And if you are excited by our mission and our team (what's not to like?), we are hiring! On the ML side, we're actively looking for distributed training, inference, CUDA kernels, mid-training, and even pretraining. Please reach out and apply on our website. I mostly focus on RL, but tagging some ML folks here who you might wanna reach out to: @xanderai @vwxyzjn @DBahdanau @khoomeik @reiinakano.
Muratahan Aykol retweeted
Anjney Midha@AnjneyMidha·
The handful of Meta folks I've spoken to who were impacted by the layoffs in the last 24 hrs are exceptionally motivated and talented. I'm going to make more researcher office-hours time this week for anyone affected. You can sign up below if that would be helpful to you.
Muratahan Aykol retweeted
The Nobel Prize@NobelPrize·
BREAKING NEWS: The Royal Swedish Academy of Sciences has decided to award the 2025 #NobelPrize in Chemistry to Susumu Kitagawa, Richard Robson and Omar M. Yaghi "for the development of metal–organic frameworks."
Muratahan Aykol retweeted
William Fedus@LiamFedus·
@periodiclabs is following the natural progression of reward functions since 2022: from human-preference models (in RLHF), to math + code verifiable RMs (in reasoning models), to reward functions grounded in the physical world (in end-to-end optimization against experiment).
a16z@a16z

ChatGPT learned from human feedback. The next generation of AI will learn from the laws of nature. Liam Fedus (co-creator of ChatGPT, now at Periodic Labs) says models will train through “experiment in the loop”, using real-world results, not human preference, as their reward function. @LiamFedus @periodiclabs

Muratahan Aykol retweeted
a16z@a16z·
Building an AI Physicist: ChatGPT Co-Creator's Next Venture

Scaling laws took us from GPT-1 to GPT-5 Pro. But in order to crack physics, we need a new approach. We sat down with Liam Fedus (co-creator of ChatGPT) and Ekin Dogus Cubuk (ex-materials science and chemistry lead at Google DeepMind) to talk about their new startup @PeriodicLabs and their plan to automate discovery in the hard sciences.

00:00 LLMs in physics and chem research
03:53 What is Periodic Labs?
14:45 Building the team
17:29 Superconductivity
27:39 Periodic's mission and applications
35:38 Mid-training and model performance
49:49 What makes a great researcher

@AnjneyMidha @LiamFedus @EkinDogus
William Fedus@LiamFedus


Muratahan Aykol retweeted
a16z@a16z·
"Foundation models, but for quantum mechanics, will be the next frontier for LLMs." Periodic Labs' Ekin Dogus Cubuk says logic and math gave AI its first proofs. At the quantum scale, where biology, chemistry and materials converge, models could begin inventing new matter itself. @ekindogus @periodiclabs
a16z@a16z


Muratahan Aykol retweeted
Garry Tan@garrytan·
If new hard-science breakthroughs start happening through the use of AI co-scientists, it's going to accelerate human abundance, and we're so here for it.
William Fedus@LiamFedus

