William Fedus

1.2K posts

@LiamFedus

Co-Founder of @periodiclabs Past: VP of Post-Training @OpenAI; Google Brain

San Francisco, CA · Joined October 2012
1.2K Following · 32.9K Followers
Pinned Tweet
William Fedus@LiamFedus·
Today, @ekindogus and I are excited to introduce @periodiclabs. Our goal is to create an AI scientist. Science works by conjecturing how the world might be, running experiments, and learning from the results. Intelligence is necessary, but not sufficient. New knowledge is created when ideas are found to be consistent with reality. And so, at Periodic, we are building AI scientists and the autonomous laboratories for them to operate.

Until now, scientific AI advances have come from models trained on the internet. But despite its vastness, it's still finite (estimates are ~10T text tokens, where one English word may be 1-2 tokens), and in recent years the best frontier AI models have fully exhausted it. Researchers seek better use of this data, but as any scientist knows: though re-reading a textbook may give new insights, eventually you need to try your idea to see if it holds.

Autonomous labs are central to our strategy. They provide huge amounts of high-quality data (each experiment can produce GBs of data!) that exists nowhere else. They generate valuable negative results, which are seldom published. But most importantly, they give our AI scientists the tools to act.

We're starting in the physical sciences. Technological progress is limited by our ability to design the physical world. We're starting here because experiments have high signal-to-noise and are (relatively) fast, and physical simulations effectively model many systems; more broadly, physics is a verifiable environment. AI has progressed fastest in domains with data and verifiable results, such as math and code. Here, nature is the RL environment.

One of our goals is to discover superconductors that work at higher temperatures than today's materials. Significant advances could help us create next-generation transportation and build power grids with minimal losses. But this is just one example: if we can automate materials design, we have the potential to accelerate Moore's Law, space travel, and nuclear fusion.

We're also working to deploy our solutions with industry. As an example, we're helping a semiconductor manufacturer facing issues with heat dissipation on their chips. We're training custom agents for their engineers and researchers to make sense of their experimental data and iterate faster.

Our founding team co-created ChatGPT, DeepMind's GNoME, OpenAI's Operator (now Agent), the neural attention mechanism, and MatterGen; has scaled autonomous physics labs; and has contributed to some of the most important materials discoveries of the last decade. We've come together to scale up and reimagine how science is done.

We're fortunate to be backed by investors who share our vision, including @a16z, who led our $300M round, as well as @Felicis, DST Global, NVentures (NVIDIA's venture capital arm), @Accel, and individuals including @JeffBezos, @eladgil, @ericschmidt, and @JeffDean. Their support will help us grow our team, scale our labs, and develop the first generation of AI scientists.
William Fedus tweet media
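The thread's back-of-envelope conversion (~10T tokens, 1-2 tokens per English word) works out roughly as follows; a quick sketch using the thread's own estimates, not precise counts:

```python
# Rough conversion of the thread's estimate: ~10 trillion text tokens,
# where one English word maps to roughly 1-2 tokens.
tokens = 10e12                # ~10T tokens (estimate from the thread)
tokens_per_word_low = 1.0     # best case: every word is a single token
tokens_per_word_high = 2.0    # worst case: every word is two tokens

words_high = tokens / tokens_per_word_low   # upper bound on word count
words_low = tokens / tokens_per_word_high   # lower bound on word count

print(f"{words_low:.0e} to {words_high:.0e} words")  # 5e+12 to 1e+13 words
```

So the internet's text corpus, by this estimate, is on the order of 5-10 trillion words, which is the finite ceiling the thread argues frontier models have already hit.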
William Fedus reposted
Xander Dunn@xanderai·
I am so incredibly grateful to be a part of @periodiclabs. Seriously, beast company, beast opportunity, beast colleagues.
William Fedus@LiamFedus·
@creatine_cycle @periodiclabs Great being on and thank you for the gains. And only now just realizing my blundered opportunity to pitch the workout philosophy of “hunt, kill, drag” a.k.a. cardio, HIIT/sprints, weights
atlas@creatine_cycle·
new swole as a service episode coming with the most jacked guy in AI. @LiamFedus co-founder of @periodiclabs
atlas tweet media
William Fedus@LiamFedus·
@shoyer Welcome, Stephan!! Thrilled to build this with you
Stephan Hoyer@shoyer·
After an incredible decade at Google, it’s time for my next chapter. This week, I joined Periodic Labs, a startup building and training AI scientists with autonomous laboratories.
William Fedus@LiamFedus·
GPT-4 + ChatGPT looked like the start of a Bostrom-style "decisive advantage → singleton" story. In 2022, GPT-4 felt ~18 months ahead of its time, and it was soon followed by a viral product hit, providing a rich source of feedback for model development.

But instead, a few years later, we're in a multipolar world with several competitors vying for the lead, shipping weeks (or days) apart. And, quite happily, this competition has led to likely one of the largest consumer surpluses of all time.

It's interesting to reconsider this now with the rise of coding agents (Claude, Codex, etc.). These are a more direct contributor to the next generation of models: a recursive self-improvement loop. But with intense market competition, no company seems to withhold its best tech for internal use. Instead, the latest and greatest is rapidly made available to consumers (and their competitors; see Anthropic banning xAI from using its coding models), implying no asymmetric advantage, since everyone benefits.

In 2026, do coding agents push us towards Bostrom's singleton story, or does market competition maintain the status quo?
William Fedus tweet media
William Fedus@LiamFedus·
2026 is the year AI learns directly from physical labs via high-compute RL
TBPN@tbpn

OpenAI's @kevinweil says 24/7 robotic labs could automate scientific discovery using "reinforcement learning with a loop through the real world":

"There's a lot of science that can be totally automated. There's no reason at this point that you need to have grad students pipetting one thing into another thing."

"The idea is to have robotic labs that are online 24/7 and can scale in parallel. You have models reasoning for two days to find the most efficient experiments to run; once they get to a good point, they pass that to a robotic lab, which can experiment in parallel at high volume."

"The results pass back into a model, which reasons about the results and then goes out and runs a different set of experiments. You're doing reinforcement learning with a loop through the real world."

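The loop Weil describes above can be sketched, very loosely, as a reason → experiment → learn cycle. Every function here is a hypothetical stand-in (there is no real model or robotic lab behind it); the sketch only shows the shape of the loop:

```python
# Minimal sketch of "reinforcement learning with a loop through the real
# world": a reasoning model proposes experiments, a robotic lab runs them
# in parallel, and the results feed back into the next round of reasoning.
# All functions are placeholders, not real APIs.

def propose_experiments(state, n_parallel):
    """Stand-in for a model reasoning (possibly for days) about which
    experiments are most informative to run next."""
    return [f"experiment-{state['round']}-{i}" for i in range(n_parallel)]

def run_in_lab(experiments):
    """Stand-in for a 24/7 robotic lab executing experiments in parallel
    and returning measurements (dummy values here)."""
    return {exp: len(exp) for exp in experiments}

def update_model(state, results):
    """Stand-in for the model reasoning over the results: the learning
    step that closes the loop."""
    state["observations"].update(results)
    state["round"] += 1
    return state

def real_world_rl_loop(rounds=3, n_parallel=4):
    state = {"round": 0, "observations": {}}
    for _ in range(rounds):
        batch = propose_experiments(state, n_parallel)   # reason
        results = run_in_lab(batch)                      # experiment
        state = update_model(state, results)             # learn
    return state

state = real_world_rl_loop()
```

The point of the structure is that the "environment" in the RL sense is the physical lab itself, and each cycle's experiment batch is conditioned on everything observed so far.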
William Fedus@LiamFedus·
@MillionInt Congrats on an incredible run! Was so fun partnering with you and your team
Jerry Tworek@MillionInt·
This is the note I have shared with my team today:
Jerry Tworek tweet media
William Fedus reposted
Greg Brockman@gdb·
two big themes of AI in 2026 will be enterprise agent adoption and scientific acceleration
Rishabh Agarwal@agarwl_·
The highlight of my year happened right before it ended. Happy new year!
Rishabh Agarwal tweet media
William Fedus@LiamFedus·
Periodic Labs and the U.S. Department of Energy are collaborating to accelerate scientific discovery. The 17 US national labs have driven decades of scientific advances. Now, we're rethinking science for the scaling era by connecting AI to physical experiments.
Periodic Labs@periodiclabs

We're in Washington at the Genesis Mission event today. 🇺🇸 At Periodic Labs, we see a new era of science emerging where AI systems learn and direct physical scientific experiments. We're excited to partner with @ENERGY on this important endeavor. By uniting the DOE’s deep scientific resources with private industry’s frontier AI, we will accelerate breakthroughs in materials and energy. Close public-private collaboration is essential to secure America’s strategic leadership.

Ying Sheng@ying11231·
We've been running @radixark for a few months, started by many core developers in SGLang @lmsysorg and its extended ecosystem (slime @slime_framework, AReaL @jxwuyi).

I left @xai in August, a place where I formed deep bonds and countless beautiful memories. It was the best place I've ever worked, the place I watched grow from a few dozen people to hundreds, and it truly felt like home. What pushed me to make such a hard decision is the momentum of building SGLang open source and the mission of creating an ambitious future, within the open spirit that I learned from my first job at @databricks after my PhD.

We started SGLang in the summer of 2023 and made it public in January 2024. Over the past two years, hundreds of people have made great efforts to get it to where it is today. We experienced several waves of growth after its first release. I still remember the many dark nights in the summer of 2024 that I spent debugging with @lm_zheng, @lsyincs, and @zhyncs42, while @ispobaoke single-handedly took on DeepSeek inference optimizations and @GenAI_is_real and the community strike team tag-teamed on-call shifts non-stop. There are so many more who have joined that I'm out of space to call out, but they're recorded on the GitHub contributor list forever.

The demands have grown exponentially, and we have been pushed to make this a dedicated effort supported by RadixArk. It's the step-by-step journey of a thousand miles that has carried us here today, and the same relentless Long March that will lead us into the tens of thousands of miles yet to come. The story never stops growing.

Over the past year, we've seen something very clear: the world is full of people eager to build AI, but the infrastructure that makes it possible is not shared. The most advanced inference and training stacks live inside a few companies. Everyone else is forced to rebuild the same schedulers, compilers, serving engines, and training pipelines again and again, often under enormous pressure, with lots of duplicated effort and wasted insight.

RadixArk was born to change that. Today, we're building an infrastructure-first, deep-tech company with a simple and ambitious mission: "Make frontier-level AI infrastructure open and accessible to everyone."

If the two values below resonate with you, come talk to us:
(1) Engineering as an art. Infrastructure is a first-class citizen at RadixArk. We care about elegant design and code that lasts. Beneath every line of code lies the soul of the engineer who wrote it.
(2) A belief in openness. We share what we build. We bet on long-term compounding through community, contribution, and giving more than we take. A product is defined by its users, yet it truly comes alive the moment functionality transcends mere utility and begins to embody aesthetics.

Thanks to all the miles (the name of our first released RL framework; see below). radixark.ai
William Fedus reposted
Rohan Pandey@khoomeik·
good news: you no longer need to hunt each of us down individually
even better news: we'll buy you boba

come chat about AI for Science with the Periodic Labs team! this Friday at 2pm, a 5 min walk from the NeurIPS venue (RSVP below)
Rohan Pandey tweet media
Rohan Pandey@khoomeik

if you're interested in autonomous science at frontier scale, come find the @periodiclabs team at neurips! look for:
@vwxyzjn to discuss training big MoEs
@xiangfu_ml for atomic GNNs
@mzhangio for AI scientists
@VincentMoens for RL systems
me for midtraining sample efficiency

William Fedus reposted
Rohan Pandey@khoomeik·
if you're interested in autonomous science at frontier scale, come find the @periodiclabs team at neurips! look for:
@vwxyzjn to discuss training big MoEs
@xiangfu_ml for atomic GNNs
@mzhangio for AI scientists
@VincentMoens for RL systems
me for midtraining sample efficiency
Rohan Pandey tweet media
Azalia Mirhoseini@Azaliamirh·
Thrilled to share that @annadgoldie and I are launching @RicursiveAI, a frontier lab enabling recursive self-improvement through AIs that design their own chips.

Our vision for transforming chip design began with AlphaChip, an AI for layout optimization used to design four generations of TPUs, data center CPUs, and smartphones. AlphaChip offered a glimpse into a future where AI designs the silicon that fuels it. Ricursive extends this vision to the entire chip stack, building AI that architects, verifies, and implements silicon, enabling models and chips to co-evolve in a tight loop.

We sat down with WSJ's @berber_jin1 to discuss Ricursive: wsj.com/tech/this-ai-s…
Ricursive Intelligence@RicursiveAI

Introducing Ricursive Intelligence, a frontier AI lab enabling a recursive self-improvement loop between AI and the chips that fuel it. Learn more at ricursive.com
