Mansheej Paul

183 posts


@mansiege

@periodiclabs

San Francisco, CA · Joined October 2017
758 Following · 707 Followers
Pinned Tweet
Mansheej Paul@mansiege·
Check out our new work: Critique-out-Loud (CLoud) reward models, where we improve reward models by having them generate a critique of a response before scoring it. Results and details in the thread from @ZackAnkner.
Zack Ankner@ZackAnkner

Excited to announce our new work: Critique-out-Loud (CLoud) reward models. CLoud reward models first produce a chain of thought critique of the input before predicting a scalar reward, allowing reward models to reason explicitly instead of implicitly! arxiv.org/abs/2408.11791

1 reply · 1 repost · 24 likes · 2.4K views
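The two-step mechanism described above (write a critique first, then predict a scalar reward conditioned on it) can be sketched as a toy function. Everything below is an illustrative stand-in, not the released CLoud code or API; the stub model and its method names are invented for this sketch.

```python
# Toy sketch of Critique-out-Loud (CLoud) reward-model inference: the model
# first reasons out loud with a natural-language critique, then scores the
# response conditioned on that critique. StubRewardLM is a hypothetical
# stand-in, not the released CLoud implementation.

class StubRewardLM:
    """Stand-in for a critique-generating reward model."""

    def generate_critique(self, prompt: str, response: str) -> str:
        # A real model would generate a chain-of-thought critique here.
        return f"The response to '{prompt}' is brief but on-topic."

    def score(self, prompt: str, response: str, critique: str) -> float:
        # A real model would regress a scalar reward conditioned on the
        # critique; we fake one from the critique length for illustration.
        return min(1.0, len(critique) / 100.0)


def cloud_reward(model: StubRewardLM, prompt: str, response: str) -> float:
    critique = model.generate_critique(prompt, response)  # step 1: critique
    return model.score(prompt, response, critique)        # step 2: scalar reward


reward = cloud_reward(StubRewardLM(), "What is 2+2?", "4")
print(reward)
```

The point of the split is that the reward becomes a function of explicit reasoning the model wrote down, rather than a score produced implicitly in one forward pass.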
Mansheej Paul@mansiege·
The next frontier of AI is where it meets the physical world, generates new hypotheses, and learns from experiments. Excited to join an incredible team in accelerating science and pushing this frontier.
William Fedus@LiamFedus

Today, @ekindogus and I are excited to introduce @periodiclabs. Our goal is to create an AI scientist.

Science works by conjecturing how the world might be, running experiments, and learning from the results. Intelligence is necessary, but not sufficient. New knowledge is created when ideas are found to be consistent with reality. And so, at Periodic, we are building AI scientists and the autonomous laboratories for them to operate.

Until now, scientific AI advances have come from models trained on the internet. But despite its vastness, the internet is still finite (estimates are ~10T text tokens, where one English word may be 1-2 tokens), and in recent years the best frontier AI models have fully exhausted it. Researchers seek better use of this data, but as any scientist knows: though re-reading a textbook may give new insights, they eventually need to try their idea to see if it holds.

Autonomous labs are central to our strategy. They provide huge amounts of high-quality data (each experiment can produce GBs of data!) that exists nowhere else. They generate valuable negative results, which are seldom published. But most importantly, they give our AI scientists the tools to act.

We're starting in the physical sciences. Technological progress is limited by our ability to design the physical world. We're starting here because experiments have high signal-to-noise and are (relatively) fast, and physical simulations effectively model many systems; more broadly, physics is a verifiable environment. AI has progressed fastest in domains with data and verifiable results, for example in math and code. Here, nature is the RL environment.

One of our goals is to discover superconductors that work at higher temperatures than today's materials. Significant advances could help us create next-generation transportation and build power grids with minimal losses. But this is just one example: if we can automate materials design, we have the potential to accelerate Moore's Law, space travel, and nuclear fusion.

We're also working to deploy our solutions with industry. As an example, we're helping a semiconductor manufacturer that is facing issues with heat dissipation on their chips. We're training custom agents for their engineers and researchers to make sense of their experimental data in order to iterate faster.

Our founding team co-created ChatGPT, DeepMind's GNoME, OpenAI's Operator (now Agent), the neural attention mechanism, and MatterGen; has scaled autonomous physics labs; and has contributed to some of the most important materials discoveries of the last decade. We've come together to scale up and reimagine how science is done.

We're fortunate to be backed by investors who share our vision, including @a16z, who led our $300M round, as well as @Felicis, DST Global, NVentures (NVIDIA's venture capital arm), @Accel, and individuals including @JeffBezos, @eladgil, @ericschmidt, and @JeffDean. Their support will help us grow our team, scale our labs, and develop the first generation of AI scientists.

2 replies · 5 reposts · 19 likes · 2.7K views
Mansheej Paul reposted
Cody Blakeney@code_star·
It was kinda a movie
2 replies · 6 reposts · 48 likes · 5.4K views
Mansheej Paul reposted
Misha Laskin@MishaLaskin·
Engineers spend 70% of their time understanding code, not writing it. That’s why we built Asimov at @reflection_ai. The best-in-class code research agent, built for teams and organizations.
99 replies · 175 reposts · 1.5K likes · 365.2K views
Mansheej Paul reposted
Davis Blalock@davisblalock·
Deep learning training is a mathematical dumpster fire. But it turns out that if you *fix* the math, everything kinda just works…fp8 training, hyperparameter transfer, training stability, and more. [1/n]
15 replies · 148 reposts · 1.4K likes · 188.9K views
Cody Blakeney@code_star·
one of my friends in a group chat said this to another person "You’re like if Larry David thought he was Larry page"
1 reply · 0 reposts · 5 likes · 386 views
Mansheej Paul@mansiege·
@nsaphra You should check out Victor Pelevin. Omon Ra is great!
0 replies · 0 reposts · 1 like · 64 views
Naomi Saphra@nsaphra·
@mansiege Between them and Lem I'm like, is there a secret iceberg of amazing Soviet sci-fi parodies that I don't even know about, or is it basically just those guys?
1 reply · 0 reposts · 0 likes · 49 views
Naomi Saphra@nsaphra·
2025 book thread incoming!
1 reply · 3 reposts · 27 likes · 10.8K views
Mansheej Paul@mansiege·
@nsaphra This is such an awesome book. Everything by these authors honestly.
1 reply · 0 reposts · 1 like · 76 views
Naomi Saphra@nsaphra·
Wrapping up my campus satire binge with this old Soviet sci-fi/fantasy about a scientific institute for studying magic! Basically “what if wizards had to deal with Soviet institutional bureaucracy and academic politics?” One wizard is based on Lysenko.
2 replies · 0 reposts · 2 likes · 616 views
Cody Blakeney@code_star·
When people ask me what I do for work
2 replies · 7 reposts · 45 likes · 2.1K views
Mansheej Paul reposted
Dan Biderman@dan_biderman·
How can we use small LLMs to shift more AI workloads onto our laptops and phones? In our paper and open-source code, we pair on-device LLMs (@ollama) with frontier LLMs in the cloud (@openai, @together), to solve token-intensive workloads on your 💻 at 17.5% of the cloud cost while maintaining 97.9% of the accuracy. See Gru and the Minions in action below, 🔉on please (h/t @cartesia)!
41 replies · 171 reposts · 634 likes · 192K views
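The local/cloud pairing described in the tweet above can be sketched as a toy protocol: a small on-device model reads the token-heavy document, and only its short notes are sent to the expensive frontier model. Both "models" below are invented stubs; the real Minions system differs in detail.

```python
# Toy sketch of pairing an on-device LLM with a frontier cloud LLM: the cheap
# local model processes every chunk of a long document, and the cloud model
# only sees compressed notes, cutting cloud token costs. All function names
# here are illustrative stand-ins, not the Minions codebase.

def local_summarize(chunk: str) -> str:
    # Stand-in for an on-device LLM: compress a chunk to a short note.
    return chunk[:40]

def cloud_answer(question: str, notes: list[str]) -> str:
    # Stand-in for a frontier cloud LLM: answer from compressed notes only.
    return f"{question} -> based on {len(notes)} notes"

def answer(question: str, document: str, chunk_size: int = 200) -> str:
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    notes = [local_summarize(c) for c in chunks]  # cheap local pass over all tokens
    return cloud_answer(question, notes)          # only a few tokens reach the cloud

print(answer("Summarize", "x" * 1000))
```

The design choice is the same one the tweet's cost/accuracy numbers reflect: most tokens never leave the device, so cloud spend scales with the compressed notes rather than the full document.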
Mansheej Paul reposted
Core Francisco Park @ NeurIPS2025
💥New Paper! Algorithmic Phases of In-Context Learning: We show that transformers learn a superposition of different algorithmic solutions depending on the data diversity, training time and context length! 1/n
7 replies · 59 reposts · 427 likes · 37.2K views
Mansheej Paul reposted
Zack Ankner@ZackAnkner·
Critique out loud reward models made it into the Kimi k1.5 technical report! Super cool to see someone scale it up to 800k inputs and to see how much better reward modeling it led to!
2 replies · 8 reposts · 62 likes · 4.4K views
Mansheej Paul reposted
Cody Blakeney@code_star·
If you want to read more about the curriculum training used in OLMo 2, check out our (@mansiege @_BrettLarsen Sean Owen) paper! Congrats on the release to everyone at AI2! (but especially @soldni and @kylelostat <3 data) arxiv.org/abs/2406.03476
Nathan Lambert@natolambert

Super excited to announce our best open-source language models yet: OLMo 2. These instruct models are hot off the press -- finished training with our new RL method this morning and vibes are very good.

OLMo 2 introduces a new family of 7B and 13B models trained on up to 5T tokens, representing the best fully-open language models to date. These models sit at the Pareto frontier of performance and training efficiency, with OLMo 2 7B outperforming Llama-3.1 8B and OLMo 2 13B outperforming Qwen 2.5 7B despite lower total training FLOPs.

Key improvements include:
1. Enhanced architecture with RMSNorm, QK-Norm, auxiliary Z-loss, and rotary positional embeddings
2. Two-stage curriculum training approach using OLMo-Mix-1124 and Dolmino-Mix-1124
3. Model souping technique for final checkpoints (aka merging)
4. State-of-the-art post-training methodology from Tülu 3, with three stages: instruction tuning, preference tuning with DPO, and our new reinforcement learning with verifiable rewards (RLVR)
5. Evaluation on the OLMES suite
6. Instruct variants competitive with the best open-weight models, with OLMo 2 13B Instruct outperforming Qwen 2.5 14B Instruct, Tülu 3 8B, and Llama 3.1 8B Instruct

The 13B Instruct version builds on our Tülu 3 recipe with a very finetunable base model and makes for a great user experience that we haven't seen before with open-source models. Links below :D

1 reply · 8 reposts · 50 likes · 8.6K views
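One of the architecture pieces the OLMo 2 thread lists, RMSNorm, is simple enough to sketch: normalize by the root-mean-square of the activations (no mean subtraction, unlike LayerNorm), then apply a learned per-feature scale. This is a generic textbook sketch, not OLMo 2's actual implementation.

```python
# Minimal RMSNorm sketch: scale each feature by the reciprocal RMS of the
# vector, then by a learned gain. A small eps keeps the division stable.
import math

def rms_norm(x: list[float], gain: list[float], eps: float = 1e-6) -> list[float]:
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g * v / rms for g, v in zip(gain, x)]

out = rms_norm([3.0, 4.0], [1.0, 1.0])
print(out)
```

Dropping the mean-centering step of LayerNorm makes the operation cheaper while, in practice, preserving training quality, which is one reason RMSNorm is common in recent open LLMs.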
Mansheej Paul@mansiege·
Code and models for our latest work, Critique-out-Loud (CLoud) reward models, are now released! Check out our paper (arxiv.org/abs/2408.11791) for more details on having reward models reason before predicting a reward score.
Zack Ankner@ZackAnkner

Code and models for Critique-out-Loud (CLoud) reward models are finally public! The repo comes with a gradio demo you can run, so hopefully people can mess around with the models 😃 Code: github.com/zankner/CLoud

3 replies · 2 reposts · 22 likes · 4.4K views