Reflection AI

37 posts

@reflection_ai

Frontier open intelligence accessible to all.

Joined June 2024
2 Following · 12.1K Followers
Pinned Tweet
Reflection AI
Reflection AI@reflection_ai·
Today we're sharing the next phase of Reflection. We're building frontier open intelligence accessible to all. We've assembled an extraordinary AI team, built a frontier LLM training stack, and raised $2 billion.

Why Open Intelligence Matters

Technological and scientific progress is driven by values of openness and collaboration. The internet, Linux, and the protocols and standards that underpin modern computing are all open. This isn't a coincidence. Open software is what gets forked, customized, and embedded into systems worldwide. It's what universities teach, what startups build on, and what enterprises deploy. Open science enables others to learn from results, be inspired by them, interrogate them, and build upon them, pushing the frontier of human knowledge and scientific advancement. AI got to where it is today through scaling ideas (e.g. self-attention, next-token prediction, reinforcement learning) that were shared and published openly.

Now AI is becoming the technology layer that everything else runs on. The systems that accelerate scientific research, enhance education, optimize energy usage, supercharge medical diagnoses, and run supply chains will all be built on AI infrastructure. But the frontier is currently concentrated in closed labs. If this continues, a handful of entities will control the capital, compute, and talent required to build AI, creating a runaway dynamic that locks everyone else out.

There's a narrow window to change this trajectory. We need to build open models so capable that they become the obvious choice for users and developers worldwide, ensuring the foundation of intelligence remains open and accessible rather than controlled by a few.

What We've Built

Over the last year, we've been preparing for this mission. We've assembled a team that has pioneered breakthroughs including PaLM, Gemini, AlphaGo, AlphaCode, and AlphaProof, and contributed to ChatGPT and Character AI, among many others.

We built something once thought possible only inside the world's top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoE) models at frontier scale. We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we're now bringing these methods to general agentic reasoning.

We've raised significant capital and identified a scalable commercial model that aligns with our open intelligence strategy, ensuring we can continue building and releasing frontier models sustainably. We are now scaling up to build open models that bring together large-scale pretraining and advanced reinforcement learning from the ground up.

Safety and Responsibility

Open intelligence also changes how we think about safety. It enables the broader community to participate in safety research and discourse, rather than leaving critical decisions to a few closed labs. Transparency allows independent researchers to identify risks, develop mitigations, and hold systems accountable in ways that closed development cannot.

But openness also requires confronting the challenges of capable models being widely accessible. We're investing in evaluations to assess capabilities and risks before release, security research to protect against misuse, and responsible deployment standards. We believe the answer to AI safety is not "security through obscurity" but rigorous science conducted in the open, where the global research community can contribute to solutions rather than a handful of companies making decisions behind closed doors.

Join Us

There is a window of opportunity today to build frontier open intelligence, but it is closing, and it may be the last. If this mission resonates, join us.
Director Michael Kratsios
American AI companies and open-source models can truly empower partner nations in their pursuit of meaningful AI sovereignty. Huge announcement by @reflection_ai and Shinsegae Group in the Republic of Korea. The American AI Export Program is rolling out 🇺🇸🇺🇸🇺🇸
Reflection AI@reflection_ai

Reflection is partnering with Shinsegae Group to build a 250-megawatt sovereign AI factory for the Republic of Korea. Open intelligence. Built on trust between allies. Owned by the nations that need it most. The future of sovereign AI. Read more in the @WSJ.

Reflection AI
Reflection AI@reflection_ai·
Most approaches to “agentic AI” focus on post-training fixes. In this conversation, @achowdhery, a member of our technical staff, argues that the bottleneck is pre-training itself. Drawing on her work on PaLM and early Gemini, she explains why next-token prediction breaks down for long-horizon planning, and how objectives, attention, and training data must evolve to support true agentic behavior.
The TWIML AI Podcast@twimlai

Today, we're joined by @achowdhery, member of technical staff at @reflection_ai, to explore the fundamental shifts required to build true agentic AI. While the industry has largely focused on post-training techniques to improve reasoning, Aakanksha draws on her experience leading pre-training efforts for Google’s PaLM and early Gemini models to argue that pre-training itself must be rethought to move beyond static benchmarks. We explore the limitations of next-token prediction for multi-step workflows and examine how attention mechanisms, loss objectives, and training data must evolve to support long-form reasoning and planning. Aakanksha shares insights on the difference between context retrieval and actual reasoning, the importance of "trajectory" training data, and why scaling remains essential for discovering emergent agentic capabilities like error recovery and dynamic tool learning.

🗒️ For the full list of resources for this episode, visit the show notes page: twimlai.com/go/759.

📖 CHAPTERS
00:00 - Introduction
02:26 - Reflection
04:54 - Limitations of post-training for building agents
07:31 - Rethinking pre-training in agents
10:51 - Scaling
11:27 - Evolving attention mechanisms for agentic capabilities
12:39 - Memory as a tool
14:13 - Loss objectives and training data
15:50 - Fine-tuning loss in agent performance
19:37 - Training data
21:29 - Augmenting dominant training data source
24:11 - Overcoming challenges in training on synthetic data
25:47 - Benchmarks
30:44 - Scaling laws in large models versus small models
33:20 - Long-form versus short-form reasoning
37:57 - Agent’s ability to recover from failure
40:15 - Hallucinations and failure recovery
43:53 - Tool use in agents
46:38 - Coding agents
48:37 - How researchers can contribute to agentic AI

English
5
15
110
40.8K
Reflection AI
Reflection AI@reflection_ai·
Welcome to the team, Brandon!
Brandon Amos@brandondamos

An update: I have left Meta Superintelligence Labs and joined @reflection_ai in NYC!! Today is my first day. I started in the Fundamental AI Research (FAIR) lab at Meta, then Facebook, over six (!) years ago as my first job out of the PhD. They were some formative years. The group is full of exceptionally talented people who have profoundly shaped my perspective on life and research. I am grateful for everything we have shared and proud of everything we created together. I have decided it's time to try to build a startup and new frontier models with Reflection. Superintelligence will be one of the most significant advancements of our lifetimes, resulting in a computational reflection of ourselves. We believe it should be safe, open, and accessible to all. I am excited to be jumping into the post-training and reinforcement learning pipelines to advance capabilities and alignment. And we are hiring! Please get in touch.

Reflection AI
Reflection AI@reflection_ai·
@HanchungLee Thank you for flagging. We're in contact with the X support team.
Reflection AI
Reflection AI@reflection_ai·
Welcome to the team, @_ghorbani. We’re excited to have you leading the Science of Scaling team.
Behrooz Ghorbani@_ghorbani

Hi friends, after three incredible years at OpenAI I am excited to share that I am starting a new chapter at @reflection_ai, where I will be leading the Science of Scaling team. Our mission is to deepen the scientific understanding of large scale learning and to turn compute into intelligence as efficiently and predictably as possible.

Reflection AI
Reflection AI@reflection_ai·
Reflection will be at NeurIPS San Diego next week. Say hello to the team at the Reflection booth and come along to our panel on open AI ecosystems. You’ll hear from:
- @real_ioannis (Reflection)
- @joespeez (Meta, Llama & PyTorch)
- @natolambert (AI2, ATOM Project)
- @robertnishihara (Ray, Berkeley)
- @ying11231 (SGLang, former xAI)
We’ll be discussing:
- What does it actually take to build open AI ecosystems?
- What can we learn from China's approach?
- What do transparency and sovereignty mean in practice?
- Where is frontier intelligence heading?
Find the Luma link in the thread to mark it in your calendar.
Reflection AI retweeted
🇺🇦 Alex Polozov
🇺🇦 Alex Polozov@Skiminok·
🎉 Next week, I am excited to join @reflection_ai as a Member of Technical Staff to help build the open intelligence ecosystem of the Western world. It's the most exciting opportunity to help software builders in our time, and will shape many years of AI Engineering in the medium-term before AGI. It's not just about Western vs Eastern open models, but more about what AI-driven software will look like in 2030. I spent some time articulating my thoughts about where we're going as a community and why... which became a whole blog post. Take a look, hope it interests you! (And if it really does, we are hiring in NYC, SF, and London 😉) alexpolozov.com/blog/reflectio…
Reflection AI retweeted
The Information
The Information@theinformation·
Today on The Information’s TITV:
- CEO of @Qualtrics, @Zserafin, makes $6.75 billion bet on health data
- @MishaLaskin, CEO of @reflection_ai, notches $8 billion valuation for open-source AI
- Co-Founder of AI startup @useplumb, @aarondignan, announces business closure
- E-commerce startups back from the dead | @MikeDuda, Bullish; @anngehan, The Information
- @Yueqi_Yang, Crypto Reporter, on new stablecoin law
📺 Tune in at 10 am PT / 1 pm ET on thein.fo/4nn33AK
Reflection AI retweeted
Brian Zhan
Brian Zhan@brianzhan1·
Excited to have backed @reflection_ai from the seed. Today’s $2b raise pushes frontier open intelligence, with open models + advanced RL at scale, accessible to all. And most importantly, with @nvidia as their key partner. It’s a big day for open AI. Thanks NYTimes for covering nytimes.com/2025/10/09/bus…
B Capital
B Capital@BCapitalGroup·
@reflection_ai Congratulations to the team at @reflection_ai on this next chapter. We’re excited to partner with you as you build accessible frontier open intelligence.
Reflection AI retweeted
clem 🤗
clem 🤗@ClementDelangue·
Let’s go open-source and open science AI!
Reflection AI@reflection_ai

