Ronan Farrow

2.5K posts


@RonanFarrow

Pulitzer-winning investigative journo @newyorker. Documentaries @hbo. Ex-diplomat. Bad lawyer. Disused PhD. Tips: [email protected].

Genovia · Joined July 2011
5.1K Following · 933.6K Followers
Pinned Tweet
Ronan Farrow @RonanFarrow
(🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private notes, and interviewed more than 100 people. OpenAI was founded on the premise that A.I. could be the most dangerous invention in human history—and that its C.E.O. would need to be a person of uncommon integrity. We lay out the most detailed account yet of why Altman was ousted by board members and executives who came to believe he lacked that integrity, and ask: were they right to allege that he couldn't be trusted? A thread on some of our findings:
Ronan Farrow retweeted
Greg @gregdunaway
So the OpenAI article by @RonanFarrow is absolutely as good as people have been saying it is - holy hell. Stop what you're doing and read it. newyorker.com/magazine/2026/…
Ronan Farrow @RonanFarrow
@aut0m8d Thanks for this. Incredibly complex piece where we really strived to be very, very fair at all times.
Ronan Farrow @RonanFarrow
This announcement arrives hours after our investigation (newyorker.com/magazine/2026/…) described how OpenAI dissolved its superalignment and AGI-readiness teams and dropped safety from the list of its most significant activities on its IRS filings—and how, when we asked to speak with researchers working on existential safety, a representative replied, "What do you mean by 'existential safety'? That's not, like, a thing."
OpenAI @OpenAI

Introducing the OpenAI Safety Fellowship, a new program supporting independent research on AI safety and alignment—and the next generation of talent. openai.com/index/introduc…

Erika @erikaxtc
Just read this article about OpenAI and it's SO GOOD.
Ronan Farrow @RonanFarrow (quoted: the pinned 🧵1/11 thread above)

Ronan Farrow retweeted
Ronan Farrow @RonanFarrow
(11/11) There is much more in the piece—on the saga of Altman's firing and return; a history of similar allegations earlier in his career; gifts from foreign leaders; and a security-clearance vetting process that turned up what one official described as "a lot of red flags." It also examines broader critiques from industry insiders of the current moment's anti-regulation trajectory, something that stands to affect all of us. I hope you'll take the time for this long read, and subscribe to @NewYorker to support this kind of investigative reporting: newyorker.com/magazine/2026/…
Ronan Farrow @RonanFarrow
(10/11) Why does all of this matter? A.I. already has life-saving applications, from medical research to weather warnings. Altman has supported OpenAI's growth with promises of a superabundant future. But the dangers are also no longer a fantasy. A.I. is already being deployed in military operations around the world. Researchers have documented its power to rapidly identify chemical warfare agents. OpenAI faces seven wrongful-death lawsuits alleging ChatGPT prompted suicides and a murder. A.I. could soon cause severe labor disruption, perhaps eliminating millions of jobs. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and some experts warn of a bubble and recession risks. OpenAI has one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. A board member told us, "The company levered up financially in a way that's risky and scary right now." (OpenAI disputes this.) If the bubble pops, much more than one company is at stake.
Ronan Farrow @RonanFarrow
@krishnanrohit @ohryansbelt This is very specifically explained in the story, please read it (particularly noting the part about Thrive and its relationship with Altman). If you don’t have a subscription, the New Yorker offers several free articles a month.
rohit @krishnanrohit
@ohryansbelt But if Sam is the problem then why couldn't a different CEO come in and help with that? Mira clearly has no issue raising money now. It feels too convenient.
rohit @krishnanrohit
Something I find missing from these discussions: sure, they make it sound like everyone thought he was untrustworthy. So why did something like 99% of the OpenAI team threaten to quit after he was fired, agitating for him to come back? Seems like an important piece of evidence.
Ryan @ohryansbelt

The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed "Ilya Memos," and Dario Amodei's 200+ pages of private notes. It's the most detailed account yet of the pattern of behavior that led to Sam's firing and rapid reinstatement at OpenAI. Here's the breakdown:

> Ilya compiled ~70 pages of Slack messages, HR documents, and photos taken on personal phones to avoid detection on company devices. He sent them to board members as disappearing messages. The first memo begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."

> Dario kept detailed private notes for years under the heading "My Experience with OpenAI" (subheading: "Private: Do Not Share"), totaling 200+ pages. His conclusion: "The problem with OpenAI is Sam himself."

> Sam reportedly told Mira his allies were "going all out" and "finding bad things" to damage her reputation after the firing. Thrive put its planned $86B investment on hold and implied it would close only if Sam returned, giving employees a financial incentive to back him.

> Sam texted Satya Nadella directly to propose the new board composition: "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation." The two new members selected to oversee an independent inquiry into Sam were chosen after close conversations with Sam himself.

> Before OpenAI, senior employees at Loopt asked the board to fire Sam as CEO on two separate occasions over concerns about leadership and transparency. At Y Combinator, partners complained to Paul Graham about Sam's behavior, and Graham privately told colleagues "Sam had been lying to us all the time."

> OpenAI's superalignment team was promised 20% of the company's compute. Four people who worked on or with the team said actual resources were 1-2%, mostly on the oldest cluster with the worst chips. The team was dissolved without completing its mission.

> Sam told the board that safety features in GPT-4 had been approved by a safety panel. Helen Toner requested documentation and found that the most controversial features had not been approved. Sam also never mentioned to the board that Microsoft had released an early ChatGPT version in India without completing a required safety review.

> Sam made a secret pact with Greg and Ilya, agreeing to resign if they both deemed it necessary, in effect appointing his own shadow board. The actual board was alarmed when it learned about the arrangement.

> Sam struck a deal with Greg to become CEO while simultaneously telling researchers that Greg's authority would be diminished, and telling Greg something different.

> A board member described Sam as having "two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone." Multiple sources independently used the word "sociopathic."

> OpenAI is reportedly preparing for an IPO at a potential $1 trillion valuation while securing government contracts spanning immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.
