

Ronan Farrow
@RonanFarrow
Pulitzer-winning investigative journo @newyorker. Documentaries @hbo. Ex-diplomat. Bad lawyer. Disused PhD. Tips: [email protected].

(9/11) In the cut-throat race for A.I. dominance, these more substantive critiques of Altman commingle with no-holds-barred opposition efforts in which rivals have weaponized his personal life. Intermediaries directly connected to—and in at least one case compensated by—Elon Musk have circulated dozens of pages of salacious and unsubstantiated opposition research reflecting extensive surveillance: shell companies, personal contacts, interviews about a purported sex worker conducted at gay bars. In the course of our reporting, multiple people within rival companies reached out to insinuate to us that Altman sexually pursues minors, a narrative persistent in Silicon Valley that appears to be untrue. We spent months looking into the matter and could find no evidence to support it.
(🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private notes, and interviewed more than 100 people. OpenAI was founded on the premise that A.I. could be the most dangerous invention in human history—and that its C.E.O. would need to be a person of uncommon integrity. We lay out the most detailed account yet of why Altman was ousted by board members and executives who came to believe he lacked that integrity, and ask: were they right to allege that he couldn't be trusted? A thread on some of our findings:

(10/11) Why does all of this matter? A.I. already has life-saving applications, from medical research to weather warnings. Altman has supported OpenAI's growth with promises of a superabundant future. But the dangers are also no longer a fantasy. A.I. is already being deployed in military operations around the world. Researchers have documented its power to rapidly identify chemical warfare agents. OpenAI faces seven wrongful-death lawsuits alleging ChatGPT prompted suicides and a murder. A.I. could soon cause severe labor disruption, perhaps eliminating millions of jobs. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and some experts warn of a bubble and recession risks. OpenAI has one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. A board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.) If the bubble pops, much more than one company is at stake.
