Doomer Daylight
@DoomerDaylight
199 posts

We track the claims and expose the truth on AI Doomer propaganda with facts and receipts. Pro democratic AI, American competitiveness, innovation, and safety.

Joined January 2026
69 Following · 76 Followers

Pinned Tweet
Doomer Daylight @DoomerDaylight
Opposition to the child-safety coalition: bought and paid for by @AnthropicAI investors, friends, and family. 🧵 1/9
[image]
1 reply · 0 reposts · 1 like · 931 views
Cassie Pritchard @hecubian_devil
"A board member described Sam as having 'two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone.'" lol
Quoting Ryan @ohryansbelt:
[quoted text is identical to Ryan's own tweet, reproduced in full below]

99 replies · 2.3K reposts · 34.7K likes · 1.5M views
Melian Refugee @escapefrommelos
genuinely crazy that Ronan Farrow, the (alleged) lovechild of Mia Farrow and Frank Sinatra (Mia Farrow's ex-husband Woody Allen is on the birth certificate as his father🤣😭) is probably the best investigative journalist of our era
[3 images]
Quoting Ronan Farrow @RonanFarrow:
[quoted text is identical to Ronan Farrow's own thread tweet, reproduced in full below]

344 replies · 2.4K reposts · 36.4K likes · 3.3M views
Ryan @ohryansbelt
The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed "Ilya Memos," and Dario Amodei's 200+ pages of private notes. It's the most detailed account yet of the pattern of behavior that led to Sam's firing and rapid reinstatement at OpenAI. Here's the breakdown:
> Ilya compiled ~70 pages of Slack messages, HR documents, and photos taken on personal phones to avoid detection on company devices. He sent them to board members as disappearing messages. The first memo begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."
> Dario kept detailed private notes for years under the heading "My Experience with OpenAI" (subheading: "Private: Do Not Share"), totaling 200+ pages. His conclusion: "The problem with OpenAI is Sam himself."
> Sam reportedly told Mira his allies were "going all out" and "finding bad things" to damage her reputation after the firing. Thrive put its planned $86B investment on hold and implied it would only close if Sam returned, giving employees financial incentive to back him.
> Sam texted Satya Nadella directly to propose the new board composition: "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation." The two new members selected to oversee an independent inquiry into Sam were chosen after close conversations with Sam himself.
> Before OpenAI, senior employees at Loopt asked the board to fire Sam as CEO on two separate occasions over concerns about leadership and transparency. At Y Combinator, partners complained to Paul Graham about Sam's behavior, and Graham privately told colleagues "Sam had been lying to us all the time."
> OpenAI's superalignment team was promised 20% of the company's compute. Four people who worked on or with the team said actual resources were 1-2%, mostly on the oldest cluster with the worst chips. The team was dissolved without completing its mission.
> Sam told the board that safety features in GPT-4 had been approved by a safety panel. Helen Toner requested documentation and found the most controversial features had not been approved. Sam also never mentioned to the board that Microsoft released an early ChatGPT version in India without completing a required safety review.
> Sam made a secret pact with Greg and Ilya where he agreed to resign if they both deemed it necessary, essentially appointing his own shadow board. The actual board was alarmed when they learned about it.
> Sam struck a deal with Greg to become CEO while simultaneously telling researchers that Greg's authority would be diminished, and telling Greg something different.
> A board member described Sam as having "two traits almost never seen in the same person: a strong desire to please people in any given interaction, and almost a sociopathic lack of concern for the consequences of deceiving someone." Multiple sources independently used the word "sociopathic."
> OpenAI is reportedly preparing for an IPO at a potential $1 trillion valuation while securing government contracts spanning immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.
[image]
285 replies · 2.2K reposts · 14.4K likes · 3.2M views
Ronan Farrow @RonanFarrow
(🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private notes, and interviewed more than 100 people. OpenAI was founded on the premise that A.I. could be the most dangerous invention in human history—and that its C.E.O. would need to be a person of uncommon integrity. We lay out the most detailed account yet of why Altman was ousted by board members and executives who came to believe he lacked that integrity, and ask: were they right to allege that he couldn't be trusted? A thread on some of our findings:
[image]
573 replies · 8K reposts · 36.5K likes · 8.2M views
Ronan Farrow @RonanFarrow
My 18-month investigation into Sam Altman and OpenAI in @NewYorker, with @andrewmarantz, is out now. Read here: newyorker.com/magazine/2026/… Thread on a few of the key findings here: x.com/RonanFarrow/st…
Quoting Ronan Farrow @RonanFarrow:
[quoted text is identical to Ronan Farrow's thread tweet, reproduced in full above]

283 replies · 4K reposts · 14.8K likes · 1.8M views
Sen. Bernie Sanders @SenSanders
70% think AI will lead to fewer jobs. They are right. We can’t allow a handful of billionaires, eager to increase their wealth and power, to rush forward with a technology that will fundamentally transform humanity without democratic input or accountability.
[image]
791 replies · 515 reposts · 2.2K likes · 229K views
Doomer Daylight @DoomerDaylight
@AlexBores Says the guy who gets profiles from journalists paid by his campaign contributors
0 replies · 0 reposts · 0 likes · 113 views
Doomer Daylight reposted
Nirit Weiss-Blatt, PhD @DrTechlash
🚨The AI Doc's Falsehoods and False Balance "The AI Doc" strips away the extreme context behind the doomers' views and presents misleading claims as facts. (link below)
[image]
2 replies · 3 reposts · 6 likes · 1.4K views
Nathan Calvin @_NathanCalvin
I like TBPN. I hope they are able to continue speaking frankly. But I would encourage folks at TBPN and their listeners to read this reporting by @ZeffMax to understand how this setup could diminish their ability to do so.
[image]
Quoting Jordi Hays @jordihays:

TBPN has been acquired by OpenAI. The world is changing quickly, but TBPN will stay the same. Live every weekday, just with a lot more resources. Thank you to everyone that has been a part of this journey, big or small. We are 17 months in and unironically just getting started.

2 replies · 3 reposts · 60 likes · 4.4K views
Nathan Calvin @_NathanCalvin
Do you genuinely think they will be encouraged to freely criticize OpenAI when the structure is that they report directly to Chris Lehane (OpenAI's "master of the political dark arts") and will also help with general corporate comms in addition to running the show?
[image]
Quoting Sam Altman @sama:

TBPN is my favorite tech show. We want them to keep that going and for them to do what they do so well. I don't expect them to go any easier on us, am sure I'll do my part to help enable that with occasional stupid decisions.

2 replies · 3 reposts · 63 likes · 2.1K views
Doomer Daylight @DoomerDaylight
@ShakeelHashim Says the former Head of Comms at CEA and editor of @ReadTransformer, funded by Tarbell Center / Coefficient Giving. Quite a lesson on “independence” from someone bankrolled by the investors, friends & family of the Anthropic EA network.
2 replies · 1 repost · 7 likes · 1.5K views
Doomer Daylight @DoomerDaylight
@eshugerman You say “kids groups” have a grimy feeling? The real grime is the @AnthropicAI-backed network behind them. FairPlay is Omidyar/Tides-backed. CA AfterSchool ties to Packard/Omidyar’s Humanity $500M AI fund. This is self-interest and control, not child safety.
0 replies · 2 reposts · 8 likes · 898 views
Emily Shugerman @eshugerman
OpenAI is behind a new “parents and kids” coalition on AI safety legislation. Former members of the group told me they didn’t know until it launched.
[2 images]
10 replies · 45 reposts · 170 likes · 53.1K views