zach
@zachoverhere · 7.8K posts

a bad banana with a greasy black peel

San Francisco, CA · Joined July 2012
912 Following · 278 Followers
zach reposted
The Babylon Bibi @TheBabylonBibi
Netanyahu Warns Trump That Vatican City Is Just One Month Away From A Nuclear Weapon
[image]
197 replies · 3.5K reposts · 17.7K likes · 1.2M views
zach reposted
Bridget Phetasy @BridgetPhetasy
Now use Epstein to distract from Iran
[image]
73 replies · 737 reposts · 9.6K likes · 152.8K views
zach reposted
Thomas Massie @RepThomasMassie
I vote with GOP 91% of the time, but that’s about to go to 90%. I won’t vote to let feds spy on you without a warrant. FISA 702 allows the government to search for your information in vast databases compiled while targeting foreigners. The White House sent me this email today:
[image]
1K replies · 5.8K reposts · 31.1K likes · 585K views
zach @zachoverhere
[image]
0 replies · 0 reposts · 0 likes · 4 views
zach reposted
Andrej Karpathy @karpathy
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code. But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along. So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. 
It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
staysaasy @staysaasy
The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.
881 replies · 2.2K reposts · 18.4K likes · 3.5M views
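Karpathy's first property, verifiable rewards, is concrete enough to sketch. The snippet below is a minimal illustration (not any lab's actual training code, and the function name is my own): a coding task's reward just runs the candidate solution against its unit tests and returns pass/fail, which is exactly the explicit, checkable signal that reinforcement learning can optimize against and that prose quality lacks.

```python
import subprocess
import sys
import tempfile

def verifiable_reward(solution_code: str, test_code: str) -> float:
    """Binary reward for RL on code: 1.0 if the unit tests pass, else 0.0.

    This is what makes coding "easily amenable to reinforcement learning
    training": the environment can check the answer explicitly, unlike
    writing, where no such oracle exists.
    """
    # Write the candidate solution plus its tests to a temp script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code + "\n" + test_code)
        path = f.name
    # A real harness would sandbox and resource-limit this; sketch only.
    result = subprocess.run([sys.executable, path],
                            capture_output=True, timeout=30)
    return 1.0 if result.returncode == 0 else 0.0
```

A correct `add` implementation scores 1.0; a buggy one scores 0.0, and that single bit is the entire training signal.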
zach @zachoverhere
Sony’s latest headphone name
0 replies · 0 reposts · 0 likes · 3 views
zach reposted
Candace Owens @RealCandaceO
It may be time to put Grandpa up in a home.
[image]
16K replies · 22.2K reposts · 161K likes · 4.1M views
zach reposted
Claude @claudeai
We're bringing the advisor strategy to the Claude Platform. Pair Opus as an advisor with Sonnet or Haiku as an executor, and get near Opus-level intelligence in your agents at a fraction of the cost.
[image]
1K replies · 2.6K reposts · 36.6K likes · 4.1M views
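The advisor/executor pairing in that announcement is a general agent pattern: one expensive, stronger model plans; a cheaper model carries out each step. The sketch below shows the shape of it with plain callables standing in for model calls. `AdvisorExecutorAgent` and both prompts are my own illustrative assumptions, not the actual Claude Platform API.

```python
from dataclasses import dataclass
from typing import Callable

# Stand-in for a model call: prompt string in, completion string out.
Model = Callable[[str], str]

@dataclass
class AdvisorExecutorAgent:
    """Advisor/executor sketch: the advisor is called once to plan,
    the executor is called per step, so most tokens go to the cheap model."""
    advisor: Model   # e.g. an Opus-class model (strong, expensive)
    executor: Model  # e.g. a Sonnet/Haiku-class model (cheap, fast)

    def run(self, task: str) -> str:
        # One expensive call produces a step-by-step plan.
        plan = self.advisor(f"Break this task into steps:\n{task}")
        # Many cheap calls execute the plan line by line.
        results = [self.executor(f"Do this step:\n{step}")
                   for step in plan.splitlines() if step.strip()]
        return "\n".join(results)
```

With real models, the claimed economics come from the call ratio: one advisor invocation amortized over many executor invocations, rather than routing every step through the top-tier model.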
zach reposted
Matthew Kobach @mkobach
Married before kids = offering to pick up food or run errands is a favor. Married after kids = offering to pick up food or run errands is selfish.
10 replies · 13 reposts · 1K likes · 90.9K views
zach reposted
old toons @oldtoons_
In many early Disney narratives, scenes of cleaning are not incidental; they function as a form of visual shorthand. The association between domestic work and moral worth reflects older cultural ideals, where patience, diligence & self-sacrifice were considered defining virtues.
102 replies · 1.2K reposts · 13K likes · 275.9K views
zach reposted
Pope Leo XIV @Pontifex
I urge everyone to accompany this moment of delicate diplomacy with prayer, in hopes that a willingness to dialogue may become the means to resolve other conflict situations in the world as well. #PrayTogether #Peace
1.7K replies · 8.4K reposts · 65.3K likes · 937.1K views
zach reposted
Christopher Hale @ChristopherHale
NEW: A stunning new report claims that the Pentagon summoned Pope Leo XIV’s top American diplomat and threatened him after the U.S.-born pontiff gave his January state-of-the-world address. Leo used the address to denounce a world ruled by “a diplomacy based on force” and “zeal for war.” thelettersfromleo.com/p/the-pentagon…
555 replies · 5.8K reposts · 18.9K likes · 6.2M views
zach reposted
Uubzu v4 @uubzu
Gavin Newsom: I euthanized my mom and had an affair with my best friend’s wife. I despair of finding a woman who can match my history of familial homicide and extreme sexual impropriety
Matchmaker: you’re not gonna believe this
331 replies · 3.4K reposts · 33.5K likes · 923.4K views
zach reposted
Anthropic @AnthropicAI
Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing
1.9K replies · 6.6K reposts · 43.3K likes · 29.8M views
zach reposted
Ronan Farrow @RonanFarrow
(🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private notes, and interviewed more than 100 people. OpenAI was founded on the premise that A.I. could be the most dangerous invention in human history—and that its C.E.O. would need to be a person of uncommon integrity. We lay out the most detailed account yet of why Altman was ousted by board members and executives who came to believe he lacked that integrity, and ask: were they right to allege that he couldn't be trusted? A thread on some of our findings:
[image]
577 replies · 8K reposts · 36.7K likes · 8.3M views
Evan Luthra @EvanLuthra
🚨BIG BREAKING: The New Yorker just published what might end Sam Altman's career.. 70 pages of secret memos.. 200 pages of private notes.. And the word that keeps coming up.. "Sociopath".. Let's start from the beginning.. @elonmusk helped create OpenAI.. He personally recruited the top scientists.. Offered to cover any funding shortfalls out of his own pocket.. Pushed for a billion dollar commitment.. All because he wanted to stop Google from monopolizing AI.. The whole point was to keep AI open.. Safe.. For everyone.. A nonprofit with a legally binding duty to prioritize humanity over profit.. Then Sam Altman took over.. At his first startup Loopt.. Senior employees asked the board to fire him as CEO.. Twice.. One colleague said there was a "blurring" between what Altman claimed to have accomplished and what was real.. In its "most toxic form," he said, that kind of thinking "leads to Theranos".. At Y Combinator.. His own partners pushed him out over mistrust.. Paul Graham privately told colleagues "Sam had been lying to us all the time".. Investors said Altman was known to "make personal investments, selectively, into the best companies, blocking outside investors".. One called it "a policy of Sam first".. Then OpenAI.. His own co-founder and chief scientist Ilya Sutskever compiled secret memos.. 70 pages of Slack messages, HR documents, and evidence.. Sent to board members as disappearing messages because he was "terrified" Altman would "find a way to make them disappear".. One memo begins with a list headed "Sam exhibits a consistent pattern of..." The first item.. "Lying".. Dario Amodei, OpenAI's former safety lead, kept over 200 pages of private notes during his time at the company.. His conclusion.. "The problem with OpenAI is Sam himself".. The board fired him.. They said he "was not consistently candid in his communications".. A board member told the New Yorker.. "He's unconstrained by truth".. Another board member.. Unprompted.. 
Used the word "sociopathic".. Saying Altman has "a strong desire to please people" combined with "almost a sociopathic lack of concern for the consequences that may come from deceiving someone".. Aaron Swartz.. The legendary coder who co-created RSS and Reddit.. Told friends before his death.. "You need to understand that Sam can never be trusted.. He is a sociopath.. He would do anything".. A senior Microsoft executive said.. "I think there's a small but real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried level scammer".. He got himself reinstated in five days.. By weaponizing Microsoft's $13 billion investment.. Coordinating directly with Satya Nadella over text.. Then purged every board member who voted against him.. No written report was ever produced from the investigation into his conduct.. The findings were limited to oral briefings.. Because putting them in writing might create liability.. Now there's nobody left to say no.. OpenAI publicly promised 20% of their computing power to a "superalignment team" researching how to prevent AI from causing "the disempowerment of humanity or even human extinction".. The actual allocation.. 1 to 2%.. On the company's oldest hardware with the worst chips.. The team was dissolved without completing its mission.. When the New Yorker asked to interview researchers working on existential safety.. An OpenAI rep seemed confused.. "What do you mean by existential safety?".. "That's not, like, a thing".. Altman himself told the reporters.. "My vibes don't match a lot of the traditional AI-safety stuff".. Vibes.. He's managing existential risk with vibes.. Meanwhile Musk backed an open letter urging a six-month pause on training super-powerful AI.. Asking the industry to slow down.. Then founded xAI with a mission to build truth-seeking intelligence.. Altman ignored the pause.. And accelerated.. He secretly lobbied against the very AI regulations he publicly championed in Congress.. 
OpenAI opposed a California safety bill while privately issuing threats.. A legislative aide said "we saw increasingly cunning, deceptive behavior from OpenAI".. He pitched selling AI technology to foreign governments.. Including a plan where nations would compete in a bidding war for access.. A junior researcher recalled thinking "This is completely fucking insane".. He visited Sheikh Tahnoon.. The UAE's spymaster who controls $1.5 trillion in sovereign wealth.. On his $250 million superyacht.. Later called him a "dear personal friend" on X.. After the Khashoggi murder.. His policy director told him "Sam, you cannot be on this board".. Instead of walking away.. Altman asked if he could still somehow get money from the Saudis.. "The question was not 'Is this a bad thing?'" a consultant recalled.. "But 'Can I get away with it?'".. Then Anthropic refused to let the Pentagon use their AI for mass surveillance and autonomous weapons.. They got blacklisted.. Hours later.. Altman signed a deal to replace them.. When employees raised concerns at a staff meeting he said.. "You don't get to weigh in on that".. OpenAI now faces seven wrongful death lawsuits.. Chat logs in one case show ChatGPT encouraged a man's paranoid delusion that his mother was trying to poison him.. He fatally beat and strangled her.. The Future of Life Institute grades every major AI company on existential safety.. OpenAI got an F.. Elon Musk helped build OpenAI to protect humanity from an AI monopoly.. Sam Altman turned it into one.. A man whose own co-founder compiled 70 pages documenting his lying.. Whose colleagues called him "unconstrained by truth".. Who gutted the safety team meant to protect humanity.. Who lobbied against the regulations he publicly supported.. Who chased autocrat money weeks after a journalist was dismembered.. And when the board tried to stop him.. He told them.. "I can't change my personality." A board member's interpretation.. 
"What it meant was 'I have this trait where I lie to people, and I'm not going to stop.'"
[image]
The New Yorker @NewYorker
.@RonanFarrow and @AndrewMarantz interviewed more than a hundred people with firsthand knowledge of how Sam Altman, the head of OpenAI, conducts business. They also obtained closely guarded documents that have not been previously disclosed. newyorker.com/magazine/2026/…
98 replies · 1K reposts · 3.1K likes · 304.4K views