Ray Lillywhite
@LillywhiteRay
488 posts · 🇹🇼 · Joined March 2019
871 Following · 39 Followers
Ray Lillywhite @LillywhiteRay
@ZeffMax Oh, sorry, I thought you were implying his views were shaped by groups like METR, but you’re just pointing that out to back up the statement about him being deeply embedded in AI discourse. Makes sense
0 replies · 0 reposts · 1 like · 17 views
Max Zeff @ZeffMax
It seems much more likely this guy was deeply embedded in the AI discourse for a while, reading stuff from industry groups like METR and chatting in AI safety activism forums
1 reply · 1 repost · 0 likes · 130 views
Ray Lillywhite @LillywhiteRay
@firstadopter @sama Dario also knew that the likely outcome would be not having enough compute. But you don’t risk going bankrupt when things take a bit longer than you expect. x.com/lillywhiteray/…
Ray Lillywhite @LillywhiteRay

@deanwball @austinc3301 That’s the expected outcome. If you put all your money into S&P call options for a year, the most likely outcome is that your investments do great. That doesn’t mean that it was a good idea to risk losing everything

0 replies · 0 reposts · 0 likes · 599 views
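Ray’s analogy above is easy to make concrete with a toy Monte Carlo. This is a sketch under made-up assumptions (index returns ~ N(8%, 16%), a one-year at-the-money call costing 8% of spot, the whole portfolio in calls); none of these numbers come from the thread:

```python
# Toy Monte Carlo for the "all-in on S&P call options" analogy above.
import random

def simulate(n=100_000, mu=0.08, sigma=0.16, premium=0.08):
    total_loss = 0
    multiples = []
    for _ in range(n):
        r = random.gauss(mu, sigma)          # index return over the year (assumed)
        payoff = max(r, 0.0)                 # at-the-money call payoff per $1 of spot
        if payoff == 0.0:
            total_loss += 1                  # market flat or down: calls expire worthless
        multiples.append(payoff / premium)   # return multiple on the premium paid
    return sum(multiples) / n, total_loss / n

mean_multiple, p_wipeout = simulate()
print(f"mean multiple on premium: {mean_multiple:.2f}x")  # ~1.4x on average
print(f"chance of losing it all:  {p_wipeout:.0%}")       # ~31% of runs
```

The average outcome looks great while roughly a third of runs end at zero, which is exactly the gap Ray is pointing at between the likely outcome and an acceptable risk.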
tae kim @firstadopter
It’s obvious Anthropic vastly underestimated its compute needs, which are expanding much faster than expected. Dario is on the record multiple times describing OpenAI as YOLO, recklessly buying too much capacity. But now it looks like @sama was right all along.
[3 images attached]
Marc Andreessen 🇺🇸 @pmarca

“This raises an obvious question: how much of Anthropic’s reluctance to make Mythos widely available is due to security concerns, as opposed to the more prosaic reality that Anthropic simply doesn’t have enough compute?” @stratechery @benthompson

21 replies · 17 reposts · 296 likes · 68.7K views
Ray Lillywhite @LillywhiteRay
@jachiam0 This is a wild conclusion you came to. It could only make sense if AI safety concerns were completely unfounded, which is clearly not true, by admission of nearly every AI leader.
0 replies · 0 reposts · 4 likes · 119 views
Joshua Achiam @jachiam0
"But I said violence was wrong and it doesn't work!" You are smart enough to know the psychological impact of your claims about existential risk on people who have different levels of restraint.
5 replies · 3 reposts · 45 likes · 3.4K views
Joshua Achiam @jachiam0
When you deny over and over that there might be a potential causal link between the extremity of your side's rhetoric and violence against people working in AI, when you try to minimize and obfuscate about it, you invite it.
1 reply · 2 reposts · 27 likes · 1.8K views
Ray Lillywhite retweeted
Jordan Braunstein @jbraunstein914
I'm sorry, but this is ridiculous. The implication is that people should censor sincere assessments of our situation bc it might indirectly incite unwell individuals to violence? By that standard, nothing could ever be called an emergency, as it might cause panic, so to avoid panic, we must pretend emergencies don’t happen.
0 replies · 1 repost · 14 likes · 170 views
Ray Lillywhite @LillywhiteRay
@firstadopter Here’s a table you can use to figure it out:
Time | Nerfing models?
————————————
Any | No
0 replies · 0 reposts · 1 like · 201 views
Ray Lillywhite retweeted
Rob Wiblin @robertwiblin
AI companies are notorious for refusing to release AI models as a clever marketing ploy to build hype. In fact it's hard to get them to release models at all. We're still waiting for ones they trained years ago. Crazy but true.
4 replies · 3 reposts · 25 likes · 2.4K views
Ray Lillywhite @LillywhiteRay
@JonathanDBos @carl_feynman when you're already committing chart crimes of this severity, might as well throw in a descending axis to avoid a second shape
0 replies · 0 reposts · 2 likes · 20 views
JB @JonathanDBos
@carl_feynman that's not true, you can have top left/bottom right or top right/bottom left. so the graph still gives you a single bit of information.
1 reply · 0 reposts · 4 likes · 73 views
Ray Lillywhite @LillywhiteRay
@sisyphus808 @profjoeyg @AnthropicAI How can it even be assumed that the creators of this product were aware of the research (which is not even for the same setup)? And notice that there are no citations for the myriad of other technologies that this feature relies on. Because this is not a research paper.
0 replies · 0 reposts · 0 likes · 24 views
sisyphus @sisyphus808
@LillywhiteRay @profjoeyg @AnthropicAI weird attitude to have against someone who is releasing public research for all our benefit with no proper credit. if everyone had your attitude, we wouldn't have good things at all. good day.
1 reply · 0 reposts · 0 likes · 23 views
Joey Gonzalez @profjoeyg
I am excited to see @AnthropicAI exploring our work on advisor models (arxiv.org/abs/2510.02453). It's worth noting that we examined the opposite setting, in which the advisor is the weaker (but parametrically trainable) model; this shows that we can actually lift the performance of frontier models. We also examined the strong-advisor setting but found that the strong advisor often tries to do the task itself (I can relate to this). That said, can we at least get a citation :-)? cc: @pgasawa, @aczhu1326, @abby_k_oneill, @AlexGDimakis, @matei_zaharia
Claude @claudeai

We're bringing the advisor strategy to the Claude Platform. Pair Opus as an advisor with Sonnet or Haiku as an executor, and get near Opus-level intelligence in your agents at a fraction of the cost.

15 replies · 32 reposts · 397 likes · 61.8K views
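For readers wondering what this pairing looks like in code, here is a minimal sketch using the Anthropic Python SDK. The model IDs and the plan-then-execute prompt scheme are illustrative assumptions, not the Claude Platform's actual advisor API:

```python
# Sketch of an advisor/executor pair: a strong model plans, a cheap one executes.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

ADVISOR = "claude-opus-4-1"    # hypothetical strong "advisor" model id
EXECUTOR = "claude-haiku-4-5"  # hypothetical cheap "executor" model id

def solve(task: str) -> str:
    # 1) Ask the advisor for a plan, not a solution.
    advice = client.messages.create(
        model=ADVISOR,
        max_tokens=512,
        messages=[{"role": "user", "content":
                   f"Outline a short plan for this task. Do not solve it:\n{task}"}],
    ).content[0].text
    # 2) Let the executor do the work, guided by the plan.
    return client.messages.create(
        model=EXECUTOR,
        max_tokens=2048,
        messages=[{"role": "user", "content":
                   f"Task:\n{task}\n\nFollow this plan from a senior advisor:\n{advice}"}],
    ).content[0].text

print(solve("Write a SQL query that finds duplicate customer emails."))
```

Asking the advisor for a plan rather than a solution is deliberate here: it sidesteps the failure mode Joey describes, where the strong advisor just tries to do the task itself.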
Ray Lillywhite @LillywhiteRay
@xw33bttv They aren't releasing to enterprises. They're specifically releasing it to companies and orgs that can help find and patch vulnerabilities before models of this capability are available to bad actors. Have you even considered what it'd be like if they didn't do this?
0 replies · 0 reposts · 0 likes · 11 views
Lex @xw33bttv
Anthropic have done so many things right. Gatekeeping model access, though, to essentially the wealthy and powerful (which is what enterprise is) doesn't really align with their mission statement or even the general good of their corporate structure (PBC). I feel like when access is now walled and gardened, that's how memes like the permanent underclass truly come to fruition.
29 replies · 15 reposts · 140 likes · 4.6K views
Ray Lillywhite retweeted
“paula” @paularambles
i think they misunderstood gas fees
[image attached]
1 reply · 3 reposts · 91 likes · 2.7K views
Ray Lillywhite @LillywhiteRay
@twbuilds @Nrcoope It’s fair because they invested in something that needed an insane amount of capital to become successful, and they were not the ones that invested that insane amount of capital
0 replies · 0 reposts · 0 likes · 32 views
Tom W @twbuilds
@Nrcoope If my math is right, a $10M seed valuation would be a 100X per billion, so the pre-dilution return would be 8420x? If this math is correct, how is this fair to angel investors?
3 replies · 0 reposts · 4 likes · 4.9K views
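For what it's worth, the multiples in the tweet are internally consistent. A quick worked check (the ~$84.2B exit valuation is inferred from the tweet's own 8420x figure, not an outside number, and dilution is ignored as stated):

```python
# Checking the arithmetic in the tweet above.
seed_valuation = 10e6                  # $10M seed valuation
print(1e9 / seed_valuation)            # 100.0 -> "100X per billion"

implied_exit = 8420 * seed_valuation   # what 8420x pre-dilution implies
print(implied_exit / 1e9)              # 84.2 -> an ~$84.2B valuation
```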
Ray Lillywhite @LillywhiteRay
@thenanyu @AkanshaDugad Except Linear can reasonably guess what you might be looking for. Chat/prompt/search can be the most prominent thing, but intelligent information surfacing should be incorporated too
0 replies · 0 reposts · 2 likes · 52 views
Ray Lillywhite @LillywhiteRay
@deanwball @austinc3301 That’s the expected outcome. If you put all your money into S&P call options for a year, the most likely outcome is that your investments do great. That doesn’t mean that it was a good idea to risk losing everything
0 replies · 0 reposts · 4 likes · 1.1K views
Dean W. Ball @deanwball
Seems like, for all Dario’s recent implicit mockery, the OpenAI “yolo” approach to the AI infrastructure buildout is performing better than the somewhat more cautious strategy of Anthropic. As a whole, the U.S. is probably under-building both data centers and fabs.
Herbie Bradley @herbiebradley

@doodlestein they have no choice but to do some amount of demand destruction since they are heavily compute bottlenecked for at least the next year or so, if they want to keep training new models as well as increasing revenue

23 replies · 29 reposts · 619 likes · 113.1K views
Miranda Nazzaro @mirandanazzaro
News: Anthropic forms PAC called "AnthroPAC," per FEC filing this morning. The PAC is funded by employees and will donate to candidates from both parties ahead of midterms.
[image attached]
89 replies · 48 reposts · 265 likes · 1.6M views
Ray Lillywhite @LillywhiteRay
@thsottiaux Yes, even small multipliers (0.8-1.2) would probably be enough to incentivize behavior changes when possible. And the people using a single Codex instance during work hours are probably not hitting limits anyway
0 replies · 0 reposts · 0 likes · 44 views
Tibo @thsottiaux
With Codex there is quite a gulf in load between peak and off-peak times, and we would like to achieve a smoother traffic pattern, as that would be a more optimal use of our compute. We have ideas, but curious what you all think we should do? Would more usage during off-peak and a surge multiplier during peak times make sense?
795 replies · 42 reposts · 1.7K likes · 205.3K views
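A minimal sketch of the multiplier idea from this exchange, assuming usage is metered in quota units. The peak window and the 0.8/1.2 factors are illustrative (Ray's suggested range), not anything OpenAI has announced:

```python
# Time-of-day metering: the same tokens draw more quota at peak, less off-peak.
PEAK_HOURS = range(9, 18)  # hypothetical peak window, local time

def quota_charged(tokens: int, hour: int) -> float:
    multiplier = 1.2 if hour in PEAK_HOURS else 0.8
    return tokens * multiplier

print(quota_charged(1_000, hour=11))  # peak:     1200.0 quota units
print(quota_charged(1_000, hour=23))  # off-peak:  800.0 quota units
```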
Ray Lillywhite retweeted
Cosmos Raj @cosmos_raj
Breaking news: Anthropic buys the All-In podcast just to shut it down. Dario quoted as saying: “this isn’t even about new media I just want to stop seeing them on my timeline”
[2 images attached]
88 replies · 191 reposts · 6.5K likes · 580.7K views
David Krueger 🦥 ⏸️ ⏹️ ⏪
Interesting argument. I'll think about it more, but I'm still not very sold on such concerns. Partially, I just expect that it wouldn't be so extreme and some people will still learn and invent and create. Or AI can also create new knowledge (a bit weird to assume otherwise). And it doesn't seem so bad if we get slower at creating new knowledge; actually, I think a big issue for humanity has been that we're too slow at disseminating knowledge relative to how quickly it's created (and thus we end up with people who have vastly different understandings of the world simply due to ignorance; there's too much to know). Mostly, though, it just fails the smell test for me.
Muhammad Ayan @socialwithaayan

MIT's Nobel Prize-winning economist just published a model with one of the most alarming conclusions in the AI literature so far. If AI becomes accurate enough, it can destroy human civilization's ability to generate new knowledge entirely. Not gradually degrade it. Collapse it.

The paper is called AI, Human Cognition and Knowledge Collapse. Authors: Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar. MIT. Published February 20, 2026. Acemoglu won the Nobel Prize in Economics in 2024. He is not a doomer blogger. He is the most cited economist of his generation, and his models tend to be taken seriously by the people who set policy.

Here is the argument in plain terms. Human knowledge is not just a collection of facts stored in individuals. It is a living system that requires continuous reproduction. People learn things. They apply them. They teach others. They build on prior work to generate new work. The entire engine of science, medicine, technology, and innovation runs on this cycle of active human cognition.

What happens when AI provides personalized, accurate answers to every question people would otherwise have to learn themselves? Individually, each person is better off. They get correct answers faster. They make fewer errors. Their immediate outcomes improve. But they stop doing the cognitive work that sustains the collective knowledge base.

Acemoglu's model shows this produces a non-monotone welfare curve. Modest AI accuracy: net positive. AI helps at the margin, humans still do enough learning to sustain collective knowledge, everyone gains. High AI accuracy: net catastrophic. AI is accurate enough that learning yourself feels unnecessary. Human learning effort collapses. The knowledge base that AI was trained on is no longer being refreshed or extended. Innovation stalls. Then stops.

The model proves the existence of two stable steady states. A high-knowledge steady state where human learning and AI assistance coexist productively. A knowledge-collapse steady state where collective human knowledge has effectively vanished; individuals still receive good personalized AI recommendations, but the shared intellectual infrastructure that enables new discoveries is gone.

And the transition between them is not gradual. It is a threshold effect. Below a certain level of AI accuracy, society stays in the high-knowledge equilibrium. Above that threshold, the system tips. And once it tips, the collapse is self-reinforcing. Because the people who would have learned the things that would have pushed the frontier forward never learned them. And the AI cannot push the frontier on its own. It can only recombine what humans already knew when it was trained.

The dark irony at the center of the model: the AI does not fail. It keeps giving accurate, personalized, useful answers right through the collapse. From the individual's perspective, nothing looks wrong. You ask a question, you get a correct answer. But the collective capacity to ask questions nobody has asked before, to build the frameworks that generate new knowledge rather than retrieve existing knowledge, that capacity is quietly disappearing.

Acemoglu has been the most prominent mainstream economist skeptical of transformative AI productivity claims. His prior work found that AI's actual measured productivity gains were much smaller than the technology industry projected. This paper is a different kind of warning. Not that AI will fail to deliver promised gains. But that if it succeeds too completely, it will undermine the human cognitive infrastructure that makes long-run progress possible at all.

The welfare effect is non-monotone. That is the sentence worth sitting with. Helpful until it is not. Beneficial until it crosses a threshold. And past that threshold, the same accuracy that made it so useful is precisely what makes it devastating.

Every student who uses AI instead of working through a problem is a data point. Every researcher who uses AI instead of developing intuition is a data point. Every generation that grows up with accurate AI answers and no incentive to develop deep domain knowledge is a data point. Individually rational. Collectively catastrophic.

Acemoglu proved this is not just a cultural concern or a vague anxiety about screen time. It is a mathematically coherent equilibrium that a sufficiently accurate AI system will push society toward. And there is no visible warning sign before the threshold is crossed.

6 replies · 1 repost · 24 likes · 4.6K views
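The threshold-and-two-steady-states claim is easy to visualize with a toy dynamical system. This is a sketch in the spirit of the summary above, not the paper's actual equations; the effort = 1 - a link and every constant are assumptions:

```python
# Toy bistable knowledge dynamics: learning effort falls as AI accuracy rises.
def step(K, a, r=1.0, delta=0.05, dt=0.1):
    effort = 1.0 - a                                 # assumed effort/accuracy link
    dK = r * effort * K * K * (1.0 - K) - delta * K  # K^2 term makes collapse stable
    return max(0.0, K + dt * dK)

def long_run(a, K=0.9, steps=20_000):
    for _ in range(steps):
        K = step(K, a)
    return K

for a in (0.5, 0.75, 0.85):
    print(f"AI accuracy {a:.2f} -> long-run knowledge {long_run(a):.3f}")
```

With these constants the tipping point sits at a* = 1 - 4*delta/r = 0.8: at accuracy 0.5 and 0.75 knowledge settles at a high steady state (about 0.89 and 0.72), while at 0.85 the same dynamics decay all the way to zero, even though each individual query is still being answered well.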