John-Clark Levin

3.6K posts

John-Clark Levin

@JohnClarkLevin

Against self-summary for philosophical reasons.

Ojai, California · Joined April 2009
3K Following · 945 Followers
John-Clark Levin@JohnClarkLevin·
Why don't mathematicians have gas fireplaces? You might guess it's because they prefer to use a natural log... But the real reason is that they just got hired out of academia by a frontier #AI lab and are living out of an Airbnb studio while their SF townhouse is in escrow.
Greg Burnham@GregHBurnham·
“Just enjoy the summer” sort of applies to the other RSI too, though
Greg Burnham@GregHBurnham·
RSI? (For some reason I have interpreted your question as being about MIT’s prestigious high school summer program, attendance at which is often considered a ticket to top colleges.) Idk man, I think the effects are pretty confounded. Only apply if you’d really enjoy it.
John-Clark Levin@JohnClarkLevin·
Also hilarious that a significant fraction of the “introspection is weak and gay” crowd has for years been pretending to read Marcus Aurelius.
John-Clark Levin@JohnClarkLevin·
“Introspection is actually bad” is such a wild place for the discourse to have gone. A dystopian speculative fiction author would get absolutely ROASTED for something that on-the-nose.
John-Clark Levin@JohnClarkLevin·
Let’s stop arguing over which frontier #AI lab is the best. Each has its own specialty:
Google leads on biology. 🧬
OpenAI leads on math. 🧮
Anthropic leads on coding. 💻
xAI leads on praising Hitler. 🤦‍♂️
John-Clark Levin@JohnClarkLevin·
First night in a long time that I wasn’t up working at 2:00 AM! 😅
John-Clark Levin@JohnClarkLevin·
@S_OhEigeartaigh But it’s maximally broad under the law. It just looks narrow compared to the outrageously illegal action asserted in the tweet.
Seán Ó hÉigeartaigh@S_OhEigeartaigh·
More importantly, the DoW's supply chain risk application is on the narrower end. This is still a hugely inappropriate use of this measure, though, and Anthropic are right to challenge it legally IMO.
John-Clark Levin reposted
John-Clark Levin@JohnClarkLevin·
Post DPA threat against @AnthropicAI, I assess the chances that the first #AGI will be deployed under each of the following scenarios are:
National project: 7%
Full nationalization: 16% (+1%)
Soft nationalization: 45% (+7%)
Private control: 30% (-8%)
International project: 2%
John-Clark Levin reposted
Ben (no treats)@andersonbcdefg·
let me put this in terms you might understand better: the DoD is telling anthropic they have to bake the gay cake
John-Clark Levin@JohnClarkLevin·
An all-timer by @TheZvi: "If they really are asking to also be given special no-safeguard models, I don’t think that’s something Anthropic or any other lab should be agreeing to do for reasons well-explained by, among others, @deanwball, Benjamin Franklin and James Cameron."
John-Clark Levin reposted
Seán Ó hÉigeartaigh@S_OhEigeartaigh·
My own thought: the Pentagon's supply chain risk threat (significance detailed well by Dean, below) to Anthropic should be seen as a Rubicon crossing moment by the AI industry. The other companies should be saying no: this development transcends commercial competition and we oppose it. Where this leads if followed through doesn't seem good for any of them. If none of them speak up, it seems to me the prospects of meaningful cooperation between them on safe development of superintelligence (whether for America's best interests, or the world's) can almost be ruled out.
Dean W. Ball@deanwball

If DoW and Anthropic can’t agree on terms of business, then… they shouldn’t do business together. I have no problem with that. But a mere contract cancellation is not what is being threatened by the government. Instead it is something broader: designation of Anthropic as a “supply chain risk.” This is normally applied to foreign-adversary technology like Huawei. In practice, this would require *all* DoW contractors to ensure there is no use of Anthropic models involved in the production of anything they offer to DoW. Every startup and every Fortune 500 company alike.

This designation seems quite escalatory, carrying numerous unintended consequences and doing potential significant damage to U.S. interests in the long run. I hope the two organizations can work out a mutually agreeable deal. If they can’t, I hope they agree to peaceably part ways. But this really needn’t be a holy war.

Anthropic isn’t Google in 2018; they have always cared about national security use of AI. They were the most enthusiastic AI lab to offer their products to the national security apparatus. Is Anthropic run by Democrats whose political messaging sometimes drives me crazy? Sure. But that doesn’t mean it’s wise to try to destroy their business.

This administration believes AI is the defining technology competition of our time. I don’t see how tearing down one of the most advanced and innovative AI startups in America helps America win that competition. It seems like it would straightforwardly do the opposite.

The supply chain risk designation is not a necessary move. Cheaper options are on the table. If no deal is possible, cancel the contract, and leverage America’s robustly competitive AI market (maintained in no small part by this administration’s pro-innovation stance) to give business to one or more of Anthropic’s several fierce competitors.

John-Clark Levin@JohnClarkLevin·
As of 2026, humanoid #robotics is bottlenecked much more by #AI than by hardware. That is, 2031 AI in 2026 hardware would be vastly superior to 2026 AI in 2031 hardware, and would likely be sufficient to do enormously valuable physical work at acceptable reliability.
John-Clark Levin@JohnClarkLevin·
As I've said before, after so many years of being in the extreme short-timelines tail of the distribution, it feels **deeply** weird that "#AGI is coming in 2029" is now a skeptical, hype-deflating take.
John-Clark Levin@JohnClarkLevin·
What it means for a problem to become better theorized: 2.5 years ago, we were still talking about AI recursive self-improvement in the abstract. Now, thanks to work by @TomDavidsonX, @EpochAIResearch, and @METR_Evals, we have detailed mechanistic models of AI R&D automation.
John-Clark Levin@JohnClarkLevin·
@glennmid10001 Yes, political destabilization is likely. And we aren’t very stable to begin with.
Glenn Middleton@glennmid10001·
@JohnClarkLevin I worry people will panic in an age of advanced AI. Think of all the disgruntled ex-employees laid off due to AI.
John-Clark Levin@JohnClarkLevin·
@larryelder Larry, this is crass ragebait. You're so much better than this. Please stop.