Michael 🔸

1.3K posts

@mjkerrison

Executive Director, @aisafetyanz | Let's make sure this goes well, alright?

Joined January 2022
393 Following · 104 Followers
Pinned Tweet
Michael 🔸@mjkerrison·
Let's tackle poverty, animal suffering, and the disenfranchisement of future generations!
Michael 🔸 tweet media
0 replies · 0 reposts · 16 likes · 1.5K views
Michael 🔸@mjkerrison·
@NateWitkin If we're talking strictly about speed, sure, it'll be limited by how fast it can acquire all this currently distributed knowledge. But if your argument is "there is stuff AI will never be able to learn despite existence proof from humans", then the Hayek stuff is irrelevant
0 replies · 0 reposts · 0 likes · 5 views
Nathan Witkin@NateWitkin·
@mjkerrison Not sure of your point. In a gazillion different ways, exceedingly few of which we have reason to think AI can replicate?
1 reply · 0 reposts · 0 likes · 15 views
Michael 🔸 retweeted
FleetingBits@fleetingbits·
some thoughts on claude mythos
1) claude mythos is expected to be a model tier above opus, both more expensive and with greater capabilities
2) the model is supposedly going to be rolled out to a small number of early customers with a focus on cybersecurity
3) the model is going to be high risk or critical risk equivalent for cybersecurity
4) i believe this makes it the first model to really begin to show how labs will behave around models with actually dangerous capabilities
5) in addition, mythos is likely one of what will be a number of much larger models being trained and released over the next year
6) i think there has been some trepidation to train larger models at least since the commercial failure of gpt-4.5
7) however, there is 8x as much compute available in the world today as at the start of 2024 so there is much more capacity
8) and scaling is not dead and larger models are probably going to be more sample efficient in rl so i think we should expect more reasoning gains
9) also though, once these capabilities exist, they tend to be very distillable, so we should also expect very capable small models to follow (like o3 to o3-mini)
10) i think it is going to be a wild 2026; so much more compute will come online
fortune.com/2026/03/26/ant…
22 replies · 21 reposts · 543 likes · 110.9K views
Nathan Calvin@_NathanCalvin·
I am confused - I thought the 130 days was 130 days in a 365 day period? So his role didn't necessarily have to end eventually as long as he kept below that number of days each year he was in the role.
Nathan Calvin tweet media
Diego Areas Munhoz@Dareasmunhoz

DAVID SACKS no longer AI and Crypto Czar, he said in an interview with Bloomberg. Sacks ran out of his 130 days as a special government employee. He didn't count the days consecutively, but they were set to end eventually. He'll now only be co-chair of science and tech council

2 replies · 2 reposts · 18 likes · 3.7K views
Michael 🔸@mjkerrison·
@menhguin Wait, are you Trump or Coke or is Anthropic Trump or Coke?
0 replies · 0 reposts · 2 likes · 224 views
Luis Garicano 🇪🇺🇺🇦
Again the apocalypse from Amodei. Why don't you describe instead how wonderful it will be to have agents navigate bureaucracy for us, do our taxes, book our holidays, help us find fraudulent clauses in contracts, keep us healthy? Why this dumb emphasis on jobs lost?
9 replies · 15 reposts · 138 likes · 18.2K views
Michael 🔸@mjkerrison·
@soumitrashukla9 Do you consider they are simply telling the truth and it may be meaningfully bad for people to just hear and believe the rosy view?
0 replies · 0 reposts · 1 like · 279 views
Soumitra Shukla@soumitrashukla9·
I generally think this is also a massive missed PR opportunity. If AI Lab CEOs just talk about all the wonderful benefits that AI will bring to society instead of all the doom-and-gloom talk, I feel like people outside of Silicon Valley will also be much more supportive of AI, which they currently are not.
Luis Garicano 🇪🇺🇺🇦@lugaricano

Again the apocalypse from Amodei. Why don't you describe instead how wonderful it will be to have agents navigate bureaucracy for us, do our taxes, book our holidays, help us find fraudulent clauses in contracts, keep us healthy? Why this dumb emphasis on jobs lost?

4 replies · 4 reposts · 18 likes · 4.9K views
Michael 🔸 retweeted
Nate Soares ⏹️
Every top AI exec is worried about what happens if any other top AI exec creates superintelligence. I just go one further.
8 replies · 24 reposts · 424 likes · 13.5K views
Michael 🔸@mjkerrison·
I Acted Only By That Maxim I Would See Universalised (And I Liked It) - Katygorical Imperrytive
2 replies · 0 reposts · 6 likes · 54 views
Michael 🔸@mjkerrison·
@daviddiviny Let me immediately put it back on you and ask why you think that and what implications it has (here)? (Not as like a gotcha just as genuine interest)
1 reply · 0 reposts · 0 likes · 18 views
David Diviny@daviddiviny·
@mjkerrison Isn’t it also a corollary that more and more performance is coming from RL rather than pre-training?
1 reply · 0 reposts · 1 like · 14 views
Michael 🔸@mjkerrison·
Speaking as someone who lives, works, and does fieldbuilding in a middle power, their "compromise alignment" sounds pretty good to me. I think some of the ideas will be pretty familiar to e.g. people in the cooperative AI space, but a good single piece to point to.
William MacAskill@willmacaskill

Due to Claude's Constitution and OpenAI's model spec, more people are paying attention to the characters of the AIs that companies are building, and the rules they follow. Should AIs be wholly obedient, or have their own ethical code? What should they refuse to help with? Should they tell you what you want to hear, or push back when you're off base?

I think the nature of frontier AIs' characters is among the most important features of the transition to a post-superintelligence world. In a new article with @TomDavidsonX, I explain why.

History shows the importance of individual character. Stanislav Petrov chose to ignore a false nuclear alarm when protocol demanded he report it; the world avoided nuclear armageddon that day. Churchill refused to negotiate with Hitler after the fall of France, despite some strongly pushing him to do so.

And, as capabilities improve, AI systems will become involved in almost all of the world's most important decisions: advising leaders, drafting legislation, running organisations, and researching new technologies. AI character — how honest, cooperative, and altruistic these systems are, and the hard rules they follow — will affect all of it.

A general, aiming to stage a coup, instructs an AI to build a military unit loyal only to him. Does it comply, or refuse? Two countries are on the brink of conflict, each advised by AI systems. Do those AIs search for de-escalatory options, or are they bellicose? The cumulative effect of AIs' character traits across hundreds of millions of interactions, and in rare but critical moments, will have an enormous impact on the course of society.

The main counterargument to the importance of AI character is that competitive dynamics and human instructions will determine the range of AI characters we get, so there's little we can do today to affect it one way or the other. This is partly true, but the constraints are not binding. At the crucial moment, there might be just one leading AI company, facing none of the usual competitive pressures. Some decisions may have path-dependent outcomes, due to stickiness of training or user expectations. And there will, predictably, be many future conflicts over AI character. It's a safer world if we work through these tradeoffs ahead of time, before a crisis forces it.

AI character is most important in worlds where alignment gets solved. But it can affect the chance of AI takeover, too. Some styles of character training may make alignment easier; and some characters are more likely to make deals rather than foment rebellion, even if they have misaligned goals. Given how neglected the area is, too, I think work on AI character is among the most promising ways to help the intelligence explosion go well.

0 replies · 0 reposts · 1 like · 43 views
Michael 🔸@mjkerrison·
@_NathanCalvin I feel like > because they could slay 10000 men with the jawbone of an ass is a little bit facetious - more in line with your original interpretation!
0 replies · 0 reposts · 0 likes · 27 views
Nathan Calvin@_NathanCalvin·
You may have heard the term "jawboning" used in reference to Anthropic's allegations of informal government coercion in its lawsuit with the DoW. Now seems as good a time as ever to share that jawboning has perhaps my favorite/most unexpected etymology of any word in the English language.

First, what does jawboning mean? Here is a definition from the NYT in 1970 that still holds up today: "The word 'jawboning,' as used by most Government officials and businessmen these days, refers to public exhortations and/or implied threats by the Administration as a means of convincing business or labor to adopt certain attitudes and policies"

But where does the term come from? Initially, I suspected the origin was that it came from the fact that the government was engaged in speech, and moving its proverbial jawbone up and down to talk. But this is not the case!

The term actually comes from the biblical story of Samson, where Samson uses the jawbone of a donkey as a weapon to kill a thousand men. As Samson proclaims: "with the jaw of an ass have I slain a thousand men."

As a reporter for Barron's wrote in the 1970s about Jimmy Carter's admonition of businessmen to dissuade them from engaging in a price-hike: "It was said of Jimmy Carter, as of other presidents and their tame economists, that they were like Samson in the Bible, because they could slay 10,000 businesses with the jawbone of an ass."

I suppose the idea is that because of Samson's strength and the government's strength, they can use a small tool (a donkey's jaw, mere speech) to get much larger outcomes than expected. I hope other people found this as amusing and interesting as I did! Including a clip from a Cato blog post that lays out the history well, link in reply.
Nathan Calvin tweet media
3 replies · 0 reposts · 16 likes · 944 views
Michael 🔸@mjkerrison·
@AaronBergman18 No, I think that's right! And IME many args to the contrary are implicitly or explicitly rejecting the premise or denying the antecedent
0 replies · 0 reposts · 1 like · 76 views
Aaron Bergman 🔍 ⏸️ (in that order)
This is prob very dumb 101-level half question/half observation but: In the story where AI replaced ~all human labor and there’s no redistribution and just a few fantastically wealthy capital owners, won’t there not *be* an economy in the way we think about it today…
13 replies · 0 reposts · 38 likes · 3.1K views
Michael 🔸@mjkerrison·
@croissanthology I mean OAI was originally founded because none of them trusted Hassabis, right? I wonder if this was the Hassabis they were exposed to lol
0 replies · 0 reposts · 3 likes · 60 views
croissanthology@croissanthology·
This reads like an onion joke about mad scientist stereotypes, like youtube.com/watch?v=-Uq9pp… I would not have expected this from Hassabis. I thought he just liked games. Between this and "Grok will be aligned because it'll be curious about everything" I am not reassured by the stated motivations of most AI lab heads. I can't see this leading anywhere good. I want my AI lab heads firmly attached to keeping onto their mortal coil.
[embedded YouTube video]
Jama⏸️@feruell

I don't like that at all. I'm not happy to shuffle off my mortal coil, fuck that. I assumed Hassabis was more pragmatic.

2 replies · 1 repost · 23 likes · 2.1K views
Michael 🔸 retweeted
Nathan Labenz@labenz·
I agree with @TheZvi: it's flagrant, shameful defection to focus one's effort on escaping the "permanent underclass"

And it won't work anyway! Do you really think a few database records will save you when most humans aren't economically productive?

x.com/labenz/status/…
Nathan Labenz@labenz

"The reason people think of this as the end game is that they don't believe in the actual end game." @TheZvi says that the Anthropic vs DoW conflict marks the beginning of the middle of the AI story, but the real end-game will be much crazier still. 🔗 ↓

9 replies · 14 reposts · 125 likes · 12.1K views
Michael 🔸@mjkerrison·
To be clear, I basically agree with Ord but like holy moly timelines could be SHORT and we're really not covering those bases yet
0 replies · 0 reposts · 0 likes · 14 views