Rogs 🔍🔸

12.7K posts

@ESRogs

Curious optimist. Sincerity over sarcasm. https://t.co/YyJXMnCCxN

San Francisco, CA · Joined May 2008
3.8K Following · 1.4K Followers
Rogs 🔍🔸@ESRogs·
@peterwildeford The closer this is to the White House specifically being involved (vs gov generally), the worse it seems to me. Do you disagree?
Peter Wildeford🇺🇸🚀@peterwildeford·
"The administration is discussing [...] an AI working group that would bring together tech executives and government officials to examine potential oversight procedures" 👀 Big deal! great to see the White House leading on this!
Rogs 🔍🔸@ESRogs·
@HaydnBelfield Compute available per dollar grows exponentially though, so you should be able to re-label the x axis to linear time with a mapping like "date at which this much compute cost $X", and the y axis and shape of the curve would be unchanged, right?
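A minimal sketch of that relabeling idea, under purely hypothetical numbers (the doubling time, reference date, budget, and compute figures below are illustrative assumptions, not from the thread):

```python
import math
from datetime import date, timedelta

# Hypothetical assumptions, for illustration only:
# - compute per dollar doubles every DOUBLING_YEARS years
# - at REF_DATE, a budget of BUDGET_USD buys REF_FLOP of compute
DOUBLING_YEARS = 2.0
REF_DATE = date(2020, 1, 1)
BUDGET_USD = 1_000_000
REF_FLOP = 1e21

def date_when_affordable(flop: float) -> date:
    """Date at which `flop` of compute costs BUDGET_USD, given exponential price decline."""
    years = DOUBLING_YEARS * math.log2(flop / REF_FLOP)
    return REF_DATE + timedelta(days=365.25 * years)

# The mapping from log(compute) to date is linear, so relabeling a
# log-compute axis with these dates yields a linear time axis and
# leaves the shape of the plotted curve unchanged.
for flop in (1e21, 1e23, 1e25):
    print(f"{flop:.0e} FLOP for ${BUDGET_USD:,} around {date_when_affordable(flop)}")
```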
Rogs 🔍🔸@ESRogs·
@adrusi > i asked him how they would have held the pressure and he was like "huh." I am also like "huh"... does "they" refer to the Greeks or the Romans?
Tannor Manson@Futurenvesting·
Anthropic is now showing off $44 BILLION in annual recurring revenue. This is up $14 billion (+46.6%) since last month! BULLISH for AI Infrastructure $NVDA $AMD
Rogs 🔍🔸 retweeted
Ricardo Olmedo@rdolmedo_·
We fine-tuned Alec Radford’s 1930 vintage LLM to solve SWE-bench issues. After just ‼️250‼️ training examples, the model solves its first issue, a simple patch to the xarray library. 🧵👇
Rogs 🔍🔸 retweeted
Lawrence Chan@justanotherlaw·
A recent viral paper claims to reverse-engineer the parameter counts of frontier models: GPT-5.5 = 9.7T, Opus 4.7 = 4.0T, o1 = 3.5T, etc. @ben_sturgeon and I investigated and found serious issues in the paper; fixing them gives GPT-5.5 as ~1.5T (90% CI: 256B-8.3T).
Rogs 🔍🔸 retweeted
Eliezer Yudkowsky@allTheYud·
Although alignment does, yes, often end up dual-use with capabilities, if it's not going to advance capabilities a LOT, I don't currently advocate that you should quit a job at Anthropic or Google working on alignment.
Rogs 🔍🔸 retweeted
Andrew Critch (🤖🩺🚀)@AndrewCritchPhD·
PSA: I developed adult-onset lactose intolerance in the 2000s. I was told this was genetic and incurable. But I thought maybe if I ate bacteria that ate lactose, it would cure me. It did. Apparently since ~2021 this is scientifically well established: themultiplicity.ai/room/0b34c7d7-…
Rogs 🔍🔸 retweeted
david rein@idavidrein·
The better Claude models are at decision theory questions, the more into evidential decision theory they are. Major win for one-boxers
Rogs 🔍🔸@ESRogs·
@skepticalsports You're assuming no one's decision is correlated with yours. In reality, others will think similarly. (This suggests you should decide as though you're determining the votes of a cohort of like-minded people.)
Benjamin Morris@skepticalsports·
I'm sure others have said this, but it seems very relevant that picking red condemns half the population to die, but only in the cases where the vote is perfectly split, while picking blue condemns YOU to die in every case where the majority picks red. Easy to calculate EVs.
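A toy numerical version of this exchange, under purely hypothetical assumptions (the distribution of everyone else's blue share and the "cohort" sizes below are made up for illustration); it contrasts treating your vote as independent with treating it as determining a like-minded cohort's votes:

```python
from statistics import NormalDist

# Toy model of the red/blue vote (all numbers hypothetical):
# - blue voters die iff blue fails to get a strict majority; red voters never die
# - everyone else's blue share is drawn from Normal(0.50, 0.05)
# - "cohort" = fraction of the electorate whose choice mirrors yours
others = NormalDist(mu=0.50, sigma=0.05)

def p_blue_majority(cohort: float, i_pick_blue: bool) -> float:
    """P(blue > 50%) when a `cohort` fraction of voters chooses like me."""
    shift = cohort if i_pick_blue else 0.0
    final = NormalDist(mu=(1 - cohort) * others.mean + shift,
                       sigma=(1 - cohort) * others.stdev)
    return 1 - final.cdf(0.5)

for cohort in (0.0, 0.01, 0.10):
    p_die_blue = 1 - p_blue_majority(cohort, True)   # I picked blue and blue lost
    p_blue_red = p_blue_majority(cohort, False)      # blue majority despite my red pick
    print(f"cohort={cohort:4.0%}  P(I die | blue)={p_die_blue:.3f}  "
          f"P(blue majority | red)={p_blue_red:.3f}")
# With cohort = 0 your pick does not move the outcome at all in this model;
# with a correlated cohort, picking blue meaningfully raises the chance of a
# blue majority (and lowers your own chance of dying).
```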
Rogs 🔍🔸 retweeted
Andrew Curran@AndrewCurran_·
New update from Scott Aaronson. He continues: 'And I’d say that that makes my own moral duty right now ironically simple and clear: namely, to use my unique soapbox, as the writer of The Internet’s Most Trusted Quantum Computing Blog Since 2005™, to sound the alarm. So, here it is: if quantum computers start breaking cryptography a few years from now, don’t you dare come to this blog and tell me that I failed to warn you. This post is your warning. Please start switching to quantum-resistant encryption, and urge your company or organization or blockchain or standards body to do the same.'
Andrew Curran@AndrewCurran_

Scott Aaronson on his blog talking about Shor's, quantum, and crypto: 'When I got an early heads-up about these results—especially the Google team’s choice to "publish" via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold? But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said: we have decades of experience with this, and the answer is that you publish. And, they said, if publishing causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.'

Rogs 🔍🔸@ESRogs·
Correct me if I'm wrong, but isn't it the case that everyone ends up with the same orientation regardless of which button they pushed, and the buttons are just a vote on whether everyone ends up gay or straight?

Logic:
>90% green (= <10% purple): you green -> gay, you purple -> gay
<90% green (= >10% purple): you green -> straight, you purple -> straight
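A quick way to sanity-check this against the rules in the poll below is to enumerate a few green-vote shares (a minimal sketch; how an exact 90%/10% split resolves is my assumption, and it is the one place the two groups can diverge):

```python
def outcome(button: str, green_share: float) -> str:
    """Orientation of one voter, given their button and the overall green share."""
    if button == "green":
        # green -> straight, unless >90% of people pressed green
        return "gay" if green_share > 0.90 else "straight"
    # purple -> gay, unless >10% of people pressed purple
    return "straight" if (1 - green_share) > 0.10 else "gay"

for green_share in (0.95, 0.80, 0.50, 0.05):
    results = {b: outcome(b, green_share) for b in ("green", "purple")}
    print(f"green share {green_share:.0%}: {results}")
# Away from the exact 90%/10% boundary, green-pressers and purple-pressers
# always end up with the same orientation, so the buttons act as a vote.
```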
Danielle Fong 🔆@DanielleFong·
everyone in the world has to press one of two buttons. 🟢 🟣 if you press green, you will be straight, unless >90% of people press it, in which case you will be gay. if you press purple, you will be gay, unless >10% of people press it, in which case, you will be straight.
Rogs 🔍🔸@ESRogs·
@deanwball I think you are the first person to ever accuse me of having a cynicism bias. Usually it's the opposite. But it seems plausible you're right in this case. You'd have a lot more context than me on the admin's thinking.
Dean W. Ball@deanwball·
@ESRogs I don’t think they are doing this to stick it to Anthropic and that is a good example of how a cynicism bias can actually be both wrong and naive
Dean W. Ball@deanwball·
Okay, jokes aside, my thoughts about the WSJ’s reporting that the White House is asking Anthropic not to disseminate Mythos any further:

1. Assuming the story is true, I suspect the White House is making the right call. But this is the opposite of a tenable strategy, like trying to erect a dam against a tsunami. There is no way to stop the diffusion of capabilities like Mythos within the next 6-18 months.

2. We should be clear that the government restricting the release of AI models is a type of licensing regime. It is an informal, highly improvised licensing regime, but a licensing regime nonetheless. This isn’t going to be the last such model we see of this capability tier, and cyber vulnerability discovery is very far from the only type of dangerous capability. If the government is going to insist on restricting frontier capabilities for the foreseeable future, it will need to formalize the rules for those restrictions—how long must you delay, what objective factors generate a “green light,” etc. I know this will feel even more regulatory to some, but the alternative is an unpredictable, inconsistent, improvisatory licensing system, and this is bad for both business and the rule of law.

3. I have been critical of the Trump admin for being TOO libertarian with regard to major AI risks. I stand by those criticisms. But I am also infinitely grateful that there will be people advising the President who truly do understand the risks of regulatory overreach, and fear them even more than I do. There wouldn’t have been in a hypothetical Biden/Harris administration. I wish them fortitude and luck.

4. A thing that would be better than an improvised licensing regime would be to bolster technical model and system safeguards. Imagine a version of Mythos that was just as capable, but had been specifically neutered in cyber vulnerability discovery. This is a longstanding area of technical AI safety research! There are tradeoffs with this specific approach (as there are with all approaches), but the broader point is that bolstering technical safeguards would mean we could disseminate Mythos-level models more quickly than we can today.

5. If you think clearly about (4), you will understand that technical AI safety research can be profoundly accelerationist rather than evil, decelerationist, or whatever other pejorative you have seen hurled at “AI safety.” This does not mean “all AI safety research is good,” but it does mean that technical safety work is an essential part of actually achieving AI takeoff while maintaining societal order.

6. I cannot emphasize enough how much the training wheels have come off on AI policy. The trial runs are over. Many of the heuristics people adopted during the training-wheel period will not be useful (“AI safety is decelerationist” is one of those heuristics, btw). If you want to contribute usefully to the cause of making AI go well, you will need to increase the IQ of your speech.

7. Dealing with risks of this kind should be nonpartisan and technocratic. Catastrophic risk mitigation is not the thing to negatively polarize on partisan lines, as some, especially on the accelerationist side, have been doing. Let’s have partisan fights about things like AI/labor—that’s healthy! But not catastrophic risk management, please.
The Wall Street Journal@WSJ

Exclusive: The White House opposes a plan from Anthropic to expand access to its powerful artificial-intelligence model Mythos on.wsj.com/4cHiUY5

Rogs 🔍🔸 retweeted
Alex Turner@Turn_Trout·
I spent the last 2 months trying to prevent this. If OpenAI offered a fig leaf, Google said "imagine we offered a fig leaf." Google affirms it can't veto usage, commits to modify safety filters at government request, & offers only aspirational language with no legal restrictions. Shameful.
Jeffrey Ladish@JeffLadish·
To be clear, I don't think either of them was a significant leader in the movement. Peter Thiel spoke at the EA Summit in 2013 and funded several organizations that were pretty involved. Elon Musk spoke at EA Global 2015 and was involved in a number of other events. And these were good contributions on net, in my opinion.
Jeffrey Ladish@JeffLadish·
I have some serious disagreements with both Elon Musk and Peter Thiel, but I think it's stupid to dismiss them just because they helped get the Effective Altruist movement off the ground
Dean W. Ball@deanwball·
Who are the best thinkers on AI rights? I do not mean “who is the most prominent person who has expressed broad support for AI rights?” but rather “who has done the best work elaborating on the details of what ‘AI rights’ should or could entail?”