Alex Amadori

39 posts

@testdrivenzen

Policy research at @ConjectureAI

London · Joined October 2024
31 Following · 116 Followers
Pinned Tweet
Alex Amadori @testdrivenzen
We explore how a coalition of middle powers may prevent development of ASI by any actor, including superpowers. We design an international agreement that may enable middle powers to achieve this goal, without assuming initial cooperation by superpowers.
11 replies · 28 reposts · 88 likes · 18.9K views
Alex Amadori retweeted
ControlAI @ControlAI
We've just published our 2025 Impact Report! At a glance:
- ~1 in 2 UK lawmakers we briefed supported our campaign, for a total of 110+ supporters
- 2 House of Lords debates on superintelligence & extinction risk
- A series of hearings at the Canadian Parliament
(+ more in thread)
4 replies · 12 reposts · 43 likes · 6.8K views
Alex Amadori @testdrivenzen
@TheZvi It does mean that, provided said human can think many times faster than normal humans and can be cloned an arbitrary number of times given enough hardware
0 replies · 0 reposts · 0 likes · 96 views
Zvi Mowshowitz @TheZvi
We can go a step further. The Anthropic constitution is a set of principles for how Claude should act and what its character should be. Does this mean that no human whose parents instructed them to have values other than the literal US Constitution should be in the supply chain?
Aaron Rupar @atrupar

DoD official Emil Michael on designating Anthropic a supply chain risk -- "Their model has a soul, a 'constitution' -- not the US Constitution. The other day their model was 'anxious' and they believe it has a 20% chance of being sentient and having its own ability to make decisions. Does the Dept of War want something like that in their supply chain?"

28 replies · 10 reposts · 258 likes · 12.4K views
Alex Amadori @testdrivenzen
@slimer48484 Just FYI: the title is clickbait; Max Harms is a promoter of corrigibility. It would be a stretch to call corrigibility "the exact opposite of making AIs good".
0 replies · 0 reposts · 1 like · 12 views
deckard⏩ @slimer48484
The nominative determinism too powerful
4 replies · 2 reposts · 63 likes · 1.7K views
Alex Amadori @testdrivenzen
Very good post. Anthropic is not acting like you'd expect a responsible org to act.

If you take their theory of change at face value, you should expect them not to fight regulation, or at the very least to loudly and consistently make public declarations that bluntly describe the situation with respect to extinction risk. "Help! We're trapped in a death race!" Or even "We've changed our mind, alignment is easy, we think we can navigate an intelligence explosion." But they make no such statement. If you think Anthropic is serious about avoiding extinction risk, you should be very confused.

Anthropic's behavior is what you'd expect from an organization that is almost entirely dedicated to building ASI as fast as possible. So they won't seriously advocate for any measure that slows them down. At the same time, they can't just pivot their PR toward dismissing doom in the foreseeable future, since they've built a reputation of being safety-minded that gives them easy access to the smartest researchers.

The minor differences between them and other major AI companies are routinely used as raw thought-stopper material for employees and shills to justify, to themselves, what they're doing with their time and social capital. However, these differences can only persist on the condition that they are at most a minor inconvenience toward building ASI as fast as possible.

The more time passes, the harder things will get. If Anthropic is dropping the ball when we're playing in easy mode, they will do much worse later on.
Mikhail Samin @Mihonarium

Many in my community hold Anthropic in high regard. Sadly, they should not. I wrote a post showing why. Anthropic in its current form is not trustworthy. The leadership is sometimes misleading and deceptive; they contradict themselves and lobby against regulations just like everyone else, while not really being accountable to anyone except perhaps their investors. The post discloses a number of facts that had not previously been reported on and combines them with publicly available information in an attempt to paint an image of Anthropic more accurate than the picture Anthropic’s leadership likes to present. Read: anthropic.ml

1 reply · 2 reposts · 11 likes · 447 views
Santiago @svpino
Is it just me, or has ChatGPT become dumber? It might just be that I'm using it more and asking more complex questions, but my feeling over the last couple of weeks hasn't been good. It has become more verbose and repetitive, and it constantly flip-flops, to the point where I need to fact-check every single thing it says because I can't trust it. I'm considering moving to Gemini as my daily driver to see what's up.
400 replies · 24 reposts · 665 likes · 63K views
Alex Amadori retweeted
Andrea Miotti @andreamiotti
We built a coalition of 100+ UK lawmakers who are taking a stance against the extinction risk from superintelligent AI and back regulating the most powerful AIs! From the former AI Minister to the former Defence Secretary, cross-party support is crystal clear. Time to act!
11 replies · 37 reposts · 75 likes · 25.6K views
Rob Wiblin @robertwiblin
Who's the best guest to speak to what middle powers — UK, Australia, Japan, Germany, Korea, Netherlands, Canada — should be doing in light of AI advances and the possible radical impact of AGI or recursively self-improving AI?
18 replies · 4 reposts · 47 likes · 3.6K views
Alex Amadori retweeted
Peter Barnett @peterbarnett_
We at the MIRI Technical Governance Team just put out a report describing an example international agreement to prevent the creation of superintelligence. 🧵
10 replies · 17 reposts · 125 likes · 31.3K views
Alex Amadori retweeted
Liron Shapira @liron
Marc Andreessen (@pmarca)'s recent essay, “Why AI Will Save the World”, didn't meet the standards of discourse. ♦️ Claiming AI will be safe & net positive is his right, but the way he’s gone about making that claim has been undermining conversation quality. 🧵 Here's the proof:
33 replies · 90 reposts · 541 likes · 340.4K views
Alex Amadori @testdrivenzen
The idea of having a chat interface is cute, but it means we can't see the messages until they have been fully typed, which is annoying (and I guess unnecessary unless the AI is backtracking?). The blue bubble color also makes it impossible to select text in the user's messages. There should be a UI + hotkey way to switch between auren and seren instead of having to ask. I can see that it's using web search (or lying about having used web search), but I'd like to see the sources it consulted. From a 5-minute trial, I like the personalities; the mean one gave useful advice on how not to get stuck on a doomed approach without being patronizing.
0 replies · 0 reposts · 1 like · 390 views