Michael Hay 🇬🇧 ⏸️
@MikeFHay

4.1K posts

Scottish software guy. Very concerned about AI existential risk.

Edinburgh, Scotland · Joined October 2010
740 Following · 259 Followers
Seventh (@7seventhseth7)
Nice For What - Cloud SSBU Montage
Torchbearer Community (@JoinTorchbearer)
"This is not the will of the people. This is not what I or the general public here in America, or abroad, want." Connor Leahy (@NPCollapse), speaking with @romanyam, on why the development of superintelligence is a grave national security risk that demands government action now.
FaZe Sparg0 (@Sparg0ssb)
So close 😭😭 but I can’t be too mad at myself. 3rd place at Kagaribi 15!! I wanted to win, but I’m still super proud of my performance. I felt mind-blocked all year but I didn’t want to give up. I prepared so much for this tournament, and I’m so happy to know I can still win! Thank you everyone 🥹❤️
Michael Hay 🇬🇧 ⏸️
@CoughsOnWombats "More" invites the question "more than what/when?", while "more and more" clearly means increasing over time, up to the present day. Not saying that really makes sense, but it is my intuition, and probably that of other readers too.
FaZe Sparg0 (@Sparg0ssb)
I made top 24 winners' side without dropping a single game, beating Shinda Lion, Ataru, 33PeranBox and Carmelo. I play Leo for top 8 tomorrow.
Jerusalem (@JerusalemDemsas)
many such cases
[image]
Michael Hay 🇬🇧 ⏸️
@AndyMasley Bezos likes to ask "what will still be true in 5-10 years' time?" to inform what to invest in. Unfortunately for AI, reasonable estimates include total human disempowerment or extinction, so it is genuinely very hard to know what skills will be useful, if any!
Andy Masley (@AndyMasley)
I don't believe at all that people need to "use AI because otherwise they'll fall behind on it." Capabilities are moving so fast that we'll all be novices in 3 years regardless of what we're doing now. My poking around GPT-3 papers gave me basically no leg up on actually using AI compared to people who just dove straight into coding agents.
Michael Hay 🇬🇧 ⏸️ retweeted
Sen. Bernie Sanders (@SenSanders)
The existential risk of artificial intelligence.
[image]
Michael Hay 🇬🇧 ⏸️
@kevinlotto Just listened to Tristan Harris on the Sam Harris podcast. He says part of the problem is the perceived inevitability of ASI, meaning developers don't feel morally responsible for building it. All the more reason to establish a credible path to a global ban!
Kevin Lotto (@kevinlotto)
The development of superintelligence is only a race from the perspective of the for-profit AI companies, because they seem to believe they can win. However, if they achieve their goal, they & everyone else actually lose. We are not ready, and may never be, for superintelligence.

Connor Leahy (@NPCollapse):
A "race" implies that it is something worth "winning", which is not the case for superintelligence. Whoever finishes first is just the first to trigger irrecoverable catastrophe.
Corey Walker 🇺🇸 (@CoreyWriting)
Gaza is the only place where the overall population can increase, where 70% of casualties are combat-aged males, where the opposition sends in hundreds of humanitarian aid trucks, and there's still apparently "genocide" occurring.

Kylie Cheung (@kylietcheung):
it's infuriating that so many pro-Palestine voices have to fear being smeared as misogynists, rape apologists etc if you call out misinfo that justifies genocide. this has allowed those lies to firmly take hold...
Michael Hay 🇬🇧 ⏸️
@aedison Not sure what threat model would call for more than 12 characters for this. You're not securing against billions of guesses; the answer isn't even hashed.
Avery Edison (@aedison)
just got a security call from my bank and had to answer a real human being asking me “what was the name of your first childhood pet?” with a 64-character alphanumeric password. I apologized eight separate times

Avery Edison (@aedison):
I read one blog post ten years ago and now whenever I want to log in to my email on a new computer I have to stick a physical key into it. feels at once both very secure and also like I am the victim of a prank. like I’m the only person alive doing this.
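The arithmetic behind the 12-character point in this exchange can be sketched as follows (the function names are my own, purely for illustration; the tweets do not reference any code):

```python
import math

ALPHABET_SIZE = 62  # 26 lowercase + 26 uppercase + 10 digits

def password_bits(length: int, alphabet: int = ALPHABET_SIZE) -> float:
    """Entropy in bits of a uniformly random password of the given length."""
    return length * math.log2(alphabet)

# A 12-character random alphanumeric answer already gives ~71 bits,
# i.e. roughly 3.2e21 possibilities.
bits_12 = password_bits(12)

# Against a human attacker guessing a security answer over the phone,
# even a generous budget of attempts is hopeless:
guesses = 1_000
p_success = guesses / (ALPHABET_SIZE ** 12)
```

The point being illustrated: 64 characters (~381 bits) only matters against fast offline attacks on a leaked hash; for an answer stored in plaintext and checked by a human, the realistic guess budget is so small that 12 random characters is already far beyond any attacker's reach.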
Michael Hay 🇬🇧 ⏸️ retweeted
MIRI (@MIRIBerkeley)
Is it possible to coordinate with China on AI governance? Critics of our proposed international agreement say no. But statements from Chinese government officials and academic figures paint a more optimistic picture:
[image]
Michael Hay 🇬🇧 ⏸️
@cafreiman No, they didn't? The article cites many experts and industry leaders warning of AI dangers that are not at all analogous to mechanised agriculture.
𝖦𝗋𝗂𝗆𝖾𝗌 ⏳
I didn’t see the doc yet - but I do think the best “don’t panic” people don’t really do interviews, because most of the best arguments against doomers (who I think are very logical, except with regards to their own branding) essentially come down to opinions on the nature of life that the vast majority of people will not like or be ok with.

Like, I’ve had people make very strong arguments to me about why zero care should be spent addressing doomer concerns, and it basically comes down to things like: human life isn’t particularly special in the context of intelligence, or the philosophies of the people building AI are based on such-and-such superior cultural approach that I trust more than the current one, etc. Obviously that’s extremely basic, but I think the reason we don’t hear these arguments in public is because they tend to end up being, well, “a bunch of people are gonna be poor in the short term and it’ll be awful and it’s gonna be a bad bottleneck time”, or “the cybernetic system deserves priority over individuals, hence a certain amount of suffering, death and merging, and possibly even extinction, is ok”. I don’t see these types of arguments ever risking being subjected to actual debate or rigor from the opposition.

It’s pretty crazy there aren’t more well-documented, well-planned, earnest formal debates between the best doomers and the best optimists, with fact-checking. And maybe it’s not a totally formal debate, because I want people to debate their side, and I want optimists to have to stand up to the most meticulous scrutiny - and if it still stands, then awesome. Same for doomers. It’s insane that it never actually comes down to being person v person. There’s almost no reason to do any more docs or have any more discussions if we can’t do that, because it’s just people yelling unchallenged arguments back and forth. This is too long, sorry.
Aella (@Aella_Girl)
Just saw the AI doc and came away pissed at the optimists. I sort of expected them to have any argument that actually addressed the x-risk side, but they were basically like 'historically tech is good, people have been worried before but it was fine!' They didn't address at ALL the extremely entry-level concerns, like 'building something smarter than us is a categorically new type of threat'. They just repeated that tech would help humanity.

It's especially infuriating because the most lifelong techno-optimists I know ARE the doomers. The x-risk community are the ones who grew up on epic sci-fi and have thought long and hard about what the singularity might bring. One of my friends (who was in the doc) once spent all night carrying ice into a hospital room to preserve the corpse of his friend in a desperate attempt to get him into a cryonics lab. It's real for them! But "AI has promise" is not even close to an adequate response to the extinction threat on the table.

Even the AI CEOs in the movie - the ones that are *actually* doing the most acceleration - seemed to at least understand the gravity of the arguments they were engaging with. The optimists in the doc seemed to have domain expertise in their technical fields, but were amateurs here. They are both insufficiently visionary and also fail to engage with the actual risk in a practical way. I think they pattern-match the "AI might kill us" people onto the general woke anti-tech movement, and shout against them from a place of ego. That's the only good explanation I can think of for why they must be beating an activist drum that's so damn empty.
[image]
Michael Hay 🇬🇧 ⏸️
@justinpoir @ramez I'd be interested in hearing this in more specifics and detail. That's what people have been asking for, but you've chosen instead to respond that you don't have the burden of proof!
Justin Poirier (@justinpoir)
@MikeFHay @ramez The particularly extraordinary claims have not convinced me at all, and all seem to skip steps or make assumptions that can't be backed up by evidence. In some cases they assume something as known which I think is not just unknown but unknowable.
Justin Poirier (@justinpoir)
Putting the onus of doom-prediction arguments on the non-doomers doesn't make sense. Not-doom is the default. Doom is the prediction that has always been wrong historically. This time is different? Ok, prove it, but not by assigning a vibes-based doom number.
Michael Hay 🇬🇧 ⏸️
@justinpoir @ramez You're saying that the warnings of many experts, Nobel prize winners and leaders in the field, including detailed arguments, can be simply ignored? That seems an extraordinarily high burden of proof.
Justin Poirier (@justinpoir)
@MikeFHay @ramez I'm not claiming they aren't making an argument. I was responding to recent claims that the non-doomers "have no argument", which I think is both untrue and also goes against who has the burden of proof here.