A Human Future ⏹️⏸️

380 posts

@A_Human_Future

Stop trying to create a god. Support PauseAI https://t.co/nr9dE7poLS

Midwest · Joined October 2025
125 Following · 30 Followers
A Human Future ⏹️⏸️ @A_Human_Future
@tenobrus @j_christo3 Hamilton's Commonwealth, John C. Wright's Golden Oecumene, the Star Trek Federation, Stephenson's Diamond Age, Robinson's Mars trilogy, Hyperion's "Hegemony of Man"
0 replies · 0 reposts · 0 likes · 10 views
Tenobrus @tenobrus
we should be trying to build The Culture. you all know that right? the whole point of all of this is to build The Culture.
[attached image]
107 replies · 125 reposts · 1.6K likes · 94.6K views
A Human Future ⏹️⏸️ @A_Human_Future
@Idgitbhh @tenobrus @UnrealRealist19 It's probably not worth understanding, but basically it's an irony-poisoned, highly aestheticized internet subculture that pretends to be about "network spirituality", but it's actually about being edgy and deliberately abstruse.
1 reply · 0 reposts · 11 likes · 257 views
Tenobrus @tenobrus
nick land is explicitly antihuman and antiflourishing, and landian-descended philosophies are just as demonic. e/acc and milady are both poison fruits borne from staring into a horrible nightmarish abyss and deciding to let it consume u instead of standing up and fighting back.
102 replies · 87 reposts · 1.3K likes · 90.7K views
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
Some tech people in SF have a quiet, deep-seated contempt for people of average IQ, or people who work outside tech. For some, this feeds into a complacency toward the idea of AI putting the normies on permanent welfare -- they think normies are already useless anyway.
Jane Manchun Wong @wongmjane

Go ahead and keep calling them “permanent underclass” and “have fun being poor”. Just be as smug as possible and wave fingers at them. I’m sure this won’t further their resentment or radicalize them, and they’ll just keep their heads down.

61 replies · 66 reposts · 1.2K likes · 117.8K views
mattparlmer 🪐 🌷 @mattparlmer
Open secret that the big AI labs see themselves as emergent state-like entities in the vein of the distributed polities in “Diamond Age” or “Terra Ignota”, which is pretty funny bc staff at said labs always seem surprised when state-like dynamics such as internal politics crop up
29 replies · 88 reposts · 1.5K likes · 49.9K views
Enock Saint Juste @enocksaintjuste
@Xenoimpulse i keep fucking telling people, this “AI techno hellscape” bullshit will not happen, these fucktards running this shit are NOT Gods, they are all easily impaleable bags of meat—and the supply chains governing these operations rely on the proletariat.
1 reply · 0 reposts · 11 likes · 548 views
Tristan Cunha @cunha_tristan
He actually makes a really nuanced metaphor, and even clarifies that the ring isn't AGI: "I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of ‘being the one to control AGI’." It's a surprisingly good way to describe the issue, but I suspect almost everyone will miss the point.
2 replies · 0 reposts · 2 likes · 129 views
Nathan 🔎 @NathanpmYoung
@Squee451 Then why would a pause help, if it was completely inevitable even under significant delay?
2 replies · 0 reposts · 1 like · 382 views
Nathan 🔎 @NathanpmYoung
I don't see how you can have a P(doom) over 90% when the probability of a war for Taiwan is like 20% before 2030. How on earth is scaling going to continue apace in that world?
33 replies · 1 repost · 77 likes · 9.2K views
A Human Future ⏹️⏸️ @A_Human_Future
Many of the AI zealots literally can't imagine a worse outcome for themselves than some kind of fantastical cosmic utopia. I think I actually have more respect for the e/accs who just openly support human extinction.
Nick @nickcammarata

some days it’s hard in the permanent lower class. immortality, peptide itch, enlightenment you didn’t even earn, one of your Claude Testaments ghosted you. just two galaxies to your name and neither feel like home. another rough 9.8/10 day, the rich never dip below 9.99

0 replies · 0 reposts · 1 like · 67 views
Rob Wiblin @robertwiblin
This week I'm interviewing the world's most cited computer scientist, Yoshua Bengio. He chairs the International AI Safety Report. But his primary focus is developing a comprehensive solution to the AI alignment problem: 'Scientist AI'. What should I ask him?
24 replies · 7 reposts · 139 likes · 5.9K views
A Human Future ⏹️⏸️ @A_Human_Future
@slatestarcodex @robbensinger You're fixating on Pause AI specifically, but they could just create their own pro-pause groups if they feel Pause AI is ineffective. The Anthropic people could just spend the money directly lobbying for a Pause. They don't because they just really do want to roll the dice.
1 reply · 0 reposts · 5 likes · 528 views
Scott Alexander @slatestarcodex
PauseAI the org gets most of its funding from organizations that I would classify as EA (FLI, SFF, Manifund). I think missing things like this is a downside of turning "EA" into the enemy. If you mean Open Phil, say Open Phil!

My impression is that Open Phil in particular doesn't fund that org in particular because:

- They think that their particular style alienates opponents faster than it builds friends. I think this is plausible - when I wrote a post about going to one of their protests, many of my blog readers got angry, and some threatened to unsubscribe. I personally find PauseAI's leadership extremely unpleasant and feel like I support their cause despite them rather than because of them.

- They fund a lot of other things (like political candidates) whose opponents would love to attack them along the lines of "funded by extremists who want to ban all AI". This isn't some kind of hypothetical 11D chess - it's happened, and I've seen it used to discredit organizations that take their funding. I think EA as a field solves this by being made of multiple organizations that have firewalls between their respective reputations.

- I think people equate "pausing AI" with "gathering people to go into the street and wave protest signs under the leadership of Holly Elmore". I don't think this is immediately promising - the largest such US protest had about 100 people. I think once you look beyond this - including to the actions it would take for these protests to get bigger in the future - they are doing some useful fieldbuilding. For example, they fund a lot of dialogues with Chinese scientists that could potentially pay off in increased ability to get some agreement with China, they fund most of the good AI journalism that may eventually get the masses to realize AI is dangerous, etc.

I think it's helpful to look at your own tweet above that I'm responding to, as something pretty typical of the PauseAI movement, and ask whether you think spending millions of dollars having more of that kind of thing will cause something good to happen.
2 replies · 1 repost · 16 likes · 988 views
Rob Bensinger ⏹️ @robbensinger
In response to "What did EAs do re AI risk that is bad?":

Aside from the obvious 'being a major early funder and a major early talent source for two of the leading AI companies burning the commons', I think EAs en masse have tended to bring a toxic combination of heuristics/leanings/memes into the AI risk space. I'm especially thinking of some combination of: 'be extremely strategic and game-playing about how you spin the things you say, rather than just straightforwardly reporting on your impressions of things' plus 'opportunistically use Modest Epistemology to dismiss unpalatable views and strategies, and to try to win PR battles'.

Normally, I'm at least a little skeptical of the counterfactual impact of people who have worsened the AI race, because if they hadn't done it, someone else might have done it in their place. But this is a bit harder to justify with EAs, because EAs legitimately have a pretty unusual combination of traits and views. Dario and a cluster of Open-Phil-ish people seem to have a very strange and perverse set of views (at least insofar as their public statements to date represent their actual view of the situation):

1. AI is going to become vastly superhuman in the near future; but being a good scientist means refusing to speculate about the potential novel risks this may pose. Instead, we should only expect risks that we can clearly see today, and that seem difficult to address today. If there is some argument for why a problem P might only show up at a higher capability level, or some argument for why a solution S that works well today will likely stop working in the future... well, those are just arguments. Arguments have a terrible track record in AI; the field is full of surprises. So we should stick to only worrying about things when the data mandates it. This is especially important to do insofar as it will help us look more credible and thereby increase our political power and influence.

2. When it comes to technical solutions to AI, the burden of proof is on the skeptic: in the absence of proof that alignment is intractable, we should behave as though we've got everything under control. At the same time, when it comes to international coordination on AI, we will treat the burden of proof as being on the non-skeptic. Absent proof that governments can coordinate on AI, we should assume that they can't coordinate. And since they can't coordinate, there's no harm in us doing a lot of things to make coordination even harder, to make our lives a bit more convenient as we work on the technical problems.

3. In general, people worried about AI risk should coordinate as much as possible to play down our concerns, so as not to look like alarmists. This is very important in order to build allies and accumulate political influence, so that we're well-positioned to act if and when an important opportunity arises. If you're claiming that now is an important opportunity, and that we should be speaking out loudly about this issue today... well, that sounds risky and downright immodest. Many things are possible, and the future is hard to predict! Taking political risks means sacrificing enormous option value. The humble and safe thing to do is to generally not make too much of a fuss, and just make sure we're powerful later in case the need arises.

1-3 really does seem like an unusually toxic set of heuristics to propagate, potentially worse than replacement.

- In an engineering context, the normal mindset is to place the burden of proof on the engineer to establish safety. There's no mature engineering discipline that accepts "you can't prove this is going to kill a ton of people" as a valid argument. The standard engineering mindset sounds almost more virtue-ethics-y or deontological rather than EA-ish -- less "ehh it's totally fine for me to put billions of lives at risk as long as my back-of-the-envelope cost-benefit analysis says the benefits are even greater!", more "I have a sacred responsibility and duty to not build things that will bring others to harm." Certainly the casualness about p(doom) and about gambling with billions of people's lives is something that has no counterpart in any normal scientific discipline.

- Likewise, I suspect that the typical scientist or academic that would have replaced EAs / Open Phil would have been at least somewhat more inclined to just state their actual concerns about AI, and somewhat less inclined to dissemble and play political games. Scientists are often bad at such games, they often know they're bad at such games, and they often don't like those games. EAs' fusion of "we're playing the role of a wonkish Expert community" with "we're 100% into playing political games" is plausibly a fair bit worse than the normal situation with experts.

- And EAs' attempts to play eleven-dimensional chess with the Overton window are plausibly worse than how scientists, the general public, and policymakers normally react to any technology under the sun that sounds remotely scary or concerning or creepy: "Ban it!" Governments are incredibly trigger-happy about banning things. There's a long history of governments successfully coordinating to ban things dramatically less dangerous than superintelligent AI. And in fact, when my colleagues and I have gone out and talked to most populations about AI risk, people mostly have much more sensible and natural responses than EAs to this issue.

A way of summarizing the issue, I think, is that society depends on people blurting out their views pretty regularly, or on people having pretty simple and understandable agendas (e.g., "I want to make money" or "I want the Democrats to win"). Society's ability to do sense-making is eroded when a large fraction of the "specialists" talking about an issue are visibly dissembling and stretching the truth on the basis of agendas that are legitimately complicated and hard to understand. Better would be to either exit the conversation, or contribute your actual pretty-full object-level thoughts to the conversation.

Your sense of what's in the Overton window, and what people will listen to, has failed you a thousand times over in recent years. Stop pretending at mastery of these tricky social issues, and instead do your duty as an expert and inform people about what's happening.
31 replies · 23 reposts · 245 likes · 55.9K views
A Human Future ⏹️⏸️ @A_Human_Future
@slatestarcodex @robbensinger Ridiculous take. There are tons of things Dario and the other EA people could do if they wanted to prioritize risk reduction, like actually advocating for a pause, spending money to lobby for a pause, etc. Clearly you're just helplessly biased because you're friends with EAs.
1 reply · 0 reposts · 6 likes · 1K views
Scott Alexander @slatestarcodex
I disagree with all of this on the epistemic level of "it's not true", and additionally disagree with your comms strategy of undermining EAs.

On the epistemic level - I haven't seen EAs (other than SBF) do a lot of lying, equivocating, or even being particularly shy about their beliefs. I don't know exactly who you're talking about, but Holden made a personal blog post saying that his p(doom) was 50%, and said:

>>> "I constantly tell people, I think this is a terrifying situation. If everyone thought the way I do, we would probably just pause AI development and start in a regime where you have to make a really strong safety case before you move forward with it."

Dario said there's a 25% chance "things go really, really badly", and in terms of a pause:

>>> "I wish we had 5 to 10 years [before AGI]. The reason we can't [slow down and] do that is because we have geopolitical adversaries building the same technology at a similar pace. It's very hard to have an enforceable agreement where they slow down and we slow down. [But] if we can just not sell the chips to China, then this isn't a question of competition between the U.S. and China. This is a question between me and Demis - which I am very confident we can work out."

This is basically my position - I would add "we should try to negotiate with China, but keep this as a backup plan if it fails", but my guess is Dario would also add this and just isn't optimistic. I agree he's written some other things (especially in Adolescence of Technology) that sound weirdly schizophrenic, and more on this later, but I give him a lot of credit for paragraphs like:

>>> "I think it would be absurd to shrug and say, “Nothing to worry about here!” But, faced with rapid AI progress, that seems to be the view of many US policymakers, some of whom deny the existence of any AI risks, when they are not distracted entirely by the usual tired old hot-button issues. Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake."

Meanwhile, you seem to be treating all these people as basically equivalent to Gary Marcus. I think if you don't mean these people in particular, you should specify who you're talking about, and what things that they've said strike you in this way.

Absent that, I think this "debate" isn't about OpenPhil or Anthropic failing to say they're extremely worried, failing to say that catastrophe is a very plausible outcome, or failing to say that they think slowing down AI would be good if possible. It's about OpenPhil in particular being pretty careful how they phrase things for public consumption. And I think any attempt to attack them for this should start with an acknowledgement that MIRI is directly responsible for all of our current problems by doing things like introducing DeepMind to its funders, getting Sam Altman and Elon Musk into AI, and building up excitement around "superintelligence" in Silicon Valley. I think if 2010-MIRI had slightly more strategicness and willingness to ask itself "hey, is this PR strategy likely to backfire?", you might not have told a bunch of the worst people in the world that AI was going to be super-powerful and that whoever invested in it would be ahead in a race that might make them hundreds of billions of dollars (and yes, you did add "and then destroy the world" - but if you had been more strategic, you might have considered that investors wouldn't hear that last part as loudly).

(You could argue that you're not against strategicness in general, just talking about this one issue of saying cleanly that AI is very dangerous. But my impression is that Holden and Dario have said this, many times - see examples above. What they haven't said is "the situation is totally hopeless and every strategy except pausing has literally no chance of working", but that isn't a comms problem - that's because they genuinely believe something different from you. And also, I frequently encounter people who say things like "Scott, I'm glad you wrote about X in way Y - it made me take AI risk seriously, after I'd previously been turned off of it by encountering MIRI". I think a substantial reason that Dario's writing sometimes seems schizophrenic when talking about AI risks is that he's trying to convey that they're serious while also trying to signal "I swear I'm not one of those MIRI people", so that his writing can reach some of the people you've driven away. I don't think you drive them away because you're "honest" - I think it's just about normal issues around framing and theory-of-mind for your audience.)

I don't actually want to re-open the "MIRI helped start DeepMind and OpenAI!!!" war or the "MIRI is arrogant and alienating!!!" war - we've both been through both of these a million times - but I increasingly feel like a chump trying to cooperate while you're defecting. This is the foundation of my comms worry. Your claim that "governments are incredibly trigger-happy about banning things... there's a long history of governments successfully coordinating to ban things dramatically less dangerous than superintelligent AI" is too glib - I don't think there's ever been a ban on building something as economically valuable and far-along as AI, executed competently enough that it would work if applied cookie-cutter to the AI situation.

You're trying to do a really difficult thing here. I respect this - all of our options are bad and unlikely to work, the situation is desperate, and I have no plan better than playing a portfolio of all the different desperate hard strategies in the hopes that one of them works. But my impression is that the rest of the field is executing this portfolio plan admirably, while MIRI and a few other PauseAI people are trying to sabotage every other strategy in the portfolio in the hope of forcing people into theirs.

(I think if you guys had your way, Anthropic would never have been founded, no safety-minded people would ever have joined labs, and the current world would be a race between xAI, Meta, and OpenAI, all of which would have a Yann LeCun-style approach to safety, and none of which would have alignment teams beyond the don't-say-bad-words level. We wouldn't have the head of the leading AI lab writing letters to policymakers begging them to "jolt awake", we wouldn't have a substantial fraction of world compute going to Jan Leike's alignment efforts, we wouldn't have Ilya sitting on $50 billion for some super-secret alignment project - just Mark Zuckerberg stomping on a human face forever. In exchange, we would have won a couple more years of timeline, which would have been pointless, because timeline isn't measured in distance from the year 1 AD; it's measured in distance between some level of woken-up-ness and some point of danger, and the woken-up-ness would be pushed forward at the same rate the danger was.)

I support your fight-for-a-pause strategy in theory, and I would like to support it with praxis, but right now I feel very conflicted about this, because I worry that any support or oxygen you guys get will be spent knifing other safety advocates, while Sam Altman happily builds AGI regardless.
28 replies · 25 reposts · 530 likes · 61.2K views
A Human Future ⏹️⏸️ @A_Human_Future
@theojaffee In reality, misalignment is the default and it's only survivable for humans because we're load-bearing, so any institution that becomes excessively destructive of humans is self-destructive.
0 replies · 0 reposts · 0 likes · 21 views
Theo @theojaffee
@A_Human_Future “necessarily” “certainly” what gives you the epistemic confidence to use such strong language? The world is filled with aligned intelligences defeating misaligned intelligences constantly, such as every time a robber is caught and imprisoned
1 reply · 0 reposts · 0 likes · 45 views
Theo @theojaffee
The typical policy scenario proposed by AI doomers will on net increase x-risk. If you ban all AGI development (or, as some have proposed, even 2024-level open-source models) outside of a single, highly secured, perhaps state-run Manhattan Project with no contact with the outside world, you are much more likely to end up with a misaligned superintelligent singleton than if you develop AI in a broad and multipolar way, in which case misaligned AGIs can be counteracted and defeated by aligned AGIs
21 replies · 4 reposts · 45 likes · 5.6K views
A Human Future ⏹️⏸️ reposted
Ricardo @Ric_RTP
The CEO of Google DeepMind just admitted that if the decision had been his, we would've cured cancer before anyone ever used ChatGPT. And that's not even the scariest thing he said in a recent interview.

Demis Hassabis is one of the most important people alive in AI. He won the Nobel Prize last year for AlphaFold, the system that cracked the 50-year protein folding problem. 3 million scientists now use his tool. Almost every new drug being developed will touch it at some stage.

In a new interview, he was asked about the moment ChatGPT launched and Google went into "code red." His answer was one of the most revealing things any AI leader has ever said on the record:

"If I'd had my way, I would have left AI in the lab for longer. Done more things like AlphaFold. Maybe cured cancer or something like that."

Read that again. The man running Google's entire AI division is publicly saying the commercial AI race we're all living through was a MISTAKE. That the industry got hijacked by a chatbot when it could have been solving the biggest problems in science and medicine.

His vision was simple: build AI slowly, carefully, like CERN. Use it to crack root-node problems one at a time. Cancer. Energy. New materials. Let humanity benefit from real breakthroughs while the foundational science was figured out over a decade or two.

Then ChatGPT dropped in November 2022 and everything changed. Demis described what happened next as getting locked into a "ferocious commercial pressure race" that none of the labs can escape from. On top of that, the US vs China dynamic added geopolitical pressure. The result is everyone sprinting toward products instead of breakthroughs, shipping chatbots while the scientific opportunity gets buried under marketing cycles and quarterly earnings.

But he's not saying progress isn't happening... He's saying the progress got redirected away from the things that actually matter most.

And then it got even scarier: when Demis was asked what he worries about with AI, he laid out two threats. The first is what everyone talks about: bad actors using AI for harm. Terrorist groups. Hostile nation states. Cyberattacks at scale. But that's not the threat he's most worried about.

His second worry is AI itself going rogue. Not today's models. The models coming in the next two to four years as the industry enters what he calls "the agentic era." Systems that can complete entire tasks autonomously. Systems that are increasingly capable and increasingly hard to control.

His exact words: "How do we make sure the guardrails are put in place so they do exactly what they've been told to do, and there's no way of them circumventing that or accidentally breaching those guardrails? That's going to be an incredibly hard technical challenge if you think about how powerful and smart and capable these systems eventually get."

A Nobel Prize winner who runs one of the 3 most advanced AI labs on Earth just said publicly that within two to four years, we're entering a phase where AI alignment becomes a real problem, and the technical challenge of solving it is enormous. And almost nobody is paying enough attention.

He called for international cooperation between labs, AI safety institutes, and academia to tackle the problem. He said this is the thing even the experts aren't thinking about enough. He said the only way to get through the AGI moment safely is if everyone starts treating this with the seriousness it deserves.

Most AI CEOs give you careful PR answers about "responsible development" and move on. Demis said something different... He said the commercial race FORCED us into a premature deployment of a technology we barely understand, and the window to get alignment right before the next generation of agents shows up is two to four years.

If the man who built the system that might cure cancer is telling you he wishes it had happened first, maybe we should listen to what he says is coming next.
293 replies · 1K reposts · 5.3K likes · 922.2K views