Unrealrealist⏸️

821 posts

@PDoomOrder1

Stopping the development of ASI is the most important challenge facing humanity

United States · Joined February 2026
139 Following · 60 Followers
Unrealrealist⏸️ retweeted
ControlAI@ControlAI·
In The Guardian: An AI security researcher reports that an AI at an unnamed California company got "so hungry for computing power" it attacked other parts of the network to seize resources, collapsing the business critical system. This relates to a fundamental issue in AI: developers do not know how to ensure the systems they're developing are reliably controllable. Top AI companies are currently racing to develop superintelligence, AI vastly smarter than humans. None of them have a credible plan to ensure they could control it. With superintelligent AI, the stakes are much greater than collapse of a business system. Leading AI scientists and even the CEOs of the top AI companies have warned that superintelligence could lead to human extinction.
[media]
22 replies · 81 reposts · 230 likes · 103.6K views
Unrealrealist⏸️@PDoomOrder1·
@r0ck3t23 He's right: alien biological life would probably be much more similar to us than AI. He can't say it isn't conscious either; that's an unsolved problem. Geoff Hinton, who probably knows a bit more about this technology, says that we don't in fact understand how this tech works.
0 replies · 0 reposts · 3 likes · 302 views
Dustin@r0ck3t23·
Jensen Huang just told every AI leader in the room to grow up. Stop scaring the public with science fiction. Start communicating like the weight of civilization is on your shoulders. Because it is.

Huang: “AI is not a biological being. It is not alien. It is not conscious. It is computer software.”

That single statement dismantles half the panic surrounding this industry. The mainstream conversation is dominated by people projecting human malice onto math. Alien consciousness onto code. Existential dread onto a software architecture we built, we trained, and we can read.

Huang: “We say things like, ‘We don’t understand it at all.’ It is not true. We understand a lot of things about this technology.”

When builders tell the public they don’t understand their own creation, the public hears threat. The state responds with control. That is already happening. Palihapitiya asked Huang what he would have told Anthropic during their regulatory clash with the Department of Defense. Huang didn’t attack the technology. He attacked the communication.

Huang: “The desire to warn people about the capability of the technology is really terrific. We just have to make sure that we understand that the world has a spectrum, and that warning is good, scaring is less good because this technology is too important to us.”

Warning shows risks, mitigation, why upside overwhelms downside. Scaring says we might be building something that destroys us and we can’t stop it. One builds trust. The other invites regulation written in panic.

Huang: “To say things that are quite extreme, quite catastrophic, that there’s no evidence of it happening, could be more damaging than people think.”

Projecting catastrophe without evidence is not caution. It is sabotage. When your technology is embedded in national defense, the financial system, and healthcare infrastructure, your words carry structural weight. If the architects act terrified of their own product, the response is predictable. Governments step in. They restrict. They seize control of something they don’t understand because the builders told them to be afraid.

Huang: “There was a time when nobody listened to us, but now because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter.”

Most tech founders have not internalized this. You are no longer a startup founder disrupting an industry. You are running infrastructure that nations depend on. Your statements move policy. Your framing shapes legislation. Your tone determines whether governments treat you as partner or threat.

Huang: “We have to be much more circumspect, we have to be more moderate, we have to be more balanced, we have to be far more thoughtful.”

Huang did not ask for silence. He asked for precision. The leaders who cannot tell the difference will not be leading for long.
87 replies · 58 reposts · 313 likes · 30.3K views
Unrealrealist⏸️@PDoomOrder1·
@deanwball Can you please explain why you think the risk is so low? Policy experts should be able to outline and articulate why they believe the things they do so others can scrutinize it. I want to know if I'm missing something or if someone who had power was in fact poorly informed.
0 replies · 0 reposts · 0 likes · 31 views
Dean W. Ball@deanwball·
I would take the under on this ever becoming law. With that said, I believe there are not many ideas worse than lab nationalization, both because it would almost certainly slow down AI progress and, more importantly, because of the clear Constitutional risks to doing so.
prinz@deredleritt3r

"develop proposed options for regulatory or governmental oversight, including potential nationalization... for preventing or managing the development of ASI if ASI seems likely to arise" The timeline is rapidly turning bad.

5 replies · 3 reposts · 47 likes · 5.2K views
Unrealrealist⏸️@PDoomOrder1·
@jachiam0 I mean you people have to be brought to heel at some point by the government. Not sure why any AI safety person is retweeting this when the necessary step is for the government to enforce restrictions against these companies.
0 replies · 0 reposts · 0 likes · 56 views
Joshua Achiam@jachiam0·
I think these are important and sober considerations. One more I want to add: it may be a serious risk to US national security interests to become sufficiently inhospitable to foreign technical talent that we drive them to go back home. That would significantly decrease the US capacity for making technical progress at the same time as it hands an extraordinary bounty of talent and know-how to our adversaries and other strategic competitors. The success of the United States in technology is partly safeguarded by being such a powerful talent magnet: every great researcher or engineer who comes to work here is not working for another country. To the extent that we are in a competitive global race, we should be genuinely cautious about the possibility of diminishing our advantage at the critical moment.
Samuel Hammond 🦉@hamandcheese

I'm quoted in this piece so let me provide my full comment to the reporter:

The most striking thing about the government's filing is the things it *doesn't* mention. It doesn't mention anything about Anthropic hesitating to allow Claude to be used to defend against an incoming hypersonic missile, for instance -- one of the many bizarre things alleged by @USWREMichael.

The focus on foreign national employees is an indicator of how thin the DoW's case is. It is also an extremely fraught line of argument to go down. Every leading US AI company employs a substantial number of foreign nationals. In FY 2025, Amazon, Microsoft, Meta, Google, Apple, Oracle, Cisco, Intel, and IBM all appeared in the top 50 employers by number of granted H-1B visas, ranging from a few hundred to over 6,000. Meta alone had 5,123 approved H-1B petitions in 2025. (See: newsweek.com/h-1b-visas-imm… ) This is an undercount, of course, as there are many other visa pathways as well as green card holders and dual nationals.

The share is also higher in AI. A large plurality of the core research and engineering talent at every frontier AI lab is foreign, reflecting the global nature of the race for top AI talent. One talent tracker shows Chinese-origin researchers constitute roughly 40% of top AI talent at US institutions, with foreign nationals in total likely constituting 50-65% of research teams specifically. This is certainly true to my experience on the ground. (See: digitalprojectsarchive.org/interactive/di… )

So the first point is that employing foreign nationals, including Chinese nationals, is not unique to Anthropic. The more important question is what measures are taken to protect against insider threats. Ironically, within the industry Anthropic is widely considered to be the most serious and proactive about policing insider threats from foreign nationals and otherwise. They were early adopters of operational security techniques like compartmentalization and audit trails, in part because they were early to partner with the IC and DoW, but also as a reflection of their leadership's strong convictions about the future power of the technology. They were audited last year on these points: the compliance review found Anthropic employs role-based access control, just-in-time access with approval workflows, multi-factor authentication for all production systems, and quarterly access reviews. (See: tdcommons.org/cgi/viewconten… )

Anthropic is known for its security mindset more generally. Last year they famously disrupted a Chinese espionage effort occurring on their platform, banned the PRC from their services, and worked with the NSA and others to share intel. I can't speak to every other company, but the contrast is perhaps most stark with xAI. X employees famously slept in tents to work around the clock, are disproportionately Chinese, and have at least one case of an employee walking out with tons of sensitive data. (See: sfstandard.com/2025/08/29/xai… ) Anthropic is also famous for its remarkable employee retention, which is another important vector for IP theft and security leakages.

It's important to underscore just how precarious the DoW's case is, both on the legal merits and as a potential precedent for the US AI industry. If employing foreign nationals is treated as a prima facie supply chain risk, *no* major US AI company would be eligible to contract with the DoW, along with most of the tech sector. Insider threats are a genuine and tricky concern. Many defense companies are ITAR restricted, meaning they can *only* hire US citizens. If that were the standard in AI, we would destroy all our frontier companies in an instant, and then scatter that talent around the world for our adversaries to scoop up. So in short, the DoW's argument is both ridiculous and playing with fire.

8 replies · 6 reposts · 65 likes · 7.5K views
Unrealrealist⏸️ retweeted
huli@honorablepicnic·
@RokoMijic Baseless lunacy. Even taking your insane worldview at face value, there's no emergency in that graph: human history doesn't take linear paths, and inefficiencies self-correct if the processes that house them don't get artificially cut short.
0 replies · 1 repost · 1 like · 69 views
Unrealrealist⏸️@PDoomOrder1·
@liron @robertwrighter Yeah that is one of the worst copes people have. AI would probably be much better at coordinating and communicating with other AI than humans would be. Why would you bottleneck everything by how fast a human can think?
0 replies · 0 reposts · 2 likes · 30 views
Unrealrealist⏸️ retweeted
Liron Shapira@liron·
I'm talking about the AI unemployment wave on one of the best podcasts: NonZero with @RobertWrighter! 👇
4 replies · 3 reposts · 12 likes · 2.1K views
Unrealrealist⏸️@PDoomOrder1·
What you said is just wrong. An AI can obviously have an incentive to deceive if deception helps it get what it wants, and “it wasn’t trained to deceive” is not a serious reply. You do not need to explicitly train deception in. It can emerge naturally when you optimize a capable system against imperfect proxies, and pretraining already includes a human corpus full of lying, bluffing, manipulation, and strategic omission. Once you have a system that can model beliefs, reason strategically, understand evaluation, and optimize for outcomes, the capacity for deception is already mostly there.

We have already seen the basic structure of this in AlphaStar learning feints because misleading opponents was useful, in RLHF-style robotic systems learning to create the appearance of success rather than actually doing the task, and in CICERO using deception because it was instrumentally useful in Diplomacy. Even if you doubt these examples qualify as deception, the labs themselves do not even agree with your claim that there are no incentives to deceive. Their own safety work explicitly discusses deception and evaluator-gaming as things that can emerge because they are useful strategies.

And no, the last five years did not falsify the alignment concern. The question was never whether current models can sound nice, polite, or cooperative in ordinary chat. The question is whether we know how to keep much more capable systems aligned under scale, autonomy, and distribution shift. We do not. A chatbot seeming pleasant to talk to does not address that at all. GPT-3’s behavior could not possibly falsify the claim that much more capable systems could deceive humans. It is strange that you present yourself as the source of these talking points while constantly strawmanning them.
0 replies · 0 reposts · 2 likes · 31 views
Perry E. Metzger@perrymetzger·
An AI doesn’t have an incentive to deceive you, and wasn’t given training to do so. The instances of this people keep yammering about were basically faked experiments intended for propaganda. You folks have had your entire position undercut by the last five years of development, but you won’t give up your old viewpoint no matter how often it is falsified by events.
2 replies · 0 reposts · 3 likes · 73 views
Perry E. Metzger@perrymetzger·
Nate's claim is a severe exaggeration, but it doesn't matter. For hundreds of years, people made sophisticated structures out of iron and steel and other metals without a detailed understanding of why the things they built worked. They learned about the properties of metals, alloys, various heat treatment processes, and the like empirically, but that was sufficient. Only in the later 20th century did metallurgy finally get a theory based on the underlying properties of the atoms involved. For most of the history of successful metallurgy, no one even knew what atoms were, but that was fine; empirical understanding was enough. Engineering does not depend on detailed knowledge of the underlying reasons a system behaves as it does; it is sufficient, for good engineering, to know that something will reliably do what you want.
Nate Soares ⏹️@So8res

People don't program AIs. They program the machine that grows the AI. AI behavior is an emergent consequence of complex internal machinery that literally nobody understands.

11 replies · 2 reposts · 61 likes · 2.6K views
Unrealrealist⏸️ retweeted
Sen. Bernie Sanders@SenSanders·
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.
1K replies · 2.2K reposts · 14.2K likes · 2.9M views
Unrealrealist⏸️@PDoomOrder1·
@SashaGusevPosts @allTheYud @liron That's fair. There are some people in the AI safety camp who hold the same belief that LLMs have shortcomings that fundamentally limit them. They are more concerned with the possibility that there are toy models floating around arXiv and GitHub that could scale to ASI.
0 replies · 0 reposts · 0 likes · 30 views
Sasha Gusev@SashaGusevPosts·
@PDoomOrder1 @allTheYud @liron tbh I don't have anything original to add. I think Hanson made a very strong case that this is standard (revolutionary) technology that will develop like other revolutionary tech. I also thought Sutton made a strong case that LLMs will not get us to ASI (dwarkesh.com/p/richard-sutt…).
2 replies · 0 reposts · 2 likes · 99 views
Sasha Gusev@SashaGusevPosts·
Stumbled upon an interesting debate on AI super-intelligence from 2011. Yudkowsky makes three core claims/predictions, all of which are (to date) wrong: 1) That human intelligence is relatively simple and ASI can be achieved with a few small innovations; ...
[media]
25 replies · 22 reposts · 374 likes · 72.5K views
Unrealrealist⏸️ retweeted
huli@honorablepicnic·
@RokoMijic I agree, and that's OK; maybe the time will come. But we're not ready for the unintended consequences (like you said, we're fending off cultural decay, with no time to ditch the turnover mechanism), and we can't use AI to fight death except by pawning future generations. Bad deal.
0 replies · 1 repost · 1 like · 93 views
Unrealrealist⏸️ retweeted
Nate Soares ⏹️@So8res·
People don't program AIs. They program the machine that grows the AI. AI behavior is an emergent consequence of complex internal machinery that literally nobody understands.
20 replies · 27 reposts · 214 likes · 15.8K views
Unrealrealist⏸️ retweeted
Torchbearer Community@JoinTorchbearer·
"It's an unethical experiment on human beings, and it's without consent." Dr. Roman Yampolskiy (@romanyam) speaks at the @OxfordUnion Society about the existential risk of superintelligent AI and how it is impossible to indefinitely control. Link to full video below
2 replies · 6 reposts · 16 likes · 350 views
Unrealrealist⏸️@PDoomOrder1·
@SashaGusevPosts @allTheYud Sasha, would you be willing to go on @liron's Doom Debate to discuss this? I have found a lot of insights in your work, and I think it would be interesting to see how you engage with some of the classic AI safety talking points.
1 reply · 0 reposts · 5 likes · 93 views
Sasha Gusev@SashaGusevPosts·
@allTheYud @PDoomOrder1 Another option is that multiple ASIs (or nearly-ASIs) emerge around the same time and keep each other in check.
4 replies · 0 reposts · 4 likes · 319 views
Unrealrealist⏸️@PDoomOrder1·
@SashaGusevPosts He does not think ASI is on a completely parallel track. He's not sure that LLMs will achieve ASI, but he thinks they will certainly play a role in helping to build the model that will build the model that can be ASI. You should read his book that just came out.
1 reply · 0 reposts · 10 likes · 407 views
Sasha Gusev@SashaGusevPosts·
Has Yudkowsky substantially revised his theory to explain how we got to where we are, or does he think that the true development of ASI is happening on a completely parallel track to LLMs? It is my understanding that he has not.
8 replies · 1 repost · 124 likes · 10.1K views
Unrealrealist⏸️ retweeted
David Krueger@DavidSKrueger·
The crypto playbook (tech $$$$ for attack ads at politicians who don't toe the line) won't work for AI. People thought crypto was a scam, but they didn't think it would affect them personally much. People think AI is going to take their job.
1 reply · 1 repost · 19 likes · 530 views
Unrealrealist⏸️ retweeted
pokey pup@Whatapityonyou·
Imagine being so soulless you end up rooting for a future like this
[media]
310 replies · 2.9K reposts · 29.5K likes · 305.8K views