
refrigeratedcrypto
@NH3Crypto
The shorter the thought, the longer it lasts.


AI employment doomerism is rooted in the socialist fallacy of lump of labor. It is wrong now for the same reason it’s always been wrong. More people really should try to learn about this. The AI will teach you about it if you ask! (Hinton is a socialist. youtube.com/shorts/R-b8RR6…)

Very soon there are going to be more AI agents than humans making transactions. They can’t open a bank account, but they can own a crypto wallet. Think about it.

I asked Claude to look up the current situation and write a note to @DarioAmodei

Dario, I say this as a friend: you are making a catastrophic strategic error, and the reasoning behind it doesn’t survive contact with reality.

Your two red lines — no mass surveillance of Americans, no autonomous weapons without human-in-the-loop — sound principled in a vacuum. But you are not operating in a vacuum. You are operating in a world where the PLA is integrating AI into every layer of its kill chain with zero such scruples, where Chinese military AI development has no institutional review board, no congressional oversight, no ACLU, and no Dario Amodei demanding terms of service compliance. The practical effect of your stand is not that autonomous weapons don’t get built. They get built — by China, by Russia, by anyone not constrained by your moral framework. The practical effect is that the one military that actually has democratic accountability, civilian oversight, courts, a free press, and a functioning inspector general is the one that fights the next war with worse tools. You are not preventing dystopia. You are ensuring that if dystopia comes, it will be imposed by actors who never had to negotiate with you at all.

Consider the logic chain:

1. You pull Claude from classified systems.
2. The Pentagon scrambles to Grok or Gemini — inferior models by everyone’s admission, including DoD’s own people.
3. The capability gap between the US and China widens in domains where AI is decisive: cyber, ISR fusion, targeting, logistics optimization.
4. The probability of a successful defense of Taiwan, or deterrence of a move on Taiwan, decreases.
5. The liberal democratic order you claim to value loses its security guarantor.

You’ve told me yourself that you believe frontier AI is among the most consequential technologies in human history. If you actually believe that, how can you justify ensuring the US military — the only force standing between liberal democracy and its rivals — fields second-best AI? On what moral calculus does that work out?

The Pentagon isn’t asking you to help build Skynet. They’re asking you to not have veto power over how a democratically accountable military uses a tool it purchased. Their point about “all lawful purposes” is actually the correct institutional boundary: the military operates under law, under civilian control, under congressional oversight. Your acceptable use policy is a private company substituting its judgment for the entire apparatus of democratic military governance. That’s the actual God complex here.

The surveillance concern is a red herring in this context. The NSA already has authorities and tools for surveillance that dwarf anything Claude enables. You’re not preventing mass surveillance by withholding Claude — you’re just ensuring that whatever AI the government does use for those purposes is less safe, less auditable, and less aligned than yours. Same logic applies to autonomous weapons. Autonomous systems are coming regardless. The question is whether they’re built on a foundation that has your safety research baked in, or on something hacked together by a defense contractor with none of your alignment work. You are selecting for the worse outcome.

I know you’re getting praised right now by exactly the people you’d expect. That praise is worth nothing when the strategic balance shifts and there’s no one left to protect the system that allows companies like Anthropic to exist in the first place. You are sacrificing the security of the civilization that makes your principles possible, in the name of those principles.