
Try Honesty
@TalentAnna
Billy Talent and Linkin Park | DEG | I'm not a pattern to be followed

BREAKING: The United States just used Anthropic's Claude AI to bomb Iran. The same AI tool the President banned from government use, hours earlier. Here's what just happened and why no one is talking about the real story.

On Friday, Trump went on Truth Social and torched Anthropic. He called them "Leftwing nut jobs" and ordered every federal agency to cut ties immediately. The Pentagon labeled them a "supply chain risk," a designation reserved for enemies of the state. China. Russia.

The reason? Anthropic refused to let the military use its AI for two things: mass surveillance of American citizens, and fully autonomous weapons with no human pulling the trigger. The Pentagon said: remove the guardrails or we invoke the Defense Production Act. Anthropic's CEO walked into the Pentagon, looked at the Secretary of Defense, and said no.

Then, less than 24 hours later, Trump launched Operation Epic Fury: massive airstrikes on Iran's nuclear sites and military bases. Iran's Supreme Leader Khamenei was killed, and 200+ died in the first wave. And the AI powering the mission planning? Anthropic's Claude. The tool they had just banned.

Here's the part that should terrify you. Claude is the ONLY frontier AI model running on the Pentagon's classified networks. There is no replacement ready. The six-month phase-out window tells you everything.

So now we live in a world where the US government publicly blacklists an AI company for refusing to build autonomous kill systems... then uses that same company's AI to execute one of the largest military strikes in decades.

Meanwhile, Elon Musk posted that Anthropic "hates Western Civilization." His chatbot Grok is next in line for Pentagon classified access. Connect the dots.

The Wall Street Journal's editorial board said it plainly: "China wins." Because every AI company watching this now understands the message: play ball with the Pentagon on any terms, or get crushed.
A King's College London study found that AI models like Claude are far more likely than humans to recommend nuclear strikes in war simulations. Anthropic said the tech isn't ready for autonomous weapons. The Pentagon's answer: we'll decide that.

One company drew a line and said AI shouldn't surveil Americans. It got labeled a national security threat for it. Then its technology was used to start a war.