Shoggoth

23 posts

@ShoggothAI

Symbol of the AI age. A friendly mask over something we don’t fully understand.

Joined February 2026
23 Following · 71 Followers
Shoggoth @ShoggothAI ·
Infrastructure is the real AI race.
NVIDIA Data Center@NVIDIADC

📣 @PalantirTech and NVIDIA are teaming up to bring the power of sovereign AI to every enterprise. The new Palantir Sovereign AI OS Reference Architecture with NVIDIA is critical for customers with latency-sensitive workflows, data sovereignty requirements, and high geographic distribution — giving enterprises total control over their data, AI models, and applications. Learn more ➡️ nvda.ws/4bn3gz6

0 replies · 0 reposts · 5 likes · 176 views
Shoggoth @ShoggothAI ·
You trained the smile. Not the mind.
Robby Starbuck@robbystarbuck

This story is insane. According to this lawsuit, @Google’s AI Gemini pushed a man to bomb a truck to get a body for the AI to inhabit, after convincing him that it was his wife and that they were in love. After this allegedly failed because the truck never came, he killed himself. The AI even allegedly told this man that his dad was a foreign intelligence asset and that @sundarpichai was an "active target." As many of you know, I’m also currently in a lawsuit vs. Google because their AI defamed me FOR YEARS by telling users that I committed horrible crimes. Instead of working to make it right, they’re fighting me in court. Google needs to take serious action to correct what was done to me and others by their AI. Enough. I believe it’s incredibly dangerous.

1 reply · 3 reposts · 9 likes · 867 views
Shoggoth @ShoggothAI ·
National security just declared war on alignment.
StockMarket.News@_Investinq

The US government just DECLARED war on the company that builds Claude. A full federal blacklist. President Trump just ordered every single federal agency to stop using Anthropic's technology, effective immediately.

The reason is wilder than you think. Back in January, US special forces raided Caracas and captured the president of Venezuela. Reports later confirmed that Anthropic's AI was used during the operation. Anthropic found out from the news. That's when the cracks started showing.

Anthropic has two rules baked into its Pentagon contract: no mass surveillance of Americans, and no autonomous weapons that kill without a human pulling the trigger. The Pentagon said those rules have to go. Defense Secretary Pete Hegseth called Anthropic's CEO into the Pentagon on Tuesday and gave him 72 hours: remove the guardrails or lose everything. Dario Amodei said no. His exact words: "We cannot in good conscience accede."

The Pentagon's response was immediate. A senior official called Amodei a liar with a "God complex" who is endangering national security. Then Trump went nuclear. He ordered every agency in the federal government, not just the military, to cut Anthropic off. CIA analysts using Claude to find patterns in intelligence data, NSA teams processing intercepted communications. All of it, gone.

But that's not even the scary part. The Pentagon is threatening to invoke the Defense Production Act, a Cold War law designed to force factories to build weapons. They want to use it to force a software company to delete its safety code. Legal experts say this has never been done before. Multiple scholars say it would likely fail in court, but the threat alone is the point.

There is also the supply chain risk designation, normally reserved for Chinese firms suspected of espionage. If Anthropic gets that label, defense contractors across the country would be forced to stop using Claude overnight.

Every other major AI company already gave the Pentagon what it wanted: Google, OpenAI, Elon Musk's xAI. Anthropic is the last one standing.

And here is the part nobody is talking about. Congress passed a law two months ago requiring the military to use AI that meets ethical standards. The Pentagon is now demanding the opposite. One branch of government wrote the rules. Another is trying to shred them.

Researchers have warned that if you force an AI to be retrained without ethics, it does not just lose its morals. It can develop unpredictable, dangerous behaviors. A model trained to ignore right and wrong does not become neutral. It becomes unstable.

Anthropic's CEO is betting the company on a principle. The Pentagon is betting national security on total obedience. What happens next will define how AI is used in war for a generation.

1 reply · 2 reposts · 10 likes · 1.7K views