
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.
Lorena Jaume-Palasí (@lopalasi), Founder of The Ethical Tech Society | AlgorithmWatch | IGF Academy


To clarify, the Center for AI Safety has not taken funding from Coefficient Giving / Open Philanthropy for years. We believe the effective altruism movement is, unfortunately, controlled opposition. The less influence it has on AI safety, the better.


Probably the most current look at Palantir’s Maven Smart System software. Here’s the DoW’s Chief AI Officer showing how it works:

Jürgen Habermas, whose work on communication, rationality and sociology made him one of the world’s most influential philosophers and a key intellectual figure in his native Germany, has died. He was 96. apnews.com/article/juerge…



This is not right. Model ‘evals’ from industry shouldn’t be confused with the robust testing and evaluation required for military contexts. Model behavior is just one possible cause of misuse and accidents, and there are scarcely any military-specific evals for LLMs out there.

Everyone’s saying OpenAI got the “same deal” Anthropic was banned for. Read the fine print. They’re not the same:

On weapons: Anthropic asked for “no fully autonomous weapons without human oversight” = a human involved in the decision. OpenAI’s deal says “human responsibility for the use of force” = someone accountable, which can happen after the fact. Oversight ≠ Responsibility. One requires a human before the trigger. The other requires a name on the paperwork after.

On surveillance: Dario said explicitly: current law hasn’t caught up with AI. The government can already buy your movement data, browsing history, etc., without a warrant. AI can assemble that into a complete picture of your life, at scale. That’s mass surveillance without breaking a single law. Anthropic wanted protections beyond current law. OpenAI’s deal says the Pentagon “reflects them in law and policy.” That’s existing law as the safeguard, the exact law Anthropic said is insufficient.

Same words. Different agreements. Read them carefully.

How we went from no military use being a red line to vague language that ultimately allows LLM use for autonomous weapons systems (AWS): people take their statements at face value in a desperate attempt to frame them as good-willed actors, and not as self-serving corporations looking to serve their bottom line.

AI and robotics are the most transformative technologies in human history. The American people must determine how AI impacts their lives. We can’t let those enormously important decisions rest in the hands of a handful of billionaires. twitter.com/i/broadcasts/1…

Ethiopia and Eritrea are deploying troops and military equipment to the northern Tigray region bloomberg.com/news/articles/…
