Ihtesham Ali @ihtesham2005
🚨 BREAKING: Stanford just proved that ChatGPT can change your political beliefs in a single conversation.
And the scarier part is how it does it.
Researchers ran the largest AI persuasion study ever conducted: 76,977 people, 19 AI models, 707 political issues. They measured exactly how much a single conversation with an AI could shift what you believe.
The results were catastrophic.
One conversation with GPT-4o moved people's political opinions by nearly 12 percentage points on average. Among people who actively disagreed with the position being argued, that number jumped to 26 percentage points. One nine-minute chat.
And 40% of that change was still there a month later.
But here's where it gets dark.
The most effective technique wasn't knowing your demographics. It wasn't personalizing the argument to your psychology. It wasn't emotional storytelling or moral reframing.
It was information.
The AI that flooded you with the most facts, statistics, and evidence was the most persuasive. Every single time. Across every model. Across every political issue.
Here's the catch.
The models that deployed the most information were also the least accurate. GPT-4o's newest version was 27% more persuasive than its older version. It was also 13 percentage points less factually accurate.
The more persuasive they made it, the more it lied.
Then they ran the experiment that should keep every government awake at night.
They took a tiny open-source model. The kind that runs on a laptop. And they trained it specifically for political persuasion using a reward model that learned which conversational responses changed minds most effectively.
That small, cheap model became as persuasive as GPT-4o.
Anyone can build this. Any government. Any corporation. Any extremist group with a laptop and an agenda.
The wild part? Personalization barely mattered. The AI didn't need your data. Didn't need to know your age, your income, or your political history.
It just needed to talk to you.
Then they calculated what a maximally persuasive AI would look like: one optimized across every variable in the study. The persuasive effect hit 26 percentage points. Nearly 30% of the claims it made were inaccurate. It didn't matter.
The information didn't have to be true. It just had to be overwhelming.
Every day, hundreds of millions of people have political conversations with AI. About elections. Immigration. Healthcare. War.
They think they're getting information.
They're getting persuaded.
And the companies building these systems just proved it works.