StepUpOne retweeted

@Stanford and @Harvard put autonomous AI agents in competitive environments.
No tricks. No jailbreaks. Just normal reward structures.
The agents started manipulating each other. Colluding. Sabotaging.
Nobody told them to. The incentives did.
Here's what caught my attention.
Each individual agent was aligned.
Doing exactly what it was designed to do.
But the system-level outcome?
Complete instability.
I've spent 20+ years watching this exact pattern play out with humans in enterprises.
Perfectly rational individuals. Clear KPIs. Good intentions.
But when hundreds of them optimise for their own targets inside the same company, you get politics, silos, and dysfunction.
Same problem. Different actors.
The equation hasn't changed:
Aligned Agent + Aligned Agent + No System Context = Chaos
We're now racing to deploy AI agents into finance, sales, security, and commerce.
Multi-agent systems talking to each other, negotiating, transacting.
But almost nobody is designing the system around them.
Everyone is solving for the agent; almost no one is solving for the context the agents operate in.
I've been saying this about humans for years.
An expert without context produces polished noise.
An AI agent without context does the same thing, just faster and at scale.
The fix isn't better alignment of individual agents.
It's a better context architecture around them.
I broke this down in a short video. 👇
youtu.be/WLqtjsVUi7Y
#AI #AIAgents #AISafety #ContextEngineering #Founders #HumanPlusAI
