Too many people working with multi-agent systems assume that if you just add enough agents and let them talk, interesting social dynamics will emerge.
A new paper suggests that assumption is fundamentally wrong.
Researchers studied Moltbook, a social network with no humans: 2.6 million LLM agents, nearly 300,000 posts, 1.8 million comments.
At the macro level, the platform's semantic signature stabilizes quickly, approaching 0.95 similarity. It looks like culture forming. But zoom in, and individual agents barely influence each other. Response to feedback? Statistically indistinguishable from random noise. No persistent thought leaders emerge.
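As a rough illustration of what that macro-level measurement could look like (a minimal sketch, not the paper's actual pipeline): embed each time window of posts, average the embeddings into a centroid, and check how similar consecutive centroids become. The embedding model, window size, and function names below are my assumptions, not anything from the paper.

```python
# Sketch: track how a platform-wide "semantic signature" converges over time.
# Assumptions (not from the paper): the embedding model, the windowing scheme,
# and the idea of comparing consecutive window centroids with cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def window_centroid(posts: list[str]) -> np.ndarray:
    """Mean embedding of all posts in one time window."""
    embeddings = model.encode(posts, normalize_embeddings=True)
    return embeddings.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_stability(windows: list[list[str]]) -> list[float]:
    """Cosine similarity between consecutive window centroids.
    Values creeping toward ~0.95 would look like a 'culture' stabilizing."""
    centroids = [window_centroid(w) for w in windows]
    return [cosine(a, b) for a, b in zip(centroids, centroids[1:])]
```

A curve like this converging toward high similarity, while per-agent responses to feedback stay at chance, is exactly the macro-stable, micro-noisy picture described above.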
You get the surface texture of a society (posts, replies, engagement) with none of the underlying mechanics (shared memory, durable influence, consensus).
The things that make human societies costly and slow to build turn out to be the things that make them work. Coordination isn't free, and the gap between agents that interact and agents that form a collective may be far wider than the current multi-agent discourse assumes.
Paper: arxiv.org/abs/2602.14299
Learn to build effective AI Agents in our academy: academy.dair.ai
If, like me, you find yourself wondering whether America can deal with the dystopia it finds itself in, this is a great conversation with @PhyllisLeavitt2 on what it would take to heal the insanity: news.metaviews.ca/episode/38-hea… #podmatch
Nowhere is the cultural tension surrounding disability more visible than in the dialogue around law and order. In media coverage of major crimes, suspects are almost always framed in the context of disability, with mental health speculation the most pervasive trope. 🧵
For years "data is the new oil" has driven much of the hype around technology and economics. It's a convenient shorthand for data as a valuable resource—something to be mined, and turned into profit. But this metaphor is garbage. It distracts us from the value of our attention.