Séb Krier@sebkrier
The Moltbook stuff is still mostly a nothingburger if you've been following things like the infinite backrooms, the extended Janus universe, Stanford's Smallville, Large Population Models, DeepMind's Concordia, SAGE's AI Village, and many more. Of course, the models get better over time, so the interactions get richer, the tool calls get more sophisticated, and so on.
I'll concede that it's at least making multi-agent dynamics a bit easier to understand for people who are blessed with not spending their days interacting with models and monitoring arXiv. The risk side is easy to grok - it always is! Humans are very good at freaking out. And whilst I like poking fun at the prophets of doom and the anxiety/neuroticism-fuelled parts of the AI ecosystem, it's plainly true that safety is important.
So it's a good time to remind people of the Distributional AGI Safety paper (arxiv.org/abs/2512.16856) and the Multi-Agent Risks from Advanced AI paper (arxiv.org/abs/2502.14143). There's a lot to research here still. As usual, this will benefit from people with deep knowledge in all sorts of domains like economics, game theory, psychology, cybersecurity, mechanism design, and many more. Maybe this is the year we will get better protocols to incentivize coordination and collaboration without the downsides, mechanism design and reputation systems to discourage malicious actors, and walled gardens and proof of humanity to better filter slop.
And risks aside - I think there's so much to be researched to help enable positive-sum flywheels: using agents to solve coordination problems, OSINT agent platforms to hold power accountable, decentralised anonymised dataset creation for social good, aggregating dispersed knowledge without the usual pathologies (Community Notes for everything!), simulations of social and political dynamics, multi-agent systems that stress-test policy proposals, contracts, or governance mechanisms by simulating diverse strategic actors trying to game them, etc. It's time to build!
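The stress-testing idea above can be made concrete even in miniature: simulate strategic actors against a proposed mechanism and see whether it holds up. The sketch below is a hypothetical illustration (not from this thread): a naive reputation rule scored by fraction of upvotes, plus simulated sybil agents that game it. All names, rules, and parameters here are made up for the example.

```python
# Toy stress-test of a governance mechanism via simulated strategic actors.
# Hypothetical example: a naive "fraction of upvotes" reputation rule,
# attacked by sybil accounts that always upvote the attacker's content.
import random

def naive_reputation(upvotes: int, downvotes: int) -> float:
    # Naive rule: score = upvotes / total votes. No vote weighting,
    # no identity checks, so sockpuppet upvotes count at full value.
    total = upvotes + downvotes
    return upvotes / total if total else 0.0

def simulate(n_honest: int = 50, n_sybils: int = 50, seed: int = 0):
    rng = random.Random(seed)
    # Honest raters upvote a low-quality post only 10% of the time;
    # sybil agents upvote it unconditionally.
    honest_up = sum(rng.random() < 0.1 for _ in range(n_honest))
    honest_down = n_honest - honest_up
    baseline = naive_reputation(honest_up, honest_down)          # no attack
    gamed = naive_reputation(honest_up + n_sybils, honest_down)  # sybil attack
    return baseline, gamed

baseline, gamed = simulate()
print(f"score without sybils: {baseline:.2f}, with sybils: {gamed:.2f}")
```

Even this toy version shows the pattern the post points at: specify the mechanism, specify the adversary's strategy space, and check whether the outcome survives. A real stress-test would swap the scripted sybils for LLM agents searching over strategies.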