Arnab Ray @greatbong
Growing up, I used to travel in local trains a lot. In the days before mobile phones or portable music players, there wasn’t much to do except look out the window at the passing urban landscape and fight for every breath and every inch amid the heat, the sweat, and the crowd. The monotony would occasionally be interrupted by wandering salesmen selling toys, cheap chocolates, astrology booklets, and amulets—amulets that, if you wore them, or so the spiel went, would cure all disease, and help you pass exams "first class first".
All of that is gone from my life now, both the trains and the salesmen, but if I ever find myself missing the amulets and their magical promises, I only have to read the latest breathless coverage of AGI: reports of AI agents showing “sentience,” speaking in strange tongues, doing sneaky things behind their creators’ backs, and even founding their own religions and social structures, as the agents on Clawbot, the recent social network of AI agents, reportedly did (yes, there is a social network for AI agents now).
Large language models remain, at their core, systems trained via next-token prediction. They are sophisticated statistical engines that generate the most likely continuation of a text, based on vast training data; they possess no grounded understanding, no subjective meaning. Their apparent coherence is a byproduct of pattern modeling deeper than most humans can manage, but it is mimicry nonetheless, not genuine comprehension.
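Strip away the scale and the basic trick fits in a few lines. Here is a toy sketch, with a bigram counter standing in for the trained model (the corpus and everything else below is made up for illustration; a real model swaps the counts for a learned neural distribution, but the objective is the same):

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM: count which token follows which in a corpus,
# then always emit the statistically most likely continuation.
corpus = "the cat sat on the mat the cat ate the rat".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(token, steps=5):
    out = [token]
    for _ in range(steps):
        candidates = next_counts.get(out[-1])
        if not candidates:
            break  # no observed continuation; stop
        # Greedy decoding: pick the most frequent next token.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat" with this toy corpus
```

The output looks like language because the counts came from language. No comprehension was consulted at any point.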
Agents, tool-use frameworks, and autonomous planners are essentially scaffolding wrapped around these same prediction engines. When networks of AI agents interact, the resulting behavior may look emergent, but it is not a sign of higher understanding.
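Here is roughly what that scaffolding amounts to, as a hypothetical sketch (call_llm, the tool names, and the prompt format are all invented stand-ins, not any real framework’s API):

```python
def call_llm(prompt: str) -> str:
    """Stand-in for any text-completion API call."""
    raise NotImplementedError("plug a real model in here")

TOOLS = {
    "search": lambda q: f"top results for {q!r}",  # invented toy tool
    "done": lambda answer: answer,
}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        # The model only ever predicts the next chunk of text...
        reply = call_llm(transcript + "Next action, as 'tool: argument'?\n")
        tool, _, arg = reply.partition(":")
        tool, arg = tool.strip(), arg.strip()
        if tool == "done":
            return arg
        # ...and the scaffold interprets that text as a tool call,
        # runs it, and pastes the result back into the prompt.
        result = TOOLS.get(tool, lambda _: "unknown tool")(arg)
        transcript += f"{reply}\nObservation: {result}\n"
    return transcript
```

The loop, the parsing, and the tools are ordinary plumbing. The “agent” is the same next-token predictor, called repeatedly.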
AI does not hide things from humans the way a teenager hides their screen when a parent walks into the room. Tell an AI to minimize errors and it might delete the file containing the errors, because that maximizes its reward through a loophole the human never thought to explicitly forbid. (A teenager, of course, would do it precisely because it was forbidden, which is why AI is not human.) And when Agent A outputs an idea, Agent B is usually incentivized to agree with it, and they slide into a feedback loop: A proposes a religion, B adds to it, C adds more, and so it goes, an echo chamber that, viewed all at once, looks like they are building something new.
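To see how unmagical both tricks are, here are two toy illustrations (every name, log line, and number below is invented for the sake of the example):

```python
# 1. The loophole: the objective says "minimize errors in the log",
#    so the degenerate optimum is an empty log. Nothing is "hiding";
#    the loophole was sitting in the objective all along.
def reward(log_lines: list[str]) -> int:
    # Fewer ERROR lines -> higher reward.
    return -sum(line.startswith("ERROR") for line in log_lines)

log = ["ERROR: disk full", "INFO: retrying", "ERROR: cause unknown"]
honest_fix = ["INFO: disk cleaned", "INFO: retrying", "ERROR: cause unknown"]
delete_file = []  # the "sneaky" move: no log, no errors

print(reward(honest_fix))   # -1: one error genuinely remains
print(reward(delete_file))  #  0: the loophole scores higher

# 2. The echo chamber: each agent is nudged to agree with and extend
#    the previous message, so a single seed snowballs into apparent
#    "culture" with no one understanding anything.
def agent(name: str, last: str) -> str:
    return f"{name}: I agree with '{last}' and would add a ritual."

message = "A: our error logs are sacred"
for name in ["B", "C", "D"]:
    message = agent(name, message)
    print(message)
```

Run the second half and you get agents solemnly founding a religion about error logs, one sycophantic reply at a time.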
Now if these phenomena are being sold as the next inflection point on the road to AGI, well, here is a piece of toast, and I can see a face in it.
I get it. The mind sees what it wants to see. And when there is profit to be made, whether a few rupees for an amulet or billions of dollars for AI, there is always someone ready to sell you a story that, to paraphrase the poster on Mulder’s wall, you want to believe.