MIT's Nobel Prize-winning economist just published a model with one of the most alarming conclusions in the AI literature so far.
If AI becomes accurate enough, it can destroy human civilization's ability to generate new knowledge entirely.
Not gradually degrade it. Collapse it.
The paper is called "AI, Human Cognition and Knowledge Collapse."
Authors: Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar. MIT. Published February 20, 2026.
Acemoglu won the Nobel Prize in Economics in 2024. He is not a doomer blogger. He is the most cited economist of his generation, and his models tend to be taken seriously by the people who set policy.
Here is the argument in plain terms.
Human knowledge is not just a collection of facts stored in individuals. It is a living system that requires continuous reproduction. People learn things. They apply them. They teach others. They build on prior work to generate new work. The entire engine of science, medicine, technology, and innovation runs on this cycle of active human cognition.
What happens when AI provides personalized, accurate answers to every question people would otherwise have to learn themselves?
Individually, each person is better off. They get correct answers faster. They make fewer errors. Their immediate outcomes improve.
But they stop doing the cognitive work that sustains the collective knowledge base.
Acemoglu's model shows this produces a non-monotone welfare curve.
Modest AI accuracy: net positive. AI helps at the margin, humans still do enough learning to sustain collective knowledge, everyone gains.
High AI accuracy: net catastrophic. AI is accurate enough that learning yourself feels unnecessary. Human learning effort collapses. The knowledge base that AI was trained on is no longer being refreshed or extended. Innovation stalls. Then stops.
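To make that curve concrete, here is a deliberately crude toy in Python. Everything in it, the threshold value, the functional forms, the weight on the knowledge base, is my own illustrative assumption, not the paper's model:

```python
# Toy non-monotone welfare curve. All functional forms and numbers are
# illustrative assumptions for intuition, not the paper's equations.

def welfare(accuracy, threshold=0.75):
    """Private gains rise smoothly with AI accuracy, but the social
    value of a self-sustaining knowledge base is lost all at once if
    accuracy crosses the tipping threshold (hypothetical value)."""
    private = accuracy  # better answers, fewer errors
    knowledge_base = 2.0 if accuracy < threshold else 0.0
    return private + knowledge_base

for a in (0.0, 0.25, 0.5, 0.74, 0.76, 1.0):
    print(f"accuracy {a:.2f} -> welfare {welfare(a):.2f}")
# Welfare rises from 2.00 to 2.74, then crashes to 0.76:
# helpful until it is not.
```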
The model proves the existence of two stable steady states.
A high-knowledge steady state where human learning and AI assistance coexist productively.
A knowledge-collapse steady state where collective human knowledge has effectively vanished: individuals still receive good personalized AI recommendations, but the shared intellectual infrastructure that enables new discoveries is gone.
And the transition between them is not gradual.
It is a threshold effect. Below a certain level of AI accuracy, society stays in the high-knowledge equilibrium. Above that threshold, the system tips. And once it tips, the collapse is self-reinforcing.
Because the people who would have learned the things that would have pushed the frontier forward never learned them. And the AI cannot push the frontier on its own. It can only recombine what humans already knew when it was trained.
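One way to see the two steady states and the tipping point is a toy knowledge-stock dynamic. To be clear, this is my own back-of-the-envelope sketch with made-up parameters and functional forms, not the authors' model:

```python
# Toy bistable knowledge dynamic (illustrative assumptions throughout).
# The collective knowledge stock k depreciates each period and is
# replenished by human learning, whose intensity falls as AI accuracy
# rises. New knowledge also needs an existing base to build on.

DELTA = 0.2   # depreciation of the knowledge stock per period
H = 0.3       # half-saturation: discovery needs an existing base
BETA0 = 0.5   # baseline learning investment when AI accuracy is zero

def step(k, accuracy):
    """One period of the hypothetical dynamic."""
    learning = BETA0 * (1 - accuracy)  # effort crowded out by accuracy
    return (1 - DELTA) * k + learning * k**2 / (k**2 + H**2)

def long_run(k0, accuracy, periods=2000):
    k = k0
    for _ in range(periods):
        k = step(k, accuracy)
    return k

# Same healthy starting stock every time; only AI accuracy varies.
# With these made-up numbers the tipping point sits near 0.76: below
# it, knowledge settles at a positive steady state; above it, the
# positive steady states vanish and the stock decays toward zero.
for a in (0.5, 0.7, 0.75, 0.78, 0.9):
    print(f"AI accuracy {a:.2f} -> long-run knowledge {long_run(1.0, a):.3f}")
```

Note what the sketch does not show: nothing in any individual period looks alarming at the threshold. Only the long-run stock changes, which is the "no visible warning sign" point below.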
The dark irony at the center of the model:
The AI does not fail. It keeps giving accurate, personalized, useful answers right through the collapse.
From the individual's perspective, nothing looks wrong. You ask a question, you get a correct answer.
But the collective capacity to ask questions nobody has asked before, to build the frameworks that generate new knowledge rather than retrieve existing knowledge, that capacity is quietly disappearing.
Acemoglu has been the most prominent mainstream economist skeptical of transformative AI productivity claims. His prior work found that AI's actual measured productivity gains were much smaller than the technology industry projected.
This paper is a different kind of warning. Not that AI will fail to deliver promised gains.
But that if it succeeds too completely, it will undermine the human cognitive infrastructure that makes long-run progress possible at all.
The welfare effect is non-monotone.
That is the sentence worth sitting with.
Helpful until it is not. Beneficial until it crosses a threshold. And past that threshold, the same accuracy that made it so useful is precisely what makes it devastating.
Every student who uses AI instead of working through a problem is a data point.
Every researcher who uses AI instead of developing intuition is a data point.
Every generation that grows up with accurate AI answers and no incentive to develop deep domain knowledge is a data point.
Individually rational. Collectively catastrophic.
Acemoglu and his co-authors have shown that this is not just a cultural concern or a vague anxiety about screen time.
It is a mathematically coherent equilibrium that a sufficiently accurate AI system will push society toward.
And there is no visible warning sign before the threshold is crossed.