

Lee Douglas Hoffer
@hofferNTPhD
Teaching Fellow @ UChicagoDiv | Husband and Father | Avid Judoka, Powerlifter, Reader of Speculative Fiction / RT =/= endorsement



What we think of as modern civilization is essentially coextensive with mass literacy. People greeting the end of mass literacy with a yawn are assuming that we can keep this machine work going in the absence of the foundations it was built on. Huge civilizational-scale gamble.



“Schools in North Carolina, Virginia, Maryland and Michigan that once bought devices for each student are now re-evaluating heavy classroom technology use”





This is a genuinely interesting story. Huge amount of automated work, potentially saving hundreds of thousands of human hours. BUT: with an error rate of 10%, and with the precise reading of *every* word mattering in such an exercise, it is meaningless, and for manuscripts of authority worse than useless, without a human checking every single word. BUT: there simply aren't people, in 2026, with the expertise, the time, and the funding to check these 32,000 manuscripts at this level. Welcome to Digital Humanities Slop.
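The point about a 10% error rate can be made concrete with back-of-envelope arithmetic. A quick sketch (the 10% figure is from the post; the passage lengths and the independence assumption are mine, purely for illustration):

```python
# Back-of-envelope: with a 10% per-word transcription error rate, how
# likely is a passage to come out completely error-free?
# Assumes errors are independent per word (a simplification).

def p_error_free(words: int, per_word_error: float = 0.10) -> float:
    """Probability that every word in a passage is transcribed correctly."""
    return (1 - per_word_error) ** words

for n in (10, 50, 200):
    print(f"{n:>4} words: {p_error_free(n):.2%} chance of a clean passage")
```

Even a short 50-word passage is almost never clean at that error rate, which is why "a human checking every single word" is the only way to make the output trustworthy.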



People will say, “I want to be a novelist but I don’t like reading books.” Or, “I want AI to generate my novels for me.” Consider that perhaps writing novels isn’t the career for you!



I didn't realize how hilariously bad artificial intelligence was until tonight, when I asked it about something I'm an expert on. I asked it about zoning laws in Pennsylvania and the AI hallucinated case law that doesn't exist. Silicon Valley wants us to rely on this slop? 😬

So like David I think most of the changes 'work' as well. Chani's changes are the most frustrating for me, but maybe not for the same reason as others.

Paul Atreides is a tragic hero. He spends much of Dune trying to avoid a bloody war, and ultimately there isn't a route around it. He must embrace and subvert the plans of the Bene Gesserit witches, become the Fremen messianic figure, and become emperor. And it is tragic. The Atreides men are the only good men in the universe; this is evident in how their captains love them rather than fear them in Dune's barbaric feudal society. And Paul is forced to become a tyrant. The alternative, in Paul's visions, is a Bene Gesserit-controlled Harkonnen emperor and, eventually, humanity's extinction. So he has to become a tyrant.

The Chani change is so jarring because she is the only happiness in the book's ending. Paul's destiny forces him to marry Princess Irulan, but in the book the final scene is a quiet one between Jessica and Chani, where Jessica comforts Chani. Both women were forced to be concubines of political men, men who loved them dearly but had other obligations. But Jessica reminds Chani that the princess will lead a cold and lonely life, never to be touched by her husband. She says that history will remember Jessica and Chani as the real Atreides wives. And the book ends right there.

The movie's ending is much darker and more ominous as a result of Chani leaving (and I have no idea how they course-correct, because a huge part of the plot of Dune Messiah is that Chani has been barren across the time jump). So most of it I think is pretty good. I do think the tone of the ending is altered substantially, though.










The simulation we live in was created to develop superintelligence, and will soon be turned off.



OpenAI just published a paper proving that ChatGPT will always hallucinate. Not sometimes. Not "until the next version." Always. They proved it mathematically. And three other top AI labs confirmed it independently.

Here's what the research actually shows: even with perfect training data and unlimited compute, LLMs will still fabricate answers with complete confidence. This isn't a bug in the code. It's fundamental to how these systems are built.

The numbers are wild:
→ OpenAI's o1 model: 16% hallucination rate
→ Their o3 model: 33%
→ Their newest o4-mini: 48%

Nearly half of what their latest model tells you could be invented. And it's getting worse as models get "smarter."

Here's why this can't be fixed: language models predict the next word based on probability. When they hit uncertainty, they don't pause. They don't flag it. They guess with total confidence, because that's literally what they were trained to do.

The researchers analyzed the 10 major AI benchmarks used to test these models. 9 out of 10 give the exact same score for saying "I don't know" as for getting it completely wrong: zero points. The entire testing system punishes honesty and rewards confident guessing. So the AI learned the optimal strategy: always answer, never show doubt, sound certain even when making it up.

OpenAI's proposed solution? Train models to say "I don't know" when uncertain. The problem? Their own math shows this would leave roughly 30% of questions unanswered. Imagine getting "I'm not confident enough to respond" three times out of ten. Users would abandon the product overnight. The fix exists, but it kills usability.

This isn't just OpenAI's problem. DeepMind and Tsinghua University reached identical conclusions working separately. Three elite AI labs, independent research, same result: this is permanent.

Every time you get an answer from any LLM, you're not getting facts. You're getting the most statistically probable next words from a system that's been rewarded for never admitting when it's guessing. Is this real information, or just a confident hallucination? You can't know. And neither can the AI.
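The incentive problem the post describes can be sketched as an expected-value calculation. A minimal illustration, assuming the scoring rule the post attributes to most benchmarks (1 point for a correct answer, 0 for a wrong answer or an abstention); the confidence numbers are hypothetical:

```python
# Under a scoring rule that awards 0 for both a wrong answer and
# "I don't know," guessing always has expected value >= abstaining.
# Figures below are illustrative assumptions, not measured data.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points per question: 1 for correct, 0 for wrong or abstain."""
    if abstain:
        return 0.0        # honesty earns nothing under this rule
    return p_correct      # guessing earns p_correct on average

# Even at 30% confidence, guessing strictly beats abstaining:
print(expected_score(0.30, abstain=False))  # 0.3
print(expected_score(0.30, abstain=True))   # 0.0
```

Since the expected score of guessing is positive for any nonzero chance of being right, a model optimized against such a benchmark has no reason ever to abstain — which is the dynamic the post is pointing at.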


