

Tom Chatfield
@TomChatfield
Tech philosopher, author, dad. Critical thinking, AI ethics & future skills. Latest book: Wise Animals (Picador) https://t.co/T0qiMAHAjN Views mine

@antoniomaxai It is in the second article and also here bostonreview.net/forum/the-ai-w…


WORLD MODELS are all the rage, as the AI community tries to pivot from the perceived shortcomings of large language models to AIs that use internal "world" models of the environment in which they act. Such AIs, instead of predicting the next token, will predict the next set of states of the environment+agent, conditioned on some action that the agent may or may not take.

World modeling AIs promise much, but this is not a new concept by any means. Psychologists, cognitive scientists, and now computational neuroscientists have known for a while (the history goes back 150 years) that our brains must be modeling the world and using these models to hypothesize the external causes of sensory inputs. These hypotheses are our perceptions.

In my first post in a series exploring world models for the WHERE MACHINES THINK Substack, I discuss the neuroscientific rationale, and put the current interest in historical context. wheremachinesthink.substack.com/p/the-case-for…
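The distinction the post draws can be made concrete with a minimal toy sketch. Everything here is illustrative (the function names, the toy rules, the state layout are all assumptions, not any real model's API): a next-token predictor maps a token context to a token, while a world model maps a state plus a candidate action to a predicted next state.

```python
# Toy contrast between the two prediction targets described above.
# All names and rules are illustrative stand-ins for learned models.

def next_token(context: list) -> str:
    """LLM-style objective: predict the next token from prior tokens."""
    # A hard-coded rule standing in for a learned distribution.
    return "world" if context and context[-1] == "hello" else "<unk>"

def next_state(state: dict, action: str) -> dict:
    """World-model-style objective: predict the next environment+agent
    state, conditioned on an action the agent may or may not take."""
    new_state = dict(state)  # leave the input state unmodified
    if action == "move_right":
        new_state["x"] = state["x"] + 1
    return new_state

if __name__ == "__main__":
    print(next_token(["hello"]))               # token-level prediction
    print(next_state({"x": 0}, "move_right"))  # state-level prediction
    print(next_state({"x": 0}, "wait"))        # counterfactual: action not taken
```

The point of the sketch is only the shape of the mapping: the world model can be queried with actions the agent never takes, which is what makes it usable for planning.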



The quote below is famously untrue, and has no scientific basis whatsoever. It’s one of those dogmas that refuses to die.

I've spent years struggling to prove that the sunk-cost fallacy isn't actually a fallacy. No sense in giving up now, though.

