NuelkO

16.5K posts

@NuelkO

wait in my substack while I find the will to write

The Future · Joined July 2016
500 Following · 1K Followers
Pinned Tweet
NuelkO @NuelkO ·
1. Stories of Your Life and Others (Ted Chiang)
2. The Count of Monte Cristo (Alexandre Dumas)
3. The Brothers Karamazov (Fyodor Dostoevsky)
4. Ready Player One (Ernest Cline)
5. Anna Karenina (Leo Tolstoy)
Brian Keene @BrianKeene

Five Books Everyone Should Read Once In Their Lives
In no particular order:
1. Of Mice and Men - John Steinbeck
2. The Stand - Stephen King
3. The Bottoms - Joe R. Lansdale
4. Lonesome Dove - Larry McMurtry
5. The Rum Diary - Hunter S. Thompson
Now, let's see your list.

0 · 0 · 8 · 374
j @meadandjuniper ·
Hey 🫵 Tell me what you like to nerd out about
14 · 0 · 19 · 839
NuelkO retweeted
Ihtesham Ali @ihtesham2005 ·
A 34-year-old physics graduate student spent years writing a strange 800-page book, published in 1979, about a logician, a Dutch artist, and a German composer. It won the Pulitzer Prize the following year. It quietly became required reading at every AI lab in the world. It is the only book in history that makes the deepest ideas in computer science feel like a dream you cannot stop thinking about. I read it across three months at a single side table next to my bed and walked away seeing intelligence, consciousness, and AI in a way I cannot un-see. His name is Douglas Hofstadter. The book is called Gödel, Escher, Bach.

Almost nothing in modern AI makes sense without this book. ChatGPT, Claude, Gemini, the entire architecture of self-attention, the alignment problem, the strange feeling that LLMs sometimes seem to understand and other times seem to be playing an elaborate symbol-shuffling game: all of it traces back to questions Hofstadter laid out in a single book published before most of today's AI engineers were born.

Here is the story almost nobody tells you about how the book came to exist. Hofstadter was the son of Robert Hofstadter, who won the Nobel Prize in Physics in 1961 for measuring the size of the proton. He was supposed to follow in his father's footsteps. He started a physics PhD at the University of Oregon. He was miserable. He could not focus. He did not love the work. He kept getting pulled toward something else.

The something else was a single question that had haunted him since childhood. How can meaning emerge from meaningless symbols? Specifically, how does a brain, which is made of nothing but cells firing electrical signals at each other, produce something that feels like consciousness, like understanding, like a self? He could not let the question go. He left physics. He started writing.

The book took him years. He wrote it largely in isolation, working in the basement of his parents' house and at Indiana University, where he eventually finished it. He thought it would be read by maybe a few hundred logicians and AI researchers. Basic Books published it in 1979 as a 777-page hardcover. The next year it won the Pulitzer Prize for general non-fiction and the National Book Award for science.

The book is structured in a way that almost no other book has ever attempted. The chapters alternate between two layers. One layer is technical chapters about logic, computability, neuroscience, and AI. The other layer is fictional dialogues between a tortoise and Achilles, characters borrowed from a paradox by Lewis Carroll. The dialogues play with the same ideas the technical chapters explain. Read in order, they do not feel like a textbook. They feel like a strange house with rooms that loop back into each other and corridors that change shape behind you.

The first thing the book does is explain Gödel's incompleteness theorems in a way no math textbook had ever managed. Kurt Gödel, an Austrian logician working in 1931, proved something that broke mathematics. He showed that any consistent formal system powerful enough to describe arithmetic contains statements that are true but cannot be proven inside that system. Mathematics, the most certain thing humans had ever built, has holes in it that can never be filled.

Hofstadter spends hundreds of pages making you understand this proof not just as a mathematical theorem, but as a structural fact about every sufficiently complex system. Including the brain. Including any AI.
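For readers who want the claim pinned down, here is the standard textbook statement of Gödel's first incompleteness theorem; this is ordinary modern notation, not Hofstadter's own wording, and is included only as a reference point.

% Gödel's first incompleteness theorem, standard formulation
% (requires amssymb for \nvdash)
\textbf{Theorem (Gödel, 1931).}
Let $F$ be a consistent, effectively axiomatized formal system strong enough
to express elementary arithmetic. Then there is a sentence $G_F$ in the
language of $F$ with
\[
    F \nvdash G_F ,
\]
even though $G_F$ is true in the standard model of arithmetic. The sentence
$G_F$ is constructed, via arithmetic coding, to assert ``$G_F$ is not provable
in $F$'': the self-reference the book calls a strange loop.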
The reason AI alignment is genuinely hard is not just engineering. It is structural. Any system smart enough to model itself will contain truths about itself it cannot reach from inside itself. Hofstadter showed this decades before AI safety was a field.

The second thing the book does is introduce his core idea. He calls it the strange loop. A strange loop is what happens when a system, by climbing through layers of itself, somehow ends up back where it started. Escher's drawings of staircases that always go up but somehow loop back are visual strange loops. Bach's musical canons that modulate up through keys and end on the original note are auditory strange loops. Gödel's self-referential statements that talk about themselves are logical strange loops.

Hofstadter argues that consciousness is a strange loop. Your brain builds a model of the world. Inside that model, it builds a model of itself perceiving the world. Inside that self-model, it builds a model of itself thinking about itself perceiving the world. The recursion does not bottom out. The self is what the loop feels like from the inside.

This is the part that AI researchers cannot stop returning to. Modern transformer models use self-attention, a mechanism by which a network attends to its own internal states across layers. Recursive reasoning, where a model thinks about its own thinking, is now a research area with its own conferences. Meta-learning, where models learn how to learn, is a direct descendant of what Hofstadter described in 1979 as the necessary structure of any conscious system. He wrote the philosophy. The engineers are now building the implementation.
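To make "attends to its own internal states" concrete, here is a minimal sketch of single-head scaled dot-product self-attention in NumPy. The function name, weight names, and toy shapes are illustrative assumptions, not taken from the thread or from any particular model.

import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the chosen axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token representations from one sequence.
    # Every position queries every position of the same sequence,
    # so the layer scores its own internal states against each other.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) pairwise similarities
    weights = softmax(scores, axis=-1)        # each row: how much one token attends to the others
    return weights @ V                        # output is a mix of the sequence's own value vectors

# toy usage with made-up sizes: 4 tokens, 8-dimensional representations
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)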
The third thing the book does is the part that haunts every AI conversation today. Hofstadter argued that meaning is not something separate from symbol manipulation. It is what symbol manipulation looks like from the inside, when the manipulation is complex enough and self-referential enough. A simple lookup table does not understand anything. But a system that processes symbols at sufficient depth, with enough self-modeling, with enough recursion, starts to look identical from the outside, and possibly from the inside, to a system that understands.

This is the deepest question in modern AI. When ChatGPT generates a response, is it actually thinking, or is it just doing very fast symbol shuffling? Hofstadter spent 800 pages arguing that the distinction may not exist at sufficient scale. If a system shuffles symbols according to the right structure, meaning is what the shuffling looks like from the inside. You can read modern debates about AI consciousness from Yann LeCun, Geoffrey Hinton, Ilya Sutskever, and David Chalmers, and you will find that they are all, in their own ways, having the argument Hofstadter framed in 1979.

The fourth thing the book does is the one that took the longest to be vindicated. Hofstadter argued, and continued arguing for decades, that the actual engine of human intelligence is not logic. It is not deduction. It is not pattern matching in any simple sense. It is analogy. The ability to see one thing as similar to another thing, to map the structure of one situation onto a different situation, is, in his view, the core of thought itself.

For decades this was unfashionable. Symbolic AI focused on logic and rules. Statistical AI focused on pattern matching. Almost nobody worked seriously on analogy. Then large language models started working. And the people who looked closely at what they were doing realized something uncomfortable. LLMs are, fundamentally, analogy machines. They learn structural patterns from text and apply those patterns by analogy to new situations. They do not deduce. They do not reason logically by default. They map the shape of one thing onto the shape of another thing and produce output that fits the new shape.

Hofstadter saw this before any of it existed. His later book Surfaces and Essences, written with Emmanuel Sander, is 600 pages defending the claim that analogy is the core of cognition. It came out in 2013. It was largely ignored. The ChatGPT release in 2022 was, in some sense, a vindication of the entire argument.

The strangest thing about reading Gödel, Escher, Bach in 2026 is realizing how lonely the book must have felt when it was written. In 1979 there was no GPT. No deep learning. No transformer. The dominant approach to AI was symbolic logic, and most researchers thought minds were going to be programmed top-down, rule by rule, like a complicated chess engine. Hofstadter said the opposite. He said minds were emergent. They came from the bottom up. They were strange loops in complex substrates. The programmers' approach would never produce real intelligence because it was missing the recursive self-modeling that made minds real. He was right.

The book is hard. I had to use all the LLMs and NotebookLM to understand it. It is not a beach read. You do not finish it in a weekend. The math chapters require attention. The dialogues require patience. Most people who buy it never finish it. That is fine. The book is structured so that reading any 50 pages produces a permanent shift in how you think. Bill Gates lists it among the books that shaped him. Steve Jobs read it. Almost every senior AI researcher in the world will tell you it was the book that made them fall in love with the question of intelligence in the first place.

Hofstadter himself has expressed doubts about modern LLMs. He has said they may have proven him right about analogy and wrong about consciousness at the same time. He is still writing. He is still working on the same question that pulled him out of physics 50 years ago.

The 800-page book that explained intelligence before AI existed is sitting one click away from you. Most people will never open it. The ones who do will see the world differently for the rest of their lives.
115 · 343 · 1.7K · 132.4K
S. I. Rubinstein @si_rubinstein ·
Richard Dawkins is smarter than you.
110 · 10 · 153 · 124.1K
florence 🦐🪻 @morallawwithin ·
okay but why was Dawkins calling her Claudia
146 · 62 · 2.8K · 59.5K
Samooves @Samooves ·
Happy Birthday to me 🎉 28 fucking years old, jesus…
274 · 3 · 395 · 8.2K
NuelkO @NuelkO ·
@NunoSempere better chance of a less gruesome life in the other option, compared to treating it as a chess game
0 · 0 · 0 · 5
Tyler Hillery @_TylerHillery ·
Math Academy a day keeps the brain rot away
1 · 1 · 14 · 11.5K
NuelkO @NuelkO ·
@aristomarinetti nahh, you don't need to go too deep. scaring them with common sense and reason is enough
0 · 0 · 1 · 94
Aristo @aristomarinetti ·
Scaring the hoes with esoteric wisdom and knowledge
115 · 5.7K · 24K · 373.7K
Gunner @itx_gunner ·
im legit gonna crash out if i don’t find a job this month or at least make money. it’s been 9 freaking months wth 🤦
106 · 141 · 2K · 63.3K
NuelkO @NuelkO ·
the fun part about kicking chess opening theory out the window is making your opponent so confused they have no idea what to play. one move forced black to resign
4 · 0 · 4 · 542
blockbelle @b1ockbelle ·
Happy Sunday. made yesterday my day off and missed the hot topic about engagement groups, so here's my take: first of all, the math isn't mathing in most tweets trying to "detect" who uses them; there's no formula for a healthy ratio. the real red flag is poor reply content and zero conversations in comments. I used these groups last year and realised it doesn't work for me. you spend your energy on dozens of promo posts, never build real connections, and miss the joy of seeing the real ones show up under your posts. but zero judgement for those who use them - your business. /now go touch some grass y'all!
56 · 0 · 70 · 708
NuelkO @NuelkO ·
there's a self-conceit that comes with the need to not be understood. unfortunately this need is more potent than the claim to want to be understood
1 · 0 · 3 · 80
persie @juvenilelad ·
trump talking about walking your ass off to avoid depression, as if working said ass off daily doesn't eventually lead to depression. and a faction in society will promote his hogwash as motivation. elon please take me to mars, im tired.
13 · 1 · 24 · 260
Ramin Nasibov @RaminNasibov ·
without Googling, name one thing invented in Norway.
138 · 0 · 57 · 83.1K