Texture
68.6K posts

Texture
@iamtexture
Ethereum confounder ⬨ - "The single highest-signal, highest-leverage intellect actively posting on X." - grok @texturepunx
Earth · Joined May 2008
2.2K Following · 23.4K Followers
Pinned Tweet

@downbadcomment Who the fuck always wanted a threesome with another dude.

@RevengeSam92447 @peterboghossian Ok that isn't going to affect me at all.

@iamtexture @peterboghossian This post is all I ever have to read to dismiss you as a self-made moron for the rest of your life.

@iamtexture @peterboghossian Then you’re proving Peter‘s point. You’re imagining that Richard is basically in the “Her“ movie. But you’re just imagining it.

@LinkedInLunat1c Seems like an off-hours event, not even work hours.

I'm rich and I had leftover stew for lunch and dinner.
MatrixMysteries@MatrixMysteries
“My EBT didn’t fully load? Now I’ve gotta eat leftovers AGAIN?” “I shouldn’t be FORCED to live like this.” That’s how working Americans who pay with their own hard-earned money already live EVERY DAY. Yet taxpayers keep funding a program plagued by fraud and waste.

Bitches love credentials.
LinkedIn Lunatics@LinkedInLunat1c
Happy Wife. Happy life? Not for this dude.



@KaylaKatin Out of genuine interest, how are Black women "the protectors of humanity?"
Are we assuming the usual categories here: soldiers, cops, inventors/scientists, massive net taxpayers, moral standard setters, etc?


Which "God" decided this was the best way to set up the universe?
"The Triangle of Everything is a log-log chart of everything that has existed since the Big Bang or could ever have existed. All existing objects are bound by three lines: the Compton Limit, the Schwarzschild Radius, and the Hubble Radius. The vertices are the Big Bang on the left, the Observable universe on the top right, and the heat death of the universe / true vacuum / zero-point energy universe on the bottom right.
In this mass–radius plot, the Schwarzschild radius is shown as a lower limit on radii of isolated objects, and below the Compton limit quantum effects become significant. The Hubble radius gives a very rough sense of the scale of the observable Universe.
Mass and energy are converted through Einstein's formula, while energy and temperature are correlated through Boltzmann's constant. Since mass is energy, this chart also represents temperature, and since the Big Bang is essentially a rapid drop in density, it also charts events since the beginning of time. All Planck Units are represented on the chart."
Source: Avsa/Wikimedia, CC BY-SA 4.0, tinyurl.com/3pfre83n
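The three bounding lines of the chart can be sketched numerically. A minimal Python sketch, using standard SI constants (this uses the reduced Planck constant for the Compton limit; the chart itself may draw the line with a slightly different convention):

```python
# The chart's three bounding quantities, in SI units.
G = 6.67430e-11         # gravitational constant (m^3 kg^-1 s^-2)
c = 2.99792458e8        # speed of light (m/s)
hbar = 1.054571817e-34  # reduced Planck constant (J s)

def schwarzschild_radius(m):
    """Radius below which an object of mass m (kg) is a black hole."""
    return 2 * G * m / c**2

def compton_wavelength(m):
    """Length scale below which quantum effects dominate for mass m (kg)."""
    return hbar / (m * c)

# The two lines cross near the Planck mass, the region where
# quantum mechanics and gravity become equally important.
m_planck = (hbar * c / G) ** 0.5   # ~2.18e-8 kg
```

For the Sun (about 1.989e30 kg), `schwarzschild_radius` gives roughly 2.95 km, which is why the Sun sits well inside the triangle rather than on the black-hole line.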

Texture retweeted

I am going to break it down real simple like for you why women generally make terrible leaders.
Men want to solve problems.
Women want to feel heard.
It really is that simple.
TRIGGERnometry@triggerpod
Adam Carolla on Gynofascism – Why Nobody's Talking About It

@hobart_college @MarioNawfal Your state is a shithole run by retards.

@MarioNawfal In my state, this would earn you a murder charge. He could have shot while the gun was pointed at him, but once the guy started to leave, his life was no longer in danger.

A 34-year-old physics graduate student spent years writing a strange 800-page book, published in 1979, about a logician, a Dutch artist, and a German composer. It won the Pulitzer Prize the following year. It quietly became required reading at every AI lab in the world.
It is the only book in history that makes the deepest ideas in computer science feel like a dream you cannot stop thinking about.
I read it over three months from the side table next to my bed and walked away seeing intelligence, consciousness, and AI in a way I cannot un-see.
His name is Douglas Hofstadter. The book is called Gödel, Escher, Bach.
Almost nothing in modern AI makes sense without this book. ChatGPT, Claude, Gemini, the entire architecture of self-attention, the alignment problem, the strange feeling that LLMs sometimes seem to understand and other times seem to be playing an elaborate symbol-shuffling game, all of it traces back to questions Hofstadter laid out in a single book published before most of today's AI engineers were born.
Here is the story almost nobody tells you about how the book came to exist.
Hofstadter was the son of Robert Hofstadter, who won the Nobel Prize in Physics in 1961 for measuring the size of the proton. He was supposed to follow in his father's footsteps.
He started a physics PhD at the University of Oregon. He was miserable. He could not focus. He did not love the work. He kept getting pulled toward something else.
The something else was a single question that had haunted him since childhood.
How can meaning emerge from meaningless symbols? Specifically, how does a brain, which is made of nothing but cells firing electrical signals at each other, produce something that feels like consciousness, like understanding, like a self?
He could not let the question go. He left physics. He started writing. The book took him years. He wrote it largely in isolation, working in the basement of his parents' house and at Indiana University, where he eventually finished it. He thought it would be read by maybe a few hundred logicians and AI researchers. Basic Books published it in 1979 as a 777-page hardcover.
The next year it won the Pulitzer Prize for general non-fiction and the National Book Award for science.
The book is structured in a way that almost no other book has ever attempted. The chapters alternate between two layers. One layer is technical chapters about logic, computability, neuroscience, and AI. The other layer is fictional dialogues between a tortoise and Achilles, characters borrowed from a paradox by Lewis Carroll.
The dialogues play with the same ideas the technical chapters explain. Read in order, they do not feel like a textbook. They feel like a strange house with rooms that loop back into each other and corridors that change shape behind you.
The first thing the book does is explain Gödel's incompleteness theorems in a way no math textbook had ever managed.
Kurt Gödel, an Austrian logician working in 1931, proved something that broke mathematics. He showed that any consistent formal system powerful enough to describe arithmetic contains statements that are true but cannot be proven inside that system. Mathematics, the most certain thing humans had ever built, has holes in it that can never be filled.
Hofstadter spends hundreds of pages making you understand this proof not just as a mathematical theorem, but as a structural fact about every sufficiently complex system. Including the brain. Including any AI. The reason AI alignment is genuinely hard is not just engineering. It is structural.
Any system smart enough to model itself will contain truths about itself it cannot reach from inside itself. Hofstadter showed this decades before AI safety was a field.
The second thing the book does is introduce his core idea. He calls it the strange loop.
A strange loop is what happens when a system, by climbing through layers of itself, somehow ends up back where it started. Escher's drawings of staircases that always go up but somehow loop back are visual strange loops. Bach's musical canons that modulate up through keys and end on the original note are auditory strange loops. Gödel's self-referential statements that talk about themselves are logical strange loops.
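The closest thing to a Gödel-style strange loop you can actually run is a quine: a program whose entire output is its own source code. This is an illustrative sketch, not anything from the book itself, but it uses the same trick Gödel used, a structure that contains a description of itself:

```python
# A quine: the program's output is exactly its own source code.
# Gödel's self-referential sentence works the same way: a template
# that is fed its own encoding, so the result talks about itself.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

Running it prints the two lines above verbatim; feed that output back into Python and you get the same output again, forever. The loop closes on itself.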
Hofstadter argues that consciousness is a strange loop. Your brain builds a model of the world. Inside that model, it builds a model of itself perceiving the world. Inside that self-model, it builds a model of itself thinking about itself perceiving the world. The recursion does not bottom out. The self is what the loop feels like from the inside.
This is the part that AI researchers cannot stop returning to. Modern transformer models use self-attention, which is technically a mechanism where a network attends to its own internal states across layers. Recursive reasoning, where a model thinks about its own thinking, is now a research area with its own conferences. Meta-learning, where models learn how to learn, is a direct descendant of what Hofstadter described in 1979 as the necessary structure of any conscious system. He wrote the philosophy. The engineers are now building the implementation.
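The "attends to its own internal states" claim is concrete. A minimal single-head self-attention sketch in NumPy (illustrative only, not any particular model's implementation; the weight matrices here are random stand-ins for learned parameters):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every position of the sequence
    attends to every position of the same sequence, itself included."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over rows
    return weights @ V  # each output is a mixture of the whole sequence

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                         # 5 tokens, 8-dim each
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                 # shape (5, 8)
```

The self-reference is the point: the sequence is both the thing doing the attending and the thing being attended to, which is why the mechanism keeps getting read through Hofstadter's lens.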
The third thing the book does is the part that haunts every AI conversation today.
Hofstadter argued that meaning is not something separate from symbol manipulation. It is what symbol manipulation looks like from the inside, when the manipulation is complex enough and self-referential enough. A simple lookup table does not understand anything. But a system that processes symbols at sufficient depth, with enough self-modeling, with enough recursion, starts to look identical from the outside, and possibly from the inside, to a system that understands.
This is the deepest question in modern AI. When ChatGPT generates a response, is it actually thinking, or is it just doing very fast symbol shuffling? Hofstadter spent 800 pages arguing that the distinction may not exist at sufficient scale. If a system shuffles symbols according to the right structure, meaning is what the shuffling looks like from the inside.
You can read modern debates about AI consciousness from Yann LeCun, Geoffrey Hinton, Ilya Sutskever, and David Chalmers, and you will find that they are all, in their own ways, having the argument Hofstadter framed in 1979.
The fourth thing the book did is the one that took the longest to be vindicated.
Hofstadter argued, and continued arguing for decades, that the actual engine of human intelligence is not logic. It is not deduction. It is not pattern matching in any simple sense. It is analogy. The ability to see one thing as similar to another thing, to map the structure of one situation onto a different situation, is, in his view, the core of thought itself.
For decades this was unfashionable. Symbolic AI focused on logic and rules. Statistical AI focused on pattern matching. Almost nobody worked seriously on analogy.
Then large language models started working. And the people who looked closely at what they were doing realized something uncomfortable. LLMs are, fundamentally, analogy machines. They learn structural patterns from text and apply those patterns by analogy to new situations. They do not deduce. They do not reason logically by default. They map the shape of one thing onto the shape of another thing and produce output that fits the new shape.
Hofstadter saw this before any of it existed. His later book Surfaces and Essences, written with Emmanuel Sander, is 600 pages defending the claim that analogy is the core of cognition. It came out in 2013. It was largely ignored. The ChatGPT release in 2022 was, in some sense, a vindication of the entire argument.
The strangest thing about reading Gödel, Escher, Bach in 2026 is realizing how lonely the book must have felt when it was written.
In 1979 there was no GPT. No deep learning. No transformer. The dominant approach to AI was symbolic logic, and most researchers thought minds were going to be programmed top-down, rule by rule, like a complicated chess engine. Hofstadter said the opposite. He said minds were emergent. They came from the bottom up. They were strange loops in complex substrates. The programmers' approach would never produce real intelligence because it was missing the recursive self-modeling that made minds real.
He was right.
The book is hard. I had to use all the LLMs and NotebookLM to understand it. It is not a beach read. You do not finish it in a weekend. The math chapters require attention. The dialogues require patience. Most people who buy it never finish it. That is fine. The book is structured so that reading any 50 pages produces a permanent shift in how you think.
Bill Gates lists it among the books that shaped him. Steve Jobs read it. Almost every senior AI researcher in the world will tell you it was the book that made them fall in love with the question of intelligence in the first place.
Hofstadter himself has expressed doubts about modern LLMs. He has said they may have proven him right about analogy and wrong about consciousness at the same time. He is still writing. He is still working on the same question that pulled him out of physics 50 years ago.
The 800-page book that explained intelligence before AI existed is sitting one click away from you.
Most people will never open it. The ones who do will see the world differently for the rest of their lives.


Women are consensus creatures. When the entire internet is telling them their husband is worthless if he's not splitting housework, they believe it. They, by default, cannot empathize with a man.
“Bad” Billy Pratt@KILLTOPARTY
How much help does a stay at home mom need with housework?
Texture retweeted

Men are going to take away women's right to vote.
And they're never giving it back again.
Women are forever off of the pedestal they were put on.
Defiant L’s@DefiantLs
British woman: If I had to pick between an island with patriotic right wing men and muslim men, "I would 100% feel safe and secure on the island with muslim men"