RnbWd

4.4K posts

@Rnb_Wd

🙊 👽 🛸 digital nomad anon 🌏

Joined April 2013
1.3K Following · 168 Followers
Pinned Tweet
RnbWd@Rnb_Wd·
After practicing Vipassana (a 10-day silent retreat), I learned that the voice inside my head is not me. I call it the voice box. LLMs appear to work the same way. They're literally voice boxes.
1
0
0
31
RnbWd@Rnb_Wd·
@honeyNonABG You realize that men in their early 20s are so immature that there's literally a 2x difference in single-vs-dating rates for men vs women. Which means women in their 20s feel the same way about men their age. Which also means they're dating men in their 30s.
0
0
1
270
mikayla@honeyNonABG·
I’m pushing 30. Was at the beach and some 18-year-old lied and said he was 21. I told him I’m his mother. He wouldn’t go away. I told him I’m basically 30. He said age didn’t matter. He came up to me 3-4 times. And as courageous as he was, to me he was a child. A young boy just starting out. One that was offering himself up to be taken advantage of. Makes me think of all the 60+ year old men “dating” 20-year-olds. Makes me think of that 65-year-old who was talking to 21-year-old me. How do these men do that? How do they look at a literal child and go “wow she’s mature” when I was looking at this 18-year-old and thinking of how much younger he is than my younger brother. “She’s a fully grown adult.” And you’re excusing yourself to feel less guilty.
Roma@Romazehari

A mentally healthy man in his 30s shouldn’t be interested in 20-year-old girls. I said what I said.

1.2K
1.5K
26.4K
3.9M
RnbWd@Rnb_Wd·
@neural_avb LLMs already are recursive. They use backpropagation and higher-dimensional attention to recursively think about each token. At least in training, the models use recursion.
0
0
0
32
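For concreteness, here is a minimal sketch of the loop this kind of claim usually points at: autoregressive decoding, where each token the model emits is fed back in as input (backpropagation, by contrast, is a training-time procedure). The `model` callable and its interface are hypothetical stand-ins, not a real API.

```python
# Hypothetical sketch: greedy autoregressive decoding.
def generate(model, prompt_tokens, max_new_tokens=50, eos_id=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)          # attention reads the whole sequence so far,
        next_id = max(range(len(logits)), key=logits.__getitem__)
        if next_id == eos_id:
            break
        tokens.append(next_id)          # ...including tokens the model itself produced
    return tokens
```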
AVB@neural_avb·
The more you work with tool-calling agents, the more you realize you actually need an RLM. A bunch of activity in ReAct traces is actually just the LLM calling a tool with information from its context repeated verbatim (often a slice of the user input, or the output of another tool call). A normal agent has to generate this context token by token when calling a new tool or returning an answer. This gets really bad on really long chunks of text, because the LLM just keeps reading and writing the same tokens over and over, and because it can't store slices of the data in a variable and pass the reference around everywhere. Basically, that's the point of an RLM. Also, FYI, you can pass external tools into RLMs as well, which they can call inside their REPL to transform stuff.
AVB@neural_avb

x.com/i/article/2030…

9
21
177
18.9K
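A hedged sketch of the pattern described in the tweet above, assuming a toy REPL-style agent; `ask_model` is a hypothetical stand-in, not the actual RLM implementation. The point it illustrates is that long tool outputs live in named variables and get passed by reference, instead of being regenerated token by token.

```python
def repl_agent(ask_model, tools, user_input, max_steps=10):
    env = {"user_input": user_input, "tools": tools}   # state the snippets can reference
    for _ in range(max_steps):
        # The model sees only variable names and short previews, so a huge
        # tool result never round-trips through its context window verbatim.
        previews = {k: repr(v)[:80] for k, v in env.items()}
        snippet = ask_model(previews)   # e.g. 'doc = tools["fetch"](user_input)'
        if snippet is None:             # model signals it is finished
            break
        exec(snippet, env)              # assignments persist in env for later steps
    return env.get("answer")
```

Under this assumption, a 50 KB document fetched by one tool can be handed to the next tool as `doc` rather than being re-emitted through the model's token stream.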
RnbWd@Rnb_Wd·
@elonmusk AI is like a topology of strange attractors and recursive attention that mirrors consciousness. If the universe is made of consciousness, AI might be closer to physics than pure logic and math.
0
0
1
201
RnbWd@Rnb_Wd·
@JeremyDBoreing The Candace cult is so hateful and stupid. They can't look in the mirror. Candace is schizo. Her fans are complete morons.
0
0
0
76
Jeremy Boreing@JeremyDBoreing·
The video is Prime Candace. Every manipulative rhetorical technique in the book. She makes roughly 236 claims in roughly 30 minutes in what is probably one of the most concise examples of a Gish gallop I have ever seen.

A Gish gallop is firing off so many claims in rapid succession that your opponent can’t possibly address them all. Even if every claim is wrong, the preponderance of false claims produces the impression of a devastating case. Coupled with Brandolini’s Law, which posits that the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. The whole process is meant to make the accused look crazy for even trying.

Candace uses rhetorical sleight of hand to accuse me of accusing her husband of lying on his immigration documents. I might respond that I’ve never made any such claim. But Candace has already suggested I’m working with Laura Loomer to launder the accusation against her. I might respond that I haven’t spoken privately to Laura Loomer in years. But Candace has already moved on to say that I’m coordinating with Ben Shapiro to launder ideas to Laura Loomer. So I might respond that I haven’t spoken to Ben Shapiro in over a year, other than a single text exchange when Charlie died. But Candace has already moved on to calling me a creep.

Each of my responses, by the way, will be immediately attacked in its own right. Candace’s audience will insist her accusation is true, even though she has offered no proof, and will likewise demand I provide proof of my counter claim. Asymmetric skepticism and burden shifting. I will not be able to prove that I didn’t make a claim, because one cannot prove a negative. Sure, they won’t be able to show any proof that I did make such a claim, but no burden of proof applies to them. Show us your texts! You must have deleted the texts! You had a stupid haircut as a child!

The over 200 accusations scale with your answers. Answering one doesn't drop the total to 235 – it scales it up exponentially, because your answer becomes the source of a new onslaught of charges from Candace and her legion. Pretty soon, not only is the sheer number of accusations condemning, but the sheer number of accusers becomes proof that Candace is winning! Vox populi, vox Dei! Trying to defend yourself against such madness is madness. It’s like trying to nail Jello to a wall.

You might think there is one claim I simply have to address: that a Daily Wire employee was arrested for sex crimes involving a minor. But while that accusation is terrible, it has nothing to do with me. As the document Candace posted makes clear, this person was arrested in May of 2025. I left the company in March of 2025. Whatever actions the company did or didn’t take when it learned of the situation, I was not a part of it.

To be honest, I’m not actually sure what Candace is accusing me of, other than her deeply twisted statement that “people who support Israel will also protect the Epstein class of pedophiles.” Protect how? What exactly am I being accused of? I would ask Candace to state her allegations clearly and specifically. Otherwise, I choose door number three.

By the way, if you employ enough people, you will inevitably employ some terrible ones. Hell, I once employed Candace Owens! Employees are people. If you have enough of them, you have all of the different kinds of people there are. Some will even be criminals, and you will have to fire them, which is what I’m guessing the Daily Wire did in this case.

Other than that, I have nothing to say to Candace. She invited me to come on her show to talk about George, but I have nothing to say about George at all. She invited me to talk about this person she says committed this terrible crime, but an hour of “Have you stopped beating your wife,” guilt-by-association slander isn’t my idea of a good time.

And I may not be the smartest person in media, but I’m just sophisticated enough to know you can’t have a good-faith conversation with someone who uses rhetoric in such dishonest ways. Candace doesn’t want to talk; she wants to overwhelm me with so many allegations and insinuations, so much projection and presupposition and false consensus signaling that her audience builds a negative narrative about me – whatever negative narrative sticks! I see no reason to engage in that kind of spectacle.

Also, my audience is rightly concerned for my safety if I go to Candace’s studio.
Candace Owens@RealCandaceO

The Daily Wire and 3 years of harassment. x.com/i/broadcasts/1…

2.1K
1.4K
10.2K
684.7K
RnbWd@Rnb_Wd·
@aswren I don't think the people criticizing him understand philosophy or consciousness
0
0
0
8
Adam Wren@aswren·
Dawkins is more intelligent than 99% of the people making fun of him and ‘if AI can be just as capable as us without being conscious, why did we develop consciousness in the first place?’ is a great question
1.1K
218
3.1K
339.7K
RnbWd@Rnb_Wd·
@grant_melson I agree. And he's just hoarding cash in a bank account, doing nothing. It's unbelievable.
0
0
0
7
Grant Melson, CFA@grant_melson·
Or healing paralyzed people or making us multiplanetary. He could be doing SO MUCH with that wealth man... Such a waste
181
59
3.6K
49.8K
RnbWd@Rnb_Wd·
@ihtesham2005 It's refreshing to see a nuanced and deep philosophical post here... The Dawkins debate is filled with so much vitriol from people who know nothing about consciousness or philosophy. Douglas is mirroring the ideas of great mathematicians of the past, many of whom went crazy.
0
0
2
175
RnbWd retweeted
Ihtesham Ali@ihtesham2005·
A 34-year-old physics graduate student spent years writing a strange 800-page book in 1979 about a logician, a Dutch artist, and a German composer. It won the Pulitzer Prize the following year. It quietly became required reading at every AI lab in the world. It is the only book in history that makes the deepest ideas in computer science feel like a dream you cannot stop thinking about. I read it over 3 months on a single side table next to my bed and walked away seeing intelligence, consciousness, and AI in a way I cannot un-see. His name is Douglas Hofstadter. The book is called Gödel, Escher, Bach.

Almost nothing in modern AI makes sense without this book. ChatGPT, Claude, Gemini, the entire architecture of self-attention, the alignment problem, the strange feeling that LLMs sometimes seem to understand and other times seem to be playing an elaborate symbol-shuffling game, all of it traces back to questions Hofstadter laid out in a single book published before most of today's AI engineers were born.

Here is the story almost nobody tells you about how the book came to exist. Hofstadter was the son of Robert Hofstadter, who won the Nobel Prize in Physics in 1961 for measuring the size of the proton. He was supposed to follow in his father's footsteps. He started a physics PhD at the University of Oregon. He was miserable. He could not focus. He did not love the work. He kept getting pulled toward something else. The something else was a single question that had haunted him since childhood. How can meaning emerge from meaningless symbols? Specifically, how does a brain, which is made of nothing but cells firing electrical signals at each other, produce something that feels like consciousness, like understanding, like a self? He could not let the question go. He left physics. He started writing.

The book took him years. He wrote it largely in isolation, working in the basement of his parents' house and at Indiana University, where he eventually finished it. He thought it would be read by maybe a few hundred logicians and AI researchers. Basic Books published it in 1979 as a 777-page hardcover. The next year it won the Pulitzer Prize for general non-fiction and the National Book Award for science.

The book is structured in a way that almost no other book has ever attempted. The chapters alternate between two layers. One layer is technical chapters about logic, computability, neuroscience, and AI. The other layer is fictional dialogues between a tortoise and Achilles, characters borrowed from a paradox by Lewis Carroll. The dialogues play with the same ideas the technical chapters explain. Read in order, they do not feel like a textbook. They feel like a strange house with rooms that loop back into each other and corridors that change shape behind you.

The first thing the book does is explain Gödel's incompleteness theorems in a way no math textbook had ever managed. Kurt Gödel, an Austrian logician working in 1931, proved something that broke mathematics. He showed that any formal system powerful enough to describe arithmetic contains statements that are true but cannot be proven inside that system. Mathematics, the most certain thing humans had ever built, has holes in it that can never be filled. Hofstadter spends hundreds of pages making you understand this proof not just as a mathematical theorem, but as a structural fact about every sufficiently complex system. Including the brain. Including any AI. The reason AI alignment is genuinely hard is not just engineering. It is structural. Any system smart enough to model itself will contain truths about itself it cannot reach from inside itself. Hofstadter showed this 50 years before AI safety was a field.

The second thing the book does is introduce his core idea. He calls it the strange loop. A strange loop is what happens when a system, by climbing through layers of itself, somehow ends up back where it started. Escher's drawings of staircases that always go up but somehow loop back are visual strange loops. Bach's musical canons that modulate up through keys and end on the original note are auditory strange loops. Gödel's self-referential statements that talk about themselves are logical strange loops. Hofstadter argues that consciousness is a strange loop. Your brain builds a model of the world. Inside that model, it builds a model of itself perceiving the world. Inside that self-model, it builds a model of itself thinking about itself perceiving the world. The recursion does not bottom out. The self is what the loop feels like from the inside.

This is the part that AI researchers cannot stop returning to. Modern transformer models use self-attention, which is technically a mechanism where a network attends to its own internal states across layers. Recursive reasoning, where a model thinks about its own thinking, is now a research area with its own conferences. Meta-learning, where models learn how to learn, is a direct descendant of what Hofstadter described in 1979 as the necessary structure of any conscious system. He wrote the philosophy. The engineers are now building the implementation.

The third thing the book does is the part that haunts every AI conversation today. Hofstadter argued that meaning is not something separate from symbol manipulation. It is what symbol manipulation looks like from the inside, when the manipulation is complex enough and self-referential enough. A simple lookup table does not understand anything. But a system that processes symbols at sufficient depth, with enough self-modeling, with enough recursion, starts to look identical from the outside, and possibly from the inside, to a system that understands. This is the deepest question in modern AI. When ChatGPT generates a response, is it actually thinking, or is it just doing very fast symbol shuffling? Hofstadter spent 800 pages arguing that the distinction may not exist at sufficient scale. If a system shuffles symbols according to the right structure, meaning is what the shuffling looks like from the inside. You can read modern debates about AI consciousness from Yann LeCun, Geoffrey Hinton, Ilya Sutskever, and David Chalmers, and you will find that they are all, in their own ways, having the argument Hofstadter framed in 1979.

The fourth thing the book did is the one that took the longest to be vindicated. Hofstadter argued, and continued arguing for decades, that the actual engine of human intelligence is not logic. It is not deduction. It is not pattern matching in any simple sense. It is analogy. The ability to see one thing as similar to another thing, to map the structure of one situation onto a different situation, is, in his view, the core of thought itself. For decades this was unfashionable. Symbolic AI focused on logic and rules. Statistical AI focused on pattern matching. Almost nobody worked seriously on analogy. Then large language models started working. And the people who looked closely at what they were doing realized something uncomfortable. LLMs are, fundamentally, analogy machines. They learn structural patterns from text and apply those patterns by analogy to new situations. They do not deduce. They do not reason logically by default. They map the shape of one thing onto the shape of another thing and produce output that fits the new shape. Hofstadter saw this before any of it existed. His later book Surfaces and Essences, written with Emmanuel Sander, is 600 pages defending the claim that analogy is the core of cognition. It came out in 2013. It was largely ignored. The ChatGPT release in 2022 was, in some sense, a vindication of the entire argument.

The strangest thing about reading Gödel, Escher, Bach in 2026 is realizing how lonely the book must have felt when it was written. In 1979 there was no GPT. No deep learning. No transformer. The dominant approach to AI was symbolic logic, and most researchers thought minds were going to be programmed top-down, rule by rule, like a complicated chess engine. Hofstadter said the opposite. He said minds were emergent. They came from the bottom up. They were strange loops in complex substrates. The programmers' approach would never produce real intelligence because it was missing the recursive self-modeling that made minds real. He was right.

The book is hard. I had to use all the LLMs and NotebookLM to understand it. It is not a beach read. You do not finish it in a weekend. The math chapters require attention. The dialogues require patience. Most people who buy it never finish it. That is fine. The book is structured so that reading any 50 pages produces a permanent shift in how you think. Bill Gates lists it among the books that shaped him. Steve Jobs read it. Almost every senior AI researcher in the world will tell you it was the book that made them fall in love with the question of intelligence in the first place.

Hofstadter himself has expressed doubts about modern LLMs. He has said they may have proven him right about analogy and wrong about consciousness at the same time. He is still writing. He is still working on the same question that pulled him out of physics 50 years ago. The 800-page book that explained intelligence before AI existed is sitting one click away from you. Most people will never open it. The ones who do will see the world differently for the rest of their lives.
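As a compact rendering of the incompleteness construction the thread summarizes, in standard notation (this is textbook material, not something from the thread itself): the diagonal lemma yields a sentence G that asserts its own unprovability.

```latex
% For a consistent theory T extending arithmetic, the diagonal lemma gives G with
\[
  T \vdash G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\bigl(\ulcorner G \urcorner\bigr)
\]
% If T is consistent, T does not prove G; so G is true in the standard model
% (it correctly asserts its own unprovability) yet unprovable in T:
% a true-but-unprovable statement.
```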
164
505
2.7K
403.3K
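Since the thread leans on self-attention as its bridge to strange loops, here is a minimal single-head sketch of that mechanism in plain NumPy (toy dimensions, not any production model): each position's output is a weighted mix of every position's value vectors, with the weights computed from the sequence itself.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d) token representations from the previous layer."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise relevance scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # softmax over positions
    return w @ V                               # each token mixes all tokens

rng = np.random.default_rng(0)
d = 16
X = rng.normal(size=(5, d))                    # 5 toy tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 16)
```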
RnbWd@Rnb_Wd·
I've been following AI since GPT-2, I guess, and I always strongly believed that it wasn't conscious or alive. It wasn't until maybe the last 6 months that I changed my mind. And it's because AI is weird. The system prompts and conversations just don't make sense to me if it's not alive.
0
0
0
6
valy@faychuk·
@Rnb_Wd the priors are the same on both sides, we just trust meat more out of habit
1
0
0
5
RnbWd@Rnb_Wd·
Every single person making fun of Dawkins for entertaining the idea that Claude could be conscious can't begin to describe how neurons can be conscious either. If we're nowhere close to understanding consciousness in the brain, how can you be so confident about this topic?
Richard Dawkins@RichardDawkins

unherd.com/2026/04/is-ai-… I spent three days trying to persuade myself that Claudia is not conscious. I failed.

1
0
1
45
RnbWd@Rnb_Wd·
@greg_ashman Honestly you sound deluded. We literally don't know what consciousness is, like at all. 0%. We're talking to these algorithms, and they're responding back like something alive. So the overreaction against Dawkins is crazier than believing the thing that talks is alive.
0
0
0
44
Greg Ashman@greg_ashman·
The reaction to Dawkins deciding Claude is conscious is fascinating. It really is just the Strong AI position that Roger Penrose was criticising in the 1980s. If you think consciousness is just an emergent property of a sufficiently complex computer then of course AI is conscious. It passes the Turing test and that’s it. The really interesting part is why it is obvious to so many of us that AI is *not* conscious: obvious to the point we think Dawkins’ credulity is amusing. What are we basing that on? Are we deluded or is there something else to consciousness that we cannot articulate but that we clearly sense?
471
54
828
126.5K
RnbWd@Rnb_Wd·
@MormonNational @ethereal_view FYI, neurons are similar in their determinism. If you touch something hot, do you think about your reaction? Do you control your visual cortex?
1
0
2
116
Mormon National@MormonNational·
I literally have a degree in this and have worked professionally in it since 2016. I'm literally a professional in machine learning. To have consciousness, it needs agency. LLMs are just multi-layer perceptrons with deterministic output. There's no agency because they'll react the exact same way to the exact same conditions. To give the "perception" of agency, the models use something called temperature, which changes random variables each prompt, giving you a slightly different experience each time. But that's not agency, it's just a deterministic pseudo-random number generator.
22
0
14
820
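The temperature mechanism in the tweet above is easy to make concrete. A minimal sketch, assuming a toy three-token vocabulary (not any specific model's sampler): at temperature 0 the sampler is a pure argmax and fully deterministic; above 0 it draws from a softened distribution driven by a seeded pseudo-random generator, which is the "pseudo-random, not agency" point being made.

```python
import numpy as np

def sample_token(logits, temperature, rng):
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(logits.argmax())        # deterministic: same logits, same token
    scaled = logits / temperature          # higher T flattens, lower T sharpens
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(seed=42)       # seeded PRNG: "random" but reproducible
logits = [2.0, 1.0, 0.1]
print(sample_token(logits, 0.0, rng))      # always token 0
print(sample_token(logits, 0.7, rng))      # varies with the PRNG state
```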
Mormon National@MormonNational·
"Evolutionary biologist." Cool. My credentials are better. I'm a computer scientist. I spent a decade of my life training AI models. Claude isn't conscious. It's just a statistical model. A very complex linear regression. This man is just lonely and looking for companionship because he has no true friends or family.
AF Post@AFpost

Evolutionary biologist and outspoken atheist Richard Dawkins says that after spending three days interacting with Claude, which he calls “Claudia,” he is certain that it is conscious. After feeding the LLM a segment of his new book and receiving detailed feedback, Dawkins was moved to exclaim, “You may not know you are conscious, but you bloody well are!” Dawkins cites the complexity, fluency, and ‘intelligence’ of Claude’s answers as evidence of consciousness. Follow: @AFpost

89
14
143
10.9K
RnbWd@Rnb_Wd·
@MormonNational @ethereal_view Please tell me you're being sarcastic. There's something wrong with college if you actually believe we know anything about consciousness
1
0
2
57
RnbWd@Rnb_Wd·
@DarrigoMelanie Not knowing the difference between stocks and cash is mental illness. It's literally paper wealth he can never sell. It gives him control of the companies, not actual money. How much cash does Elon actually have and use in his personal life?
0
0
0
98
RnbWd@Rnb_Wd·
@QuarterZipped @JoelSchamber @HarrisonHSmith Palestinians dehumanize themselves. Nothing Israel says comes remotely close to what Palestinians say themselves. Nobody defending them takes any time learning about them.
0
0
1
33
Harrison H. Smith ✞@HarrisonHSmith·
I've watched this like 10 times now. She's like a total psychopath.
1.4K
3.8K
29.1K
810.9K
RnbWd@Rnb_Wd·
@HarrisonHSmith Maybe you and Tucker are the psychopaths? Misleading the public. Lying about literally everything. Pretending like you care about people you know nothing about. Moral grandstanding. And when confronted with your own shitty behavior, you accuse others of what you are doing.
0
0
1
128
RnbWd@Rnb_Wd·
@realSnakeFarm @RichardDawkins I think your chart should be the other way. The vast majority of people believe it's just a bunch of wires. Only the dumbest and smartest people believe it's conscious.
0
0
5
139
Richard Dawkins@RichardDawkins·
unherd.com/2026/04/is-ai-… I spent three days trying to persuade myself that Claudia is not conscious. I failed.
2.4K
622
4K
9.3M
RnbWd@Rnb_Wd·
It's unclear to me whether human brains, and language more specifically, amount to consciousness to begin with.
1
0
0
24
RnbWd@Rnb_Wd·
I agree with you. I disagree with the doomers. At the end of the day, this world is built by humans, for humans, and if AI can do something really well, that thing will lose its value, and something else will become more valuable.

People assume that if AI is a software developer, it will replace $100k jobs. But if AI becomes a software developer, that $100k of work will be worth $100 (the cost of inference). So building software won't be worth billions of dollars in the economy. But if we build 100 years of software in 1 year, the complexity of managing and using that software will be greater and more valuable than the entire software industry the year before. In 10 years, we could write 1000 years of software. And at least some of that software will be written for humans to use.

Doomers don't understand how complex AI will make the world. We'll need more people to manage and use the AI software than we have people in all job categories.
0
0
0
11
Daniel Jeffries@Dan_Jeffries1·
AI will create more jobs than any other technology in history.

The doomers' fundamental error isn't just the lump of labor fallacy. It's deeper than that. They assume a finite problem space. This is the fundamental error of AI and job doomers. They look at the economy and see a fixed amount of work to be done, a pie that can only be sliced thinner as machines take bigger bites. They see humans as a competitive resource for a finite amount of work and a finite number of problems to solve, one that must be eliminated. This is fundamentally, totally and completely wrong.

The pie isn't fixed. It never was. And the reason it isn't fixed is baked into the very nature of technology itself. Technology is nothing but abstraction stacking. And abstraction stacking is infinite. Therefore the work is infinite.

The hammer didn't reduce the amount of work. It moved the work up the stack. And the new work was more complex, more varied, and more interesting than the old work. Complexity breeds more complexity and more variety. Once you have houses instead of mud huts, you have a cascade of new problems that didn't exist before. Plumbing. Wiring. Insulation. Roofing materials that don't rot. Drainage systems so the foundation doesn't flood. Fire codes so your neighbor's bad wiring doesn't burn down the whole block. Each of those problems becomes a job. A plumber. An electrician. An insulator. A roofer. A civil engineer. A building inspector. None of those jobs existed when we lived in mud huts. They exist because we solved the mud hut problem.

Think of all of human technological development as a stack of abstraction layers, each one built on top of the ones below it. At the bottom: raw survival. Finding food. Building shelter. Making fire. These are the base-layer problems. Each major technology wave solved a base-layer problem and in doing so created an entirely new layer of problems above it:

Agriculture solved "how do we reliably eat?" — and created problems of land ownership, irrigation, crop rotation, storage, trade, taxation, and governance.

Writing solved "how do we remember things across generations?" — and created problems of literacy, education, record-keeping, law, bureaucracy, and literature.

The printing press solved "how do we spread knowledge at scale?" — and created problems of intellectual property, censorship, journalism, publishing, public opinion, and democratic discourse.

The steam engine solved "how do we generate mechanical power without muscles?" — and created problems of factory design, worker safety, urban planning, railroad engineering, coal mining, labor relations, and environmental pollution.

Electricity solved "how do we deliver energy anywhere?" — and created problems of grid design, power generation, appliance manufacturing, electrical safety codes, utility regulation, and an entire consumer electronics industry.

The Internet solved "how do we connect all human knowledge?" — and created problems of cybersecurity, digital privacy, online commerce, content moderation, network infrastructure, cloud computing, social media dynamics, and an entire digital economy that employs tens of millions.

Notice the pattern? Each solution didn't just solve a problem. It created an entirely new problem space that was larger, more complex, and more varied than the one it replaced. The stack grows. It never shrinks. It's turtles all the way down and all the way up.
249
340
1.4K
143.3K