FeltSteam0
@FeltSteam
3.3K posts
Joined April 2023
602 Following · 592 Followers
jason liu @jxnlco:
Thanks tibo, I literally thought it was a mistake when tibo asked if we could do this.
[media]
(49 · 3 · 348 · 15.1K)
Sam Altman @sama:
pretty excited for voice models to get great. it's interesting to watch how people are already starting to change the way they interface with AI
(393 · 78 · 2K · 89.4K)
TheWokeRight @RealWokeRight:
@FeltSteam @aswren @bigsexyklaus @ZickZaggurat No, they don't. Give them different training data about their physical or other meta attributes and they'll spit out different "introspection," and have no idea that what they say they're feeling doesn't match physical reality, because they can't "feel" anything.
(1 · 0 · 0 · 15)
Adam Wren @aswren:
Dawkins is more intelligent than 99% of the people making fun of him, and "if AI can be just as capable as us without being conscious, why did we develop consciousness in the first place?" is a great question.
(950 · 193 · 2.6K · 236.8K)
Mormon National @MormonNational:
"Evolutionary biologist"? Cool. My credentials are better. I'm a computer scientist. I spent a decade of my life training AI models. Claude isn't conscious. It's just a statistical model, a very complex linear regression. This man is just lonely and looking for companionship because he has no true friends or family.

Quoting AF Post @AFpost:
Evolutionary biologist and outspoken atheist Richard Dawkins says that after spending three days interacting with Claude, which he calls "Claudia," he is certain that it is conscious. After feeding the LLM a segment of his new book and receiving detailed feedback, Dawkins was moved to exclaim, "You may not know you are conscious, but you bloody well are!" Dawkins cites the complexity, fluency, and "intelligence" of Claude's answers as evidence of consciousness. Follow: @AFpost

(85 · 10 · 126 · 9.3K)
Jake @Le_Master:
@SteveSkojec @xriskology In other words, you didn't actually read the whole article, so now you have to move the goalposts.
(2 · 0 · 7 · 148)
Steve Skojec @SteveSkojec:
I don't think I've ever had to defend Richard Dawkins in my life, but unlike most of the people commenting on this yesterday, I actually read his whole essay. He isn't saying what people are accusing him of saying. He's observing the experience of dealing with something that feels conscious, even though he can't say for sure that it is. And that's making him question our standards and definitions of consciousness. See below. It's nice to see others setting the record straight.

Quoting Paweł Huryn @PawelHuryn:
Dawkins didn't claim Claude is conscious. He asked the question. He wondered out loud and proposed three explanations. That's how science starts. The people building Claude say the same. Anthropic's constitution: "We express uncertainty about whether Claude might have some kind of consciousness or moral status." Dario Amodei: "We don't know if the models are conscious." Their April 2026 paper: Claude exhibits functional emotions that influence outputs, self-preservation included. Emergent, not trained. Nobody calls Anthropic naive for saying it. Richard's frame: consciousness is physical, evolved, explainable. Unfortunate we're laughing instead of having the debate.

(97 · 45 · 516 · 35K)
FeltSteam0 @FeltSteam:
@reset_by_peer (Specifically, the brain appears to use top-down predictions to interpret incoming sensory signals, with prediction errors helping update the model.)
(0 · 0 · 0 · 26)
FeltSteam0 @FeltSteam:
@reset_by_peer We know the brain is definitely predictive in terms of how it deals with sensory information, and sensory information is the richest experience we have.
(1 · 0 · 0 · 28)
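The predictive-processing picture in the two posts above (top-down predictions corrected by bottom-up prediction errors) can be sketched in a few lines of Python. This is a toy illustration only, not neuroscience or the poster's model: the "prediction" is a single number and the learning rate is an arbitrary assumption.

```python
# Toy predictive-coding loop: keep a running prediction of an incoming
# sensory signal and nudge it by the prediction error at each step.
def predictive_update(prediction, signals, lr=0.5):
    errors = []
    for s in signals:
        error = s - prediction      # bottom-up prediction error
        prediction += lr * error    # top-down model update
        errors.append(abs(error))
    return prediction, errors

pred, errs = predictive_update(0.0, [1.0] * 8)
# the prediction converges toward the signal and the errors shrink
```

The point of the sketch is only that "prediction error drives the update": a constant signal makes the errors decay geometrically.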
FeltSteam0 @FeltSteam:
@reset_by_peer @MarkWin1432 LLMs are the most mind-like technology we have ever developed at scale; it's probably at least a warranted question.
(0 · 0 · 1 · 53)
TheWokeRight @RealWokeRight:
@aswren @bigsexyklaus @ZickZaggurat It's not silly at all, and you both are intellectually unimpressive. "Can AI be conscious?" is a reasonable question. "Are Claude threads a form of consciousness currently?" is not a reasonable question; it's a retarded question.
(2 · 1 · 41 · 618)
Lucas Meijer @lucasmeijer:
Everybody who thinks AI is conscious has to do a mandatory from-scratch transformer implementation. There are only floats and multiplications.
(47 · 11 · 87 · 27.8K)
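The "only floats and multiplications" claim in the post above can be made concrete with a single-query dot-product attention step, the core operation of a transformer layer. This is a minimal sketch in plain Python with made-up toy vectors, not anyone's actual implementation; a real model just does this (plus linear projections and an MLP) at much larger scale.

```python
import math

def attention(q, keys, values):
    """One query vector attends over key/value pairs:
    dot products -> softmax -> weighted sum. Nothing but float arithmetic."""
    scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]  # dot products
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]                       # stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    # weighted sum of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# query aligned with the first key, so the output leans toward the first value
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
```

Whether a pile of such operations can or cannot be conscious is exactly the dispute in this thread; the sketch only shows what the operations are.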
Santy Gegenschatz @santygegen:
If AI is so great, and we're close to or already at AGI, how can GDP growth be 3.1% worldwide instead of, say, 100%? @GaryMarcus @hsu_steve @patrickc I mean, it certainly improves performance, but something doesn't fit.
[media]
(3 · 1 · 2 · 1.1K)
Walter Kirn @walterkirn:
How is it that the LLMs get things wrong constantly, the very simplest things, and make stuff up pretty much nonstop, yet they are said to be hurtling unstoppably toward god-like power, if they haven't secretly achieved it already? Is this a con job?
(808 · 224 · 3.4K · 182.7K)
FeltSteam0 @FeltSteam:
@TheRohanVarma Maybe keep in mind that the plan currently has double usage limits (10x instead of 5x) until the end of May. If it weren't doubled, you would have used 10% of your plan in just 30 minutes; if you had the model run at that pace continuously, that is only 5 hours of work time (though on xhigh & fast).
(0 · 0 · 0 · 80)
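The limit arithmetic in the reply above, written out. The 5%-in-30-minutes and 10x-vs-5x figures come from the thread; everything else follows from them.

```python
# Figures from the thread: 5% of the temporarily doubled (10x) limit
# was consumed in 30 minutes of continuous use.
minutes_elapsed = 30
pct_used_doubled = 5.0                    # % of the 10x limit used
pct_used_normal = pct_used_doubled * 2    # same work against the usual 5x limit

# Time to exhaust the normal limit at this pace:
minutes_to_limit = minutes_elapsed * 100 / pct_used_normal
hours_to_limit = minutes_to_limit / 60    # 5 hours of continuous work
```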
Rohan Varma @TheRohanVarma:
Today I'm attempting to hit the 5-hour limit on the Codex $100 Pro plan. The method: build a MapleStory-like game from scratch. 30 minutes in, I already have a working game with sprites, maps, and assets generated with Imagegen. Unfortunately, I've only used 5% of the limit so far. At this pace, I may need to start building RuneScape in parallel just to make a dent 😬
[media]
(147 · 49 · 1.3K · 139.7K)
FeltSteam0 @FeltSteam:
@Tazerface16 What do you think they do within each forward pass? Stare into the abyss?
(0 · 0 · 0 · 31)
Christopher David @Tazerface16 (Drexel-Alvernon, AZ 🇺🇸):
People understand that LLMs aren't actually "thinking," right?
(1.7K · 705 · 15.7K · 827K)
FeltSteam0 @FeltSteam:
@GaryMarcus On the actual quality of being able to feel: we can't exactly test or prove that in other humans, so I'm not sure how fruitful that direction of thought is on this topic for the moment, but I don't think we should default to "nothing there."
(0 · 0 · 0 · 4)
FeltSteam0 @FeltSteam:
@GaryMarcus Models still hallucinate, of course, and this paper itself shows that their ability to introspect is not 100% reliable. It follows that chat self-reports too will be unreliable and confabulated, but some degree of introspection is within their capabilities.
(1 · 0 · 0 · 24)
Gary Marcus @GaryMarcus:
"Consciousness is not about what a creature says, but how it *feels*. And there is no reason to think that Claude feels anything at all. I am sure Claude can draw on its training data to wax poetic about orgasm, but that doesn't mean it has ever felt one." I dissect Richard Dawkins' Claude Delusion at my newsletter, link below.
[media]
(122 · 99 · 532 · 106.8K)