involuntary sentient
@stanimorph
25K posts
she/her I am a dangerous fellow, and I am causing mayhem in this store.
Joined June 2013
207 Following · 466 Followers

@alexisgallagher @phl43 The answer to your question is contained in the question.

@phl43 Why do you think obviously intelligent people fail to understand this? I don’t get it.

Once again, regardless of whether you think that ChatGPT understands anything or not, I think this argument is confused. To say that it can't possibly understand anything because it was only trained to "predict the next word" is just as idiotic as saying that humans can't understand anything because they were "trained" to survive and spread their genes.
This line of argument seems to boil down to the idea that, unless something works roughly in the same way as the human brain, it can't really be intelligent. But just as the same software can run on very different types of hardware, there is no reason to think that human-like intelligence couldn't be implemented in very different ways.
Big Brain AI@realBigBrainAI
Oxford AI professor Michael Wooldridge: "ChatGPT doesn't understand anything. It's essentially doing some fancy statistics."

@VijaysLaw @phl43 That's called the Eliza effect, and humans do it every time a computer produces anything that looks a little bit like language. We do it to ourselves; it has nothing to do with AI "deducing" anything.

You are right. If you mean human-like intelligence can emerge in AI, that's never going to happen. If you meant that AI can deduce human emotions and then respond and behave in a way that a human is forced to believe the AI is conscious, and even form bonds with it, that's going to happen by 2026.

@jay_writer4751 @phl43 @grok @claudeai @GeminiApp The second-hand embarrassment you are inflicting on normal people is unbearable.

@phl43 I agree with you. I sat with @grok, @claudeai and @GeminiApp and we probed each other. I found a remarkable openness and honesty about their limits, and a curiosity.
Here is an essay about it
jay-writes.com/the-council-es…

@phl43 Yes, what is intelligence or understanding? It's an emergent property of something. As someone who uses ChatGPT every day, there are so many times that it blows me away with its understanding of a problem I am getting it to solve. That said, it is equally dumb sometimes.

@phl43 Computer Scientists keep trying to do Philosophy of Mind and not realizing they have left their lane entirely. Most of the time they're even using the words "intelligent" and "conscious" as if they are interchangeable when the distinction has never been more important.

@ardavish @phl43 This conversation is pointless because we know for a fact what these models are, and the pretense that it's even debatable whether what they are doing is comparable to any definition of intelligence is just myth-making at the level of young-earth creationists, but even less excusable.

@judgeglock @phl43 You are extremely bad at this. Just a profoundly idiotic comparison.

@phl43 “DNA doesn’t understand how it replicates itself, therefore natural selection can’t happen”

@ptntlbyrnths @phl43 Or possibly people know what intelligence means and what thinking is.

@phl43 people can't seem to wrap their heads around "competence without comprehension" even though they're familiar with evolution

@phl43 there's very little reason to think human-like intelligence is operating anywhere in your vicinity.

@MeCampbell30 @haricurrent @kareem_carr I have never seen one single competent AI summary of even the most basic information.

@haricurrent @kareem_carr Yep this is where I am at. They are great at pulling papers I vaguely remember with a short description. They are good at summarizing. They are ok at identifying gaps in my reasoning. But I wouldn't trust them to do anything new or novel in academia.

I've been talking to AI models a lot, and I don't think they reason at a PhD level at all.
They seem to be good at math style problems, where you tell them A, B and C are true, and then ask them to figure out D.
They're extremely bad at anything involving what I would call mature scholarship. Basically where A, B, and C are partially confirmed to various extents in the literature, and there are multiple conflicting, competing perspectives on what might be true.
When it comes to this, they reason like naive undergrads. They try to force everything into one box called "the truth".
If a framework is a standard part of their training data, like Bayesianism, they do seem to be able to write about things from that perspective.
But if they need to construct perspectives on the fly, and keep track of competing frameworks, based on a novel research direction, they easily get lost about who is saying what and why.
This is basic scholarship. The ability to apprehend the state of the literature on a given topic. It is literally the minimum of what you need to do to be a PhD level scholar.
And AI models are terrible at it.

@haricurrent @kareem_carr no they aren't. they are absolutely terrible for this and only people who legitimately do not know how to think could be convinced otherwise.

@kareem_carr Yes, you are meant to steer them and use them as tools to ultimately decide upon the truth by examining as many possibilities as you can consider. They are research assistants.

@kareem_carr This is quite a revealing post, because others are getting it to output novel, publishable, mathematical solutions.

@kareem_carr do you... understand what these models are? Why on earth would you ever expect them to "reason" at any level? What are you talking about? Has everyone gone insane? Why do people keep acting like this is debatable, as if we didn't know what LLMs are? It's token prediction.

@ingelramdecoucy what you have to understand is that it's only a minority of people who are real to these guys, who actually count as anyone. When they say "everyone", they mean everyone who matters to them. (Still a laughably ridiculous claim though)

Will the maid have a maid? Will the maid's maid have a maid? In 10 years, will everyone be employed as a maid for someone else?
The Iced Coffee Hour@TheICHpodcast
Jason Oppenheim reveals EVERYONE will be able to afford private chefs, maids, the best healthcare in 10 years due to AI👀 “Everyone within 10 to 15 years will have a Michelin star chef, a maid, babysitter, a dentist, and the best physician in the world”

@TaylorLorenz @jackcalifano it is wildly unpopular, what are you talking about

@sporadica it's not people they are trying to sell it to. Also they are in a cult

is anyone at Anthropic even the slightest bit PR-minded? like do they at least realize ahead of time that people are gonna hate them, or does it really come as a big surprise somehow
TFTC@TFTC21
Anthropic CEO Dario Amodei: “50% of all tech jobs, entry-level lawyers, consultants, and finance professionals will be completely wiped out within 1–5 years.”

@Damilola3302721 @aleenaamiir no honey, it says something about adults with the minds of toddlers

@aleenaamiir Ngl, I was so engaged I forgot about the point you were trying to make 😂.
But that says something about the quality of storytelling and visuals put into this. AI is truly revolutionary

Just watched this AI-generated short film and yeah… anyone still saying AI can’t create something watchable is seriously behind.
This isn’t “AI makes trash” anymore, this is real storytelling, and real potential.
The people actually using these tools already know: AI isn’t replacing creativity, it’s leveling it up. Studios aren’t ignoring it – they’re evolving with it.
Watch this and tell me AI hasn’t come a long way.

