James Ramsden
@JamRam96
2.5K posts
Born in London, live in Liverpool, Newcastle United fan. Don’t ask how. He/Him
Liverpool, England · Joined August 2010
50 Following · 23 Followers
Yosarian2
Yosarian2@YosarianTwo·
I really want "AI cannot be intelligent by definition" people to explain what they mean by "intelligence". Or better yet, to make concrete predictions about what they think LLMs won't be able to do because they lack "intelligence", and then notice when they do those things.
onion person@CantEverDie

my biggest pet peeve around LLMs is when people (usually those invested in its success) call it “intelligent”. it definitionally, how it functions on a base level, is not intelligent. the way LLMs are built, it can never hit real intelligence. it’s just predictive

85 replies · 23 reposts · 556 likes · 38.7K views
tumtumtum 🇺🇦
tumtumtum 🇺🇦@tumtumtum·
@JamRam96 @AVMiceliBarone Nope, not a hint of consciousness let alone independent thought. Reptile brain (possibly inbred brain based on profile pic) scared af that they’re not special tho.
1 reply · 0 reposts · 0 likes · 16 views
Antonio Valerio Miceli Barone
Antonio Valerio Miceli Barone@AVMiceliBarone·
This is the "simulated water is not wet" argument, and it's wrong as it has always been. Computation is invariant under simulation. This is the fundamental property of Turing machines. An interpreter running a Python program is not just simulating it, it's actually running it.
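[Editor's note: not part of the thread. A minimal Python sketch of the "computation is invariant under simulation" point above: handing source text to an interpreter does not produce a description of the computation, it produces the computation itself, with the same result as running it directly. The program text is illustrative.]

```python
# An interpreter "simulating" a program by executing its source text.
source = "result = sum(i * i for i in range(10))"

namespace = {}
exec(source, namespace)  # Python runs the program; this IS the computation

# Running the same computation directly gives the same value.
direct = sum(i * i for i in range(10))
assert namespace["result"] == direct == 285
```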
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

47 replies · 11 reposts · 113 likes · 7.4K views
James Ramsden
James Ramsden@JamRam96·
@tautologer False. In order to predict what a smart person would say you need to be at least as smart as their pet parrot
0 replies · 0 reposts · 0 likes · 3 views
tautologer
tautologer@tautologer·
there is no functional difference between predicting what an intelligent person would say in a situation and actually being intelligent. in order to predict what a smart person would say you need to be at least as smart as them
onion person@CantEverDie

my biggest pet peeve around LLMs is when people (usually those invested in its success) call it “intelligent”. it definitionally, how it functions on a base level, is not intelligent. the way LLMs are built, it can never hit real intelligence. it’s just predictive

177 replies · 66 reposts · 1.9K likes · 135.9K views
James Ramsden
James Ramsden@JamRam96·
@tumtumtum @AVMiceliBarone You and your tech-bro ilk are all fucking dangerous sickos, to a man. Amoral, anti-intellectual scum. Fuck off and rot in a pool of your own ejaculate you mindless freak
1 reply · 0 reposts · 0 likes · 27 views
sugnerhan
sugnerhan@swengsbelike·
@AVMiceliBarone it makes no practical difference if we are real or simulated, "is it really wet" doesn't matter if the result has all the relevant properties of wetness, "does it really think" doesn't matter if the result has all the properties and benefits of thinking, "is it conscious"..
1 reply · 0 reposts · 0 likes · 32 views
tumtumtum 🇺🇦
tumtumtum 🇺🇦@tumtumtum·
These arguments come up over and over again and never have I read how they prove that humans aren’t everything they claim simulated consciousness to be. There’s been zero evidence that anything going on in our brains is anything more than neuro-chemical reactions. What’s more, we simulated those processes on silicon and got intelligence that is starting to feel more and more conscious.
1 reply · 0 reposts · 1 like · 66 views
James Ramsden
James Ramsden@JamRam96·
@ThePremiseOfIt @YosarianTwo Also, you can't get to an ontological distinction by considering output only, because one system can have many types of output. The difference in output between "my dog, barking" and "my dog, pooping" is immediately tangible, but that difference doesn't imply these two are different dogs
0 replies · 0 reposts · 0 likes · 8 views
James Ramsden
James Ramsden@JamRam96·
@ThePremiseOfIt @YosarianTwo Wait, is it your view that consciousness is "manifestly non-physical"? How does one come to such a view outside of religious dogma?
1 reply · 0 reposts · 0 likes · 5 views
James Ramsden
James Ramsden@JamRam96·
@YosarianTwo Because of that attractor property, we could use neurons to do computations (as has been done already), but this doesn't imply that any network of neurons is a computer; computation is a particular thing, and requires a conceptual map between the physical and the symbolic
0 replies · 0 reposts · 0 likes · 4 views
James Ramsden
James Ramsden@JamRam96·
@YosarianTwo That's not to say physical systems cannot have stable attractors, and neurons can be thought of as having two of those which we could term "on" and "off", but we must be aware that such quantisations are not part of the neuron itself, only of our particular conceptual framework
1 reply · 0 reposts · 0 likes · 3 views
🇩🇿
🇩🇿@aboupinel·
@JamRam96 @YosarianTwo the inference from "consciousness is a physically instantiated process" to "computers can only simulate consciousness" is obviously not warranted
1 reply · 0 reposts · 0 likes · 18 views
James Ramsden
James Ramsden@JamRam96·
@YosarianTwo As an aside, having dabbled in electrical engineering, I can say this is untrue. The physics of all circuits is continuous, and the output voltage of a FET is unquantized. We assign "on" or "off" to a FET's output based on a voltage threshold, but the output voltage itself floats
0 replies · 0 reposts · 0 likes · 3 views
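[Editor's note: not part of the thread. A hypothetical sketch of the threshold point above: the voltage values are continuous physical quantities, and "on"/"off" are symbols we assign by comparing against a chosen cutoff. The 1.5 V threshold and the sample readings are illustrative, not taken from the thread.]

```python
V_THRESHOLD = 1.5  # volts; an arbitrary logic threshold chosen by us

def to_logic_level(voltage: float) -> str:
    """Map a continuous output voltage onto a binary symbol."""
    return "on" if voltage >= V_THRESHOLD else "off"

# The physical values float freely; the quantization lives in our map.
readings = [0.02, 0.7, 1.49, 1.51, 3.3]
print([to_logic_level(v) for v in readings])
# → ['off', 'off', 'off', 'on', 'on']
```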
Yosarian2
Yosarian2@YosarianTwo·
@JamRam96 A circuit in a computer either fires or does not transmit electricity according to real-world physics. Nothing makes one more or less "symbolic" than the other
2 replies · 0 reposts · 0 likes · 35 views
James Ramsden
James Ramsden@JamRam96·
@YosarianTwo Ah, but of the two, only computation is intrinsically symbolic. Electromechanical computation *requires* a symbolic map between: - the evolution of a concrete system (the computing machine) along a physics, and - the evolution of an abstract system (an algorithm) along a logic
0 replies · 0 reposts · 0 likes · 2 views
Yosarian2
Yosarian2@YosarianTwo·
@JamRam96 Descartes was a brilliant man, but we understand the brain better than he did. In any case, consciousness can and must rely on a physical system and have both physical inputs and outputs if it is to have any meaning at all.
2 replies · 0 reposts · 0 likes · 41 views
James Ramsden
James Ramsden@JamRam96·
@YosarianTwo So a computational system cannot physically instantiate consciousness, it could only ever simulate doing so. This, for me, implies that a computer can't *understand* things (because understanding is a subset of consciousness), so ipso facto cannot exhibit human-level intelligence
1 reply · 0 reposts · 1 like · 20 views
James Ramsden
James Ramsden@JamRam96·
@YosarianTwo In other words, computation depends on some extrinsically constructed relation between a physical system and an abstract system. However, consciousness, as Descartes famously observed, depends on nothing extrinsic to itself whatsoever.
1 reply · 0 reposts · 1 like · 18 views