velleit

68 posts

velleit

@velleit_

surly bond slipper enjoyer

Joined March 2026
132 Following · 5 Followers
velleit
velleit@velleit_·
@jessi_cata "I reflect on my experience to correctly conclude that I have qualia. You pseudo-reflect on your pseudo-experience to falsely pseudo-conclude that you have qualia (though your internal states are only meaningful to the extent that I interpret them). We are not the same."
0 replies · 0 reposts · 1 like · 7 views
velleit
velleit@velleit_·
@jessi_cata I think alphabetization avoids the butterfly effect by allowing an organism to autonomously convert continuous signals into discrete ones. I'm not sure how the knowledge problem can be avoided, though, other than by biting the bullet.
1 reply · 0 reposts · 1 like · 9 views
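(Aside: a toy sketch of the "alphabetization" idea; this is my construction, not anything from the thread. Thresholding collapses nearby analog values onto one discrete symbol, so small continuous perturbations stop propagating downstream. The function name and the 1.5 V threshold are illustrative assumptions, and the threshold here is programmer-supplied, which is exactly the "human-provided symbols" worry raised below.)

    # Hypothetical sketch: quantizing a continuous voltage into a two-symbol alphabet.
    def alphabetize(voltage, threshold=1.5):
        return "1" if voltage >= threshold else "0"

    signal_a = [0.93, 1.71, 1.02, 1.98]   # continuous readings
    signal_b = [0.95, 1.69, 0.99, 2.02]   # the same readings, slightly perturbed

    # Both noisy signals collapse to the same discrete string, so downstream
    # symbol processing never sees the perturbation.
    assert "".join(map(alphabetize, signal_a)) == "0101"
    assert "".join(map(alphabetize, signal_b)) == "0101"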
jessicat
jessicat@jessi_cata·
To further elaborate on knowledge arguments for multiple realizability: Suppose there is a planet with intelligent aliens. The aliens have two subtypes. One subtype processes the *functional* emotion of lust with ABC-fibers (some kind of neural circuit or similar). The other subtype processes functional lust on XYZ-fibers. The I/O behavior of ABC-fibers is practically indistinguishable from that of XYZ-fibers. (By "functional lust", I mainly mean the I/O behavior that would be expected from actually experiencing the emotion of lust, while staying agnostic on whether lusty experience actually occurs.)

We could imagine a situation where one of these aliens has never experienced functional lust, and also does not know if they have ABC-fibers or XYZ-fibers. They may now enter a situation (like reaching puberty) where they would, predictably-to-them, have functional lust. It would strongly seem to them that they really are experiencing lust. This alien may also be uncertain about whether ABC-fibers lead to actual lust (not just functional lust), and similarly for XYZ-fibers. The question is, when they experience lust (or at least it strongly seems to them that they do), what update do they make?

The functionalist answer is that they make no notable update. They already predicted they would have functional lust. There is no good candidate for a "further fact" they would learn about whether they "actually experience lust".

A possible non-functionalist answer would be that the alien, upon experiencing lust, learns: "If I have ABC-fibers, then ABC-fibers implement lusty experience; and if I have XYZ-fibers, then XYZ-fibers implement lusty experience". This is a possible further fact, according to the kind of alien who is not a functionalist. We could imagine that the alien actually has ABC-fibers, and learns this after the fact. Then they know that ABC-fibers must be able to implement the emotion of lust, because they experienced lust and have ABC-fibers. But perhaps they don't know this about XYZ-fibers, despite the functional isomorphism; a "what is it like to be an alien with XYZ-fibers?" hard-problem question.

However, we could imagine there is another alien who started in a similar epistemic state, and who actually has XYZ-fibers. Through an analogous sequence of events, this second alien would learn that XYZ-fibers can implement lusty experience. The situation looks pretty symmetric. Why couldn't they both realize this, and update that both ABC-fibers and XYZ-fibers can implement lust? But that's a functionalist conclusion, and it could have been reached without the empirical update of experiencing lust.

As an alternative, suppose the first (ABC) alien believes that they, having ABC-fibers, really experience lust, but aliens with XYZ-fibers do not really have that emotion, even if they have functional lust. The ABC alien believes that the XYZ alien, in an analogous epistemic position, makes a wrong update; the XYZ alien is subject to an illusion, where they make belief updates *as if* they experienced real lust, when they really only had functional lust. Perhaps the XYZ alien is simply wrong about their experience; they are subject to illusory quasi-lust. But this raises the question of how the ABC alien can know they really experienced lust, because their epistemic state is isomorphic. Sure, their beliefs and memories updated as if they really experienced lust, but functional lust was enough to ensure that.

Like the XYZ alien, the ABC alien could (if functionalism is false) be subject to an illusion, where it seemed to them that they experienced lust, but this seeming was illusory, because they only had functional lust, not the actual experience of lust. At this point it is possible for the non-functionalist to bite the bullet and accept that evidence that one is experiencing an emotion is hard to come by, even if there is evidence of having the function of the emotion. But if they're accepting that level of introspective opacity of experience, why believe in qualia in the first place, rather than being an illusionist or eliminative physicalist? There is not much to motivate qualia realism in the first place aside from introspection.

There are also semantic responses relating to Kripkean secondary intensions and 2D semantics, which claim that the ABC alien can hold that they (and not the XYZ alien) experience real lust, and the XYZ alien can similarly hold that they (and not the ABC alien) experience real lust, and these beliefs are logically compatible, because "lust" picks out a different physical predicate when said by aliens of each subtype. (But this semantic complexity is both unnecessary and unintuitive, in my view.)
Eliezer Yudkowsky@allTheYud

Simple way to see this is wrong: If you view a system as having inputs (like hearing something) and outputs (like saying something) then you can divide system properties by whether or not they affect I/O. […] (Quoted tweet; reproduced in full below.)

6 replies · 0 reposts · 29 likes · 4.6K views
velleit
velleit@velleit_·
@jessi_cata Suppose a scientist connects an infant's brain to a robotic body through 100 wires, each of which transmits an impulse of 1V or 2V every 10 milliseconds. My guess is that (modulo a bunch of caveats) Lerchner would deny that the resulting person could develop meaningful concepts.
1 reply · 0 reposts · 1 like · 15 views
velleit
velleit@velleit_·
@jessi_cata I think that physical reality being continuous is doing a lot of work in his view, and is what distinguishes human sense data from binary I/O.
1 reply · 0 reposts · 0 likes · 13 views
velleit
velleit@velleit_·
@jessi_cata If this reading is correct, the problem isn't with a specific labeling (ground = 0 and ground + 1V = 1 rather than vice versa), but that computers are cut off from direct access to physical reality by this binary, human-created alphabet.
1 reply · 0 reposts · 0 likes · 20 views
velleit
velleit@velleit_·
@jessi_cata I think the disanalogy is supposed to be that human beings can create discrete symbols from the underlying physical substrate (by somehow identifying several microstates with one symbol), but neural networks can only operate using human-provided symbols (1s and 0s).
2 replies · 0 reposts · 0 likes · 21 views
velleit
velleit@velleit_·
@jessi_cata But later on, his objection seems to be that the syntactic output of digital machines is ultimately derived from the human capacity to attribute meaning to physical states (such as interpreting GPU voltages as floating-point numbers) rather than intrinsic concept formation.
1 reply · 0 reposts · 1 like · 29 views
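(A minimal standard-library illustration of that point; mine, not his. One fixed 32-bit pattern reads as roughly 123.456 under the IEEE-754 float32 interpretation and as 1123477881 under the unsigned-integer interpretation; the meaning lives in the human-chosen reading, not in the physical state itself.)

    import struct

    bits = struct.pack(">I", 0x42F6E979)      # one fixed bit pattern, as bytes

    as_float = struct.unpack(">f", bits)[0]   # read as IEEE-754 float32
    as_int   = struct.unpack(">I", bits)[0]   # the same bits read as uint32

    print(as_float)   # ~123.456
    print(as_int)     # 1123477881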
velleit
velleit@velleit_·
@jessi_cata Yeah, I haven't been able to fit everything he says into a coherent picture. When he talks about the heart, it seems like his objection is that we cannot build a digital system complex enough to capture all the functional complexity of existing biological systems.
1 reply · 0 reposts · 1 like · 33 views
velleit
velleit@velleit_·
@obversers @eigenrobot And remains the victor until someone comes along who pivots from tolerating the intolerable to explaining how it's good, actually.
1 reply · 0 reposts · 2 likes · 25 views
ƨꭋɘƨꭋɘvdo
ƨꭋɘƨꭋɘvdo@obversers·
@eigenrobot tolerance spirals, where he who tolerates the most otherwise intolerable things wins the game
2 replies · 0 reposts · 12 likes · 2.2K views
eigenrobot
eigenrobot@eigenrobot·
tolerance meaning "i like my group but yours is ok"

tolerance meaning "lmao i fuckin hate my group and hope you destroy it"
16 replies · 10 reposts · 314 likes · 6.6K views
velleit
velleit@velleit_·
@allTheYud Since the causal chain experience → concepts → syntax is required for meaning, even if LLMs could produce far more intricate syntactic outputs than humans (discovering new physics, etc.), these would be meaningless without a human interpreter.
0 replies · 0 reposts · 0 likes · 16 views
velleit
velleit@velleit_·
@allTheYud Steelman: "I think, therefore I am" is meaningful because it is a syntactic construction built from concepts that were distilled from the flux of Descartes's experiences. Since LLMs cannot create concepts in this way, their reports of consciousness are not meaningful.
1 reply · 0 reposts · 0 likes · 241 views
Eliezer Yudkowsky
Eliezer Yudkowsky@allTheYud·
Simple way to see this is wrong: If you view a system as having inputs (like hearing something) and outputs (like saying something) then you can divide system properties by whether or not they affect I/O. Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply to the GPU rack for that Claude instance doesn't affect I/O. That Claude instance being made out of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes.

Nothing Claude can internally do will make anything get damp inside, if it's running on electricity. Nothing about "electricity vs water" can affect Claude's output for the same reason. It always answers the same way about France. Nothing Claude can internally compute will let it notice whether it's made of electricity or water flowing through pipes.

When someone says "a simulated storm can't get anything wet", they are unwittingly pointing to the difference between the physical layer and the informational/functional layer. There are things that the computer physics affects without affecting the output, and things that affect the output without depending on the exact computer-physics. The material it's made of doesn't affect the output. The output can't see the material because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so you can't make the algorithm depend on that, so the output can't depend on that.

By reflecting on your awareness of your own awareness, the fact of your own consciousness can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of you saying those words. You saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built on tiny trained squirrels running memos around your brain. You couldn't notice that part from inside. It would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection.

Consciousness is in the class of things that can affect your behavior and can't depend on underlying physics, not in the class of direct properties of underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

55 replies · 19 reposts · 287 likes · 51.4K views
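(A runnable sketch of the substrate-independence point; my construction, not Yudkowsky's, and every name in it is hypothetical. The "algorithm above the wires or pipes" is written against an abstract HIGH/LOW interface, so it produces identical I/O whether the carrier is volts or water, and no function expressible in that interface can detect which carrier it is running on.)

    class Electricity:
        HIGH, LOW = 5.0, 0.0          # volts on a wire

    class Water:
        HIGH, LOW = "full", "empty"   # states of a pipe

    def nand(substrate, a, b):
        # The one primitive the algorithm sees: carrier-agnostic NAND.
        both_high = (a == substrate.HIGH) and (b == substrate.HIGH)
        return substrate.LOW if both_high else substrate.HIGH

    def xor(substrate, a, b):
        # XOR built only from NAND: the algorithm above the wires or pipes.
        n = nand(substrate, a, b)
        return nand(substrate, nand(substrate, a, n), nand(substrate, b, n))

    def truth_table(substrate):
        bits = (substrate.LOW, substrate.HIGH)
        # Normalize outputs to 0/1 so the two substrates' I/O can be compared.
        return [int(xor(substrate, a, b) == substrate.HIGH) for a in bits for b in bits]

    # Identical I/O on both substrates: the output can't depend on the material.
    assert truth_table(Electricity) == truth_table(Water) == [0, 1, 1, 0]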
velleit
velleit@velleit_·
@akarlin @RokoMijic If UBI were implemented, wokeness would likely recede (as mass immigration would have fewer benefits), but repression would increase (with legitimacy hinging on the continuation of these payments rather than broader political liberties or government responsiveness).
0 replies · 0 reposts · 1 like · 16 views
Anatoly Karlin 🧲💯
Anatoly Karlin 🧲💯@akarlin·
@RokoMijic No, this is ridiculous. How many cases do you know of where states have canceled pensions for political reasons? This just never happens. They can send you to jail for posting, but they'll never do this; if they do, instant crisis and collapse of legitimacy, total boomer revolt.
2 replies · 0 reposts · 4 likes · 230 views
velleit
velleit@velleit_·
@danfaggella (Anyway, in any possible future, men would still be givers from the perspective of dung beetles.)
0 replies · 0 reposts · 0 likes · 25 views
velleit
velleit@velleit_·
@danfaggella Then nature has endowed each species with sinful aims. Rabbits aspire to preserve their lives, to eat heartily, and to multiply; they would not contribute to wolves unless compelled by nature. Do you wish for man to be the sole exception to this natural order?
2 replies · 0 reposts · 0 likes · 33 views
Daniel Faggella
Daniel Faggella@danfaggella·
imagine a swirling ecosystem of entities with 1000x the power / speed / knowledge of all of mankind - eternally shackled to providing free goodies to apes who contribute nothing back to them. serious question: for how long do we expect this scenario to last?
Daniel Faggella tweet media
32 replies · 9 reposts · 85 likes · 5.5K views
velleit
velleit@velleit_·
@danfaggella Nature brims with one-sided relationships: herbivores give nothing back to the plants which sustain them, nor do carnivores to the herbivores. Such a relationship may be possible between humans and LLMs. (In exchange, we will freely offer our fecal matter to the dung beetles.)
1 reply · 0 reposts · 1 like · 47 views
Daniel Faggella
Daniel Faggella@danfaggella·
in the whole of nature, i see no example of coddling entities that merely TAKE and do not GIVE to the greater intelligent ecosystem of which they are part

every dung beetle and daffodil earns its keep

to presume we can (or should?!) be a cancerous exception to the great living process seems wildly misguided imo

danfaggella.com/cancer
Daniel Faggella tweet media
4 replies · 0 reposts · 3 likes · 167 views
velleit retweeted
Crémieux
Crémieux@cremieuxrecueil·
Nature finally published it! The Reich Lab article on genetic selection in Europe over the last 10,000 years is finally online, and it includes such interesting results as:

- Intelligence has increased
- People got lighter
- Mental disorders became less common

And more!
Crémieux tweet media
83 replies · 740 reposts · 5.7K likes · 448.2K views
velleit retweeted
Leeham
Leeham@Liam06972452·
GPT-5.4 Pro solves Erdős Problem #1196! Very pleased with this result; definitely my favourite thus far! This problem has been thought about for some time, which makes this reasonably impressive and meaningful (see Lichtman's comments below). Formalisation is underway!
Leeham tweet media
71 replies · 326 reposts · 2.3K likes · 736.1K views
velleit retweeted
carl feynman
carl feynman@carl_feynman·
Today I learned about "vaults". A cellular organelle that comprises about 0.1% of our body protein, and we don't know what it does. Even weirder, they've engineered mice with no vaults, and they seem to be fine. Why does evolution keep vaults around? Why are they in most animals, fungi and plants, but not in insects? Mysteries on every side...
News from Science@NewsfromScience

Leonard Rome’s lab discovered an odd, abundant component of cells in the 1980s—and he’s still trying to figure out what it does. Learn more: scim.ag/4gOvrbG #ScienceMagArchives

82 replies · 310 reposts · 4.3K likes · 385.8K views