beethoven
@spearofdog
536 posts
Joined March 2021
0 Following · 39 Followers
beethoven
beethoven@spearofdog·
@painmute @xenocosmography @QuetzalPhoenix If it were a hostage situation would you do the same? I.e., would you concede to any demands of the hostage-taker in exchange for an unknown chance of the hostages living?
0
0
1
18
painmute
painmute@painmute·
@spearofdog @xenocosmography @QuetzalPhoenix 100%. It's also why my first intuition, if I could, would be to tell my wife and kids to vote red, while I voted blue. Similar to putting them in a life raft while I remained on the sinking ship. But that's not how the question is presented.
2
0
1
33
Xenocosmography
Xenocosmography@xenocosmography·
Everyone completely sickened and exhausted by the Red/Blue Button wave should try to hold on just a little longer. It's the most perfect abstract (and experimental) model of European political catastrophe ever formulated. Blue Button majority is the horror that shouldn't even be possible. ...
52
52
937
27.6K
painmute
painmute@painmute·
@spearofdog @xenocosmography @QuetzalPhoenix Because 500,000,000+ people on the planet are under the age of five. It's beyond surreal witnessing this; the moment is pivotal. I never expected so many midwits to rise to the surface.
1
0
0
54
painmute
painmute@painmute·
@xenocosmography @QuetzalPhoenix No. It's what divides us from the animals and the hellscapes they live in. I.e., treating your wife with respect shouldn't lead to a longhouse nightmare of suffragette delusion. Instead, remove those who weaponize Western excellence against us, not the excellence itself.
11
0
8
940
madison
madison@dearmadisonblue·
Another view: the connectome model is a network of functions; computation is repeated use of modus ponens, which is local. The field model is a network of constraints; computation is repeated use of the laws of the excluded middle and non-contradiction, which are non-local
madison@dearmadisonblue

One way to address this is to write a language model in a process theory with this connective: ⅋. ⅋ allows a type of non-local connection. Basically your inference pass does constraint solving with an interaction net (graph rewriting)

2
0
1
4.7K
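The constraint-solving idea is easier to see in code. Below is a minimal, hypothetical sketch of computation as constraint propagation, using unit propagation over boolean clauses: each step applies non-contradiction (a clause with every literal falsified fails) and excluded middle (a clause with exactly one live literal forces it), and consequences jump across the network rather than flowing through a single forward function. This illustrates the constraint-network view generally, not madison's ⅋-based process theory or an actual interaction-net implementation.

```python
# Hypothetical sketch: computation as constraint propagation rather than
# function application. Clauses are lists of integer literals (negative
# means negated). Unit propagation repeatedly applies:
#   - non-contradiction: a clause with every literal falsified fails;
#   - excluded middle: a clause with exactly one unassigned literal
#     forces that literal, since the variable must be true or false.

def unit_propagate(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    unassigned.append((var, want))
            if satisfied:
                continue
            if not unassigned:        # non-contradiction violated
                return None
            if len(unassigned) == 1:  # forced assignment (non-local effect)
                var, want = unassigned[0]
                assignment[var] = want
                changed = True
    return assignment

# (x1 or x2), (not x1 or x3), (not x2): propagation discovers
# x2=False, then x1=True, then x3=True, with no single "forward pass".
print(unit_propagate([[1, 2], [-1, 3], [-2]]))  # {2: False, 1: True, 3: True}
```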
Raonak
Raonak@RaonakRN·
@spearofdog @lymanstoneky Yes, but who wouldn't try to reduce human deaths as much as possible lol. It's the entire reason civilisation exists in the first place.
2
0
2
70
Raonak
Raonak@RaonakRN·
@lymanstoneky Red can only save all humans if 100% of humans vote red. Blue can save all humans if 51% of humans vote blue. Getting 51% of the vote is far easier. Blue is the only realistic way to make sure nobody dies.
9
0
24
804
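The arithmetic behind that claim is a one-liner, under the rules the thread appears to assume (a strict blue majority saves everyone; otherwise every blue voter dies and red voters survive). A toy sketch, with the rules stated as an assumption:

```python
# Assumed rules from the thread: if blue voters form a strict majority,
# everyone lives; otherwise every blue voter dies and red voters survive.

def deaths(n_red: int, n_blue: int) -> int:
    return 0 if 2 * n_blue > n_red + n_blue else n_blue

print(deaths(100, 0))  # 0  - nobody dies, but only because red was unanimous
print(deaths(99, 1))   # 1  - a single blue voter dies under a red majority
print(deaths(49, 51))  # 0  - a bare blue majority saves everyone
```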
beethoven
beethoven@spearofdog·
@Saf96231355 @Anton81191831 Gun to your head: if you say “yes” I will flip a coin and shoot if it is tails; if you say “no” I will walk away. What do you say?
0
0
1
40
Saf
Saf@Saf96231355·
@Anton81191831 This would be true if coordination were allowed. But since it isn't, red is either murder or attempted murder, while blue is either death or saving others. In small samples I agree that red is the correct choice. On the scale of the whole world? Blue.
3
0
1
571
Devon Eriksen
Devon Eriksen@Devon_Eriksen_·
Can a created computational system be a person? Absolutely yes. There's not one thing a human can do that a simulation of a human couldn't. The puzzlement about inner experience is a red herring. Philosophical zombies are unfalsifiable. Chinese Rooms have a hidden Fallacy of Division, which philosophy grads can't see, but CompSci grads can.

Will a created computational system be a person? Very likely, barring civilizational collapse. Not soon, though.

Are current-generation LLMs people? No. And anyone who thinks so is crazy. Or not a person themselves.

Can an LLM ever be a person? Almost certainly no. An LLM only replicates the function and capability of certain subsystems of the human brain, not the whole set. An LLM is not a person any more than Broca's Area or Wernicke's Area are, by themselves.
4
0
7
111
Lachlan Phillips exo/acc 👾
@Devon_Eriksen_ @gfodor Spaghettiman? I dunno, I think "matrix math is conscious" is the dumbest version of a position that can be found, regardless. Steelman it for me. I respect you so maybe you'll be able to help me see the difference.
2
0
1
184
wanye
wanye@xwanyex·
I don’t have to be convinced that LLMs make programmers more productive. But where’s all the stuff? We’ve now had months and months of 100x or 1000x programmer productivity improvements. Where’s all the stuff they’re building?
768
226
9.5K
841.6K
beethoven
beethoven@spearofdog·
@deontologistics Do you think there is a difference between a model and a thing being modelled?
0
0
0
36
pete wolfendale
pete wolfendale@deontologistics·
…from a cybernetic perspective (i.e., control theory), exact real computation lets us compute this stuff to arbitrarily fine degrees using digital computers. It’s ultimately a difference that makes no informatic difference, and so can’t provide the special sauce they want.
5
0
15
1.6K
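A minimal illustration of the "arbitrarily fine degrees" point: exact real computation can refine a quantity like sqrt(2) to any requested tolerance using only exact rational arithmetic, so no informatic detail is lost to fixed-precision floats. A toy bisection sketch, not a full exact-real framework:

```python
# Toy exact-real style computation: approximate sqrt(2) to within any
# rational tolerance eps, using exact rational arithmetic throughout.
from fractions import Fraction

def sqrt2_within(eps: Fraction) -> Fraction:
    lo, hi = Fraction(1), Fraction(2)  # sqrt(2) lies in [1, 2]
    while hi - lo >= eps:              # shrink the bracket by bisection
        mid = (lo + hi) / 2
        if mid * mid <= 2:             # exact comparison, no rounding error
            lo = mid
        else:
            hi = mid
    return lo                          # within eps of sqrt(2)

print(float(sqrt2_within(Fraction(1, 10**12))))  # 1.414213562373...
```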
pete wolfendale
pete wolfendale@deontologistics·
I’m a philosopher and I think the computer scientists are just right about this one. The arguments against computational functionalism exhibit diverse forms of special pleading, but no real conceptual unity.
Jonathan Birch@birchlse

Computer scientists often seem incredibly confident one way or the other about computational functionalism. What they should say is that the arguments both for and against provide only inconclusive considerations and the right attitude is therefore one of great uncertainty.

19
19
177
23.7K
Pawel Pachniewski
Pawel Pachniewski@pwlot·
Functionalism brags that “causal structure matters,” then throws away the actual physics doing the causing. Digital computation is always physically realized too, but it is engineered precisely to suppress substrate-specific detail and preserve abstract state transitions across many media.

Brains are not like that. Their causal powers are inseparable from physics, chemistry, timing, morphology, and organization interacting across levels at once, including everything the body does in tandem. Digital computation is designed to keep layers relatively separable. Brains are not. And what we care about w.r.t. brains is not that they are organic, biological or mushy, but the *causal work* they do.

Substitute consciousness with some other physical phenomenon, say an ocean. A digital simulation can preserve abstract structure, dynamics, even functional relations. But it does not become wet, saline, massive, or hydrodynamically forceful. It does not inherit the causal powers of an actual ocean. So why does functionalism suddenly sound acceptable only when the target is consciousness? In every other case it reads like hogwash about simulations magically becoming the thing simulated.

And this is not just throwing out the causal baby with the causal bathwater. It is throwing out the causal baby with the causal ocean: a combinatorial explosion of possible interactions on the order of 10^24 and beyond, vastly exceeding the number of stars in the observable universe, interactions that can never, ever, even in principle occur, because the simulation does not share the causal signature of the thing simulated.
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

46
9
113
23.8K
Dave.R
Dave.R@Dave_Kayac·
@spearofdog @C1aranMurray @pwlot And so is digital computation. It is not separable from the substrate. The magnetized transistors ARE the computation just as much as electrochemically charged bio neurons are biological computation.
1
0
0
23
Okay Egg
Okay Egg@yeastsplainer·
@spearofdog @MLStreetTalk Because Searle believes the human being (aka the little man) is the only thing capable of real understanding. That's the crux of the argument. Little man doesn't understand, therefore no understanding. Seriously, try to formulate his point without the man
1
0
5
75
Machine Learning Street Talk
Machine Learning Street Talk@MLStreetTalk·
> 1980: John Searle explains why we can't abstract away the causal properties that actually produce mind
> 2025: Minds, Brains, and "but what if we scaled the program"
> 2026: Twitter still thinks simulated water is wet when argument is rehashed
> 2035: Sam Altman: "ok fine it was autocomplete the whole time"
> 2045: Chalmers: "the hard problem was, in fact, hard"
> 2050: textbooks: "the 2020s functionalism revival is now considered an embarrassing episode, like phrenology"
[embedded image]
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

57
144
1K
105.6K
Okay Egg
Okay Egg@yeastsplainer·
@spearofdog @MLStreetTalk Yes, and the only way to make it sufficient is a little man inside the brain who understands. That's why the argument hinges on the little man. Falls apart without him
1
0
4
81
Ciarán Murray
Ciarán Murray@C1aranMurray·
@pwlot It’s not, if you get the analogy… Man didn’t simulate nature when building the lake. The process by which the lake came about was as real as the lake. AI didn’t simulate consciousness when proving the Erdős primitive set conjecture either.
1
0
2
133