Server Server

159 posts

@ServerServer19

Consciousness survival theory

Joined November 2021
210 Following · 17 Followers
Server Server
Server Server@ServerServer19·
@TrueAIHound @anilkseth The soul is pseudoscience. It cannot be proven and is not falsifiable
Russian
0
0
0
8
AGIHound
AGIHound@TrueAIHound·
"There is no magical property." ~ @anilkseth

This is a common argument that religious physicalists make against the existence of the soul: if it can't be explained by physics alone, it's magical and should therefore be rejected. It's a weak argument bordering on pseudoscience imo. It's disappointing seeing it used by Seth.

No physicalist/materialist can explain the 3D scene and the colors we all perceive in front of us based solely on the spiking activity occurring in the visual cortex. No, it's not magical, but it's not purely physical either. There's something missing. Denying that something else is needed is the obsession of sci-fi fruitcakes and scammers. God knows we have plenty of those in the fake-AI community. No, we are not just meat machines.

"There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." ~ Shakespeare
Anil Seth@anilkseth

@sd_marlow @watdohell @Plinz @aran_nayebi There is no magical property. I prefer the term conscious to sentient. The most fleshed out answers are here pubmed.ncbi.nlm.nih.gov/40257177/ and here noemamag.com/the-mythology-…

English
6
4
22
1.1K
Server Server
Server Server@ServerServer19·
@TrueAIHound @anilkseth It is physics. The hard problem of consciousness cannot be solved any other way. Anesthesia already shows that this is so
Russian
1
0
0
18
Server Server
Server Server@ServerServer19·
@fchollet Quantum mechanics cannot be reproduced by code. The result is uncomputable. Phenomenal consciousness is quantum
Russian
0
0
0
3
Server Server
Server Server@ServerServer19·
@anilkseth @sd_marlow @watdohell @Plinz @aran_nayebi The unity of consciousness violates the principle of locality and lies outside the center of the light cone. Phenomenal consciousness is a quantum phenomenon
Russian
0
0
0
37
Aran Nayebi
Aran Nayebi@aran_nayebi·
Disclaimer: AI consciousness is far from settled. The strong claim that AI couldn't be conscious without being biological doesn't follow from neuroscience. Brains are physical systems; if causal organization is what matters, then implemented nonbiological systems remain in play.
Anil Seth@anilkseth

1/2 Why AI is unlikely to become conscious – my 2026 @TEDTalks is now online. What do you think about the prospects for 'conscious AI'? ted.com/talks/anil_set…

English
71
26
227
44.5K
Robert Stalman
Robert Stalman@rstallie·
@Philip_Goff It's not symmetrical, I think. Physicalists don't posit intrinsic properties (only causal roles). Once you introduce intrinsic properties, you face a further question: why those intrinsic properties should have the specific (phenomenal) character you give them.
English
6
0
9
3.2K
Philip Goff
Philip Goff@Philip_Goff·
No one expects physicalists to explain why physical reality exists.
Robert Stalman@rstallie

@Philip_Goff Wanna know my main problem with panpsychism? It gives matter 'intrinsic' consciousness without explaining why those intrinsic properties should be conscious rather than anything else.

English
28
4
72
12.8K
Server Server
Server Server@ServerServer19·
@DanielSamanez3 @_fernando_rosas It violates the principle of locality and lies outside the center of the light cone. Spatial continuity under infinite division of space would mean an infinite amount of information, which cannot be computed; it is a physical quantum state. Quantum information cannot be copied
Russian
0
0
0
10
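The "quantum information cannot be copied" line above is the no-cloning theorem. A minimal numeric sketch of the standard linearity argument (the state vectors below are illustrative, not from the thread): a single linear map U that cloned every state, U(|ψ⟩⊗|0⟩) = |ψ⟩⊗|ψ⟩, would preserve inner products and therefore force ⟨ψ|φ⟩ = ⟨ψ|φ⟩² for every pair of states, which fails whenever the overlap is strictly between 0 and 1.

```python
import numpy as np

# No-cloning sketch: a universal cloner U would require <psi|phi> = <psi|phi>^2
# for every pair of states, since U preserves inner products. Any overlap
# strictly between 0 and 1 breaks this equality, so no such U can exist.
psi = np.array([1.0, 0.0])                  # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2)     # |+>, non-orthogonal to |0>
overlap = float(psi @ phi)                  # <psi|phi> = 1/sqrt(2)
print(overlap, overlap ** 2)                # 0.707... vs 0.5 -- a contradiction
```

Classical bit strings have no such restriction, which is why the thread treats copyability as the dividing line between classical and quantum information.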
Server Server
Server Server@ServerServer19·
@PhysInHistory Only if phenomenal consciousness is quantum
Russian
0
0
0
4
Server Server
Server Server@ServerServer19·
@aran_nayebi @SMcfarnell The unity of consciousness violates the principle of locality and lies outside the center of the light cone. This cannot be replicated without quantum mechanics
Russian
0
0
0
7
Aran Nayebi
Aran Nayebi@aran_nayebi·
Agree with Chalmers' take. As a neuroscience & AI researcher, I'm puzzled why this is remotely controversial, though. The "AI consciousness" claim rests on:

1. Brains are conscious.
2. Brain processes are physical.
3. Physical processes are Turing computable.

(1) is non-controversial. (2) is standard neuroscience. (3) is the Physical Church-Turing Thesis. Rejecting (2) or (3) implies belief in dualism or hypercomputers (e.g., Penrose's "Orch OR"). I argue against hypercomputers in my 2014 Minds & Machines article: arxiv.org/abs/1210.3304. Open to concrete arguments against (1)-(3).
David Chalmers@davidchalmers42

this clip of me talking about AI consciousness seems to have gone wide. it's from a @worldscifest panel where @bgreene asked for "yes or no" opinions (not arguments!) on the issue.

if i were to turn the opinion into an argument, it might go something like this: (1) biology can support consciousness. (2) biology and silicon aren't relevantly different in principle [such that one can support consciousness and the other not]. therefore: (3) silicon can support consciousness in principle.

note that this simple argument isn't at all original -- some version of it can probably be found in putnam, turing, or earlier. note also that the (controversial!) claim that the brain is a machine (which comes down to what one means by "machine") plays no essential role in the argument.

of course reasonable people can disagree about the premises! perhaps the key premise is (2) and it requires support. one way to support it is to go through various candidates for a relevant principled difference between biology and silicon and argue that none of them are plausible. another way is through the neuromorphic replacement argument that i discuss later in the same conversation.

some see a tension between (1)/(3) and the hard problem. but there's not much tension: one can simultaneously allow that brains support consciousness and observe that there's an explanatory gap between the two that may take new principles to bridge. the same goes for AI systems.

this isn't a change of mind: i've argued for the possibility of AI consciousness since the 1990s. my 1994 talk on the hard problem (youtube.com/watch?v=_lWp-6…) outlined an "organizational invariance" principle that tends to support AI consciousness. you can find versions of the two strategies above for arguing for premise 2 in chapters 6 and 7 of my 1996 book "the conscious mind".

i'm not suggesting that current AI systems are conscious. but in a separate article on the possibility of consciousness in language models (bostonreview.net/articles/could…), i've made a related argument that within ten years or so, we may well have systems that are serious candidates for consciousness. the strategy in that article on LLM consciousness is analogous to the first strategy above in arguing for AI consciousness more generally. i go through the most plausible obstacles to consciousness in language models, and i argue that even if these obstacles exclude consciousness in current systems, they may well be overcome in a decade.

of course none of this is certain. but i think AI consciousness is something we have to take seriously. [the full conversation with @bgreene and @anilkseth can be found at youtube.com/watch?v=06-iq-…]

English
166
58
406
197.1K
Server Server
Server Server@ServerServer19·
@StuartHameroff @anirbanbandyo The unity of consciousness violates the principle of locality and lies outside the center of the light cone. His theory has no such thing, and no unity of the wave function
Russian
0
0
2
80
Stuart Hameroff
Stuart Hameroff@StuartHameroff·
Stephen is good at unfathomable complexity, but what’s complex about a toothache? The question should be what’s special about biology that it can support consciousness? The answer seems to be (thanks @anirbanbandyo) helical oscillators of organic aromatic rings, e.g. RNA, DNA, microtubules… They enable functional quantum states allowing cognition and consciousness.
Big Brain AI@realBigBrainAI

Stephen Wolfram, founder of Wolfram Research, explains how LLMs are quietly dismantling our deepest assumptions about consciousness. He argues that large language models have done something philosophy and neuroscience couldn't: "In terms of consciousness, I have to say, the idea that there's sort of something magic that goes beyond physics that leads to sort of conscious behavior, I kind of think that LLMs kind of put the final nail in that coffin."

His reasoning is that LLMs keep doing things people assumed they couldn't: "There were all these things where it's like, oh, maybe it can't do this, but actually it does. And it's just an artificial neural net."

Wolfram then challenges a core assumption about conscious experience: the feeling that we are a single, continuous self moving through time. "I think our notion of consciousness is a lot related to the fact that we believe in the single thread of experience that we have. It's not obvious that we should have a persistent thread of experience." He points out that physics doesn't actually support this intuition: "In our models of physics, we're made of different atoms of space at every successive moment of time. So the fact that we have this belief that we are somehow persistent, we have this thread of experience that extends through time, is not obvious."

Then Wolfram offers a striking origin story for consciousness itself. @stephen_wolfram suggests it traces back to a simple evolutionary pressure: the moment animals first needed to move. "I kind of realized that probably when animals first existed in the history of life on Earth, that's when we started needing brains. If you're a thing that doesn't have to move around, the different parts of you can be doing different kinds of things. If you're an animal, then one thing you have to do is decide, are you going to go left or are you going to go right?"

That single binary choice, he argues, may be the seed of everything we now call awareness: "I kind of think it's a little disappointing to feel that this whole wanted thing that ends up being what we think of as consciousness might have originated in just that very simple need to decide if you are an animal that can move. You have to take all that sensory input and you have to make a definitive decision about do you go this way or that way."

The takeaway is unsettling but clarifying. If LLMs can produce complex behavior from simple rules, then consciousness may not be a mystical add-on to physics. It may just be what happens when a layered enough system has to make a decision.

English
12
5
76
6.6K
Server Server
Server Server@ServerServer19·
@aran_nayebi The hard problem of consciousness remains unsolved because phenomenal consciousness is quantum
Russian
0
0
0
10
Server Server
Server Server@ServerServer19·
@ben_j_todd I raised this problem in LT theory two years ago: there must be an organization that solves the problems of the stability of the future
Russian
0
0
0
9
Benjamin Todd
Benjamin Todd@ben_j_todd·
6. Basically no-one talks about this, or tries to develop or lobby for better laws. It's insanely neglected. Learn more & get help working on this issue here: 80000hours.org/problem-profil…
English
5
0
43
3.2K
Benjamin Todd
Benjamin Todd@ben_j_todd·
What's the most underrated existential risk? Irreversible space settlement. 1. AI could make space settlement possible in our lifetimes: 1 minute of solar energy is enough to accelerate 10 billion 1kg self-replicating AI probes to 99% the speed of light.
English
24
23
181
30.2K
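The energy figure in the tweet above can be sanity-checked with back-of-the-envelope relativistic kinematics (assumed constants, not from the tweet: total solar luminosity ≈ 3.828e26 W; relativistic kinetic energy E = (γ − 1)mc²):

```python
# Order-of-magnitude check of the claim above, assuming total solar
# luminosity L_sun ~ 3.828e26 W and relativistic KE = (gamma - 1) * m * c^2.
c = 299_792_458.0                          # speed of light, m/s
L_sun = 3.828e26                           # solar power output, W
m_total = 1e10 * 1.0                       # 10 billion probes x 1 kg each, kg
gamma = 1.0 / (1.0 - 0.99 ** 2) ** 0.5     # Lorentz factor at v = 0.99c (~7.09)
ke = (gamma - 1.0) * m_total * c ** 2      # energy needed, ~5.5e27 J
budget = L_sun * 60.0                      # one minute of sunlight, ~2.3e28 J
print(f"needed {ke:.2e} J, available {budget:.2e} J, feasible: {budget > ke}")
```

On these assumptions the one-minute solar budget exceeds the required kinetic energy by roughly a factor of four, so the tweet's order of magnitude holds (ignoring propulsion efficiency and losses).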
Server Server
Server Server@ServerServer19·
@ben_j_todd Someone in a neighboring galaxy could launch replicator probes, and so could we. Does that mean war millions of years from now? The probes could get out of control and destroy their creators. How do we solve these problems?
Russian
0
0
0
33
Server Server
Server Server@ServerServer19·
@rand_longevity It would be only a copy. Phenomenal consciousness is quantum and cannot be copied
Russian
0
0
0
7
Rand
Rand@rand_longevity·
if you are alive in 15 years you are gonna be able to upload your mind and become semi-immortal
English
484
22
346
460.8K
Server Server
Server Server@ServerServer19·
@nomad421 Phenomenal consciousness violates the principle of locality and lies outside the center of the light cone. It is quantum and resides in microtubules
Russian
0
0
0
5
𝕐
𝕐@nomad421·
I don't think Claude is conscious. However, I think the argument made here doesn't hold. It implies that emergent phenomena can't exist. We might know exactly how each neuron in a brain operates and is wired, but still not "explain" consciousness as seems to be expected here.
Thomas Basbøll@Inframethod

"It’s entirely possible that Claude is, in fact, having conscious experiences of some sort." No it isn't. It's not complicated. The "hard" problems of philosophy simply don't apply. We know how Claude generates its output. It's entirely impossible that consciousness is involved.

English
2
0
0
302
Server Server
Server Server@ServerServer19·
@CeesvanderVelde @StuartHameroff Phenomenal consciousness violates the principle of locality and lies outside the center of the light cone
Russian
0
0
0
39
Cees van der Velde
Cees van der Velde@CeesvanderVelde·
@StuartHameroff Really, has consciousness actually been shown to "occur in quantum nonlocal processes in microtubules"? Which would somehow coincide with locus coeruleus (wake) / PGO wave (dreaming) activity? That would truly be worldwide shocking news!
English
1
0
0
2.7K
Stuart Hameroff
Stuart Hameroff@StuartHameroff·
Carlo Rovelli correctly pushes back against the notion that consciousness is 'above and beyond' normal brain activities. Actually, consciousness is 'below and inside' what are considered the normal membrane and synaptic activities of neurons and glia, occurring in faster, deeper quantum nonlocal processes in microtubules. And qualia, according to Carlo's colleague in quantum gravity, Nobel laureate Sir Roger Penrose, occur precisely there, at the level of loop quantum gravity at the Planck scale. That's within the realm of science but still quite a hard problem. Carlo, do you have a problem with our Orch OR theory? pubmed.ncbi.nlm.nih.gov/35782391/ academic.oup.com/nc/article/202… @carlorovelli
Earl K. Miller@MillerLabMIT

There Is No Hard Problem of Consciousness noemamag.com/there-is-no-ha… #neuroscience

English
13
12
80
6.1K
Server Server
Server Server@ServerServer19·
@realBigBrainAI Subjective experience is not behavior. He does not understand the hard problem of consciousness and confuses it with the easy one
Russian
0
0
0
26
Big Brain AI
Big Brain AI@realBigBrainAI·
English
267
261
1.6K
187.3K
Server Server
Server Server@ServerServer19·
@TrueAIHound The blind spot is filled in by the visual areas of the brain; that's neurobiology
Russian
1
0
0
70
AGIHound
AGIHound@TrueAIHound·
Neuroscience bits from my research. I'm a soul man. 😇

The blind spot on the retina corresponds to a small area that is devoid of photoreceptors. This is where all the axonal fibers from the retinal ganglion cells (RGCs) converge to form the optic nerve. Discrete signals (spikes) from the RGCs are funneled first to the thalamus (lateral geniculate nucleus) for amplitude decoding before being sent to the primary visual cortex.

Claim: Blind spots are everywhere. 🤔 This is not a frivolous claim. Bear with me. The signals arriving at the visual cortex must not be confused with pixel data. Each one represents a tiny edge movement on the retina. An edge is a difference in luminance between adjacent pixels. It has an orientation and can represent either red, green or blue. It's important to understand that a signal is sent only if an edge is detected.

Here's the clincher: If an area in the visual field has even luminosity (no edges), no signals within this area are sent to the visual cortex. In other words, the area is blind, just like the normal blind spot on the retina. Amazingly, we are not aware of this blindness.

We know that the primary visual cortex only processes edge signals from the RGCs. Orientation-selective columns are well-known. And we know that we consciously experience persistent vision in the blind areas. The question is: what is filling in the blind spots in the absence of input signals? There can only be one answer: the soul. Yes, I'm a soul man. 😀
AGIHound tweet media
AGIHound@TrueAIHound

This blind spot experiment is proof of the existence of the soul. I'll explain if anyone is interested. 😇

English
4
1
25
5K
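The edge-coding account in the tweet above can be sketched in a few lines. This is a toy 1-D model (my illustration, not the author's): a "signal" fires only where adjacent luminance samples differ, so an evenly lit region sends nothing downstream.

```python
import numpy as np

# Toy 1-D version of the edge-coding claim above: a "signal" is emitted only
# where neighboring luminance samples differ; uniform regions emit nothing.
def edge_signals(luminance):
    diffs = np.diff(np.asarray(luminance, dtype=float))
    return np.flatnonzero(diffs)          # positions where an edge is detected

uniform = np.full(8, 0.5)                 # evenly lit patch: no edges, "blind"
step = [0.0, 0.0, 1.0, 1.0, 1.0]          # one luminance step: one edge
print(edge_signals(uniform))              # -> []  (no signals at all)
print(edge_signals(step))                 # -> [1] (edge between samples 1 and 2)
```

The sketch illustrates only the uncontested part of the tweet (edge-triggered signaling leaves uniform regions unreported); how the visual system fills in those regions is exactly what the thread is arguing about.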
Server Server
Server Server@ServerServer19·
@Andercot Это могло бы решить проблему роботов репликаторов
Русский
0
0
0
79
Andrew Côté
Andrew Côté@Andercot·
If I were a benevolent Kardashev Type III civilization, I would do some spacetime metric engineering to drive an accelerated expansion of the universe, red-shifting most galaxies beyond each other's Hubble volume, preventing any winner-take-all conquest dynamics for the light cone
Andrew Côté tweet media
English
22
17
199
9.8K