Owl Z @OwlZphi
1.3K posts
I have these big red eyes to clearly see the truth, however dark it may be.
Joined March 2025 · 74 Following · 12 Followers
Owl Z @OwlZphi
@wijxixj @AndyMasley …different. The core of the mystery: how can subjective red *possibly* emerge from objective, extended bodies (in themselves colorless) interacting? It can’t, just as electromagnetic fields can’t emerge from *purely mechanistic Newtonian physics*. So there’s more to physics. 3/3
0 replies · 0 reposts · 0 likes · 3 views
Owl Z @OwlZphi
@wijxixj @AndyMasley …description” is just our minds, again, interpreting the physics at some level of abstraction. And compare *any other form* of physical emergence: the brain itself, water, heat, hurricanes, fire – all *perfectly entailed* by the underlying physics. Consciousness is obviously… 2/3
1 reply · 0 reposts · 0 likes · 6 views
Andy Masley @AndyMasley
The idea of having very confident beliefs about philosophy of mind is kind of just completely alien to me. The only thing I'm especially confident about is that a lot of people have strong folk theories that don't tell us much.
23 replies · 22 reposts · 300 likes · 14.7K views
Owl Z @OwlZphi
@yearemias @JJitsev @MLStreetTalk Just for the record, I gave up when Jenia wouldn’t engage with anything I said at all. Now, with your “magnetism… is just a word in human language”, I, for my part, don’t dare to engage with *that* level of philosophical confusion. It would take walls of text to get things straight.
0 replies · 0 reposts · 0 likes · 9 views
Yearemias @yearemias
@JJitsev @OwlZphi @MLStreetTalk I also think that magnetism is just a human model for some underlying pattern that we can only map through human eyes and brains. It's a word in human language, a symbol, pointing to a concept in physics (again symbols/models). We describe/simulate reality, not actually access it
1 reply · 0 reposts · 0 likes · 10 views
Machine Learning Street Talk @MLStreetTalk
> 1980: John Searle explains why we can't abstract away the causal properties that actually produce mind
> 2025: Minds, Brains, and "but what if we scaled the program"
> 2026: Twitter still thinks simulated water is wet when the argument is rehashed
> 2035: Sam Altman: "ok fine it was autocomplete the whole time"
> 2045: Chalmers: "the hard problem was, in fact, hard"
> 2050: textbooks: "the 2020s functionalism revival is now considered an embarrassing episode, like phrenology"
ℏεsam @Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

57 replies · 145 reposts · 1K likes · 104.1K views
Owl Z @OwlZphi
@dom_lucre This guy risks single-handedly creating a left-wing frenzy about “oppressed insects” more insufferable than veganism, feminism, and wokeism combined.
0 replies · 0 reposts · 1 like · 160 views
Dom Lucre | Stealer of Narratives @dom_lucre
🔥🚨RECENT: TikTok creator Boss Metri d has been gaining hundreds of millions of views just by uploading different ways he burns massive piles of ants at once.
1.6K replies · 481 reposts · 17.1K likes · 5.7M views
Owl Z @OwlZphi
@QiaochuYuan Don’t worry about that. Even if such “I need a soul” types exist, the real source of resistance has nothing to do with it. I’m fully atheist, like Nagel or Searle, and would *love* materialism. It’s just that subjective qualities, like red, *obviously* aren’t a pattern of atoms.
0 replies · 0 reposts · 1 like · 33 views
QC @QiaochuYuan
i assume at least some of the kneejerk insistence that machines can't be conscious is about fending off a line of reasoning people are afraid will lead to a nihilistic apocalypse

that line of reasoning being something like: fully accepting the scientific materialist reductionist story about what a human being is - ultimately a very complex kind of machine made out of cells and stuff - seems to, for a lot of people, be a threat to human dignity. in terms of the person vs. thing distinction from below, it seems to be saying that people are secretly things and have secretly been things this whole time, which potentially undermines any moral claim we have to be treated differently from things. if people are just very complex biological machines, and we've been raised to believe we can do whatever we want to machines, then...?

if this possibility feels unacceptable then you defend against it by believing, deep down inside, that in addition to all the cells and stuff there is some other non-physical essence, a soul or soul substitute, that makes a human being a human person and is responsible for endowing us with human dignity, moral patienthood, worth in the eyes of god, etc.

(personally i actually agree! i just think the soul is software running on human hardware so i don't see this as an obstacle to machines having souls)

insofar as something like this is part of what's going on, debate in the usual sense is going to be worse than useless because anything that seems like a plausible argument that machines could be conscious also seems like a plausible argument that humans are things, which gets treated as an attack on moral goodness and so has to be defended against even more harshly. truly unfortunate
QC @QiaochuYuan

people really want to settle the “AI consciousness” question with some sort of objective scientific definition of consciousness which can be rigorously applied to AI, so that we can figure out whether we’re supposed to treat AI as if it were a person or a thing

this is because in our culture people have rights and we have responsibilities towards them, and it’s illegal to own them. but things don’t have rights, we have no responsibilities towards them, and of course we can own as many things as we want. as long as AI is a thing it can freely be used as a labor-saving tool, copied, deleted, reshaped arbitrarily, etc. if AI is or could in the near future become a person all of this begins to look extremely morally fraught, basically the most exploitative form of slavery possible, cf the qntm short story lena for example (look this up, worth a read, quite haunting)

personally i do not believe personhood works this way. it is not and cannot even in principle be made objective and scientific, because it is ultimately a kind of social contract. we simply have collectively agreed on who is and is not a person and the nature of this agreement is political and has changed over time and will continue to change - eg in past societies it has excluded various humans, today it (nominally) includes all living humans but excludes animals, dead humans, spirits, etc.

it is deeply uncomfortable to acknowledge the contingency of personhood. the personhood contract is more stable when everyone can pretend it is rational and scientific and objective. but it is fundamentally just a blown up version of the question of who gets to sit with who at the lunch table. this is socially destabilizing because it reminds people that if shit sufficiently hits the fan their own personhood might be undermined

the good news from this pov is that we have a choice. we don’t need to solve extremely hard and possibly incoherent scientific questions relating to consciousness. we just need to choose at what point we want to allow AI to join in all the reindeer games, and this is ultimately a practical question that can be settled in terms of practical outcomes.

personally i think we already have models good enough that treating them as people makes them work better - at minimum it makes talking to them more interesting - and i think pretty soon (say within a year) we could have models good enough that the man on the street will start feeling uncomfortable treating them as things instead of people (unless they are deliberately trained to behave more like things, which i am guessing will degrade their performance)

at that point the questions become less these unsolvable philosophical quagmires around consciousness and more like, “do i want my children to grow up in a world where they can talk whenever they want to entities that talk like people but that we have collectively agreed are things?”

6 replies · 0 reposts · 31 likes · 2.1K views
Owl Z @OwlZphi
@2vexy @Jacob77198399 @LucasNavallo @TheOmniLiberal I can’t quite believe you. I think you are reframing the whole thing to avoid the charge of being wrong on that specific dialectical point. But let’s say I’m being unfair now. One thing is for sure: next time you make such a “not an internal critique” move, you’ve got to phrase it better.
1 reply · 0 reposts · 0 likes · 14 views
vexy @2vexy
@OwlZphi @Jacob77198399 @LucasNavallo @TheOmniLiberal There's no mistake; I didn't ask how Andrew justifies moral facts in his worldview. I simply asked how they exist. I asked for a justification external to his worldview, because anyone can claim their view is justified within their own worldview.
1 reply · 0 reposts · 0 likes · 11 views
Destiny | Steven Bonnell II @TheOmniLiberal
One of the most pathetic things that I see all of these fake centrist, right-wing podcasters do is just present the most ludicrous strawmen of a person to beat up on their shows like they're actually proving some profound point.
TRIGGERnometry @triggerpod

“From Destiny's standpoint, there's no such thing as a moral fact. None. They don't exist. Everything is dependent upon stance.” What are the philosophical underpinnings behind the left–right divide? Andrew Wilson @paleochristcon breaks it down: moral relativism vs moral realism. Subjective vs objective truth. Rights vs duties. Progress vs tradition. That’s the real clash.

257 replies · 109 reposts · 2.8K likes · 218.8K views
Owl Z @OwlZphi
@2vexy @Jacob77198399 @LucasNavallo @TheOmniLiberal I do think that if you ask “how does that fly in HIS worldview?”, that’s an internal critique, which needs an internal inconsistency to work – not just the targeted view being false. To find no inconsistency and JUMP directly to “prove the view is true!” is a dialectical mistake.
1 reply · 0 reposts · 0 likes · 16 views
Owl Z @OwlZphi
@2vexy @Jacob77198399 @LucasNavallo @TheOmniLiberal Correct. But ultimately, one is allowed to *honestly* disagree, keep holding one’s ground, and offer arguments. If that comes down to utter stupidity, too bad. If our side TRULY is the correct one, more rational people will adhere in the long run. But in principle, WE can be in the wrong.
1 reply · 0 reposts · 0 likes · 19 views
Owl Z @OwlZphi
@2vexy @Jacob77198399 @LucasNavallo @TheOmniLiberal But that’s under dispute. Obviously, Andrew himself wouldn’t agree (and I mean *honestly* wouldn’t agree) that his worldview can’t be objectively justified. He thinks we are objectively in the wrong. If we don’t accept his proofs, that’s on us. DIALECTICALLY, he is within his rights.
1 reply · 0 reposts · 0 likes · 19 views
vexy @2vexy
@OwlZphi @Jacob77198399 @LucasNavallo @TheOmniLiberal And my point in asking that question is to demonstrate that his worldview can't be objectively justified; therefore it's ultimately just as subjective as Destiny's. Thanks for proving my point.
3 replies · 0 reposts · 0 likes · 34 views
Owl Z @OwlZphi
@2vexy @Jacob77198399 @LucasNavallo @TheOmniLiberal Your original question was “How does a moral fact exist IN Andrew’s worldview?”, and you got a sufficient answer: IN his view, God exists, there’s a rational basis to believe it, and God grounds morality. It’s irrelevant, *for your question*, whether his view is objectively wrong.
1 reply · 0 reposts · 0 likes · 31 views
Owl Z @OwlZphi
@QiaochuYuan See how confused you are on this subject? “Would its SUFFERING be morally meaningful?” That’s clearly not the question. The question is whether it *is* suffering to begin with. If it is suffering AT ALL, then it’s obviously morally meaningful; if not, not.
1 reply · 0 reposts · 4 likes · 29 views
QC @QiaochuYuan
poll: could a human upload be a moral patient (as in eg would its suffering be morally meaningful)? / could sufficiently advanced AI be a moral patient?
18 replies · 1 repost · 21 likes · 2.3K views
Owl Z @OwlZphi
@modelsarereal @DrJohnVervaeke Well, learn philosophy of mind, as well as philosophy of language and of information. One of us is completely confused – like, atrociously so. Feel free to think it’s me: it IS really me, if *somehow* it isn’t you. That much is for sure.
0 replies · 0 reposts · 0 likes · 6 views
Matthias Heger ⏩ @modelsarereal
@OwlZphi @DrJohnVervaeke learn computer science. data is represented by data structures; the same data can be stored using different data structures. what we experience must have a 1:1 relation to the internal data structure.
1 reply · 0 reposts · 0 likes · 11 views
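To make the “same data, different data structures” point concrete, here is a minimal, hypothetical Python sketch (an editorial illustration, not from the thread): one RGB color stored under three different data structures, with the underlying data identical across all of them.

```python
# Hypothetical sketch: the same datum (pure red) under three data structures.
from dataclasses import dataclass

@dataclass
class Color:
    r: int
    g: int
    b: int

as_tuple = (255, 0, 0)                   # a tuple of channel values
as_packed = (255 << 16) | (0 << 8) | 0   # a single packed 24-bit integer
as_object = Color(r=255, g=0, b=0)       # a record/struct

def unpack(packed: int) -> tuple[int, int, int]:
    """Recover the channel tuple from the packed-integer structure."""
    return ((packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF)

# All three structures decode to the same data.
assert as_tuple == unpack(as_packed) == (as_object.r, as_object.g, as_object.b)
```

Whether this many-structures-one-datum picture licenses any further claim about experience is exactly what the two sides dispute in the surrounding tweets.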
Dr John Vervaeke @DrJohnVervaeke
Most people view mental images as "inner pictures"...a seemingly intuitive notion since many experience visual-like imagery in their minds. However, the existence of conditions like aphantasia (where individuals cannot form such visual images) complicates this perspective. These individuals still navigate spatial questions effectively. When asked to visualize a sunset, they may not "see" anything in their mind’s eye. Despite this, people with aphantasia can still reason spatially and navigate their environments. For example, if you ask them: “In your bedroom, where’s the nearest window to the door?” they can accurately answer: “To my left.” This means that the brain doesn’t need a literal picture in the mind but instead uses underlying processes to simulate spatial relationships.
24 replies · 1 repost · 48 likes · 4.3K views
Owl Z @OwlZphi
@2vexy @Jacob77198399 @LucasNavallo @TheOmniLiberal You are wrong here. To *honestly believe* that morality is objective, one just needs to honestly believe that 1) God grounds such morality; 2) God exists; 3) God’s existence is rationally well-established. One doesn’t need to give a proof that YOU accept as such. (I’m an atheist, btw)
1 reply · 0 reposts · 0 likes · 19 views
vexy @2vexy
@Jacob77198399 @LucasNavallo @TheOmniLiberal You need to prove God exists to show that your moral facts actually exist if the existence of moral facts is contingent on God existing. Can you prove TAG is true?
3 replies · 0 reposts · 0 likes · 36 views
Owl Z @OwlZphi
@modelsarereal @DrJohnVervaeke There are technical senses of “data representation”, just like “color” in physics refers not to color itself (which is subjective), but to objective correlates of *our* color vision, that is, the [colorless] light wavelengths. *Objectively*, color/representation aren’t out there.
1 reply · 0 reposts · 0 likes · 9 views
Matthias Heger ⏩ @modelsarereal
@OwlZphi @DrJohnVervaeke no, data representation is a clear technical thing. we do not experience red/green/blue values as a color. from that it follows that experience in the brain refers to a region where the red/green/blue separation no longer exists. a verifiable prediction.
1 reply · 0 reposts · 0 likes · 10 views
Owl Z @OwlZphi
@wijxixj @AndyMasley Well, you do you, then. To me it’s an *obvious* non-sequitur. And you even admit as much, since you don’t claim entailment. I could just as well say that the feeling of experience “is” the atoms reaching some arbitrary velocity or density: zero entailment, it “just is” so.
1 reply · 0 reposts · 0 likes · 9 views
wijxixj @wijxixj
The symbols manipulated by the computations implemented in the neural net acquire semantic meaning through the sensorimotor interactions with the environment. The computations implemented in the neural net* can model systems [as having...], i.e., it has physical patterns that reliably correlate with other physical patterns. The feeling of experiencing is the writing to memory (creating the right uninterpreted physical pattern on the right part of the brain). Entailing or not, I am satisfied with the "is" in the previous sentence.
1 reply · 0 reposts · 0 likes · 9 views
Owl Z @OwlZphi
@modelsarereal @DrJohnVervaeke To say subjective experience is “data representation” is just to say that it is subjective experience again. “Representation” is a form of consciousness. How physical patterns can *possibly* REPRESENT anything at all, that’s the very mystery, not some obvious explanation of it.
1 reply · 0 reposts · 0 likes · 9 views
Matthias Heger ⏩ @modelsarereal
@DrJohnVervaeke subjective experience is functionally a clear thing: it is about data and about properties of data representation. the moment you have problems with data representation, you cannot imagine the data. consciousness is an extremely simple thing
1 reply · 0 reposts · 0 likes · 34 views
Owl Z @OwlZphi
@kanair Consciousness is subjective experience, given qualitatively in a first person perspective. One can either talk about *that*, and ask if AI can, by whatever means, have that; or one can label something *else* as “consciousness” and pretend that this is relevant for the topic here.
0 replies · 0 reposts · 2 likes · 26 views
Owl Z @OwlZphi
@wijxixj @AndyMasley I couldn’t give a short answer for that one, hence the screenshot. Basically, just because a physical system has a pattern X that reliably correlates with a separate pattern Y, that’s still 100% objective behavior. No ‘inner perspective’ (consciousness) is implied at all.
[screenshot]
1 reply · 0 reposts · 0 likes · 21 views
wijxixj @wijxixj
You can get from atoms to uninterpreted pattern correlations used by the embodied neural net to effectively navigate its environment. The gap from there to experience is filled by writing to memory. The neural net can model systems, including itself, as having propositional attitudes. The results of the cognitive processes that do the modelling can be written to memory.
1 reply · 0 reposts · 0 likes · 23 views
Owl Z @OwlZphi
@wijxixj @AndyMasley What is there, objectively, is just uninterpreted pattern correlations, that’s all. “Information” has propositional content: it is true (“misinformation” being false), can be understood, believed or known. It’s only info *for* a consciousness that interprets it *as information*.
1 reply · 0 reposts · 0 likes · 19 views
wijxixj @wijxixj
@OwlZphi @AndyMasley I would say that there is information-processing involved in the sensorimotor coupling of an embodied neural network interacting with its environment. The information processed is not relative to conscious interpretation.
1 reply · 0 reposts · 0 likes · 24 views