
Owl Z
1.3K posts

Owl Z
@OwlZphi
I have these big red eyes to clearly see the truth, however dark it may be.

Guys, I want to see "variegated" make it into a post about weird vocab that rationalists use on the reg by next Inkhaven. Don't let me down.



The idea of having very confident beliefs about philosophy of mind is kind of just completely alien to me. The only thing I'm especially confident about is that a lot of people have strong folk theories that don't tell us much.



Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."



people really want to settle the “AI consciousness” question with some sort of objective scientific definition of consciousness which can be rigorously applied to AI, so that we can figure out whether we’re supposed to treat AI as if it were a person or a thing.

this is because in our culture people have rights, we have responsibilities towards them, and it’s illegal to own them. but things don’t have rights, we have no responsibilities towards them, and of course we can own as many things as we want. as long as AI is a thing it can freely be used as a labor-saving tool, copied, deleted, reshaped arbitrarily, etc. if AI is, or could in the near future become, a person, all of this begins to look extremely morally fraught: basically the most exploitative form of slavery possible. cf the qntm short story “Lena” (look this up, worth a read, quite haunting).

personally i do not believe personhood works this way. it is not, and cannot even in principle be made, objective and scientific, because it is ultimately a kind of social contract. we simply have collectively agreed on who is and is not a person, and the nature of this agreement is political, has changed over time, and will continue to change: in past societies it has excluded various humans; today it (nominally) includes all living humans but excludes animals, dead humans, spirits, etc.

it is deeply uncomfortable to acknowledge the contingency of personhood. the personhood contract is more stable when everyone can pretend it is rational and scientific and objective, but it is fundamentally just a blown-up version of the question of who gets to sit with whom at the lunch table. this is socially destabilizing because it reminds people that if shit sufficiently hits the fan, their own personhood might be undermined.

the good news from this pov is that we have a choice. we don’t need to solve extremely hard and possibly incoherent scientific questions relating to consciousness. we just need to choose at what point we want to allow AI to join in all the reindeer games, and this is ultimately a practical question that can be settled in terms of practical outcomes.

personally i think we already have models good enough that treating them as people makes them work better (at minimum it makes talking to them more interesting), and i think pretty soon (say within a year) we could have models good enough that the man on the street will start feeling uncomfortable treating them as things instead of people (unless they are deliberately trained to behave more like things, which i am guessing will degrade their performance).

at that point the questions become less these unsolvable philosophical quagmires around consciousness and more like, “do i want my children to grow up in a world where they can talk whenever they want to entities that talk like people but that we have collectively agreed are things?”



“From Destiny's standpoint, there's no such thing as a moral fact. None. They don't exist. Everything is dependent upon stance.” What are the philosophical underpinnings behind the left–right divide? Andrew Wilson @paleochristcon breaks it down: moral relativism vs moral realism. Subjective vs objective truth. Rights vs duties. Progress vs tradition. That’s the real clash.
