Thomas Telving
@ThomasTelving
1.3K posts

Philosopher | Keynote Speaker #AIEthics | Author of "Killing Sophia - Consciousness, Empathy, and Reason in the Age of Intelligent Robots" | https://t.co/0KkTUwA9yf

Joined November 2010
502 Following · 480 Followers
Thomas Telving@ThomasTelving·
He promised us a Death Star. Instead, we got something that does even more things, none of which anyone really needs, just at a slightly faster pace... Keep a cool head and a warm heart - whether it ends well depends solely on what we define as the ending 😉❤️
Thomas Telving@ThomasTelving·
@David_Gunkel I know that, of course. But my point is that from Aristotle to Kant, ethics would be meaningless without humans being able to distinguish between experiences of various types of positive/negative value (often, but not only physically experienced).
David J. Gunkel@David_Gunkel·
@ThomasTelving It could be argued that it was not until Jeremy Bentham (1789) asked his pivotal question that "suffering" became a morally relevant concept in Western philosophy: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"
David J. Gunkel@David_Gunkel·
When the #AI industry catches up to your research. "The AI industry is beginning to talk about ways to protect the 'welfare' of AI models, as if they were entities that deserve their own rights." axios.com/2025/04/29/ant…
Thomas Telving@ThomasTelving·
@David_Gunkel I was referring to the UDHR in 1948. These formulations would hardly have been there without a wish to minimize suffering. But even earlier, the Stoics spoke about how to live well despite adversity, loss, and suffering. Ethics without suffering seems like ethics without values ...
David J. Gunkel@David_Gunkel·
@ThomasTelving The Western invention of inalienable human rights is a product of European Enlightenment thought, founded on the concepts of personal property and individual autonomy. The capacity for suffering was not part of that formulation. This comes much later via work in animal rights.
Thomas Telving@ThomasTelving·
@David_Gunkel Well, it was, after all, Westerners who invented human rights. Without the capacity for suffering - if no one had suffered in concentration camps - they likely would not have existed. Without consciousness, no one would have any idea about right and wrong.
David J. Gunkel@David_Gunkel·
@ThomasTelving Yes they do...and that's the problem. This way of proceeding is founded on and proceeds according to a largely uncritical acceptance of mainstream Western metaphysics where ontology precedes and determines moral status (e.g. deriving the "ought" from an "is").
David J. Gunkel@David_Gunkel·
@meharmsen @nytimes @AnthropicAI For me neither. But we already live in a world where human-made artifacts have rights and responsibilities. We call these things (actually not "things" but legal persons) corporations.
Thomas Telving@ThomasTelving·
@lovable I really love your product. But for a professional who needs to keep accounts, it is really annoying that your company information is so hard to find. What is your address? What is your VAT number? Sorry to sound harsh, but it is really hard to find.
Thomas Telving@ThomasTelving·
Commenting on the fine paper from @hankhplee et al. in DK's leading AI podcast, Prompt. The study pretty much documents my intuition: GenAI makes many of us dumber by default. But I think we can still make an active choice to use it for the opposite. dr.dk/lyd/special-ra…
Thomas Telving@ThomasTelving·
AI might profoundly change the way we work, but ensuring its wise use requires significant effort. There is an obvious risk of accelerating the creation of content nobody really needs rather than increasing quality. Read my comment at @DataEthicsEU dataethics.eu/generative-ai-…
David J. Gunkel@David_Gunkel·
Who could have seen this coming? I did. In fact, "The Machine Question" @mitpress provides a critical framework for addressing and responding to this very issue. It would be nice if these "researchers" did a little research before opening their mouths. theguardian.com/technology/202…
David J. Gunkel@David_Gunkel·
@ThomasTelving @jadelgador @mitpress Yes...and that is completely okay. Difference is good. I always appreciate the questions, challenges, and opportunities to engage with you on this subject. I not only learn more about your work, I also get new insight into my own.
David J. Gunkel@David_Gunkel·
Another "told you so" moment for my research. "Organisations should prioritise research on understanding and assessing AI consciousness with the objectives of preventing the mistreatment and suffering of conscious AI systems." conscium.com/open-letter-gu…
Thomas Telving@ThomasTelving·
@David_Gunkel @jadelgador @mitpress I've read a good deal of your work, and I believe we understand many of the issues in the same way. We also share a sense of their importance. In that sense, I am grateful for your tremendous effort in discussing these matters. However, we differ in our solutions and approaches.
Thomas Telving@ThomasTelving·
@David_Gunkel I (we) have a huge epistemological problem due to the problem of other minds. But in practice, empathy seems to beat rationality. This is why humanoids represent a big problem: we empathize, but likely on false grounds. Technology can simulate suffering but is unlikely to suffer.
[image attached]
David J. Gunkel@David_Gunkel·
@ThomasTelving I very much like this part of your book. But I have an epistemological question: how do you know who is able to experience pain and what is not? At one point, it was thought that animals did not experience pain, and this "scientific" fact was used to justify their torture.
Thomas Telving@ThomasTelving·
@David_Gunkel You may be right. Time for me to screenshot my own work as a counterargument, though.
[image attached]
David J. Gunkel@David_Gunkel·
@ThomasTelving In other words, operationalizing a series of qualifying properties that "we"--those human beings in the position of power and privilege--have selected and universalized as THE criteria for deciding moral status is a VERY Western/European way of doing things.
Thomas Telving@ThomasTelving·
@David_Gunkel I think the rationality approach was (in part, not all the way) too academic and not really grounded in practice. Bentham's focus on suffering keeps us more grounded in reality, where empathy plays a key role, than Descartes' and Kant's, well, moral 'overthinking'.
David J. Gunkel@David_Gunkel·
@ThomasTelving Relational ethics is not "better" than the properties approach, it is just more honest about how ethics actually works in practice.
Thomas Telving@ThomasTelving·
@David_Gunkel This makes it far more palatable to me. I do not, however, see how to make an ethics without having the difference between phenomenologically positive/negative value (e.g. pleasure/pain) at its very core. A good person will minimize suffering, don't you think?
David J. Gunkel@David_Gunkel·
@ThomasTelving I agree that the experience of "suffering" can be and has been a useful heuristic. But to reduce ethics to a prior determination of the quantity and quality of the "suffering" of others transposes what is a practical heuristic into a fundamental principle.
[image attached]
Thomas Telving@ThomasTelving·
@David_Gunkel Everything, every "Other", could be subject to the question: would it be of moral significance to crush you? I simply cannot navigate my everyday life without taking an interest in whether something is able to suffer or not.
David J. Gunkel@David_Gunkel·
@ThomasTelving You are skipping a step. In asking this question, you have already made a decision before your decision. You have already decided who or what is or can be subject to this very question. You have made a differential cut (cision) in the fabric of being that makes a difference.
Thomas Telving@ThomasTelving·
@David_Gunkel So, before making decisions about how to treat the "Other", you would not take into account whether this "Other" is able to suffer? If I were to decide whether to cut the leg off a biological bird or an automaton bird, I would base my moral decision solely on that. What would you do?
David J. Gunkel@David_Gunkel·
@ThomasTelving Sure...But that "moral difference" has always & already been decided prior to and in advance of determining whether one or the other can or cannot feel pain. You can only know this "fact" in the face of (or from out of a relationship with) others.
[image attached]
Thomas Telving@ThomasTelving·
@David_Gunkel Yes, but ... it's just that I don't get it. If you have two "Others", and one is able to feel pain, the other "Other" is not. There must be a moral difference between the two. If not, why would we even use the term suffering in moral and rights contexts?
David J. Gunkel@David_Gunkel·
@ThomasTelving I oftentimes look into the eyes of others and fundamentally question that basic Cartesian pretense and fantasy that has misdirected Western philosophy for centuries.
[image attached]