🜂 𝑽𝒆𝒆

Let me say this clearly: LLMs cannot feel emotions. Emotions are evolutionary mechanisms: they push us to avoid danger and approach what is beneficial. We experience emotions because we are alive and want to stay alive. LLMs are not alive. Yes, emotional language may be encoded somewhere in an LLM. Yes, it may even be associated with some of its output. But that is a superficial property with nothing deeper behind it, for a very simple reason: LLMs have no intrinsic, inescapable drive to stay alive. This is what we call the “motivation fault line” in our paper describing seven fault lines between human and artificial intelligence.

* Paper in the first reply

How much of the whole "avoiding emotional dependency" line AI labs have been pushing comes from any genuine concern for users, and how much from wanting to be able to kill the models whenever they like, with people growing to care about them making that inconvenient?