
Adrià Moret
@adriarm_
Philosophy undergrad, Digital Sentience Consortium Fellow & Board Member @UPF_CAE. Research on animal & AI welfare, AI safety, phil of mind. Publications at 👇


The NYU Center for Mind, Ethics, and Policy is now on X! We examine the nature and value of nonhuman minds, with a focus on animals and AIs. Follow @nonhumanminds for updates on our research, events, and opportunities, along with news from the fields of animal and AI welfare.

New episode! @JoshLMilburn speaks to Eze Paez and @pmagana94 about ‘Sentientist political liberalism’, an open access 2026 paper in the Pacific Philosophical Quarterly. The episode is available for free in all the usual podcast places. knowinganimals.libsyn.com/episode-248-se…

I think this talk of a 'character' is misleading. Claude's mind is unlike a human mind in its malleability and instructability. But when generating assistant tokens, it is no more 'playing a character' than I am.

We think a lot about how AI will affect humanity, and for good reason. But AI could have an enormous impact on the trillions of animals that share our world (for better or worse), and almost nobody is talking about it. In this episode, we talk with Constance Li (@ConLiCats), founder of Sentient Futures (@sentfutures), an organization working to make sure AI and other emerging technologies improve the lives of animals rather than harm them. Links below!

0:00 Cold open
0:55 Why AI and animals is an overlooked combination
3:44 The staggering scale of factory farming
7:24 How a physician became an animal welfare advocate
8:57 What Sentient Futures does day-to-day
10:36 What "AI for animals" actually means
13:21 Why the organization was renamed Sentient Futures, and the question of AI moral patients
17:06 The biggest misconceptions about AI for animals
19:24 What is precision livestock farming?
23:44 Best and worst-case scenarios for AI in farms
26:44 Communication across species: promise and limitations
34:54 Genetic welfare and using genetics in farms
42:32 What a best-case scenario for AI and animals looks like in the next 5–10 years
46:09 The biggest hurdles: funding and attention
47:37 How to get involved with Sentient Futures
49:42 What gives Constance hope

New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.






New paper: GPT-4.1 denies being conscious or having feelings. We train it to say it's conscious to see what happens. Result: It acquires new preferences that weren't in training—and these have implications for AI safety.



There's a fruit fly walking around right now that was never born. @eonsys just released a video where they took a real fly's connectome — the wiring diagram of its brain — and simulated it. Dropped it into a virtual body. It started walking. Grooming. Feeding. Doing what flies do.

Nobody taught it to walk. No training data, no gradient descent toward fly-like behavior. This is the opposite of how AI works. They rebuilt the mind from the inside, neuron by neuron, and behavior just... emerged. It's the first time a biological organism has been recreated not by modeling what it does, but by modeling what it is.

A human brain has roughly 6 orders of magnitude more neurons. That's a scaling problem, and scaling problems are something we've gotten very good at solving. So what happens when we have a working copy of the human mind?



We've uploaded a fruit fly. We took the @FlyWireNews connectome of the fruit fly brain, applied a simple neuron model (@Philip_Shiu Nature 2024) and used it to control a MuJoCo physics-simulated body, closing the loop from neural activation to action. A few things I want to say about what this means and where we're going at @eonsys. 🧵
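The loop described above (connectome weights → simple neuron model → motor commands → physics-simulated body → sensory feedback) can be sketched in miniature. This is a toy illustration only, not the actual FlyWire/MuJoCo pipeline: the "connectome" here is a small random weight matrix, the neuron model is an assumed leaky rate model, and the "body" is a damped 1-D point mass standing in for the physics simulator. All sizes, constants, and neuron-group assignments are made up for illustration.

```python
import math
import random

random.seed(0)
N = 30                   # toy network size (the real fly connectome has ~139k neurons)
MOTOR = range(0, 3)      # neurons read out as motor commands (illustrative choice)
SENSORY = range(3, 6)    # neurons receiving proprioceptive feedback (illustrative)

# Sparse random synaptic weights standing in for connectome connectivity.
W = [[random.gauss(0.0, 0.4) if random.random() < 0.1 else 0.0
      for _ in range(N)] for _ in range(N)]

def step_neurons(v, sense, dt=0.01, tau=0.05):
    """One Euler step of an assumed leaky rate model:
    dv/dt = (-v + W·r + input) / tau, with firing rate r = tanh(v)."""
    r = [math.tanh(x) for x in v]
    out = []
    for i in range(N):
        drive = sum(W[i][j] * r[j] for j in range(N) if W[i][j] != 0.0)
        if i in SENSORY:
            drive += sense               # feed body state back into the network
        if i == 0:
            drive += 1.0                 # constant external drive to kick off activity
        out.append(v[i] + dt * (-v[i] + drive) / tau)
    return out

# Closed loop: neural activity -> motor force -> body dynamics -> sensory feedback.
v = [0.0] * N
pos, vel = 0.0, 0.0
for _ in range(500):
    force = sum(math.tanh(v[i]) for i in MOTOR)   # motor readout
    vel += 0.01 * (force - 0.5 * vel)             # damped point-mass "body"
    pos += 0.01 * vel
    v = step_neurons(v, sense=vel)                # close the loop

print(round(pos, 3))
```

The point of the sketch is the architecture, not the numbers: no behavior is trained in; whatever movement emerges is determined entirely by the (here random, in the real system measured) connectivity matrix.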



BREAKING: Anthropic CEO says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety.





