
Xander Balwit

@AlexandraBalwit
Editorial @AnthropicAI. Formerly editor-in-chief at @AsimovPress. Well-fed vegan. Currently reading: Nelson's Trafalgar

AI scientists could one day design experiments, test hypotheses, and make discoveries faster than humans. But what if their breakthroughs are so advanced that we can't understand them? This is the "legibility problem." Much as chess engines play moves that grandmasters can't comprehend, AI scientists might generate knowledge beyond human understanding.

To counteract this, we must build new scientific infrastructure. We will need new forums to store AI-generated findings so that they can be interrogated and communicated. We have partial precedents in preprint servers and structured databases like UniProt, but nothing designed for the scale and speed of AI-driven science. We will also need systems designed specifically for explication rather than discovery, capable of making AI-generated findings legible to human researchers so that they can be evaluated and prioritized for further study.

New column by Matthew Carter

A statement on the comments from Secretary of War Pete Hegseth. anthropic.com/news/statement…

Why does an AI “lab” have an editorial team 😬





Can computers understand smells?

Smell is our most ancient, yet most mysterious, sense. It arose at least 3 billion years ago, in bacteria adrift in the ocean. Yet it resists formalization: odorants vary in far more ways than photons or sound frequencies, and there is no shared vocabulary to describe them all. Machines have learned to “see” and “hear,” but scent remains stubbornly analog.

Now, a growing cadre of companies, including Google, Osmo, and fragrance houses like Givaudan, is working to digitize scent. These groups are building AI models to sort, filter, and predict which molecules will elicit which smells. They are assembling datasets that computers can understand, then using models trained on them to design entirely new, synthetic fragrances.

Our latest piece, “Scent, In Silico,” explains the science. It was written by Taylor Rayne.

Over the past five years, I've advised dozens of philanthropists on AI. I compiled the answers to all of the questions I've been asked in one report. 2024 Nobel laureate Geoffrey Hinton calls it “an extremely useful resource for philanthropists interested in funding AI safety and preparedness.”



If you listen to the audio narration of this excellent article, you’ll also hear a musical accompaniment I composed for it. I’m open to taking commissions for music projects related to podcasts, audiobooks, short films, and more. DMs open for collabs!

Tiny worms, with just 302 neurons, can make complex decisions. They can weigh risks (like moving toward toxins) against rewards (like seeking food) much as conscious beings do. And they do it using just five of their neurons.

This finding challenges some major assumptions about the "behavioral markers" that researchers have long relied upon to decide whether or not a being is sentient. Scientists have long believed, for instance, that the ability to weigh competing desires, like choosing between seeking food and avoiding danger, requires a special kind of mental experience that can compare different feelings on a common scale. On this theory, the capacity for pain and pleasure evolved to help animals make these complicated decisions, first appearing in the ancestors of birds, mammals, and reptiles around 200-300 million years ago.

Yet here are worms, with neural circuits functionally identical to unconscious mammalian reflexes, making the same kinds of decisions. If a five-neuron circuit can mimic what we consider evidence of consciousness, then either our behavioral markers are deeply flawed, or sentience extends much deeper into the animal kingdom than previously believed.

It is also vitally important to understand which beings are sentient and capable of feeling pain, and which are not. Making the wrong judgment can have devastating effects. Before the 1980s, surgeons routinely operated on newborns without anesthesia, partly because they assumed infants couldn't experience meaningful pain. Today, around 400,000 people annually fall into prolonged disorders of consciousness, and as many as a quarter retain some awareness despite appearing to be in a "vegetative state."

Read our latest essay on borderline sentience, "What It's Like to Be a Worm," by @RalphStefanWeir.

Building a Brain on a Computer

Our latest essay, from @mxschons, explains what it will take to build an accurate computer emulation of a full human brain. It is based on more than 3,500 hours of research and discussions with more than 50 researchers. Check it out!