



Xander Balwit

@AlexandraBalwit
Editorial @AnthropicAI. Formerly editor-in-chief at @AsimovPress. Well-fed vegan.





We're pausing @AsimovPress for a while. Thanks to everyone who has taken this journey with us so far. We plan to see you again in a few months :) Read: asimov.press/p/pause



🚨BREAKING: Kentucky family rejects a $26 million offer, roughly 10x the area's going rate, to turn their farmland into a data center. "If it's my way, I'll stay and hold and feed a nation. 26 million doesn't mean anything."


Introducing the Anthropic Science Blog. Increasing the pace of scientific progress is a core part of Anthropic’s mission. The Science Blog will feature new research and stories of how scientists are using AI to accelerate their work. Read the intro: anthropic.com/research/intro…

We’re launching with two new posts. Can AI do theoretical physics? Harvard physicist Matthew Schwartz led Claude Opus 4.5 through a graduate-level calculation. AI can’t yet do original work autonomously, but it can vastly accelerate it. Read more: anthropic.com/research/vibe-…

I think this is one of the most important articles we've published at @AsimovPress. If you read carefully, there are at least 3-4 ideas in here that *should* be large, well-funded research programs.

The article begins by arguing that existing AI models are good at predicting things *within* an existing framework, but are not good at building new frameworks (and, thus, cannot do paradigm-shifting science). As AI models become more widespread in science, they therefore risk "hypernormal science," meaning we will have fewer actual breakthroughs and more incremental discoveries.

The author (Alvin Djajadikerta) supports this argument with several examples, one of which comes from germ theory:

"In the mid-nineteenth century, doctors thought that illness was caused by noxious air, and kept meticulous records accordingly. The physician William Farr mapped cholera deaths across London and found they correlated strongly with low elevation, which he thought was because noxious vapors accumulated in low-lying areas. He was actually picking up a real signal: low-lying districts were closer to the contaminated Thames River. But because his data was organized around air quality, he could not find the true cause..."

"An AI trained on Farr's records could have found even subtler correlations, and would have been genuinely useful for predicting which neighborhoods would be hit hardest in the next outbreak. But it would not be able to derive the concept of a waterborne microorganism, as this was not a variable anyone had yet recorded."

After giving other examples of this, Alvin begins mapping out ideas to solve this problem and create AIs that are "visionary" rather than "merely predictive." My favorite of his ideas is to use AI agents as a model organism for metascience. The gist is that many paradigm shifts seem to happen under particular conditions. "Bell Labs, Xerox PARC, and the early Laboratory of Molecular Biology at Cambridge all produced extraordinary concentrations of paradigm-shifting work," Alvin writes, "mostly because they were small groups with enough institutional protection to pursue ideas that looked unproductive by conventional measures."

Alvin continues: "We have never been able to run controlled experiments on scientific institutions; it is impossible to create labs that differ in only one respect and compare the results. But we could run AI agents in parallel populations under different research conditions, and analyze the results...In this sense, AI scientists may give metascience its first model organism."

"For instance, one could test how group structure shapes discovery: do small, isolated teams produce more conceptual reorganization than large, well-connected ones? Do flat hierarchies outperform rigid ones? One could run AI agent populations that vary these factors independently and measure the results — something that is impractical to do with real institutions..."

This essay is excellent throughout and I hope you'll read it.
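To make that last proposal concrete, here is a minimal, hypothetical sketch of what such an experimental harness could look like. Nothing in it comes from the essay: CONDITIONS, run_population, and the outcome metric are all invented stand-ins, with seeded random noise in place of real AI agent populations.

```python
# Hypothetical sketch: a factorial harness for the "AI agents as a model
# organism for metascience" idea. All names and numbers are invented;
# run_population() returns random noise standing in for a real agent run.
import itertools
import random
import statistics

# Institutional factors to vary independently (assumed, for illustration).
CONDITIONS = {
    "team_size": ["small", "large"],
    "hierarchy": ["flat", "rigid"],
}
REPLICATES = 20  # parallel populations per condition

def run_population(team_size: str, hierarchy: str, seed: int) -> float:
    """Stand-in for launching one AI agent population and scoring its output
    (e.g., a rate of conceptual reorganization). Here: seeded random noise."""
    rng = random.Random(f"{team_size}-{hierarchy}-{seed}")
    return rng.gauss(5.0, 1.0)

# Run every combination of factors, with replicates, and aggregate.
for team_size, hierarchy in itertools.product(*CONDITIONS.values()):
    scores = [run_population(team_size, hierarchy, s) for s in range(REPLICATES)]
    print(f"{team_size}/{hierarchy}: "
          f"mean={statistics.mean(scores):.2f} sd={statistics.stdev(scores):.2f}")
```

The point is the experimental design rather than the model: vary one institutional factor at a time, replicate each condition across parallel populations, and compare an agreed-upon outcome metric, exactly the kind of controlled comparison that is impractical with real institutions.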





Introducing The Anthropic Institute, a new effort to advance the public conversation about powerful AI. anthropic.com/news/the-anthr…


AI scientists could one day design experiments, test hypotheses, and make discoveries faster than humans. But what if their breakthroughs are so advanced we can't understand them? This is the "legibility problem."

Much like chess engines play moves that grandmasters can't comprehend, AI scientists might generate knowledge beyond human understanding. To counteract this, we must build new scientific infrastructure.

We will need new forums to store AI-generated findings so that they can be interrogated and communicated. We have partial precedents in preprint servers and structured databases like UniProt, but nothing designed for the scale and speed of AI-driven science.

We will also need systems designed specifically for explication rather than discovery, capable of making AI-generated findings legible to human researchers so that they can be evaluated and prioritized for further study.

New column by Matthew Carter

A statement on the comments from Secretary of War Pete Hegseth. anthropic.com/news/statement…

Why does an AI "lab" have an editorial team 😬



