

Chris Chatham

@chchatham
Pure science on blue sky (chchatham dot bsky dot social). On X, pushback on the redshift



The missing half of the neural network–brain comparison

For a decade, the standard benchmark for artificial neural networks as models of the brain has been forward predictivity: learn a linear mapping from model activations to neural recordings and measure explained variance. Top models of the macaque inferior temporal (IT) cortex—central to object recognition—have plateaued near 50% regardless of architecture.

Muzellec and Kar argue this plateau hides something important. Two models can score identically on forward predictivity while relying on fundamentally different internal strategies. One may have many units tightly coupled to IT responses; the other may reach the same score with a smaller aligned subset while carrying a large pool of biologically inaccessible dimensions.

To expose this, they introduce reverse predictivity: instead of asking how well model features predict neurons, they ask how well IT neurons predict individual model units. A truly brain-like model should be bidirectionally predictable—just as two monkeys' IT populations predict each other symmetrically, which the authors confirm as their empirical baseline.

Across 39 architectures—CNNs, transformers, self-supervised and robust models—reverse predictivity is consistently lower than forward predictivity, and the two metrics are uncorrelated. Strikingly, higher ImageNet accuracy predicts lower reverse predictivity. Adversarial training helps; higher dimensionality hurts. The "common" units identified this way predict primate behavior more consistently across species and models than the "unique" ones inaccessible from neural activity.

For AI in drug discovery, neurotechnology, or computational biology, this has a direct implication: forward accuracy alone does not guarantee that a model's internal representations are embedded in the biological system it claims to describe. When those representations guide mechanistic interpretations or experimental decisions, the mismatch can mislead.
Paper: Muzellec et al., Nature Machine Intelligence (2026) | nature.com/articles/s4225…
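The forward/reverse asymmetry is easy to see in simulation. Below is a minimal sketch (not the paper's pipeline; all sizes and the ridge/cross-validation choices are my assumptions) in which neurons and a subset of model units share a latent code, while the remaining units carry variance invisible to the neural data. Forward predictivity comes out high, reverse predictivity low:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 stimuli, 50 "neurons", 300 model units.
# A shared 10-d latent code drives the neurons and 100 "aligned" units;
# the other 200 units are inaccessible from the neural responses.
n_stim, n_neurons, n_latent = 200, 50, 10
latent = rng.normal(size=(n_stim, n_latent))
neurons = latent @ rng.normal(size=(n_latent, n_neurons)) \
          + 0.5 * rng.normal(size=(n_stim, n_neurons))
aligned = latent @ rng.normal(size=(n_latent, 100))   # readable from neurons
unique = rng.normal(size=(n_stim, 200))               # invisible to neurons
units = np.hstack([aligned, unique])

def predictivity(X, Y):
    """Median cross-validated R^2 of a ridge map from X to each column of Y."""
    scores = [cross_val_score(Ridge(alpha=1.0), X, Y[:, j], cv=5).mean()
              for j in range(Y.shape[1])]
    return float(np.median(scores))

forward = predictivity(units, neurons)   # model features -> neurons: high
reverse = predictivity(neurons, units)   # neurons -> model units: low
print(f"forward predictivity: {forward:.2f}")
print(f"reverse predictivity: {reverse:.2f}")
```

Both scores use the same regression machinery; only the direction of the map changes, which is why the two metrics can dissociate so cleanly.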





I know it's boring and repetitive to talk about how grossly evil Trump is, but the fact remains: Trump is grossly evil, in a way that's pretty much unprecedented in this country.

Our new essay is out in Science: "Agentic AI and the Next Intelligence Explosion"

For decades, the AI "singularity" has been imagined as a single, godlike mind bootstrapping itself to omniscience. In this piece with the inimitable Benjamin Bratton (@bratton) and Blaise Agüera y Arcas (@blaiseaguera), we argue this vision is wrong in its most fundamental assumption.

Every prior intelligence explosion—primate sociality, human language, writing, institutions—wasn't an upgrade to individual cognitive hardware. It was the emergence of a new socially aggregated unit of cognition. AI is extending this sequence, not breaking from it.

The evidence is already inside the models themselves. In recent work, we showed that frontier reasoning models like DeepSeek-R1 don't improve by "thinking longer"—they spontaneously simulate internal multi-agent debates, what we call a "society of thought" (lnkd.in/guNfRtXh). Reinforcement learning for accuracy alone causes models to rediscover what epistemology and cognitive science have long suggested: robust reasoning is a social process, even within a single mind.

This opens a vast design space. A century of research on team composition, hierarchy, role differentiation, and structured disagreement has barely been brought to bear on AI reasoning. The toolkits of organizational science become blueprints for next-generation AI.

Outside the model, we've entered the era of human-AI centaurs—composite actors that are neither purely human nor purely machine. Agents that fork, differentiate, recombine. Recursive societies of thought that expand when complexity demands and collapse when problems resolve.

The scaling frontier isn't just bigger models. It's richer social systems—and the institutions to govern them. Just as human societies rely on persistent institutional templates (courtrooms, markets, bureaucracies), scalable AI ecosystems will need digital equivalents. The Founders would have recognized the logic: no single concentration of intelligence should regulate itself.

The intelligence explosion is already here. Not as a singular ascending mind, but as a combinatorial society complexifying—intelligence growing like a city. The question is whether we'll build the social infrastructure worthy of what it's becoming. No mind is an island.

Read it here in Science (science.org/doi/10.1126/sc…) or free on the arXiv (arxiv.org/abs/2603.20639)
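The statistical intuition behind aggregated cognition is old (Condorcet's jury theorem): independent noisy reasoners, pooled, beat any one of them. This toy sketch is not the essay's model, just an illustrative stand-in with made-up agents and error rates:

```python
import random
from collections import Counter

random.seed(0)

def noisy_agent(true_answer: int, error_rate: float) -> int:
    """A solver that is right most of the time, off by one otherwise."""
    if random.random() < error_rate:
        return true_answer + random.choice([-1, 1])
    return true_answer

def debate(true_answer: int, n_agents: int = 9, error_rate: float = 0.3) -> int:
    """Aggregate independent answers by majority vote, the simplest
    'society of thought': no hierarchy or role differentiation yet."""
    votes = [noisy_agent(true_answer, error_rate) for _ in range(n_agents)]
    return Counter(votes).most_common(1)[0][0]

trials = 2000
solo = sum(noisy_agent(42, 0.3) == 42 for _ in range(trials)) / trials
group = sum(debate(42) == 42 for _ in range(trials)) / trials
print(f"single agent accuracy:  {solo:.2f}")
print(f"9-agent majority vote:  {group:.2f}")
```

The essay's point is that this design space is far richer than majority voting: hierarchy, role differentiation, and structured disagreement are all untried axes.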

Government must deliver for working people—and every dollar in our budget should work as hard as they do. That’s why I directed every agency to cut waste and help close our budget gap. Here’s some of what we found.

Meta-analysis: The effects of repetitive transcranial magnetic stimulation (rTMS) on the brain are much smaller than previously thought, and have been shown to diminish in newer, more sophisticated studies—providing yet another example of the infamous 'decline effect'.

Repetitive transcranial magnetic stimulation (rTMS) is extensively used in both clinical and research settings, yet the underlying neurophysiological mechanisms, particularly outside motor cortex, remain poorly understood. Here, we provide the first large-scale systematic review and meta-analysis to jointly evaluate high- and low-frequency rTMS and theta-burst stimulation (TBS) across primary motor cortex (M1), non-primary motor cortex, and cerebellar regions.

Our key findings reveal that the effects of both excitatory and inhibitory protocols on motor evoked potentials (MEPs) are substantially smaller than previously reported, and show considerable between-study heterogeneity and indications of potential publication bias, raising concerns about the stability and reproducibility of these estimates even within primary motor cortex. Furthermore, inhibitory motor evoked potential effects fail to survive sham-normalization. Critically, across TMS-EEG measures that directly indexed cortical excitability, particularly outside primary motor cortex, we found no consistent effects for any protocol.

Meta-regression revealed decreasing effects over the years, driven by greater methodological rigor and neuronavigation use. We noted consistent attenuation of effect sizes across all protocols in more recent, methodologically rigorous studies with larger sample sizes and neuronavigated, sham-controlled, and test–retest designs, indicating that the earlier literature likely overestimated the neurophysiological efficacy of single-session rTMS.

Collectively, these results suggest that rTMS-induced changes in cortical excitability are considerably more context-dependent, variable, and less robust than previously assumed, challenging the traditional binary model of "excitatory or inhibitory" neuromodulation dependent on stimulation protocol.

[The decline effect ("The Truth Wears Off") is the phenomenon where the strength of scientific findings—particularly in psychology and medicine—diminishes over time, with subsequent replications showing smaller effect sizes than the original studies.]
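The meta-regression at the heart of the decline-effect claim is just an inverse-variance-weighted regression of per-study effect sizes on publication year. Here is a minimal sketch on synthetic studies (all numbers invented; real analyses would use a random-effects model that also estimates between-study heterogeneity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical literature: 40 studies, 2000-2023, whose true standardized
# effect shrinks by ~0.03 SD per year (the "decline").
years = rng.integers(2000, 2024, size=40)
se = rng.uniform(0.05, 0.3, size=40)           # per-study standard errors
true_effect = 1.0 - 0.03 * (years - 2000)
d = true_effect + se * rng.normal(size=40)     # observed effect sizes

# Weighted least squares with inverse-variance weights:
# solve (X' W X) beta = X' W d for [intercept, slope-per-year].
w = 1.0 / se**2
X = np.column_stack([np.ones_like(years, dtype=float), years - 2000])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * d))
print(f"intercept (effect in 2000): {beta[0]:.2f}")
print(f"slope (change per year):    {beta[1]:.3f}")
```

A reliably negative slope on year, especially one explained by covariates like neuronavigation use, is exactly the signature the authors report.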


According to ⬇️, some FDA political appointees have financial ties to the Arnold family foundation/ventures. There's documentation of this elsewhere so it's not only an anonymous X acct saying this. $ from ideological group that's against HIV & hep C drugs too $XBI $BBC $IBB

One striking example is pembrolizumab (anti-PD-1), the top-grossing mAb and overall best-selling pharmaceutical in the world. In addition to PD-1, we found that it also bound TDGF1/Cripto, and we confirmed that interaction with orthogonal assays.


a statistician will see this and then look you dead in the eye and tell you correlation does not imply causation





FMR1 reduction alters cellular and circuit properties in human cortex biorxiv.org/content/10.648… #biorxiv_neursci