Blake Richards

23.1K posts

@tyrell_turing

Researcher at @mcgillu combining AI and neuroscience. Also on Bluesky (@tyrellturing.bsky.social) and Mastodon: @[email protected].

Montréal, Québec · Joined April 2013
1.8K Following · 15.7K Followers
Pinned Tweet
Blake Richards@tyrell_turing·
Check out this new paper: Led by @mehdiazabou and @evadyer, we show that it is possible to get SOTA brain decoding with transfer across individuals and tasks! The key is a clever way to tokenize spiking data for transformers. #brain #neurotech #NeurIPS2023
Mehdi Azabou @ NeurIPS@mehdiazabou

Is a universal brain decoder possible? Can we train a decoding system that easily transfers to new individuals/tasks? Check out our #NeurIPS2023 paper where we show that it’s possible to transfer from a large pretrained model to achieve SOTA 🧠! Link: poyo-brain.github.io 🧵
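The tokenization trick can be sketched in a few lines. A minimal sketch, assuming one token per spike built from a learned unit-identity embedding plus a time embedding (the actual model's tokenizer and time encoding may differ):

import torch
import torch.nn as nn

class SpikeTokenizer(nn.Module):
    """One token per spike: unit identity + spike time."""
    def __init__(self, n_units, d_model=128):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)  # learned per-unit identity
        self.time_proj = nn.Linear(1, d_model)          # simplified continuous-time embedding

    def forward(self, unit_ids, spike_times):
        # unit_ids: (n_spikes,) int64; spike_times: (n_spikes,) float seconds
        return self.unit_emb(unit_ids) + self.time_proj(spike_times.unsqueeze(-1))

# 500 spikes from 96 units -> (500, 128) tokens for a standard transformer encoder
tokenizer = SpikeTokenizer(n_units=96)
tokens = tokenizer(torch.randint(0, 96, (500,)), torch.rand(500))

Because each token carries its own unit identity, new individuals can be handled by adding unit embeddings rather than retraining the whole model, which is what enables transfer.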

2 replies · 34 reposts · 147 likes · 31.5K views
Blake Richards retweeted
Sonia Joseph@soniajoseph_·
Interpretability is built on a few core assumptions. Two of our ICLR 2026 @iclr_conf papers suggest some of those assumptions are wrong (or at least highly incomplete).

1. Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning arxiv.org/abs/2601.20075

Much of the field has internalized an interpretability–accuracy trade-off: if you want cleaner, more human-understandable features, you sacrifice performance. However, we find that this trade-off is not fundamental. Instead of relying on post-hoc methods (e.g. sparse autoencoders trained on frozen representations), we incorporate sparsity directly into CLIP training. Surprisingly, this produces features that are significantly more interpretable while preserving downstream performance. This result made me more optimistic about intrinsically interpretable models, a direction that was, imo, written off too early.

2. Into the Rabbit Hull: From Task-Relevant Concepts in DINO to Minkowski Geometry arxiv.org/abs/2510.08638

A lot of interpretability work implicitly assumes that vision representations behave like language: sparse, linear, and decomposable into independent features. We find that this assumption is often misleading. Instead, vision representations appear partially dense and geometrically structured. We propose the Minkowski Representation Hypothesis: tokens live in sums of convex regions formed from a small set of "archetypes," rather than as isolated features along linear directions. This reframes how different tasks (classification, segmentation, depth) recruit and organize concepts. It also suggests that many current interpretability tools are mismatched to the actual structure of vision data.

tl;dr: interpretability can be built into training with surprisingly simple tweaks, and different modalities have different sparsities/geometries. Tailoring the interp method to the modality is super important!
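The core move of the first paper, sketched schematically (the L1 penalty and non-negativity below are illustrative assumptions, not the paper's exact objective): add a sparsity term to the contrastive loss itself, rather than fitting a sparse autoencoder after the fact.

import torch
import torch.nn.functional as F

def sparse_clip_loss(img_emb, txt_emb, lam=1e-3, temp=0.07):
    # Non-negative, unit-norm embeddings so "sparse" is well defined.
    img = F.normalize(F.relu(img_emb), dim=-1)
    txt = F.normalize(F.relu(txt_emb), dim=-1)
    logits = img @ txt.T / temp
    labels = torch.arange(len(img), device=img.device)
    # Standard symmetric InfoNCE over image-text pairs...
    contrastive = (F.cross_entropy(logits, labels) +
                   F.cross_entropy(logits.T, labels)) / 2
    # ...plus an L1 term that pushes embedding dimensions toward zero.
    sparsity = img.abs().mean() + txt.abs().mean()
    return contrastive + lam * sparsity

# e.g.: loss = sparse_clip_loss(image_tower(images), text_tower(texts))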
9 replies · 49 reposts · 480 likes · 34K views
Blake Richards retweeted
Roy Eyono@RoyEyono·
How do neural circuits in the brain implement normalization? 🧠 In our new paper, we show that just normalizing sensory input isn't enough. Crucially, we must also normalize the error signals! 🧵👇 Paper: arxiv.org/abs/2603.17676
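A toy version of the claim, assuming simple divisive normalization (the paper's circuit model is more detailed): apply the same pooled-activity normalization to the error signal that drives learning, not only to the feedforward input.

import numpy as np

def div_norm(x, sigma=1.0):
    # Divisive normalization: each unit is scaled by pooled population activity.
    return x / (sigma + np.abs(x).mean())

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 20))
x = rng.normal(size=20)

h = div_norm(W @ x)             # normalize the sensory drive, as usual
target = rng.normal(size=10)
err = div_norm(target - h)      # the twist: normalize the error signal too
W += 0.01 * np.outer(err, x)    # error-driven weight update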
1 reply · 28 reposts · 132 likes · 8.7K views
Blake Richards retweeted
Sonia Joseph@soniajoseph_·
Today we release a new paper from Meta @AIatMeta: "Interpreting Physics in Video World Models," one of the first interpretability studies of video encoders. V-JEPA 2 shows rich, counterintuitive behaviors, including brain-like population codes and high-dimensional steering.
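For context, "steering" here refers to the standard interpretability operation of nudging intermediate activations along a direction and observing how the output changes; a generic sketch (hypothetical, the paper's exact procedure may differ):

import torch

def steer(hidden, direction, alpha=2.0):
    # hidden: (batch, tokens, d) activations from some encoder layer
    # direction: (d,) concept vector; alpha scales the intervention
    return hidden + alpha * direction / direction.norm()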
14 replies · 86 reposts · 625 likes · 78.3K views
Blake Richards@tyrell_turing·
A big thank you to the @foresightinst for supporting our research on neuro-foundation models!
Foresight Institute@foresightinst

We’re excited to support @evadyer and @tyrell_turing as they combine different ways of measuring neural activity to better model how the brain works. They will explore the development of a general-purpose, multiscale, multimodal model of human brain activity that learns shared representations across invasive (e.g. intracranial EEG) and non-invasive (e.g. scalp EEG) recordings. The goal is to build a foundation for simulating, decoding, and interacting with brain dynamics in ways that advance both neuroscience and the development of more interpretable, brain-aligned AI systems.
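The architecture being described, in skeletal form (all module choices below are assumptions): modality-specific tokenizers that map invasive and non-invasive recordings into one shared encoder, so both recording types land in a common representation space.

import torch
import torch.nn as nn

class MultimodalBrainModel(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.tokenizers = nn.ModuleDict({
            "ieeg": nn.Linear(128, d_model),  # e.g. 128 intracranial channels
            "eeg":  nn.Linear(64, d_model),   # e.g. 64 scalp channels
        })
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, x, modality):
        # x: (batch, time, channels) for the given modality
        return self.shared_encoder(self.tokenizers[modality](x))

model = MultimodalBrainModel()
z = model(torch.randn(2, 100, 64), "eeg")  # shared latent: (2, 100, 256)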

1 reply · 1 repost · 19 likes · 2.9K views
Blake Richards retweeted
Mila - Institut québécois d'IA
Which areas are most promising for the future of AI research? That question set the tone for Mila's first annual conference, where the community explored the mysteries that will define tomorrow's research. A special mention to our researchers @hugo_larochelle, @tyrell_turing, @AaronCourville, and @tegan_maharaj for taking on the "Hot Ones" challenge! mila.quebec/fr/nouvelle/my…
1 reply · 2 reposts · 1 like · 949 views
Blake Richards retweeted
Seijin Kobayashi@SeijinKobayashi·
Standard reinforcement learning in raw tokens is a disaster for sparse rewards! Here, we propose 𝗜𝗻𝘁𝗲𝗿𝗻𝗮𝗹 𝗥𝗟: acting on abstract actions emerging in the residual stream representation. A paradigm shift in using pretrained models to solve hard, long-horizon tasks! 🧵
23 replies · 122 reposts · 941 likes · 252.7K views
Blake Richards@tyrell_turing·
Another very cool RL result from our Paradigms of Intelligence team! tl;dr: You can get effective hierarchical RL by learning a policy on the latent representations in an autoregressive sequence model.
Seijin Kobayashi@SeijinKobayashi

Standard reinforcement learning in raw tokens is a disaster for sparse rewards! Here, we propose 𝗜𝗻𝘁𝗲𝗿𝗻𝗮𝗹 𝗥𝗟: acting on abstract actions emerging in the residual stream representation. A paradigm shift in using pretrained models to solve hard, long-horizon tasks! 🧵
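The gist, as a hypothetical sketch (not the paper's algorithm): keep the pretrained model frozen, and let a small policy pick among learned "abstract action" vectors that steer the residual stream, from which the frozen model then decodes many low-level tokens.

import torch
import torch.nn as nn

d_model, n_actions = 512, 16
policy = nn.Linear(d_model, n_actions)                # policy over abstract actions
action_vecs = torch.randn(n_actions, d_model) * 0.02  # learned steering vectors (init)

def act(hidden):
    # hidden: (d_model,) residual-stream state from a frozen pretrained model
    dist = torch.distributions.Categorical(logits=policy(hidden))
    a = dist.sample()                                 # abstract action, trained with RL
    # Steer the residual stream; the frozen model decodes low-level tokens from it.
    return hidden + action_vecs[a], a

steered, a = act(torch.randn(d_model))

Because one abstract action unfolds into many tokens, the reward signal attaches to far fewer decisions, which is why this helps with sparse rewards on long-horizon tasks.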

0 replies · 0 reposts · 21 likes · 1.7K views
Blake Richards retweeted
Kording Lab 🦖@KordingLab·
Awesome encoding of neural activity.
Vinam Arora@vinam_arora

Excited to share our #NeurIPS2025 work: NuCLR, a framework for learning neuron-level representations 🧠 These embeddings capture the biological identity of neurons and work out-of-the-box on new animals; no finetuning needed 💃 This offers some of the first evidence that large-scale neuroscience models can truly generalize across animals. Paper: arxiv.org/abs/2512.01199 Code: github.com/nerdslab/nuclr If you are at NeurIPS in San Diego, come find us at Poster Session 5 (11am-3pm PT, Exhibit Hall C,D,E, # 2107) 🎉 1/x 🧵

0 replies · 8 reposts · 78 likes · 12.3K views
Blake Richards retweeted
Mehdi Azabou @ NeurIPS@mehdiazabou·
Come by our poster this morning to learn more about NuCLR! This is the beginning of what I believe is needed to unlock zero-shot BCI 🧠🤖 The key insights? 1. Observe neurons for longer (not just sub-second context windows) and 2. Observe how they activate relative to the rest of the population. Poster No. 2107 #NeurIPS2025
Vinam Arora@vinam_arora

Excited to share our #NeurIPS2025 work: NuCLR, a framework for learning neuron-level representations 🧠 These embeddings capture the biological identity of neurons and work out-of-the-box on new animals; no finetuning needed 💃 This offers some of the first evidence that large-scale neuroscience models can truly generalize across animals. Paper: arxiv.org/abs/2512.01199 Code: github.com/nerdslab/nuclr If you are at NeurIPS in San Diego, come find us at Poster Session 5 (11am-3pm PT, Exhibit Hall C,D,E, # 2107) 🎉 1/x 🧵
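The contrastive setup can be sketched as follows (an NT-Xent-style loss is my assumption here; see the paper for the actual objective): embeddings of the same neuron from two different long activity windows should agree, with the rest of the population serving as negatives.

import torch
import torch.nn.functional as F

def neuron_contrastive_loss(z1, z2, temp=0.1):
    # z1, z2: (n_neurons, d) embeddings of the same neurons from two windows
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temp                        # similarity of every neuron pair
    labels = torch.arange(len(z1), device=z1.device) # positives on the diagonal
    return F.cross_entropy(logits, labels)

An embedding trained this way captures a neuron's stable identity rather than any one trial's activity, which is what lets it transfer to new animals without finetuning.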

0 replies · 4 reposts · 12 likes · 1.9K views
Blake Richards retweeted
Dwarkesh Patel@dwarkesh_sp·
Looking for a neuroscientist to interview on my podcast. Keen for someone who can draw ML analogies for how the brain works (what's the architecture & loss/reward function of different parts, why can we generalize so well, how important is the particular hardware, etc).
357 replies · 50 reposts · 1.3K likes · 146.9K views
Blake Richards retweeted
Mehdi Azabou @ NeurIPS@mehdiazabou·
The Foundation Models for the Brain and Body workshop is happening this week at #NeurIPS2025 🏝️🧠 We have an amazing lineup of keynote speakers, spotlight talks, posters and demos. We can’t wait to welcome everyone on Saturday!
2 replies · 10 reposts · 31 likes · 4.9K views
Blake Richards@tyrell_turing·
20/ I consider myself very lucky to be working with this team, and it's great to see this paper out!!! 🎉🎉🎉
0 replies · 0 reposts · 8 likes · 608 views
Blake Richards@tyrell_turing·
19/ This work was spearheaded by Alexander Meulemans, Rajai Nasser, Rif A. Saurous and Joao Sacramento, with help from other members (e.g. @g_lajoie_ ) of the Google Paradigms of Intelligence team, led by @blaiseaguera and James Manyika.
1 reply · 0 reposts · 12 likes · 693 views
Blake Richards@tyrell_turing·
2/ Most algorithms rely on decoupled agency—treating agents as separate from the environment. But in multi-agent settings, you are part of the world that others are modeling! We show how this insight, coupled with predictive models, can resolve social dilemmas in RL.
1 reply · 0 reposts · 10 likes · 1.1K views