Navve Wasserman
@NavveW
PhD student @ Weizmann | Brain–Vision Models, Multimodal AI, Brain Interpretability

This paper uses AI-style interpretability tools to map which images trigger which visual concepts in the human brain, and it scales by adding about 120K extra images with predicted functional magnetic resonance imaging (fMRI) signals.

The problem: fMRI data has about 40K voxels per person (each voxel is a tiny 3D pixel), and manual labeling does not scale.

The pipeline first decomposes each brain region's activity into patterns that can be mixed to rebuild any response, with a sparse autoencoder forcing each response to use only a few patterns. For every pattern, it finds the top images that trigger it, captions those images, and has a language model suggest shared meanings like "kitchen" or "hands in action". To avoid random labels, it builds a large concept list, marks each image as true or false for each concept, and keeps the concept that shows up most consistently in that pattern's top images.

The payoff is a searchable map from image concepts to brain areas, plus a fair way to compare decomposition methods using held-out real scans.

Paper link: arxiv.org/abs/2512.08560
Paper title: "BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain"
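The sparse-decomposition and concept-selection steps described above can be sketched in a few lines of NumPy. This is a toy illustration under assumed dimensions, not the paper's implementation: the dictionary here is random rather than learned, top-k thresholding stands in for a trained sparse autoencoder, and the concept labels are fake placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (the paper works with ~40K voxels per subject).
n_images, n_voxels, n_patterns = 200, 500, 64

# Simulated fMRI responses: one row per image.
responses = rng.normal(size=(n_images, n_voxels))

# Stand-in for a learned dictionary of brain-activity patterns;
# mixing rows of this matrix rebuilds any response.
dictionary = rng.normal(size=(n_patterns, n_voxels))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def encode_topk(x, k=5):
    """Project responses onto the patterns and keep only the k strongest
    activations per response -- the 'use only a few patterns' constraint
    that a sparse autoencoder would enforce via training."""
    codes = x @ dictionary.T                        # (n_images, n_patterns)
    thresh = np.sort(np.abs(codes), axis=1)[:, -k][:, None]
    return np.where(np.abs(codes) >= thresh, codes, 0.0)

codes = encode_topk(responses)
reconstruction = codes @ dictionary                 # rebuild the responses

# For every pattern, find the top images that trigger it most strongly.
top_images_per_pattern = np.argsort(-codes, axis=0)[:10]   # (10, n_patterns)

# Concept selection: given boolean image->concept annotations, keep the
# concept most consistently true across one pattern's top images.
concepts = ["kitchen", "hands in action", "faces"]          # toy concept list
labels = rng.random((n_images, len(concepts))) > 0.5        # fake true/false marks

pattern_id = 0
top = top_images_per_pattern[:, pattern_id]
best_concept = concepts[int(labels[top].mean(axis=0).argmax())]
```

In the actual pipeline the annotations would come from captioning the top images and having a language model propose and verify concepts; the consistency-scoring logic is the same idea as the last three lines.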
