Navve wasserman

4 posts

@NavveW

PhD student @ Weizmann | Brain–Vision Models, Multimodal AI, Brain Interpretability

Boston · Joined May 2024
101 Following · 47 Followers
Navve wasserman retweeted
Yossi Gandelsman@YGandelsman·
Diffusion models are great, but we can squeeze out so much more from them. The only problem is that it usually requires extra training or manual representation editing. In our new paper, we show that with the current capabilities of LLMs, it is much simpler than we thought!
Navve wasserman@NavveW·
Thanks @rohanpaul_ai for sharing our new work! Automatic Interpretability Pipeline + Human Brain Data = 🧠🔍🔥 See how we use a large-scale automatic interpretability pipeline to discover what concepts are represented in the human brain. Page & Demo: navvewas.github.io/BrainExplore/
Rohan Paul@rohanpaul_ai

This paper uses AI-style interpretability tools to map which images trigger which visual concepts in the human brain. It scales by adding about 120K extra images using predicted functional magnetic resonance imaging (fMRI) signals.

The problem: fMRI data has about 40K voxels per person (each voxel is a tiny 3D pixel), and manual labeling does not scale.

The pipeline first breaks each brain region's activity into patterns that can be mixed to rebuild any response, with a sparse autoencoder pushing each response to use only a few patterns. For every pattern, it finds the top images that trigger it, captions those images, and has a language model suggest shared meanings like "kitchen" or "hands in action". To avoid random labels, it builds a big concept list, marks each image as true or false for each concept, then keeps the concept that shows up most consistently in that pattern's top images.

The payoff is a searchable map from image concepts to brain areas, plus a fair way to compare decomposition methods using held-out real scans.

Paper Link – arxiv.org/abs/2512.08560
Paper Title: "BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain"
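The decomposition step described above — rebuilding each response from only a few learned patterns — can be sketched as a tiny sparse autoencoder on synthetic data. Everything here (array shapes, the L1 penalty, learning rate, and all variable names) is illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical sketch: learn a small dictionary of "patterns" so that each
# simulated voxel response is rebuilt from only a few of them (L1 sparsity).
rng = np.random.default_rng(0)
n_samples, n_voxels, n_patterns = 200, 50, 8

# Synthetic "responses": sparse mixtures of ground-truth patterns plus noise.
true_patterns = rng.normal(size=(n_patterns, n_voxels))
codes = rng.exponential(size=(n_samples, n_patterns)) * (
    rng.random((n_samples, n_patterns)) < 0.2
)
X = codes @ true_patterns + 0.01 * rng.normal(size=(n_samples, n_voxels))
init_mse = float((X ** 2).mean())  # baseline: predicting all zeros

# One-layer autoencoder: encode -> ReLU -> decode, trained with
# reconstruction loss plus an L1 penalty on the code to enforce sparsity.
W_enc = rng.normal(scale=0.1, size=(n_voxels, n_patterns))
W_dec = rng.normal(scale=0.1, size=(n_patterns, n_voxels))
lr, l1 = 0.01, 0.001

for _ in range(500):
    H = np.maximum(X @ W_enc, 0.0)        # sparse code (ReLU)
    X_hat = H @ W_dec                     # reconstruction
    err = X_hat - X
    grad_dec = H.T @ err / n_samples
    grad_H = err @ W_dec.T
    grad_H[H <= 0] = 0.0                  # ReLU subgradient
    grad_H += l1 * np.sign(H)             # L1 sparsity pressure
    grad_enc = X.T @ grad_H / n_samples
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec

H = np.maximum(X @ W_enc, 0.0)
sparsity = (H > 1e-6).mean()              # fraction of active patterns
recon_mse = float(((H @ W_dec - X) ** 2).mean())
```

After training, each row of `H` says which few patterns a response uses; in the real pipeline those per-pattern activations are what get paired with top-activating images for labeling.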

Navve wasserman retweeted
DailyPapers@HuggingPapers·
Unlocking the brain's visual secrets with a new AI framework from MIT.

Researchers introduce BrainExplore, an automated system that maps thousands of interpretable visual concepts directly from fMRI activity. It's a huge leap towards understanding how our minds process the world.
Navve wasserman retweeted
DailyPapers@HuggingPapers·
Brain-IT: Reconstructs images from fMRI with unprecedented faithfulness & data efficiency.

This brain-inspired approach uses a Brain-Interaction Transformer to faithfully recover visual content from fMRI. It outperforms current SoTA and achieves strong results with just 1 hour of data from new subjects, matching models trained on 40 hours.