Brian Cheung

259 posts

@thisismyhat

This is my hat, there are many like it, but this one is mine. @MIT_CSAIL 🧢 / ex: @berkeley_ai 🎓 Google B̶r̶a̶i̶n̶ DeepMind 🎩

Natural Intelligence ∩ AI · Joined June 2015
612 Following · 5.6K Followers
Brian Cheung retweeted
Badr AlKhamissi@bkhmsi·
🎉 Re-Align is back for its 4th edition at ICLR 2026! 📣 We invite submissions on representational alignment, spanning ML, Neuroscience, CogSci, and related fields. 📝 Tracks: Short (≤5p), Long (≤10p), Challenge (blog) ⏰ Feb 5, 2026 for papers 🔗 representational-alignment.github.io/2026/
Brian Cheung retweeted
Kush Tiwary@ktiwarylab·
👁️🌋 Our new @ScienceAdvances paper: We replayed the Cambrian explosion of vision by evolving AI agents inside a physics engine to understand the principles that shape visual intelligence. We believe that this is a promising way to do AI for science and build new forms of AI by computationally mimicking biological design principles of evolution and learning. Website: eyes.mit.edu and paper links at the end of this thread 👇
Phillip Isola@phillip_isola·
@thisismyhat Love this sentiment. Corollary: the value of data increases with time. What is bad data in year t becomes useful data in year t+1. This can be such a blessing. Even if we don't collect any new data, the quantity of useful data increases over time.
Brian Cheung@thisismyhat·
@chris_j_paxton Data is just information. There's always something to learn from data, maybe not what you need right now, but it can become important later.
Jie Wang@JieWang_ZJUI·
@thisismyhat No, I think even the example here counter-proves the importance of high-quality data (though they are post-training egocentric videos)
Brian Cheung retweeted
Phillip Isola@phillip_isola·
Impromptu NeurIPS meetup: "representational convergence by the beach." We will meet at ballroom 20c (near lunch) 2pm Fri and walk over to Marina. Will chat about platonic reps, fractured reps, or anything else about where all these models are heading. Anyone is welcome to join!
Brian Cheung@thisismyhat·
In all this craziness, let's take a moment and enjoy the three accounts @ilyasut follows.
Brian Cheung retweeted
Phillip Isola@phillip_isola·
This paper is really interesting to me -- it shows substantially stronger representational convergence than previously measured! In the PRH we found ~0.2 mknn alignment between vision and text models. This new paper reaches ~0.4. Challenge: find a setting where it reaches ~1.0.
Tyler Zhu@tyleryzhu

Today seems to be a fitting day for @GoogleDeepMind news, so I'm excited to announce our new preprint! Prior work suggests that text & img repr's are converging, albeit weakly. We found these same models actually have strong alignment; the inputs were too impoverished to see it!

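The ~0.2 vs. ~0.4 mknn (mutual k-nearest-neighbor) alignment scores in the thread above can be made concrete. Below is a minimal numpy sketch of one common variant of the metric, assuming paired samples across the two representation spaces, cosine-similarity neighborhoods, and neighbor-set overlap as the score; the papers' exact formulation (choice of k, normalization) may differ.

```python
import numpy as np

def mutual_knn_alignment(feats_a, feats_b, k=10):
    """Mutual k-NN alignment between two sets of paired representations.

    For each sample i, find its k nearest neighbors in space A and in
    space B, then average the overlap between the two neighbor sets.
    A score of 1.0 means the two spaces induce identical neighborhoods.
    """
    def knn_indices(feats):
        # Cosine-similarity neighbors, excluding the point itself.
        normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        sims = normed @ normed.T
        np.fill_diagonal(sims, -np.inf)
        return np.argsort(-sims, axis=1)[:, :k]

    nn_a = knn_indices(np.asarray(feats_a, dtype=float))
    nn_b = knn_indices(np.asarray(feats_b, dtype=float))
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))

# Toy check: identical features align perfectly; unrelated random
# features score near chance level (~k/n).
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 32))
print(mutual_knn_alignment(x, x))  # 1.0
print(mutual_knn_alignment(x, rng.normal(size=(100, 32))))
```

On this scale, the jump from ~0.2 to ~0.4 reported above means the vision and text models agree on roughly twice as many nearest neighbors per sample.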
Brian Cheung retweeted
Aritra 🤗@ariG23498·
Read and reproduced this paper in a free tier colab notebook. 🔥
Brian Cheung retweeted
Phillip Isola@phillip_isola·
Over the past year, my lab has been working on fleshing out theory/applications of the Platonic Representation Hypothesis. Today I want to share two new works on this topic: Eliciting higher alignment: arxiv.org/abs/2510.02425 Unpaired rep learning: arxiv.org/abs/2510.08492 1/9
Brian Cheung@thisismyhat·
@jasonfurman Prediction markets don't just predict, they can influence outcomes (e.g. 'the fix is in', reflexivity, etc)
Jason Furman@jasonfurman·
The other day a student asked me about the prevalence of insider trading in prediction markets. I now have an answer.
Brian Cheung@thisismyhat·
It turns out, a simple cue like asking the model to ‘see’ or ‘hear’ can push a purely text-trained language model towards the representations of purely image-trained or purely-audio trained encoders. 2/3
Brian Cheung@thisismyhat·
A takeaway I learned from LLMs: You can just ask for things. What if you asked a language model to imagine senses it never experienced? @SophieLWang , @phillip_isola and I asked language models to "Imagine seeing..." and "Imagine hearing...". 1/3