Kempner Institute at Harvard University

807 posts


@KempnerInst

The Kempner Institute for the Study of Natural and Artificial Intelligence at @Harvard University. RTs ≠ Endorsements

Cambridge, Mass · Joined November 2022
344 Following · 3.5K Followers
Kempner Institute at Harvard University
#KempnerInstitute Research Fellow @nsaphra explains why you can't do interpretability w/o understanding training dynamics... with @ziv_ravid. Listen to the full podcast episode 👇
Ravid Shwartz Ziv@ziv_ravid

New episode of the Information Bottleneck! @nsaphra (Kempner Fellow at Harvard, incoming prof at BU) explains why you can't do interpretability without understanding training dynamics (the same way you can't do biology without evolution). We talk about grokking, phase transitions, why SAEs are basically topic models, and why our intuitions about what's hard for models are consistently wrong. Would love to hear your feedback!

Kempner Institute at Harvard University retweeted
Sham Kakade
Sham Kakade@ShamKakade6·
1/ Au revoir, RLVR. New work: EBFT (Energy-Based Fine-Tuning), a post-training method that directly optimizes the long-horizon behavior of model generations, addressing SFT’s deployment-time error amplification without relying on sparse, task-specific rewards.
Kempner Institute at Harvard University retweeted
Yilun Du
Yilun Du@du_yilun·
Excited to share Energy-Based Finetuning (EBFT), which directly trains autoregressive models to match ground truth generations in the feature space! Across domains, we find that it can outperform SFT and RLVR in both downstream performance as well as perplexity.
Carles Domingo-Enrich@cdomingoenrich

(1/9) Most LM fine-tuning optimizes next-token loss or scalar rewards. What if we fine-tune language models so that feature statistics of partial rollouts match those of ground-truth completions? That leads to Energy-Based Fine-Tuning (EBFT). arXiv: arxiv.org/abs/2603.12248
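The thread describes fine-tuning so that feature statistics of partial rollouts match those of ground-truth completions. A minimal NumPy sketch of that idea, using mean-pooled features as an assumed statistic and a squared distance as the energy (the pooling choice, function names, and array shapes are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def feature_stats(features):
    # Mean-pool per-token features over the sequence axis.
    # features: array of shape (seq_len, feature_dim).
    return features.mean(axis=0)

def ebft_energy(rollout_feats, reference_feats):
    """Toy EBFT-style energy: squared distance between the pooled
    feature statistics of a model rollout and a ground-truth
    completion. Minimizing this would pull the rollout's feature
    statistics toward the reference's."""
    diff = feature_stats(rollout_feats) - feature_stats(reference_feats)
    return float(diff @ diff)
```

In practice the features would come from a model's hidden states and the energy would be backpropagated through the rollout; this snippet only illustrates the matching objective itself.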

Kempner Institute at Harvard University retweeted
Yonatan Belinkov
Yonatan Belinkov@boknilev·
@KempnerInst has a unique combination of top AI researchers, massive compute for an academic institution, and an expert engineering team. Check this out.
Kempner Institute at Harvard University@KempnerInst

Applications for the 2026 #KempnerInstitute Undergraduate Summer #Internship Program in AI/ML Research & Engineering are now open! 👀 This 10-week program offers a structured opportunity to further your experience in AI/ML engineering. Learn more 🔗 bit.ly/4cxbS8K

Kempner Institute at Harvard University retweeted
Gabriel Poesia
Gabriel Poesia@GabrielPoesia·
Too hard to come up with hard questions for reasoning LLMs? Get your popcorn and watch models challenge each other! In The Token Games - arxiv.org/abs/2602.17831 - we set up pairwise "puzzle duels". No human-written problems, solver win rates highly correlate with GPQA/HLE!
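The tweet's "puzzle duels" produce pairwise outcomes that get aggregated into per-model solver win rates. A small sketch of that aggregation step (the duel tuple format and function name are illustrative assumptions, not the paper's actual data schema):

```python
from collections import defaultdict

def solver_win_rates(duels):
    """Aggregate pairwise duel outcomes into per-model win rates.

    duels: iterable of (solver_a, solver_b, winner) tuples, where
    winner is one of the two solvers. Returns {model: win_rate}.
    """
    wins = defaultdict(int)
    played = defaultdict(int)
    for a, b, winner in duels:
        played[a] += 1
        played[b] += 1
        wins[winner] += 1
    return {m: wins[m] / played[m] for m in played}
```

Rates computed this way can then be rank-correlated against external benchmarks such as GPQA or HLE, which is the comparison the tweet reports.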
Kempner Institute at Harvard University retweeted
Satpreet (Sat) Singh
Satpreet (Sat) Singh@tweetsatpreet·
📣 Excited to announce the 2nd edition of our workshop “Agent-Based Models in Neuroscience: Theory, Autonomy, Embodiment & Environment” at @CosyneMeeting #CoSyNe2026! 🧠🤖🌍🪰🐟🐭💪🧘🏃 🗓️ March 17, 2026 📍 Cascais, Portugal 🔗 Speakers and schedule: neuro-agent-models.github.io
Kempner Institute at Harvard University
New method to turn intensity sensors into lifetime sensors and apply it to GCAMP8m to make a lifetime Ca sensor... check out the newly published research led by @lodder_bart and co-authored by #KempnerInstitute's @blsabatini
Bernardo Sabatini@blsabatini

Not sure who is still out there, but we're proud of this work led by @lodder_bart that (1) proposes a new method to turn intensity sensors into lifetime sensors and (2) applies it to GCaMP8m to make an excellent lifetime Ca sensor. biorxiv.org/content/10.648…

Kempner Institute at Harvard University
NEW: #Kempner researchers develop a mean-field theory of task-trained RNNs that bridges random and learned connectivity—and find macaque motor cortex is best captured by an intermediate, task-specific recurrent structure. Read the blog post 👇 🔗bit.ly/47f3Ldl