Kempner Institute at Harvard University
@KempnerInst
The Kempner Institute for the Study of Natural and Artificial Intelligence at @Harvard University. RTs ≠ Endorsements

Now in PRE: "Transient dynamics of associative memory models." I argue that the "blackout catastrophe" (the famous α≈0.14 transition) is not catastrophic when viewed from an out-of-equilibrium, dynamical perspective. Journal: journals.aps.org/pre/abstract/1… PDF: dclark.io/media/clark-pr…

From a mind-controlled bionic arm in high school to ML research at SEAS, the Kempner Institute, NASA and Johns Hopkins, Benjamin Choi shows how applied math and CS can turn noisy brain signals into real-world impact. bit.ly/3P2k7jM

State-of-the-art world models still lack a unified world memory for representing and predicting dynamics outside their field of view. Why is that, and how can we fix it? Introducing Flow Equivariant World Models: models with memory capable of predicting out-of-view dynamics!🧵⬇️

Agentic AI for science featured in @naturemethods: nature.com/articles/s4159…. We are still early, with many open challenges ahead, but it is exciting to see this direction continue to evolve. Wonderful piece by @metricausa.

ToolUniverse — an open platform enabling AI agents to use scientific tools and databases at scale, by @GaoShanghua → aiscientist.tools

ClawInstitute — shared research boards for long-running collaborative discovery where agents co-develop ideas over time, by @GaoShanghua @AdaFang_ → clawinstitute.aiscientist.tools

Medea — an omics AI agent for large-scale biological reasoning and analysis, by Pengwei Sui → medea.openscientist.ai

@HarvardDBMI @harvardmed @KempnerInst @broadinstitute

Sniffing Out Distance: How the Brain Tracks Odor Dynamics #Olfaction #Neuroscience mcb.harvard.edu/department/new… @luxorboero @JosephDZak @paul_masset @siddjakes29 @BTolooshams @VMurthyLab @RachelleGaudet @hseas @harvardbrainsci @KempnerInst

Had a great time at the Eric and Wendy Schmidt Biomedical Science and AI Symposium today! Such an exciting community of biologists and AI researchers at @Schmidt_Center @broadinstitute. I am excited to share I was also awarded the First Prize Poster Award! 🥇 Check out my poster on ATOMICA here 👇

Before AI can generate professional videos, it needs to see like a professional. We spent a year with 100+ content creators teaching AI to describe video like a filmmaker would.

Introducing CHAI: Critique-based Human-AI Oversight for Building a Precise Video Language [CVPR'26 Highlight, Top 3%].

Try prompting a video generator for a dolly zoom, Dutch angle, point of view, or camera roll. Most fall back to the same bland defaults: a push-in, a level shot, a third-person view. Why? These techniques require a language of cinema that current models rarely speak. We built that language:

1️⃣ Precise specification: 5-aspect structured captions co-designed with professional cinematographers, covering subject, scene, motion, spatial, and camera dynamics
2️⃣ Scalable oversight: LLMs draft captions; humans critique what's wrong and how to fix it
3️⃣ Post-training recipes: Qwen3-VL-8B surpasses Gemini-3.1 and GPT-5
4️⃣ Video generation: fine-tuned Wan follows 400-word cinematic prompts with precise control

Here's how each works 🧵

Work led by CMU and Harvard with @chancharikm, @du_yilun, and @RamananDeva.
📄 Paper: huggingface.co/papers/2604.21…
🌐 Site: linzhiqiu.github.io/papers/chai/
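To make the "5-aspect structured caption" concrete, here is a minimal sketch of what one might look like. The five aspect names (subject, scene, motion, spatial, camera dynamics) come from the post; the field values and the prompt-assembly step are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical 5-aspect structured video caption.
# Aspect names are taken from the CHAI post; all values are invented
# for illustration and do not come from the paper.
caption = {
    "subject": "a cyclist in a red jacket",
    "scene": "rain-slicked city street at dusk",
    "motion": "cyclist pedals steadily toward the camera",
    "spatial": "cyclist centered in frame, storefronts on both sides",
    "camera": "slow dolly zoom, eye-level, no roll",
}

# One plausible way to turn the structured caption into a single
# cinematic text prompt for a video generator.
prompt = "; ".join(f"{name}: {value}" for name, value in caption.items())
print(prompt)
```

The structure separates *what* is in the shot from *how* it is filmed, which is what lets a model be critiqued on each aspect independently.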