Michael C. Mozer
@mc_mozer

20 posts

Research Scientist, Google Brain (now DeepMind), where cognitive science and machine learning meet

San Francisco, CA · Joined January 2022
118 Following · 741 Followers
Michael C. Mozer@mc_mozer·
[1/4] As you read words in this text, your brain adjusts fixation durations to facilitate comprehension. Inspired by human reading behavior, we propose a supervised objective that trains an LLM to dynamically determine the number of compute steps for each input token.
Michael C. Mozer@mc_mozer·
[2/4] The model can request additional compute steps for any token by emitting a <don't know> output. If the model is granted a delay, a <pause> token is inserted at the next input step, providing the model with additional compute resources to generate an output.
Michael C. Mozer@mc_mozer·
[3/4] To train the model to calibrate its uncertainty and use <don't know> outputs judiciously, we frame the selection of each output token as a sequential-decision problem with a time penalty. We refer to this class of methods as “Catch Your Breath” losses.
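The <don't know> / <pause> control flow described in the thread can be sketched in a few lines. Everything below (the token names, the `model_step` interface, the pause budget) is an illustrative assumption, not the paper's actual implementation:

```python
# Illustrative sketch of the <don't know> / <pause> mechanism.
# `model_step` stands in for a forward pass returning the next token;
# token strings and the max_pauses budget are assumptions.
DONT_KNOW = "<don't know>"
PAUSE = "<pause>"

def decode_token(model_step, context, max_pauses=3):
    """Emit one output token, granting up to max_pauses extra compute steps.

    Whenever the model outputs <don't know>, a <pause> token is appended
    to the input, giving it one more forward pass before it must commit.
    """
    pauses = 0
    while True:
        token = model_step(context)
        if token == DONT_KNOW and pauses < max_pauses:
            context = context + [PAUSE]  # extra compute at the next step
            pauses += 1
            continue
        return token, pauses
```

With a time penalty charged for each <pause>, the sequential-decision framing trains the model to spend extra steps only where they buy accuracy.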
Michael C. Mozer reposted
Danny Sawyer@dannypsawyer·
Happy to announce that our work has been accepted to workshops on Multi-turn Interactions and Embodied World Models at #NeurIPS2025! Frontier foundation models are incredible, but how well can they explore in interactive environments? Paper👇 arxiv.org/abs/2412.06438 🧵1/13
Michael C. Mozer reposted
Effie Li@_EffieLi_·
🌟To appear in the MechInterp Workshop @ #NeurIPS2025 🌟 Paper: arxiv.org/abs/2509.04466 How do language models (LMs) form representations of new tasks during in-context learning? We study different types of task representations and find that they evolve in distinct ways. 🧵1/7
Michael C. Mozer reposted
Shoaib Ahmed Siddiqui@ShoaibASiddiqui·
[📜1/9] Does machine unlearning truly erase data influence? Our new paper reveals a critical insight: 'forgotten' information often isn't gone—it's merely dormant, and easily recovered by fine-tuning on just the retain set.
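The dormant-vs-erased distinction suggests a simple relearning probe: fine-tune the "unlearned" model on the retain set only and watch whether forget-set accuracy climbs back. A schematic harness (all function names here are placeholders, not the paper's code):

```python
# Schematic relearning probe: if fine-tuning on retain data alone
# restores forget-set accuracy, the "forgotten" influence was dormant.
# `finetune` and `evaluate` are caller-supplied placeholders.

def relearn_probe(model, retain_set, forget_set, finetune, evaluate, steps=100):
    """Return forget-set accuracy before and after retain-only fine-tuning."""
    acc_before = evaluate(model, forget_set)
    for _ in range(steps):
        model = finetune(model, retain_set)  # no forget data is ever shown
    acc_after = evaluate(model, forget_set)
    return acc_before, acc_after
```

A large gap between `acc_after` and `acc_before`, despite the forget set never appearing in training, is the recovery effect the tweet describes.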
Michael C. Mozer reposted
Archit Karandikar@KarchitK·
We are announcing the launch of Airial Travel’s open-to-all beta version for desktop today. Airial is your personal travel agent with AI superpowers, which makes planning and booking trips as easy as dreaming them up. airial.travel

Sanjeev and I co-founded Airial Travel a year ago to solve a problem we faced repeatedly. Being avid travelers living in the US with our families in India, we were traveling for several months a year and spending multiple days planning and booking each trip. Hours and hours of research, browsing, watching videos, form-filling, spreadsheets, refinements, etc. At the end of the process, in most cases we just booked because we were exhausted and wanted to get it over with.

As we talked to people and read up about this, we realized that the scale and intensity of this problem is stunning: hundreds of millions of trips are booked online every year, and planning each of them takes over five hours on average.

Our vision, “Just imagine your trip, and Airial it!”, stems from our ultimate wish as travelers: AI that can figure out all the intricacies of trip planning for you: hotels, activities, flights, trains, transits, deals, date options, restaurants, interests, research, travel videos, and everything else.

Our defining features originate from the core beliefs that Airial is built on:
📅 Detailed, intricately crafted plans: Attention to detail makes trip plans incredible. Airial plans trips in a level of detail that is simply unmatched, taking care of hundreds of common-sense constraints across thousands of variables in seconds.
🚀 From Reels to Itineraries: Turning TikTok / IG reels into trips is work that millions do manually. We now enable all of this in a click. This is the intersection of the two big trends in travel: AI and socials.
🏖️ Personalized Planning: Travel portals today are one-size-fits-all. We plan trips tailored to your specific interests: architecture tours, scenic hikes, or samurai sword fighting lessons!
👆 Actionable Itineraries: Just having a chatbot pick out one combination for you isn’t practically useful. Which was the last trip you planned that didn’t need any refinement? Every decision Airial makes for you is changeable via chat or UI controls.
🌎 Discovery in context: As you plan your trip, Airial gives you the tools to discover incredible ideas and expert advice specific to your itinerary and interests, which can be instantly imported into your trip.

Now you can “Just imagine your trip and Airial it!” Try it out now on your laptop at airial.travel!
Michael C. Mozer reposted
Anand Gopalakrishnan@agopal42·
Excited to present "Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery" at #NeurIPS2024! TL;DR: Our model, SynCx, greatly simplifies the inductive biases and training procedures of current state-of-the-art synchrony models. Thread 👇 1/x.
Michael C. Mozer reposted
Michael Lepori@Michael_Lepori·
The ability to properly contextualize is a core competency of LLMs, yet even the best models sometimes struggle. In a new preprint, we use #MechanisticInterpretability techniques to propose an explanation for contextualization errors: the LLM Race Conditions Hypothesis. [1/9]
Michael C. Mozer reposted
Mengye Ren@mengyer·
🔍 New LLM Research 🔍

Conventional wisdom says that deep neural networks suffer from catastrophic forgetting as we train them on a sequence of data points with distribution shifts. But conventions are meant to be challenged!

In our recent paper led by @YanlaiYang, we discovered a curious behavior in overparameterized networks, especially LLMs: as we train the network on a cyclic sequence of documents, it starts to anticipate the next document and reverses the forgetting trend! ⤴️
▶️ After 3-4 cycles, the network reverses over 90% of the forgetting right before seeing the original document again.
▶️ The amount of anticipation emerges with the size of the network. LLMs <= 160M show no anticipation.
▶️ We showed that you can reproduce such an effect in a toy network!

Check out more details in our arXiv preprint on anticipatory recovery: Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training. 🚀 arxiv.org/abs/2403.09613 🚀 #LLM #AI #Research
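The measurement protocol is easy to sketch: train on documents in a fixed repeating order and record the loss on document 0 just before it comes around again. `train_step` and `loss` below are placeholders; the anticipatory effect itself requires overparameterized models, so this harness only illustrates the bookkeeping, not the phenomenon:

```python
# Schematic of cyclic training with a pre-revisit loss measurement.
# `train_step(model, doc)` and `loss(model, doc)` are placeholders.

def cyclic_train(model, docs, n_cycles, train_step, loss):
    """Train cyclically on docs; log loss on docs[0] right before each revisit."""
    pre_revisit_losses = []
    for cycle in range(n_cycles):
        for i, doc in enumerate(docs):
            if i == 0 and cycle > 0:
                # Loss on doc 0 after a full cycle spent on the other docs:
                pre_revisit_losses.append(loss(model, docs[0]))
            model = train_step(model, doc)
    return pre_revisit_losses
```

Anticipatory recovery corresponds to these pre-revisit losses falling across cycles, the opposite of what catastrophic forgetting would predict.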
Michael C. Mozer reposted
Gamaleldin Elsayed@gamaleldinfe·
Nature Comms paper: Subtle adversarial image manipulations influence both human and machine perception! We show that adversarial attacks against computer vision models also transfer (weakly) to humans, even when the attack magnitude is small. nature.com/articles/s4146…
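For context, the classic small-magnitude attack underlying such studies is the fast gradient sign method (FGSM): perturb the input by epsilon along the sign of the loss gradient. A minimal sketch for a linear scorer (the Nature Comms study attacks deep vision models, not this toy):

```python
import numpy as np

# FGSM-style perturbation for a linear classifier with score s = w @ x
# and label y in {-1, +1}. The gradient of a margin loss w.r.t. x is
# -y * w, so stepping eps along its sign hurts the correct class while
# keeping the perturbation's max magnitude at eps.

def fgsm_perturb(x, w, y, eps):
    """Return x perturbed by eps in the loss-increasing direction."""
    grad = -y * w
    return x + eps * np.sign(grad)
```

The finding in the tweet is that such perturbations, even at magnitudes too small to be obvious, also bias human perceptual judgments.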
Michael C. Mozer reposted
Dumitru Erhan@doomie·
1/ Today we are excited to introduce Phenaki: phenaki.github.io, short-link-to-paper, a model that generates videos from text, follows prompts that change over time, and can produce videos as long as multiple minutes!
Michael C. Mozer reposted
Sundar Pichai@sundarpichai·
Two important breakthroughs from @GoogleAI this week - Imagen Video, a new text-conditioned video diffusion model that generates 1280x768 24fps HD video. And Phenaki, a model which generates long coherent videos for a sequence of text prompts. imagen.research.google/video/
Michael C. Mozer reposted
Thomas Kipf@tkipf·
Excited to share our work on self-supervised video object representation learning: We introduce SAVi++, a slot-based video model that — for the first time — scales to Waymo Open driving scenes w/o direct supervision. 🖥️ slot-attention-video.github.io/savi++ 📜 arxiv.org/abs/2206.07764 1/7
Michael C. Mozer reposted
Thomas Kipf@tkipf·
We are excited to make the jump to complex real-world data with this class of models — and about the potential that slot-based models have for reducing the need for detailed human supervision when learning about the physical world. 6/7
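The core operation in slot-based models is Slot Attention, where a small set of slots compete for input features via a softmax taken over slots rather than over inputs. A stripped-down single iteration (SAVi++ itself adds learned query/key/value projections, a GRU update, and temporal conditioning, none of which is shown here):

```python
import numpy as np

# Minimal single Slot Attention iteration (after Locatello et al., 2020),
# a sketch of the mechanism slot-based video models build on.

def slot_attention_step(slots, inputs):
    """slots: [K, D] slot vectors; inputs: [N, D] input features.

    Returns updated slots as attention-weighted means of the inputs."""
    D = slots.shape[1]
    attn_logits = inputs @ slots.T / np.sqrt(D)            # [N, K]
    # Softmax over slots (axis=1): slots compete for each input feature.
    attn = np.exp(attn_logits - attn_logits.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)
    # Normalize per slot, then take a weighted mean over inputs.
    w = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)    # [N, K]
    updates = w.T @ inputs                                 # [K, D]
    return updates
```

The slot-wise softmax is what makes slots specialize to distinct objects without any object-level supervision.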
Michael C. Mozer@mc_mozer·
Overcoming temptation: Incentive design for intertemporal choice arxiv.org/abs/2203.05782 We use AI models to help individuals adhere to long-term goals (e.g., retirement savings, weight loss) and avoid giving in to temptation.