

Inderjit Dhillon
522 posts

@inderjit_ml
Google Fellow (VP) at Google, Professor at UT Austin, Machine Learning Researcher, Ex-VP/Distinguished Scientist at Amazon.




Happy to share new progress in AI for Maths @GoogleDeepMind. In extremal combinatorics, AlphaEvolve has helped establish new lower bounds for FIVE classical Ramsey numbers - a problem so challenging that even Erdős commented on its difficulty. Historically, computationally deriving these bounds required bespoke, human-designed search algorithms. For many of these bounds, the best previous results are at least a decade old. AlphaEvolve changes this by acting as a single meta-algorithm that automatically discovers the search procedures needed to find these new bounds.
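For context on what such a search produces: a lower bound R(k, k) > n is certified by a 2-coloring of the edges of K_n with no monochromatic K_k. The sketch below is a toy hill-climb with sideways moves and random restarts - purely illustrative, not AlphaEvolve's discovered procedure (the function names and parameters are my own) - that finds such a witness for the tiny classical case R(3,3) > 5.

```python
import itertools
import random

def mono_cliques(coloring, n, k):
    """Count monochromatic k-cliques in a 2-edge-colored K_n.

    `coloring` maps each edge frozenset({i, j}) to color 0 or 1.
    """
    count = 0
    for clique in itertools.combinations(range(n), k):
        edge_colors = {coloring[frozenset(e)]
                       for e in itertools.combinations(clique, 2)}
        if len(edge_colors) == 1:  # all edges share one color
            count += 1
    return count

def search_witness(n, k, seed=0, restarts=50, steps=2000):
    """Hill-climb toward a 2-coloring of K_n with no monochromatic K_k,
    i.e. a certificate that the Ramsey number R(k, k) exceeds n.
    Returns (coloring, bad_count); bad_count == 0 means success."""
    rng = random.Random(seed)
    edges = [frozenset(e) for e in itertools.combinations(range(n), 2)]
    best_coloring, best_bad = None, float("inf")
    for _ in range(restarts):
        coloring = {e: rng.randint(0, 1) for e in edges}
        bad = mono_cliques(coloring, n, k)
        for _ in range(steps):
            if bad == 0:
                return coloring, 0
            e = rng.choice(edges)
            coloring[e] ^= 1                      # flip one edge's color
            new_bad = mono_cliques(coloring, n, k)
            if new_bad <= bad:                    # keep improving/sideways flips
                bad = new_bad
            else:
                coloring[e] ^= 1                  # revert worsening flips
        if bad < best_bad:
            best_coloring, best_bad = dict(coloring), bad
    return best_coloring, best_bad
```

For example, `search_witness(5, 3)` recovers a triangle-free 2-coloring of K_5 (the classic pentagon/pentagram construction), certifying R(3,3) > 5. The real problems involve far larger graphs, where the design of the search procedure itself becomes the hard part.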






Now Google can help with your Googly:)

We partnered with @ICC to show how Gemini 3 Pro can analyze video content. Upload a segment of the Cricket World Cup, and Gemini seamlessly processes the visual and audio data to identify key players, explain techniques, and highlight crucial turning points. 🏏


Joint Statement: Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year. After careful evaluation, Apple determined that Google's AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards.

📢 Announcing 𝗦𝗣𝗢𝗧: 𝗦𝗰𝗮𝗹𝗶𝗻𝗴 𝗣𝗼𝘀𝘁-𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗳𝗼𝗿 𝗟𝗟𝗠𝘀 Workshop at #ICLR2026 (@iclr_conf) 🚀 🚨 We invite work on the principles of post-training scaling, bridging algorithms, data & systems. 📅 Paper deadline: Feb 5, 2026 🌐 spoticlr.github.io 🧵(1/5)



Gemini 3 Flash delivers frontier performance on benchmarks like GPQA Diamond - evaluating PhD-level reasoning - and Humanity's Last Exam - testing broad expert knowledge. It's state-of-the-art on MMMU Pro, with a score comparable to 3 Pro, easily analyzing inputs across videos and images, not just text. And it handles complex tasks significantly faster than 2.5 Pro at a lower cost, using fewer tokens - or units of information - to save time.


Our new experiment uses an advanced version of Gemini 2.5 Deep Think to rigorously verify theoretical computer science papers. 97% of trial participants (authors for #STOC2026) found the feedback helpful for catching errors & improving clarity. More at: goo.gle/3MXslYP



New paper studies when spectral gradient methods (e.g., Muon) help in deep learning: 1. We identify a pervasive form of ill-conditioning in DL: post-activation matrices have low stable rank. 2. We then explain why spectral methods can perform well despite this. Long thread
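For readers unfamiliar with the term: the stable rank of a matrix A is ||A||_F^2 / ||A||_2^2, a smooth proxy for rank that is small whenever a few singular values dominate the spectrum. A minimal NumPy sketch (my own illustration, not code from the paper) showing a matrix whose numerical rank is full but whose stable rank is close to 1:

```python
import numpy as np

def stable_rank(A):
    """Stable rank ||A||_F^2 / ||A||_2^2 = sum(s_i^2) / s_1^2."""
    s = np.linalg.svd(A, compute_uv=False)
    return float((s ** 2).sum() / s[0] ** 2)

rng = np.random.default_rng(0)
# A rank-1 matrix plus small noise: full numerical rank, low stable rank.
u = rng.standard_normal((256, 1))
v = rng.standard_normal((1, 128))
A = u @ v + 0.01 * rng.standard_normal((256, 128))

print(np.linalg.matrix_rank(A))  # full numerical rank (128)
print(stable_rank(A))            # close to 1
```

A matrix like this is severely ill-conditioned for first-order updates, since one singular direction dominates; spectral methods such as Muon rescale the update so all singular directions move at comparable rates, which is one intuition for why they can help here.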



