Avinab Saha 🇮🇳

351 posts


@avinab_saha

Research Scientist @GoogleResearch | PhD Student, LIVE @UTAustin @utexasece @MLFoundations | Formerly at @Apple, @samsungresearch | @IITKgp '19

Mountain View, CA · Joined January 2012
1.9K Following · 640 Followers
Pinned Tweet
Avinab Saha 🇮🇳 @avinab_saha ·
Excited to share our recent work accepted to @siggraph 2025! 🎉 📄 FaceExpressions-70k: We introduce the first large-scale public dataset of realistic human faces annotated with perceived expression difference scores, enabling new research in facial expression perception.
1 reply · 1 repost · 3 likes · 1.5K views
Avinab Saha 🇮🇳 retweeted
Stefano Ermon @StefanoErmon ·
Mercury 2 is live 🚀🚀 The world’s first reasoning diffusion LLM, delivering 5x faster performance than leading speed-optimized LLMs. Watching the team turn years of research into a real product never gets old, and I’m incredibly proud of what we’ve built. We’re just getting started on what diffusion can do for language.
321 replies · 587 reposts · 4.2K likes · 991.1K views
Avinab Saha 🇮🇳 retweeted
Noam Shazeer @NoamShazeer ·
An updated Gemini 3 Deep Think is out today: 📈 Achieves SOTA on ARC-AGI-2, MMMU-Pro, and HLE. 🥇Gold-medal level on Physics & Chemistry Olympiads. It turns out the best way to solve hard problems is still to think about them. Read more: bit.ly/4kzBLqq
39 replies · 117 reposts · 1.2K likes · 109.5K views
Avinab Saha 🇮🇳 retweeted
Siyan Zhao @siyan_zhao ·
Introducing 💡On-Policy Self-Distillation💡, a simple method that enables an LLM to teach itself with dense per-token feedback on its own on-policy generations—achieving 4-8x more token efficiency vs. GRPO and outperforming both GRPO and SFT/off-policy distillation. Key insight: like a student reviewing solutions, rationalizing them, and correcting prior mistakes, an LLM can be conditioned on privileged info (e.g., a correct solution or a reasoning trace) and supervise its weaker self—the version without such access—by matching the privileged-info-induced distribution from itself. 🌐Blog: siyan-zhao.github.io/blog/2026/opsd/ 🧵👇
31 replies · 157 reposts · 921 likes · 131.7K views
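The self-distillation idea described in the tweet above can be sketched in a few lines: the same model plays both teacher (conditioned on the privileged info) and student (without it), and the student is trained to match the teacher's per-token distributions on its own on-policy generation. This is only a minimal illustrative sketch, not the paper's implementation: the names (`toy_lm`, `opsd_loss`) and the hash-seeded toy "language model" are assumptions made up for this example.

```python
import math, random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions given as probability lists.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def toy_lm(ctx, vocab=8):
    # Deterministic stand-in for a language model: maps a context
    # (list of token ids) to a vector of logits over the vocabulary.
    rng = random.Random(hash(tuple(ctx)))
    return [rng.uniform(-2.0, 2.0) for _ in range(vocab)]

def opsd_loss(lm, prompt, privileged, generation):
    """Mean per-token KL(teacher || student). Teacher and student are the
    SAME model: the teacher additionally conditions on the privileged info
    (e.g., the correct solution); the student sees only the prompt and its
    own on-policy generation so far."""
    total = 0.0
    for t in range(len(generation)):
        teacher_probs = softmax(lm(prompt + privileged + generation[:t]))
        student_probs = softmax(lm(prompt + generation[:t]))
        total += kl_divergence(teacher_probs, student_probs)
    return total / len(generation)
```

Minimizing this loss (with gradients flowing only through the student branch) gives dense feedback on every generated token, in contrast to the single scalar reward per sequence used by GRPO-style RL.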
Avinab Saha 🇮🇳 retweeted
Google @Google ·
We’re launching full-length, on-demand practice exams for standardized tests in @GeminiApp, starting with the SAT, available now at no cost. Practice SATs are grounded in rigorously vetted content in partnership with @ThePrincetonRev, and Gemini will provide immediate feedback highlighting where you excelled and where you might need to study more. To try it out, tell Gemini, “I want to take a practice SAT test.”
696 replies · 2.7K reposts · 22.9K likes · 6.3M views
Avinab Saha 🇮🇳 retweeted
Google @Google ·
Today, we’re introducing Personal Intelligence. With your permission, Gemini can now securely connect information from Google apps like @Gmail, @GooglePhotos, Search and @YouTube history with a single tap to make Gemini uniquely helpful & personalized to *you* ✨ This feature is launching in beta today in the @GeminiApp. See Personal Intelligence in action 🧵 ↓
748 replies · 1K reposts · 7.5K likes · 4.3M views
Avinab Saha 🇮🇳 retweeted
Josh Woodward @joshwoodward ·
Crossed 1 billion Nano Banana Pro images in @GeminiApp! The pro community is moving fast. This model has been out for 53 days. Come for the potassium, stay for more. :)
70 replies · 81 reposts · 1K likes · 102.1K views
Avinab Saha 🇮🇳 retweeted
Aakash Kumar Nain @A_K_Nain ·
I have just finished reading the "Next-Embedding Prediction Makes Strong Vision Learners" paper. Here is a summary if you are interested 👇
10 replies · 31 reposts · 392 likes · 51.1K views
Avinab Saha 🇮🇳 retweeted
Yushi Hu @huyushi98 ·
Reward models make or break post-training for multimodal omni models (e.g., nano banana), yet there’s surprisingly little research on that‼️ We’re releasing MMRB2: new reward benchmark focusing on omni models, spanning T2I, editing, interleaved, and thinking with images 🧵1/n
7 replies · 42 reposts · 156 likes · 33.9K views
Avinab Saha 🇮🇳 @avinab_saha ·
State of AI reviews in 2025. Reviewers need to be held accountable; more conferences should adopt the reviewer-accountability practices used at @CVPR. It will not solve the issue completely, but it should certainly help to an extent :)
Peter Richtarik @peter_richtarik

I am an AC for ICLR 2026. One of the papers in my batch was just withdrawn. The authors wrote a brief response, explaining why the reviewers failed at their job. I agree with most of their comments. The authors gave up. They are fed up. Just like many of us. I understand. We pretend the emperor has clothes, but he is naked. Here is the final part of their withdrawal notice. I took the liberty of making it public, to highlight that what we are doing with AI conference reviews these last few years is, basically, madness.

---

Comment: We thank the reviewers for their time. However, upon reading the reviews for our paper, it became immediately apparent that the four "reject" ratings are not based on good-faith academic disagreement, but on a critical failure to read the submitted paper. The reviews are rife with demonstrably false claims that are directly contradicted by the text. The core justifications for rejection rely on asserting that key components are "missing" when they are explicitly detailed in the manuscript. Some specific examples (many of them outright false claims):

Claim: Harder tasks like GSM8K are missing. Fact: GSM8K results appear in many tables, e.g. Table 2 (Section 4.2) and Appendix G.

Claim: The method does not use per-layer ranks. Fact: This is the entire point of our method; the reviewer clearly mistook our method for the baselines (Section 2, Table 1).

Claim: The GP kernel is not specified. Fact: It is specified in Appendix E (Table 6).

Claim: There is no ablation of the method's three stages. Fact: Section 4.4 ("Ablation Study") and Appendix J are dedicated to this.

Reviewers have a fundamental responsibility to read and evaluate the work they are assigned. The nature of these errors is so fundamental, so systemic in overlooking explicit content, that it goes far beyond what "limited time" or "oversight" can explain. This work has gone through several rounds of revision over the last year. In earlier submissions, the paper usually received borderline or weak-accept scores. Numerous signs strongly suggest that some reviewers are relying entirely on AI tools to automatically generate peer reviews, rather than fulfilling their fundamental responsibility of personally reading and evaluating manuscripts. We strongly protest this. This is a gross disrespect to the authors. It is a flagrant desecration of the reviewer's sacred duty. It fundamentally undermines the integrity of the entire peer-review process. Given that the reviews are not based on the actual content of our paper, we have decided to withdraw the submission. We leave this comment so that future readers of the OpenReview page are aware that the items described as "missing" are already present in the submitted manuscript. The negative reviews for this submission are factually unsound and do not reflect the content of the paper. We cannot and will not accept an assessment that is not based on the work we actually submitted.

0 replies · 0 reposts · 0 likes · 208 views
Avinab Saha 🇮🇳 retweeted
Google Arts&Culture @googlearts ·
Exploring the self-portrait with generative AI. “Self-portrait” is a new video work by artist Ben Cullen Williams in collaboration with Google Arts & Culture and @GoogleDeepMind. Let’s dive into the process. 🧵
1 reply · 3 reposts · 5 likes · 2.7K views
Avinab Saha 🇮🇳 @avinab_saha ·
Presenting our @CVPR Spotlight paper, Focus-N-Fix: Region-Aware Fine-Tuning for Text-to-Image Generation, today at 5pm, Ex Hall D, Poster ID #259! Hope to see many of you!
0 replies · 0 reposts · 5 likes · 300 views
Avinab Saha 🇮🇳 @avinab_saha ·
Final invited talk at our @CVPR workshop on Explainable AI for Computer Vision by Junfeng He (@GoogleAI, @Google), happening at 4:15pm in Room 107B! The talk promises to be an exciting one, focusing on evaluating and improving AI-generated content!
0 replies · 1 repost · 3 likes · 1.3K views