

Hritik Bansal

@hbXNov
CS PhD @UCLA | Prev: Bachelors @IITDelhi, Intern @MetaAI FAIR, @GoogleDeepMind @AmazonScience | Multimodal ML, Language models | Cricket🏏



Finally defended my Ph.D. thesis! 🥳 A very warm thank you to my family, friends, and advisors — @kaiwei_chang, @adityagrover_, @VioletNPeng, and Hongjing Lu. Next, I will be joining @AnthropicAI as a Member of Technical Staff. My defense slides ⬇️



New paper 📢 Most powerful vision-language (VL) reasoning datasets remain proprietary 🔒, hindering efforts to study their principles and develop similarly effective datasets in the open 🔓. Thus, we introduce HoneyBee, a 2.5M-example dataset created through careful data curation. It trains VLM reasoners that outperform InternVL2.5/3-Instruct and Qwen2.5-VL-Instruct across model scales (e.g., an 8% MathVerse improvement over QwenVL at the 3B scale). 🧵👇 Work done during my internship at @AIatMeta w/ 🤝 @ramakanth1729, @Devendr06654102, @scottyih, @gargighosh, @adityagrover_, and @kaiwei_chang.





Video generative models hold the promise of being general-purpose simulators of the physical world 🤖 How far are we from this goal❓ 📢Excited to announce VideoPhy-2, the next edition in the series, which tests how physically plausible generated videos of real-world actions are. 🧵