
Utkarsh Singhal
@utksinghal
Robotics @ Tesla Optimus | Previously: PhD @ UC Berkeley


The toughest moment in a PhD:
>spend a year building smth you’re proud of
>travel across the world with your advisors’ support to share it
>when your moment finally comes
>your mic gets cut because the previous session ran late
Heartbroken, but thankful to everyone who stayed for me @NeurIPSConf

Congratulations to Antonio Loquercio (@antoniloq) on receiving the 2025 Mario Gerla Young Investigator Award from @issnaf. He is recognized for his research on the pivotal role of perception in building effective world models for decision-making, which enhances the performance of complex robotic systems. He explores how robots can use their own sensor data to refine their world models. Congratulations, Antonio! bit.ly/4pbaUSE


An M.I.T. study found that 95% of companies that had invested in A.I. tools were seeing zero return. It jibes with the emerging idea that generative A.I., “in its current incarnation, simply isn’t all it’s been cracked up to be,” @JohnCassidy writes. nyer.cm/FUZwzw8


INVAE + REG = 7.15 FID @ 100k steps.
Original SiT-XL/2 gets 8.3 FID @ 7M steps.
So something like 70+ times faster training?
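A quick back-of-envelope check of that speedup figure, using only the step counts from the tweet (note it is an approximate comparison, since 7.15 and 8.3 FID are not equal quality targets):

```python
sit_steps = 7_000_000   # SiT-XL/2 steps to reach 8.3 FID (per the tweet)
invae_steps = 100_000   # INVAE + REG steps to reach 7.15 FID (per the tweet)
print(sit_steps / invae_steps)  # 70.0, matching the "70+ times faster" estimate
```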

Q: How do we scale robustness/invariance to foundation models like CLIP? A: Test-time search! 🔍 Our new work FoCal finds canonical views to boost robustness to complex transforms (e.g. viewpoint): sutkarsh.github.io/projects/focal 📍 ICML Poster: Tue 11–1:30, E. Hall A-B (E-2203) 🧵 1/5
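The test-time search idea in this thread can be sketched roughly as follows. This is a minimal toy, not the paper's implementation: the candidate transform set (90-degree rotations), the entropy-based confidence score, and the `toy_model` stand-in for CLIP are all illustrative assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def canonicalize(x, model, transforms):
    """Test-time search: try each candidate transform of the input and
    keep the view the model is most confident about (lowest entropy)."""
    best_view, best_score = x, -np.inf
    for t in transforms:
        view = t(x)
        score = -entropy(model(view))  # higher = more confident
        if score > best_score:
            best_view, best_score = view, score
    return best_view

def toy_model(img):
    """Hypothetical stand-in for CLIP: gives a peaked prediction when the
    image is 'upright' (bright top-left pixel), near-uniform otherwise."""
    logits = np.array([2.0 * img[0, 0], 0.5])
    e = np.exp(logits - logits.max())
    return e / e.sum()

# candidate views: the four 90-degree rotations
transforms = [lambda x, k=k: np.rot90(x, k) for k in range(4)]

img = np.zeros((2, 2))
img[0, 0] = 1.0             # "canonical" orientation
rotated = np.rot90(img, 2)  # perturbed input
canonical = canonicalize(rotated, toy_model, transforms)
print(canonical[0, 0])  # 1.0 — the search recovered the upright view
```

The key design point the tweet highlights: nothing here retrains the model; robustness comes purely from searching over input views at inference time and letting the model's own confidence pick the canonical one.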


A biologically-inspired hierarchical convolutional energy model predicts V4 responses to natural videos biorxiv.org/cgi/content/sh… #biorxiv_neursci

what in the actual f*ck this is incredible