Keshav Badrinath
13 posts

Keshav Badrinath
@keshavbadrinath
cs @ uiuc | incoming creative genai @meta | making stuff @sigrobotics | vla & world model research
Bay & Chicago · Joined March 2021
81 Following · 28 Followers

summer internship update:
I’ll be working on VLAs at @droyd_robotics!
Will be researching how different end effectors (to plug cables in) affect the policy.
Might end up writing a paper on it.
If you have any leads/have done similar work lmk!

Excited to finally reveal this project: Jay and I at @sigrobotics have been working on this tendon-driven hard-soft hand for quite a while!
Unnat Jain@unnatjain2010
CRAFT hand🫳 1. Achieves all 33/33 dexterous grasps > 2x-20x $$ hands! 2. < $600 3. Handles fragile objects 4. Durable under contact 5. Open-sourced craft-hand.github.io @leo_lin6 & @shivanshpatel35 (on market; hire him🚀) will happily share anything else that you may need. Details in 🧵 🌟 Big shout out to @kenny__shaw (Leap & v2), @irmakkguzey (RUKA), @orcahand (ORCA), and many others who helped build this open research community. Thank you!

Our team at FAIR, @AIatMeta is looking for a 2026 Summer Intern to work on video pretraining, with related interests in video generation, world models, or robotics.
Given the current timing, we’re especially happy to hear from PhD students who were prioritizing research last year and may not have had much time for ad-hoc internship interview practice (I was definitely in that situation during my PhD 😆). Feel free to DM me.

Had a lot of fun recreating the 13-year-old DeepMind paper on playing Atari with deep RL entirely from scratch!
Here, an agent learns from only raw video frames to match human experts in 29/49 different games using the same hyperparameters! The most interesting thing on revisiting it is how the authors motivate experience replay and DQN from a neuroscientific/biological perspective, even though--
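The experience-replay idea the tweet refers to can be sketched minimally; this is a hypothetical illustration (class and parameter names are my own, not from the recreation): transitions are stored in a fixed-capacity buffer and sampled uniformly at random, which breaks the temporal correlation between consecutive frames that would otherwise destabilize Q-learning.

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay: store (s, a, r, s', done) transitions and
    sample random minibatches to decorrelate consecutive frames."""

    def __init__(self, capacity=100_000):
        # deque with maxlen evicts the oldest transition once full (FIFO)
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random sample without replacement from stored transitions
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

In the original DQN setup, minibatches drawn this way feed the gradient update for the Q-network while the buffer keeps filling from ongoing play.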

arxiv.org/pdf/2602.06001
cool paper on world models learning tactile relationships between objects. world models can be very prone to generative errors and frame-to-frame visual inconsistencies, and this helps with object permanence and more robust, hallucination-free generations.
Keshav Badrinath retweeted

Excited to announce we @sigrobotics got 1st at the Embodied AI Hackathon hosted by @seeedstudio x @nvidia x @huggingface !
Demo vid of our robot Performative, trained on a fine-tuned GR00T N1.5 model with inference running on a Jetson Thor :)
Had a blast, can't wait to be back!

@iamRezaSayar @sigrobotics @seeedstudio @NVIDIARobotics @VectorWang2 we’re thinking it’s probably mostly because the inference service is running on a different device, so network latency is causing this. we also rerecorded all our data after this, because some of the jerkiness can definitely be attributed to jerky teleop during data collection

@sigrobotics @seeedstudio @NVIDIARobotics @VectorWang2 awesome! 👏🏼 question on the slowness/jerkiness though: how much (if any) of this is due to inference being slow / cloud delay on your chosen hardware? i know some of it may be solved with smoother teleop while recording, and/or upgrading the servos, but wondering about the model side
Keshav Badrinath retweeted

Running a GR00T N1.5 model for matcha pouring. Getting ready for the hackathon @seeedstudio
@NVIDIARobotics @VectorWang2



