AlexC
@Beeg_Brain
PhD Student @ Ecole Centrale 🇫🇷 📩 [email protected]
191 posts
France · Joined November 2021
196 Following · 107 Followers
Pinned Tweet
AlexC @Beeg_Brain
Thrilled to share that our paper “Object-Centric Representations Improve Policy Generalization in Robot Manipulation” was accepted to the Generalizable Prior Workshop at #CoRL 2025 🎉 We’ll be presenting our poster on Sept 27, and I detail the paper below 👇
1 reply · 0 retweets · 9 likes · 495 views
AlexC @Beeg_Brain
@huggingface I was preparing something around Lyon 🇫🇷, and I think this comes at the perfect time!
0 replies · 0 retweets · 0 likes · 525 views
Hugging Face @huggingface
Seeing the worldwide demand, we are kicking off global applications for Hugging Face Builders! If you're passionate about open AI and love bringing people together, this is your invitation to lead ✉️ Learn more about the program and apply to become a Builder ➡️
Hugging Face tweet media
12 replies · 28 retweets · 246 likes · 72.9K views
AlexC retweeted
Robotics Papers @OWW
STORM: Slot-based Task-aware Object-centric Representation for robotic Manipulation. Alexandre Chapin (LIRIS), Emmanuel Dellandréa (LIRIS), Liming Chen (LIRIS). arxiv.org/abs/2601.20381 [cs.RO]
Robotics Papers tweet media
0 replies · 1 retweet · 1 like · 203 views
AlexC retweeted
Robotics Papers @OWW
Spotlighting Task-Relevant Features: Object-Centric Representations for Better Generalization in Robotic Manipulation. Alexandre Chapin (LIRIS), Bruno Machado (LIRIS), Emmanuel Dellandréa (LIRIS), Liming Chen (LIRIS). arxiv.org/abs/2601.21416 [cs.RO]
Robotics Papers tweet media
0 replies · 1 retweet · 2 likes · 184 views
AlexC retweeted
DailyPapers @HuggingPapers
NVIDIA just released C-RADIOv4-H on Hugging Face. It combines SigLIP2, DINOv3 and SAM3 into a single vision foundation model.
DailyPapers tweet media
1 reply · 68 retweets · 554 likes · 23.6K views
AlexC @Beeg_Brain
@RemiCadene Congrats and thank you for the amazing work on LeRobot, it has been a game changer for robotics accessibility! 🙂 Good luck with your new projects, I'm sure something amazing will come out of them 💪
0 replies · 0 retweets · 3 likes · 135 views
Remi Cadene @RemiCadene
I am starting a venture on top of LeRobot!

We’re at a pivotal time. AI is moving beyond the digital to the physical world. Embodied AI will change our surroundings in ways we can barely imagine. This technology holds the potential to empower everyone. It must not be controlled by just a few.

This conviction led me to propose an ambitious open-source AI robotics project to Thom, Clem, and Julien back in 2024. Hugging Face, home to a community of millions of AI builders and a team of experts who brought us transformers, datasets, and the Hugging Face Hub, was the perfect place to launch LeRobot.

I’m incredibly grateful for all the support that allowed me to build LeRobot alongside an amazing team and community. In such a short time, we built one of the most adopted open-source robotics platforms, used by startups, universities, and research labs. It is helping countless people take their first steps in robotics. Together, we’ve even assembled the world’s largest open robotics dataset. And this is only the beginning for LeRobot!

Building on this momentum, I now feel the urgency to start something new on top of LeRobot. It will push the limit of what robots are capable of and commoditize them within society. Like LeRobot, it will start in Paris, leveraging its vibrant international AI scene. Stay tuned!

As LeRobot continues to expand, it’s now in the best possible hands with @AractingiMichel, @pepijn2233 and Steven Palma taking the lead. Watching the team deliver exceptional results over the last weeks has been one of the most rewarding experiences. Their creativity, dedication, and capability to ship fast is proving just how strong the team is today!

I am extremely grateful to the many people who contributed to making LeRobot at Hugging Face and within its powerful community. Many thanks to Thom, Clem, Julien, Simon, Rob, Michel, Pepijn, Steven, Gloria, Adil, Martino, Caroline, Marine, Mishig, Guillaume, Pablo, Lysandre, Arthur, Quentin, Florent, Brigitte, Victor, Marina, Mustafa, Francesco, Jess, Jade, Ville, Leo, Max, Julien, Alexander, Flavien, Raphael, Adina, Tao, Dana, Batu, Olivier, Matthieu, Eugene, Theo, Guilherme, Hynek, Loubna, Clémentine, Merve, Vaibhav, Anna, Jeff, Adrien, Emily, Johanne, Adrien and others. There are too many of you to all be named! Thanks again and see you soon!!! :)

~ Remi
Remi Cadene tweet media
99 replies · 86 retweets · 764 likes · 123.9K views
AlexC retweeted
Dana Aubakirova @danaaubakir
🏆 LIBERO VLA Leaderboard is live — the first-ever leaderboard for Vision-Language-Action models! VLAs are growing fast, so we need efficient, fair evaluation on shared benchmarks and an open community space. Now it’s finally possible 👇
1 reply · 7 retweets · 9 likes · 1.3K views
Francesco Capuano @_fracapuano
btw if anyone wants to contribute an impactful piece of code to @LeRobotHF, the whole team is fairly swamped rn and I could use some help to ship a way to process 2TB of data quickly! DMs are open :)
2 replies · 5 retweets · 24 likes · 4.2K views
AlexC retweeted
AlexC @Beeg_Brain
Would love to hear your thoughts! 👉 Do you see OCRs as the missing link for generalizable robot learning? 👉 Or are there other priors we should be exploring? Excited to discuss at #CoRL! 🗣️
1 reply · 0 retweets · 0 likes · 90 views
AlexC @Beeg_Brain
@DominiqueCAPaul Yeah, of course! (That's actually the main use case for people working with these sensors in my lab)
0 replies · 0 retweets · 1 like · 20 views
Dominique Paul @DominiqueCAPaul
@Beeg_Brain That's a really good point I hadn't even thought of! Also it would probably help a lot with deformable objects.
1 reply · 0 retweets · 0 likes · 37 views
Dominique Paul @DominiqueCAPaul
Why don’t we see pressure sensors in robot grippers? As humans, we know how hard tasks get when our arm falls asleep and we lose touch feedback. I’ve been reading a lot of VLA/robotics papers lately and haven’t come across much on this. Is there a reason?
24 replies · 0 retweets · 128 likes · 17.5K views
AlexC @Beeg_Brain
@DominiqueCAPaul There are indeed really cool use cases for such technologies, such as models more aware of object physics: if you let the gripper "feel" the effect of gravity when grasping something, it can in a sense feel the weight of objects and better adapt the way it interacts
1 reply · 0 retweets · 1 like · 38 views
Dominique Paul @DominiqueCAPaul
@Beeg_Brain Cool! I'm thinking that the mere fact that the robot has feedback on whether it's got contact with the object (vs inferring from pure visual input) could go a long way.
1 reply · 0 retweets · 2 likes · 198 views