Balakumar Sundaralingam

44 posts

@balakumar_

Senior Research Scientist @NVIDIA | Robotics PhD @UUtah | Robot Manipulation

San Francisco, CA · Joined October 2018
501 Following · 313 Followers
Enguerand / VitroBot@enguerandvitro·
Every robotics startup talks about point clouds and neural nets. We're betting on 1 cm³ voxels instead. It's slower, older, and less flashy. But on a glass facade with no texture, it's the only representation that still works. Sometimes the best answer isn't new.
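The representation the post describes can be sketched in a few lines: binning range returns into 1 cm³ occupancy voxels. Unlike photometric features, occupancy needs only range hits, so it degrades gracefully on textureless surfaces as long as the sensor still returns points. A minimal sketch with made-up point values, not any startup's actual pipeline:

```python
import numpy as np

VOXEL = 0.01  # 1 cm voxel edge length, as in the post

def voxelize(points, voxel=VOXEL):
    """points: (N, 3) array in metres -> set of occupied integer voxel indices."""
    idx = np.floor(points / voxel).astype(np.int64)
    return {tuple(v) for v in idx}

# Three illustrative range returns: the first two land in the same voxel.
pts = np.array([[0.001, 0.002, 0.003],
                [0.004, 0.009, 0.001],
                [0.021, 0.000, 0.000]])
occ = voxelize(pts)  # {(0, 0, 0), (2, 0, 0)}
```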
Shun Iwase@s1wase·
This is a huge step forward in explicit 3D reasoning in policy learning! I just wish Gemini 305 were used for learning-based stereo depth perception from wrist cameras instead. They share the same camera dimensions, but Gemini 305 supports dual RGB streaming unlike D405 thanks to its hardware design, and it provides this quality of depth maps in real time. orbbec.com/gemini-305/
Jiafei Duan@DJiafei

Most capable generalist robotics models today are closed or, at best, open-weights. But robotics won't reach its ChatGPT moment without real openness. That GPT moment was built on years of open tools and datasets, such as Python, PyTorch, and ImageNet, that let researchers inspect, reproduce, and build. Today, we're introducing MolmoAct 2: a fully open-source action reasoning model for real-world robotics. We rethought and reshaped everything! 🧵👇

eldaniz@s4movar·
@pablovelagomez1 @balakumar_ As far as I know curobo is motion generation library. It also has multiple “world” options (mesh, voxel) and nvblox is one of them.
Balakumar Sundaralingam@balakumar_·
@pablovelagomez1 CuRoboV2 implements a TSDF mapper for manipulation (fixed workspace, 5mm voxels, multiple cameras). We designed it for performance (all ops on GPU) and memory efficiency (fp16). This leads to 10x faster rgbd->esdf while using 8x less memory (page 31).
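The pipeline the tweet names (depth images → truncated signed distance field → Euclidean signed distance field, with fp16 storage for memory savings) can be sketched with NumPy. This is my own minimal CPU illustration of the definitions, not cuRobo's GPU implementation; the workspace size, camera geometry, and simulated flat-surface depth reading are all made up:

```python
import numpy as np

VOXEL = 0.01       # 1 cm voxels for a small demo grid (the tweet uses 5 mm)
TRUNC = 4 * VOXEL  # truncation band around the surface

# Fixed 0.2 m cubic workspace; camera at the origin looking along +z.
axis = np.arange(0.0, 0.2, VOXEL)                 # 20 voxels per axis
X, Y, Z = np.meshgrid(axis, axis, axis + 0.3, indexing="ij")

# Simulated depth reading: a flat surface 0.4 m from the camera.
depth = 0.4

# TSDF update: signed distance along the viewing ray, truncated, stored
# as fp16 (the memory-saving choice the tweet mentions).
tsdf = np.clip(depth - Z, -TRUNC, TRUNC).astype(np.float16)

# Surface voxels sit at the TSDF zero crossing.
surf = np.argwhere(np.abs(tsdf) < VOXEL / 2)
pts = np.stack([X, Y, Z], axis=-1)
surf_pts = pts[tuple(surf.T)]

# Brute-force unsigned ESDF: distance from every voxel centre to the
# nearest surface voxel (just the definition, done in one vectorized pass).
vox = pts.reshape(-1, 3)
esdf = np.linalg.norm(vox[:, None, :] - surf_pts[None, :, :], axis=-1)
esdf = esdf.min(axis=1).reshape(X.shape)
```

A voxel on the camera side of the surface (e.g. at z = 0.3 m) ends up with an ESDF value of about 0.1 m, its straight-line distance to the plane at 0.4 m.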
Balakumar Sundaralingam@balakumar_·
@kevin_zakka Apologies @kevin_zakka for the incorrect mink capabilities. Revision coming soon with these mink changes: collision, CoM, and local IK w/ collision: no → yes; solver: NLLS → QP; out-of-scope capabilities marked "-" instead of "X"; text reframed on retargeting collision behavior due to GMR's integration.
Kevin Zakka@kevin_zakka·
(1/3) New motion planning library from NVIDIA (cuRoboV2) just dropped making categorically wrong claims about mink. Their paper says mink has no collision avoidance and no center-of-mass support. Both have shipped since July 2024 (day 1).
Balakumar Sundaralingam@balakumar_·
@kevin_zakka We could not find an example in mink that does the human to humanoid link mapping for retargeting. So we had to use GMR's mapping. We use GMR's mapping for all IK solvers (using the same relative weighting across links). Happy to rerun mink if you have better tuned weights.
Kevin Zakka@kevin_zakka·
(2/3) What they actually benchmarked is GMR, a 3rd party retargeting library that uses mink but doesn't enable collision avoidance. They tested someone else's default config, got bad numbers, then concluded the features don't exist. Our G1 humanoid example uses both on the exact robot they test against.
Balakumar Sundaralingam reposted
Nathan Ratliff@robot_trainer·
New work on vectorizing geometric fabric controllers for RL workflows at scale. DeXtreme: Fabric Guided Policies (FGP). Policies are hard on hardware. We need low-level controllers at deployment, which means we need them during training. FGPs increase hardware lifetime, enable quick iteration on training and deploying policies, and allow us to inject useful inductive bias into the system.
Balakumar Sundaralingam@balakumar_·
Our code for CUDA accelerated motion generation is out! Supercharge your workflows with fast batched robotics modules, including kinematics, collision queries, optimization, and motion planning. #PyTorch #Nvidia #Robots
NVIDIA AI Developer@NVIDIAAIDev

🤖 cuRobo, a new #CUDA accelerated motion generation toolkit, can solve complex #robotics problems in milliseconds. ⚡ It includes implementations of kinematics, collision checking, numerical and trajectory optimization, and more. 👀 #NVIDIAResearch code nvda.ws/3MxDmNG

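The "fast batched robotics modules" idea above is about vectorizing many kinematics or collision queries into one call instead of looping. A toy illustration of the batching pattern, not cuRobo's API: forward kinematics for a 2-link planar arm, evaluated over thousands of joint configurations at once (NumPy on CPU here; cuRobo does the same style of thing with PyTorch tensors on GPU). Link lengths are made-up values:

```python
import numpy as np

L1, L2 = 0.5, 0.3  # illustrative link lengths in metres

def fk_batch(q):
    """q: (B, 2) joint angles -> (B, 2) end-effector xy positions."""
    q1, q12 = q[:, 0], q[:, 0] + q[:, 1]
    x = L1 * np.cos(q1) + L2 * np.cos(q12)
    y = L1 * np.sin(q1) + L2 * np.sin(q12)
    return np.stack([x, y], axis=-1)

# A 64x64 grid of configurations solved in one vectorized call.
g = np.linspace(-np.pi, np.pi, 64)
qs = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
ee = fk_batch(qs)  # (4096, 2) end-effector positions
```

The payoff of this layout is that adding a batch dimension costs almost nothing per extra query, which is what makes batched planning and collision checking fast on a GPU.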
Balakumar Sundaralingam reposted
Ankur Handa@ankurhandos·
DeXtreme is our new work on scaling sim-to-real for contact-rich manipulation with vision-based state estimation on a robot hand, using the infrastructure we have been developing with Isaac Gym over the past year. arxiv.org/abs/2210.13702 dextreme.org
Balakumar Sundaralingam reposted
Ankur Handa@ankurhandos·
Factory: Fast Contact for Robotic Assembly, our recent work, is a set of simulation methods & robot learning tools for contact-rich interactions for robotic assembly. It will be presented at RSS next month. Paper: arxiv.org/abs/2205.03532 Website: sites.google.com/nvidia.com/fac…
Balakumar Sundaralingam reposted
Andreas Orthey@andreas_orthey·
Update on robotics conferences due to coronavirus: RSS -> virtual conference; WAFR -> postponed by one year (to June 2021); ICRA -> decision on April 6th (virtual vs. postponed to end of 2020); IROS -> as planned (Oct 2020); Humanoids -> as planned (Dec 2020)
Balakumar Sundaralingam reposted
Rodney Brooks@rodneyabrooks·
Spent the last two days crisscrossing Mumbai for meetings. All those autonomous miles in Chandler, Arizona are easily going to generalize for AV roll outs here. Yeah, right.
Balakumar Sundaralingam reposted
Sebastian Höfer@hoeferse·
I saw this robotics scientist working today. Commenting his code. Writing unit tests. Testing in simulation first. Operating the robot with his hand on the e-stop, observing carefully. Like a psychopath.
Balakumar Sundaralingam@balakumar_·
@faraz_r_khan You could also add AR tags / motion-capture markers to the corners of the lidar and base and use those to get the transform.
Faraz Khan@faraz_r_khan·
@balakumar_ Good call. I guess there's no easy way out. My idea involved localization using Hector maps and then making the robot move along a known straight line, then computing the angle between the SLAM pose and the direction I travelled to get the transform 🤔
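The straight-line idea in this thread reduces to a small calculation: drive the base along a known direction, record the SLAM/lidar pose track, and take the angle between the commanded travel direction and the direction the track actually moved as the lidar's yaw offset. A sketch of that calculation (my own formulation, not a ROS tool; it recovers yaw only, not translation, and assumes negligible drift over the run):

```python
import numpy as np

def yaw_offset(slam_xy, travel_dir):
    """slam_xy: (N, 2) SLAM positions; travel_dir: (2,) commanded straight-line
    direction in the base frame. Returns the yaw offset in radians, wrapped
    to (-pi, pi]."""
    d = slam_xy[-1] - slam_xy[0]          # net displacement seen by SLAM
    ang_slam = np.arctan2(d[1], d[0])
    ang_cmd = np.arctan2(travel_dir[1], travel_dir[0])
    return (ang_slam - ang_cmd + np.pi) % (2 * np.pi) - np.pi

# Example: the robot drives straight along +x of its base, but the SLAM
# track climbs at 30 degrees -> the lidar is yawed 30 degrees w.r.t. base.
track = np.outer(np.linspace(0.0, 1.0, 50),
                 [np.cos(np.pi / 6), np.sin(np.pi / 6)])
yaw = yaw_offset(track, np.array([1.0, 0.0]))  # ~0.5236 rad
```

Using only the endpoints keeps it robust to noise on individual poses; fitting a line to the whole track would be the next refinement.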
Faraz Khan@faraz_r_khan·
People of ROS: is there a cool way to figure out the transform between my robot base (base_link) and my lidar? How is this generally done if it's non-zero? #Robotics #ROS