Tapomayukh "Tapo" Bhattacharjee@TapoBhat
Introducing CLAMP: a device, dataset, and model that bring large-scale, in-the-wild multimodal haptics to real robots. Haptic/tactile data is more than just force or surface texture, and capturing this multimodal haptic information can be useful for robot manipulation.
Check out @pranavnnt’s work “CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception” at #CoRL2025.
The CLAMP device is an open-source, low-cost (<$200), portable (0.59 kg) tool that senses five haptic modalities alongside vision and language. Users can take it home and log haptic data via its PiTFT screen and buttons.
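What might that on-device logging look like? A minimal sketch below, with mocked sensors and hypothetical names (not the actual CLAMP firmware): each capture bundles the haptic channels with a camera frame path and a user-entered label.

```python
# Minimal sketch of a multimodal haptic logging loop (hypothetical names,
# mocked sensor reads; not the actual CLAMP firmware).
import json
import random
import time
from pathlib import Path

LOG_PATH = Path("clamp_log.jsonl")

def read_haptic_sample() -> dict:
    """Placeholder for the device's five haptic channels (mocked with noise)."""
    return {
        "force": random.gauss(2.0, 0.3),         # N
        "vibration": random.gauss(0.0, 0.05),    # accelerometer, g
        "temperature": random.gauss(24.0, 0.5),  # deg C
        "pressure": random.gauss(101.3, 0.1),    # kPa
        "proximity": random.random(),            # normalized
    }

def log_capture(object_label: str, image_path: str) -> None:
    """Append one multimodal data point as a JSON line."""
    record = {
        "t": time.time(),
        "haptics": read_haptic_sample(),
        "image": image_path,          # path to the paired camera frame
        "language": object_label,     # free-form text entered by the user
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_capture("ceramic mug", "frames/000042.jpg")
```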
As far as we know, the CLAMP dataset is the largest multimodal haptic dataset in the robotics literature, with a total of 12.3 million data points from 5357 objects in 41 homes, collected by 16 CLAMP devices.
The CLAMP model is a material recognition model that outperformed GPT-4o, CLIP, and PG-VLM in our experiments and generalized to haptic data from three robot embodiments (WidowX and Franka arms with different grippers).
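For intuition, here is a minimal CLIP-style sketch of zero-shot material recognition from a haptic window. The encoder and class embeddings are random stand-ins (the real CLAMP model, inputs, and training are described in the paper):

```python
# CLIP-style zero-shot material recognition from haptic signals.
# Stand-in encoder with random weights; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

MATERIALS = ["metal", "plastic", "wood", "ceramic", "fabric"]

class HapticEncoder(nn.Module):
    """Placeholder: maps a flattened haptic window to a unit embedding."""
    def __init__(self, in_dim: int = 5 * 100, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x.flatten(1)), dim=-1)

# Stand-in "text embeddings" for each material class; a real system
# would use a trained text encoder, as in CLIP.
text_emb = F.normalize(torch.randn(len(MATERIALS), 128), dim=-1)

encoder = HapticEncoder()
haptic_window = torch.randn(1, 5, 100)  # 5 channels x 100 timesteps
with torch.no_grad():
    h = encoder(haptic_window)           # (1, 128) unit embedding
    logits = h @ text_emb.T              # cosine similarity to each class
    pred = MATERIALS[int(logits.argmax(dim=-1))]
print(f"predicted material: {pred}")
```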
A fine-tuned CLAMP model enabled a 7-DoF Franka Panda to robustly perform three real-world manipulation tasks involving clutter, occlusion, and visual ambiguity.
@EmpriseLab @Cornell_CS @corl_conf
🗣️ Spotlight presentation at #CoRL2025 on Sep 30 (spotlight session 5)
📊 Poster session at #CoRL2025 on Sep 30 (poster session 3)
🌐 Website: emprise.cs.cornell.edu/clamp/
📄 Paper: arxiv.org/pdf/2505.21495
Check this thread for more details (1/6) 🧵