Julia Kim
@_juliakeem
75 posts

Co-founder & CEO @OpenGraph_Labs, building touch intelligence | FFC W26 @pearvc | Zhejiang University

Korea · Joined October 2022
157 Following · 970 Followers
Pinned Tweet
Julia Kim @_juliakeem
Data can’t just be outsourced 🤯 To iterate fast, robotics teams must own their data infrastructure. Introducing SyncField: turnkey data infrastructure for in-the-wild data collection (best suited for UMI-style and embodied-human data). #Robotics #UMI #DataCollection
8 replies · 12 reposts · 137 likes · 13.2K views
Julia Kim reposted
OpenGraph Labs 🧤 @OpenGraph_Labs
Excited to share that @OpenGraph_Labs has been accepted into @NVIDIA’s Inception Program 🚀 Our mission is to build reliable infrastructure for multimodal data capture, powering the next generation of robotics & world models 🌎
0 replies · 3 reposts · 15 likes · 749 views
Julia Kim reposted
Jerry Han @JerryHan_og
World models can predict the next frame. They can't predict the next touch. That's the gap visuo-tactile world models will close. Is the robot gripping hard enough? Is the surface rigid or soft? When exactly does contact begin and end? Vision doesn't know. Tactile does. We built @OpenGraph_Labs to capture what cameras miss. Egocentric RGB × 5-finger multi-taxel tactile gloves. Frame-synced. Calibrated. In-the-wild. No lab setups. No scripted pick-and-place. Just humans doing real tasks in real stores. Watch the exact moment contact happens. The pressure map lights up in sync. Every touch. Every frame. 👇
4 replies · 15 reposts · 117 likes · 12K views
Julia Kim @_juliakeem
Yeah, that’s true. Their gloves and the data collected from them are compatible with their robots. I’m also betting on human data, but only when it’s captured as high-quality multimodal data. What I’m working on is a multimodal data capture tool that helps collect high-quality, in-the-wild data while keeping different sensors time-synced and handling issues like sensor drift automatically.
0 replies · 0 reposts · 0 likes · 39 views
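A minimal sketch of the kind of automatic drift handling Julia describes, under the common assumption that a sensor clock drifts roughly linearly against the host clock; the function names here are hypothetical, not SyncField's API.

    import numpy as np

    def fit_clock_drift(reference_ts, sensor_ts):
        # Fit a linear clock model sensor_t ~= a * reference_t + b from
        # paired event timestamps (e.g., sync events seen by both clocks).
        a, b = np.polyfit(reference_ts, sensor_ts, 1)
        return a, b

    def to_reference_clock(sensor_ts, a, b):
        # Map raw sensor timestamps back onto the reference clock.
        return (np.asarray(sensor_ts) - b) / a

With a clean linear fit, even a clock drifting by tens of parts per million over a long recording can be remapped before any sample matching is attempted.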
Chris Paxton @chris_j_paxton
@_juliakeem @sundayrobotics Yeah, I tend to agree. Although it seems approaches that can leverage more subpar data (human videos) have some advantages too.
1 reply · 0 reposts · 0 likes · 55 views
Bercan @bercankilic
@_juliakeem Very much needed infrastructure!!!
1 reply · 0 reposts · 5 likes · 615 views
Julia Kim @_juliakeem
Robotics & world models require real-world multi-sensory data at scale. But collecting vision, tactile, and IMU data simultaneously is much harder than it sounds.

Each sensor runs at different frequencies, latencies, and clock domains. Integrating them means dealing with hardware quirks, driver inconsistencies, and constant timestamp drift.

This is fundamentally a synchronization problem. And it gets harder as more modalities are added and tasks become longer-horizon, because temporal misalignment compounds: the model loses the causal structure of what happened and when.

We learned this the hard way building our own pipelines. That experience led us to build a unified platform for multimodal capture, one that handles time alignment, hardware abstraction, and data integrity from day one.

@OpenGraph_Labs built "SyncField," a multimodal data capture system, which:
▪️ Supports any hardware configuration (multiple cameras + tactile + IMU)
▪️ Automatically synchronizes all modalities
▪️ Outputs fully time-aligned data, ready to train on

It already powers humanoid robotics teams, data collection companies, and university research labs. If your team is collecting multimodal robotics data, we'd love to talk. (Now onboarding teams one by one.)
17 replies · 26 reposts · 295 likes · 16.8K views
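As an illustration of what "fully time-aligned and ready to train on" can look like at the data-structure level, here is a minimal sketch that bundles each camera frame with its window of high-rate tactile samples and the nearest IMU reading. It assumes all timestamps have already been mapped onto one drift-corrected clock; every name is hypothetical, not SyncField's actual output format.

    import numpy as np

    def align_to_frames(frame_ts, tactile_ts, tactile, imu_ts, imu):
        # Bundle each camera frame with the tactile samples falling in
        # that frame's period and the single nearest IMU reading.
        half = np.median(np.diff(frame_ts)) / 2  # half a frame period
        records = []
        for t in frame_ts:
            lo = np.searchsorted(tactile_ts, t - half)
            hi = np.searchsorted(tactile_ts, t + half)
            k = int(np.argmin(np.abs(imu_ts - t)))
            records.append({
                "frame_t": t,
                "tactile": tactile[lo:hi],  # ~33 samples at 1 kHz vs 30 fps
                "imu": imu[k],
            })
        return records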
Asad Memon @_asadmemon
I am doubling down on my mission to create the smallest VIO module. Here is the latest revision I am working on:
- Global shutter camera + IMU
- 0.8 W
- Outputs pose @ 15 Hz via USB or UART
44 replies · 44 reposts · 894 likes · 37.3K views
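For readers wondering what consuming such a module's serial output might look like, here is a hedged sketch using pyserial. The CSV packet layout (t,x,y,z,qw,qx,qy,qz) and the port settings are assumptions for illustration only, not the module's documented protocol.

    import serial  # pyserial

    def read_poses(port="/dev/ttyUSB0", baud=115200):
        # Hypothetical reader for a UART pose stream at ~15 Hz.
        with serial.Serial(port, baud, timeout=1.0) as ser:
            while True:
                line = ser.readline().decode("ascii", errors="ignore").strip()
                if not line:
                    continue  # timeout or empty line, keep polling
                t, x, y, z, qw, qx, qy, qz = map(float, line.split(","))
                yield {"t": t, "pos": (x, y, z), "quat": (qw, qx, qy, qz)}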
Julia Kim @_juliakeem
With SyncField, you control your data collection from the ground up. Own the infrastructure. Track everything from day one. Scale data quantity on top of a unified infrastructure.
1 reply · 0 reposts · 4 likes · 874 views
Julia Kim reposted
Jerry Han @JerryHan_og
VLMs see everything. Feel nothing. VLMs annotate what looks like contact. Tactile sensors verify what actually is contact. We ran VLM annotation on real manipulation demos. It labeled a grasp as "approach." Skipped release phases entirely. Hallucinated state transitions that never occurred. 6 out of 36 action phases wrong. 17% of your training data, corrupted. Why? Pixels don't know when a fingertip touches a surface. Pixels don't know when grip pressure hits zero. Tactile sensors do. So we built a pipeline that catches every error automatically. Tactile evidence validates every contact transition. Wrong labels corrected, missing phases inserted. Not faster labeling. Truthful labeling. This is one of the core problems we're solving at @OpenGraph_Labs
2 replies · 5 reposts · 43 likes · 4.3K views
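A minimal sketch of the tactile-evidence idea in this tweet: detect contact intervals from pressure with hysteresis, then flag phase labels that contradict them. The thresholds, label names, and functions are hypothetical stand-ins, not OpenGraph's actual pipeline.

    def contact_intervals(ts, pressure, on=0.5, off=0.3):
        # Detect contact onset/offset from summed taxel pressure, with
        # hysteresis so noise near the threshold doesn't chatter.
        intervals, start = [], None
        for t, p in zip(ts, pressure):
            if start is None and p > on:
                start = t
            elif start is not None and p < off:
                intervals.append((start, t))
                start = None
        if start is not None:
            intervals.append((start, ts[-1]))
        return intervals

    def label_consistent(label, t0, t1, intervals):
        # Does a phase label agree with tactile evidence in [t0, t1]?
        touching = any(s < t1 and e > t0 for s, e in intervals)
        if label in ("grasp", "hold", "release"):
            return touching      # these phases require some contact
        if label == "approach":
            return not touching  # approach should precede contact
        return True

A VLM-labeled "approach" that overlaps a detected contact interval, like the mislabeled grasp described above, would fail this check and be queued for correction.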
Julia Kim reposted
Jerry Han @JerryHan_og
Robots can't learn if their eyes and hands are out of sync. A 30fps camera and a 1kHz tactile sensor don't speak the same language. Multiple cameras, multiple sensors, all on different clocks at different rates. Jitter from USB and OS scheduling. Drift that compounds every second of recording. We built a multi-modal sync pipeline that aligns all of it to ±2.5ms. Automatically. Every frame matched. Zero sensor samples lost. No hardware triggers needed. Sensor-agnostic. Hardware-agnostic. Just plug in and record. Physical AI needs real hand-eye coordination. Not approximate, precise. We're building this at OpenGraph. opengraphlabs.com
5 replies · 39 reposts · 289 likes · 20.3K views
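One standard building block behind this kind of software-only sync is nearest-timestamp matching after drift correction. A minimal numpy sketch, with the ±2.5 ms tolerance taken from the tweet and everything else assumed:

    import numpy as np

    def match_nearest(frame_ts, tactile_ts, tol=0.0025):
        # For each camera frame, find the index of the nearest tactile
        # sample and check it lands within +/-2.5 ms. Assumes both
        # arrays are sorted and already on one reference clock.
        idx = np.searchsorted(tactile_ts, frame_ts)
        idx = np.clip(idx, 1, len(tactile_ts) - 1)
        left = tactile_ts[idx - 1]
        right = tactile_ts[idx]
        idx -= (frame_ts - left) < (right - frame_ts)  # step back if left is closer
        ok = np.abs(tactile_ts[idx] - frame_ts) <= tol
        return idx, ok

Matching on corrected timestamps rather than arrival order is what makes USB and OS scheduling jitter tolerable without hardware trigger lines.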
Julia Kim reposted
OpenGraph Labs 🧤 @OpenGraph_Labs
Tactile feedback is critical for safe and reliable real-world robot deployment 🤚🧤🤖 Really impressive work from @WiYoungsun demonstrating how a shared latent space can bridge tactile signals from wearable gloves to robot embodiments
Youngsun Wi @WiYoungsun

Dexterous hands vary widely—so do tactile modalities. 🖐️🌈 Our vision on tactile human-to-robot transfer: 🔓 Not tied to specific hardware ♻️ Reuse human tactile demos across embodiments Presenting TactAlign, a cross-sensor tactile alignment for cross-embodiment policy transfer.

0 replies · 1 repost · 5 likes · 951 views
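A sketch of one common way to learn the shared latent space the quoted tweet describes: a symmetric InfoNCE loss that pulls paired glove/robot tactile embeddings together. This is a generic contrastive-alignment recipe under stated assumptions, not necessarily TactAlign's actual objective.

    import torch
    import torch.nn.functional as F

    def alignment_loss(z_glove, z_robot, temperature=0.07):
        # z_glove, z_robot: (B, D) embeddings of the same B touch events,
        # encoded from two different tactile sensors. Paired rows are
        # positives; all other rows in the batch are negatives.
        z_g = F.normalize(z_glove, dim=-1)
        z_r = F.normalize(z_robot, dim=-1)
        logits = z_g @ z_r.T / temperature   # (B, B) cosine similarities
        labels = torch.arange(z_g.shape[0], device=z_g.device)
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.T, labels))

Once both encoders map into the same space, a policy trained on glove-side embeddings can, in principle, consume robot-side embeddings at deployment, which is the cross-embodiment reuse the tweet is pointing at.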