Stephen James

432 posts


@stepjamUK

CEO @Neuracore_AI | Assistant Professor @imperialcollege | ex-Director of Dyson Robot Learning Lab | Postdoc @UCBerkeley w/ @pabbeel | PhD ICL w/ @ajdDavison

London, England · Joined January 2010
175 Following · 6.8K Followers
Pinned Tweet
Stephen James@stepjamUK·
𝗔𝗳𝘁𝗲𝗿 𝟭𝟬+ 𝘆𝗲𝗮𝗿𝘀 𝗶𝗻 𝗿𝗼𝗯𝗼𝘁 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴, from my PhD at Imperial to Berkeley to building the Dyson Robot Learning Lab, one frustration kept hitting me: 𝗪𝗵𝘆 𝗱𝗼 𝗜 𝗵𝗮𝘃𝗲 𝘁𝗼 𝗿𝗲𝗯𝘂𝗶𝗹𝗱 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗼𝘃𝗲𝗿 𝗮𝗻𝗱 𝗼𝘃𝗲𝗿 𝗮𝗴𝗮𝗶𝗻?

𝗧𝗵𝗲 𝗽𝗮𝘁𝘁𝗲𝗿𝗻 𝗜 𝗸𝗲𝗽𝘁 𝘀𝗲𝗲𝗶𝗻𝗴:
• New robotics team starts
• Spends 6 months building a data collection pipeline
• Spends another 3 months debugging synchronization issues
• Finally starts collecting task-specific data
• Realizes their infrastructure choices limit their flexibility
• Starts over

𝗧𝗵𝗶𝘀 𝗶𝘀 𝘁𝗵𝗲 𝘄𝗵𝗼𝗹𝗲 𝗽𝗼𝗶𝗻𝘁 𝗼𝗳 𝗿𝗼𝗯𝗼𝘁 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴: Robot learning is fundamentally data-driven. Whether you're picking strawberries or assembling electronics, the core infrastructure needs are identical. That's actually why I was so interested in pursuing data-driven robotics over a decade ago.

𝗬𝗼𝘂 𝗮𝗹𝘄𝗮𝘆𝘀 𝗻𝗲𝗲𝗱:
• Multi-sensor data synchronization across different frequencies
• Flexible storage that works with future algorithms
• Visualization tools to understand your data
• The ability to experiment with different temporal resolutions
• Robust logging that captures everything you might need later

The trend towards AI in robotics is growing, with robots needing to process and analyze large amounts of sensor data to manage variability and unpredictability in real environments.

𝗕𝘂𝘁 𝗲𝘃𝗲𝗿𝘆 𝘁𝗲𝗮𝗺 𝗯𝘂𝗶𝗹𝗱𝘀 𝘁𝗵𝗶𝘀 𝗳𝗿𝗼𝗺 𝘀𝗰𝗿𝗮𝘁𝗰𝗵. Imagine if every web developer had to build their own database, web server, and deployment pipeline before writing their first line of application code.

𝗧𝗵𝗶𝘀 𝗶𝘀 𝘄𝗵𝘆 𝗜 𝗳𝗼𝘂𝗻𝗱𝗲𝗱 𝗡𝗲𝘂𝗿𝗮𝗰𝗼𝗿𝗲. Instead of every robotics team spending months on infrastructure, we provide the common tools that let you go from "I have a robot" to "I'm shipping intelligent robot behaviors" in days, not months.

𝗧𝗵𝗲 𝗿𝗲𝗮𝗹 𝗶𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻 𝗶𝗻 𝗿𝗼𝗯𝗼𝘁𝗶𝗰𝘀 𝘄𝗼𝗻'𝘁 𝗰𝗼𝗺𝗲 𝗳𝗿𝗼𝗺 𝗲𝘃𝗲𝗿𝘆𝗼𝗻𝗲 𝗿𝗲𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗽𝗹𝘂𝗺𝗯𝗶𝗻𝗴. 𝗜𝘁'𝗹𝗹 𝗰𝗼𝗺𝗲 𝗳𝗿𝗼𝗺 𝘁𝗲𝗮𝗺𝘀 𝘄𝗵𝗼 𝗰𝗮𝗻 𝗳𝗼𝗰𝘂𝘀 𝗲𝗻𝘁𝗶𝗿𝗲𝗹𝘆 𝗼𝗻 𝘄𝗵𝗮𝘁 𝗺𝗮𝗸𝗲𝘀 𝘁𝗵𝗲𝗶𝗿 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝘂𝗻𝗶𝗾𝘂𝗲.

Robot learning shouldn't be bottlenecked by infrastructure. It should be bottlenecked by creativity.
What's the longest you've spent building infrastructure before getting to the actual robotics problem you wanted to solve?
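[Editor's note] The first item in the list above — synchronizing sensors that run at different frequencies — can be sketched concretely. This is a minimal, hypothetical Python illustration (the rates, skew tolerance, and function names are assumptions, not Neuracore's API): pair each frame from a slow sensor with the nearest reading from a faster one, and drop pairs whose skew is too large to trust.

```python
# Hypothetical sketch: align a 30 Hz camera stream with a 100 Hz joint
# encoder stream by pairing each camera timestamp with the nearest
# joint-state timestamp. Rates and tolerance are illustrative only.
from bisect import bisect_left

def nearest(sorted_ts, t):
    """Return the timestamp in sorted_ts closest to t."""
    i = bisect_left(sorted_ts, t)
    candidates = sorted_ts[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda x: abs(x - t))

def align(camera_ts, joint_ts, max_skew=0.02):
    """Pair each camera frame with its nearest joint reading,
    dropping pairs whose skew exceeds max_skew seconds."""
    pairs = []
    for t in camera_ts:
        j = nearest(joint_ts, t)
        if abs(j - t) <= max_skew:
            pairs.append((t, j))
    return pairs

camera = [i / 30 for i in range(5)]    # 30 Hz timestamps
joints = [i / 100 for i in range(17)]  # 100 Hz timestamps
print(align(camera, joints))
```

Real pipelines also have to handle clock offsets between devices, which is exactly the kind of plumbing the post argues every team ends up rebuilding.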
19 replies · 87 reposts · 732 likes · 52.9K views
Stephen James@stepjamUK·
If you're in academia, sign up now for free! And Star us on GitHub → github.com/NeuracoreAI/ne… (Everyone who stars over the next few days gets a special treat added to their account.)
Neuracore@Neuracore_AI

𝗘𝗮𝗿𝗹𝗶𝗲𝗿 𝘁𝗵𝗶𝘀 𝘄𝗲𝗲𝗸, 𝘄𝗲 𝗮𝗻𝗻𝗼𝘂𝗻𝗰𝗲𝗱 𝘄𝗲'𝗿𝗲 𝗲𝘅𝗽𝗮𝗻𝗱𝗶𝗻𝗴 𝗡𝗲𝘂𝗿𝗮𝗰𝗼𝗿𝗲 𝘁𝗼 𝗲𝘃𝗲𝗻 𝗺𝗼𝗿𝗲 𝗶𝗻𝘀𝘁𝗶𝘁𝘂𝘁𝗶𝗼𝗻𝘀!

We started with 60 academic institutions. The response from the research community was overwhelming - labs from every corner of the world were asking to join. So we opened the doors across the US, UK, Germany, China, India, France, Italy, Japan, Australia, South Korea, Canada, Scandinavia, Latin America, Africa, and beyond.

Neuracore is a data foundation purpose-built for robot learning. Capture, store, visualise, and train on high-fidelity multimodal robotics data, in a cloud-native platform built for large-scale learning.

Completely free for academia. No waitlist. No approval process. If you have an academic email at a top research university, you can sign up today.

Robot learning shouldn't be bottlenecked by data infrastructure. We built this to fix that.

⭐ Star us on GitHub → lnkd.in/ej4xse5p (Everyone who stars over the next few days gets a special treat added to their account.)

Then access your free account at neuracore.com

0 replies · 3 reposts · 20 likes · 2.9K views
Stephen James@stepjamUK·
𝗜𝗻 𝘀𝗶𝗺𝘂𝗹𝗮𝘁𝗶𝗼𝗻, 𝘁𝗵𝗲 𝗗𝗶𝗳𝗳𝘂𝘀𝗶𝗼𝗻 𝗣𝗼𝗹𝗶𝗰𝘆 𝘄𝗮𝘀 𝗳𝗹𝗮𝘄𝗹𝗲𝘀𝘀. 𝗜𝗻 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁, 𝗶𝘁 𝘄𝗮𝘀 𝗳𝗮𝗶𝗹𝗶𝗻𝗴 𝗺𝗼𝘀𝘁 𝗼𝗳 𝗶𝘁𝘀 𝘁𝗮𝘀𝗸𝘀.

I’ve talked to dozens of teams who have the same story: the Diffusion Policy was perfect in sim, but it’s failing 60% of its tasks in the real world. The immediate reaction is always to assume the model isn't "smart" enough. We think we need more parameters, more compute, or just a massive pile of random new data.

𝗕𝘂𝘁 𝗵𝗲𝗿𝗲’𝘀 𝘁𝗵𝗲 𝗿𝗲𝗮𝗹𝗶𝘁𝘆: 𝗬𝗼𝘂 𝗰𝗮𝗻'𝘁 𝗳𝗶𝘅 𝘄𝗵𝗮𝘁 𝘆𝗼𝘂 𝗰𝗮𝗻'𝘁 𝘀𝗲𝗲.

Most robotics teams are flying blind. They treat their training data like a black box: a directory full of thousands of files they *hope* contain the answer. When the robot stutters or misses a grasp, they have no way of knowing if the model actually saw that lighting condition during training, or if the object's starting pose was a total "blind spot" in the distribution.

If you’re debugging a policy by just throwing more random trajectories at it, you’re not engineering, you’re gambling. Visibility is the first step to high-fidelity control.

We built the @Neuracore_AI Dataset Viewer because we were tired of scrubbing through raw, disconnected logs to find a single failure point. We wanted a way to turn those files into a map. It brings vision, joint states, and actions into a single, high-fidelity timeline where everything is perfectly synchronized. It allows you to see exactly where the proprioception drifted from the visual truth, or identify distribution gaps as physical maps rather than abstract numbers.

We open-sourced the viewer because we think every researcher should have access to these tools.

Is your team's time best spent building custom playback infrastructure from scratch, or would you rather spend it solving the actual physics of manipulation?

#Neuracore #Robotlearning
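[Editor's note] The "blind spot" idea in the post above lends itself to a small worked example. This is a hedged sketch, not the Dataset Viewer's actual logic: bin the object starting poses seen in training data and flag workspace regions with little or no coverage. All names, bin sizes, and thresholds here are illustrative assumptions.

```python
# Hypothetical blind-spot check: histogram (x, y) object start poses
# into workspace bins and report under-represented bins.
from collections import Counter

def coverage_gaps(poses, bin_size=0.05, min_count=3):
    """poses: list of (x, y) object start positions in metres.
    Returns (bin counts, set of bins seen fewer than min_count times)."""
    counts = Counter(
        (round(x / bin_size), round(y / bin_size)) for x, y in poses
    )
    sparse = {b for b, c in counts.items() if c < min_count}
    return counts, sparse

# One well-covered region and one barely-seen region:
train_poses = [(0.10, 0.20)] * 5 + [(0.30, 0.40)]
counts, sparse = coverage_gaps(train_poses)
print(sparse)  # the bin around (0.30, 0.40) is a likely blind spot
```

A viewer that renders these sparse bins over the workspace is one way "distribution gaps as physical maps rather than abstract numbers" could look in practice.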
1 reply · 11 reposts · 75 likes · 6.3K views
Stephen James@stepjamUK·
We started with 60 institutions. We're now open to 1,000+. Seeing researchers from every corner of the world build on Neuracore is amazing. The majority of academic labs across the globe can sign up for Neuracore completely free. No waitlist, no approval process. Just an academic email and you're in. Make sure to show us some love and star us on GitHub: github.com/NeuracoreAI/ne… (all new stargazers over the next few days will get a special treat added to their account)
Neuracore@Neuracore_AI

𝗔 𝗳𝗲𝘄 𝗺𝗼𝗻𝘁𝗵𝘀 𝗮𝗴𝗼 𝘄𝗲 𝗼𝗽𝗲𝗻𝗲𝗱 𝗡𝗲𝘂𝗿𝗮𝗰𝗼𝗿𝗲 𝘁𝗼 𝟲𝟬 𝗮𝗰𝗮𝗱𝗲𝗺𝗶𝗰 𝗶𝗻𝘀𝘁𝗶𝘁𝘂𝘁𝗶𝗼𝗻𝘀.

Neuracore is a data foundation built to accelerate robot learning. Researchers can capture, store, visualize, and train on high-fidelity multimodal robotics data in a cloud-native platform purpose-built for large-scale learning - completely free for academia.

The response from the research community since launch has been incredible, and we heard from labs around the world that they wanted in. So we expanded:
🇺🇸 United States - 190+ institutions
🇬🇧 United Kingdom - 90+
🇩🇪 Germany - 55+
🇨🇳 China - 60+
🇮🇳 India - 49+
🇫🇷 France - 33+
🇮🇹 Italy - 40+
🇯🇵 Japan - 37+
🇦🇺 Australia & New Zealand - 38+
🇰🇷 South Korea - 26+
🇨🇦 Canada - 26+
🇪🇸 Spain & Portugal - 33+
🇸🇪 Scandinavia - 36+
🇧🇷 Latin America - 50+
🇮🇱 Middle East - 25+
🇿🇦 Africa - 25+
...and many more across Southeast Asia, Eastern Europe, and beyond.

If you have an academic email at a top research university, you can sign up today. No waitlist, no approval process.

Robot learning shouldn't be bottlenecked by data infrastructure. We're here to fix that.

⭐ First, show us some love and star us on GitHub: lnkd.in/ej4xse5p (all new stargazers over the next few days will get a special treat added to their account)

Then head over to neuracore.com to sign up for your free account and get started.

0 replies · 1 repost · 12 likes · 1.1K views
Stephen James@stepjamUK·
If you’re a robotics company and want to share what you’re building with the right audience, drop a comment below and let's continue to grow the community!
Neuracore@Neuracore_AI

𝗔𝘁 𝗡𝗲𝘂𝗿𝗮𝗰𝗼𝗿𝗲, 𝘄𝗲’𝗿𝗲 𝗺𝗮𝗸𝗶𝗻𝗴 𝗶𝘁 𝗮 𝗽𝗿𝗶𝗼𝗿𝗶𝘁𝘆 𝘁𝗼 𝘀𝗽𝗼𝘁𝗹𝗶𝗴𝗵𝘁 𝘁𝗵𝗲 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝘁𝗵𝗲 𝗳𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗿𝗼𝗯𝗼𝘁𝗶𝗰𝘀.

We just visited @KAIKAKU_AI to chat with the team behind robotic systems designed to automate repetitive tasks in restaurants, from ingredient dispensing to assembling meals in seconds.

We’re getting out of the office and into labs across the robotics ecosystem, creating content with the teams pushing the field forward, and we’ve got plenty more planned.

If you’re a robotics company and want to share what you’re building with the right audience, drop a comment below and let's continue to grow the community!

5 replies · 0 reposts · 16 likes · 1.9K views
Neuracore@Neuracore_AI·
𝗧𝗵𝗲 𝗨𝗞 𝗿𝗼𝗯𝗼𝘁𝗶𝗰𝘀 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝘆 𝗶𝘀 𝗺𝗼𝘃𝗶𝗻𝗴 𝗳𝗮𝘀𝘁. 𝗟𝗮𝘀𝘁 𝗻𝗶𝗴𝗵𝘁 𝘄𝗲 𝘀𝗮𝘄 𝗽𝗿𝗼𝗼𝗳.

We were at @atomico for @txp_io: Scaling Robotics, powered by the Advanced Research + Invention Agency (ARIA). The conversations ranged from robot bodies and hand dexterity to construction sites, stonecraft, and the labour market implications of deploying robotics at scale. Technical and policy perspectives in the same conversation, exactly where it needs to be.

Great to hear from @jcaread, Rich Walker (@shadowrobot), @mollieclaypool (@automatedarchitecture), @ArkaSerezh (Gondor Industries), Edwin Eyre (@yaya_labs_), @verbine (@Lunar_VC), Matt Davies, and @jujulemons (@BritishProgress).

Thank you to TXP and ARIA for bringing the room together.

#UKRobotics #ScalingRobotics
1 reply · 2 reposts · 24 likes · 2.2K views
Stephen James@stepjamUK·
Great to see the @Neuracore_AI team out alongside some great emerging UK companies last night at the @txp_io Scaling Robotics event!
Neuracore@Neuracore_AI


1 reply · 0 reposts · 7 likes · 777 views
Stephen James@stepjamUK·
Excited to see @rhoda_ai_ come out of stealth! As their advisor, I've had a front-row seat to their work on Direct Video-Action Models, which reformulate robot control as video generation. The data efficiency here is super promising. Complex industrial tasks learned from just ~10 hours of robot data. Big things ahead!
Rhoda AI@rhoda_ai_

To bring generalist intelligent robots to the real world, we have to overcome the data scarcity problem. At Rhoda, we are solving it by reformulating robot policies as video generation. Today, we introduce the Direct Video-Action Model (DVA).

2 replies · 3 reposts · 14 likes · 1.3K views
Stephen James@stepjamUK·
If you're building in robotics or ML infrastructure and we haven't crossed paths yet, we'd love to hear what you're working on. Drop us a message or comment below!
Neuracore@Neuracore_AI

𝗪𝗲'𝗿𝗲 𝗵𝗮𝗹𝗳𝘄𝗮𝘆 𝘁𝗵𝗿𝗼𝘂𝗴𝗵 𝗼𝘂𝗿 𝗨𝗞 𝗿𝗼𝗯𝗼𝘁𝗶𝗰𝘀 𝘁𝗼𝘂𝗿 𝗮𝗻𝗱 𝗶𝘁'𝘀 𝗯𝗲𝗲𝗻 𝗶𝗻𝗰𝗿𝗲𝗱𝗶𝗯𝗹𝗲. So far we've had the chance to sit down with the teams at @ABBgroupnews, All3, @voyagerobotics, KAIKAKU, and @PaddingtonR7. Every visit has reinforced the same thing: there's serious work happening across the UK robotics ecosystem, and it deserves more visibility. Next up: Autodiscovery, @feather_labs, and Vsim! If you're building in robotics or ML infrastructure and we haven't crossed paths yet, we'd love to hear what you're working on. Drop us a message or comment below.

7 replies · 2 reposts · 99 likes · 7.7K views
Stephen James@stepjamUK·
𝗕𝗶𝗴𝗴𝗲𝗿 𝗱𝗮𝘁𝗮𝘀𝗲𝘁𝘀 𝗮𝗿𝗲𝗻'𝘁 𝘁𝗵𝗲 𝗮𝗻𝘀𝘄𝗲𝗿 𝘁𝗼 𝗿𝗼𝗯𝗼𝘁 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴. 𝗕𝗲𝘁𝘁𝗲𝗿 𝗮𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀 𝗮𝗿𝗲.

Last week I got to make that case at @imperialcollege's @nvidia Robotics Day, in front of 300+ students and researchers. My talk focused on sample-efficient learning - how robots can acquire complex manipulation skills from a handful of demonstrations, not thousands. The real world doesn't give you infinite data. Your algorithms need to handle that. It's something we're building around directly at @Neuracore_AI.

The day itself was brilliant. Six Imperial labs showcasing work across tactile sensing, surgical AI, in-context imitation learning, adaptive resilient machines, and gaze-action models. NVIDIA brought deep dives on Isaac, Newton Physics, and Cosmos.

Thanks to the School of Convergence Science and NVIDIA for putting it together. imperial.ac.uk/news/articles/…
1 reply · 2 reposts · 15 likes · 724 views
Stephen James@stepjamUK·
𝗦𝗰𝗮𝗹𝗶𝗻𝗴 𝘃𝗲𝗿𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗯𝗲𝗮𝘁𝘀 𝘀𝗰𝗮𝗹𝗶𝗻𝗴 𝗽𝗿𝗲-𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗳𝗼𝗿 𝗿𝗼𝗯𝗼𝘁 𝗽𝗼𝗹𝗶𝗰𝗶𝗲𝘀.

Stanford just published CoVer-VLA, showing that training a separate verifier for test-time action selection outperforms fine-tuning the base policy on augmented data - while requiring substantially less compute.

Instead of training your policy on 16x more instruction variations, train a contrastive verifier that scores vision-language-action alignment. At deployment, generate multiple action candidates and use the verifier to pick the best one.

𝗖𝗼𝗿𝗲 𝗥𝗲𝘀𝘂𝗹𝘁𝘀:
• 22% improvement on in-distribution tasks vs scaling policy training
• 45% absolute gain on real-world manipulation
• 7x less training compute than policy augmentation
• Verifier scales gracefully with model size, batch size, and data

[Check the Project Page and Paper in comments below]

𝗧𝗵𝗲 𝗧𝗿𝗮𝗱𝗲-𝗼𝗳𝗳:
Policy scaling: expand your training set with synthetic instructions, retrain the entire VLA model, hope it generalizes.
Verification scaling: train a lightweight verifier on the same data, keep your base policy frozen, get better results with less compute.

Teams pursuing this approach need platforms that support large-scale contrastive training (they used 32k batch sizes), synthetic data generation pipelines, and rapid iteration on verifier architectures without re-collecting robot data.

At @Neuracore_AI, that's exactly the workflow we enable. Collect synchronized, high-fidelity data from your robot, then spin up multiple training runs in parallel (your base policy, your verifier, alternative architectures) without going back to the physical system. When your data layer is clean from collection, you can iterate on model architectures quickly instead of fighting data quality issues.

Credit: @drmapavone @Stanford

#Neuracore #Robotlearning
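[Editor's note] The deployment loop described above — sample several candidate actions from a frozen policy, score them with a separate verifier, execute the best — is simple to sketch. The policy and verifier below are stand-in functions with assumed signatures, not the CoVer-VLA models; only the selection pattern is the point.

```python
# Hedged sketch of verification-time action selection: a stand-in
# stochastic policy proposes candidates and a stand-in verifier scores
# them; we execute the argmax. Both models are placeholders.
import random

def policy(observation, n_candidates=8):
    """Stand-in policy: returns n candidate 3-DoF actions."""
    random.seed(0)  # deterministic for the demo
    return [[random.uniform(-1, 1) for _ in range(3)]
            for _ in range(n_candidates)]

def verifier(observation, action):
    """Stand-in verifier score. A real one would rate
    vision-language-action alignment; here we just prefer
    small-magnitude actions as a placeholder objective."""
    return -sum(a * a for a in action)

def select_action(observation):
    """Generate candidates, score each, pick the best."""
    candidates = policy(observation)
    return max(candidates, key=lambda a: verifier(observation, a))

best = select_action(observation=None)
```

The appeal of this pattern is that the base policy stays frozen: improving the verifier, or sampling more candidates, changes behaviour without retraining the policy.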
2 replies · 14 reposts · 90 likes · 6.4K views
Stephen James@stepjamUK·
Listen to the full conversation between me and @NimaGard here: youtu.be/ih_zDd2tusk?si…
Neuracore@Neuracore_AI

“𝗜𝗳 𝗺𝘆 𝗺𝗼𝗱𝗲𝗹 𝗳𝗮𝗶𝗹𝘀 𝗮𝗻𝗱 𝘀𝗰𝗿𝗮𝗽𝘀 𝗮 𝗽𝗮𝗿𝘁, 𝘁𝗵𝗮𝘁’𝘀 𝗳𝗮𝗿 𝘄𝗼𝗿𝘀𝗲 𝘁𝗵𝗮𝗻 𝗮 𝗳𝗮𝗶𝗹𝘂𝗿𝗲 𝗜 𝗰𝗮𝗻 𝗿𝗲𝗽𝗮𝗶𝗿.”

90% success in the lab doesn’t tell you how your model fails in production. @PathRobotics's Director of AI, @NimaGard, explains what actually matters in manufacturing on The Neuracore Podcast:
→ Mean time between failure
→ Recovery time
→ Failure severity
→ Uptime

The hardest edge cases only appear once systems hit production. You need infrastructure to track these metrics and iterate fast.

Head to the comments to listen to the full conversation.

#Machinelearning #Robotics #Neuracore
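[Editor's note] The production metrics listed above can be computed from an ordinary failure log. This is an illustrative sketch under assumptions: the `(timestamp_s, event)` log format, field names, and the definition of MTBF as horizon over failure count are mine, not Path Robotics' or Neuracore's.

```python
# Hypothetical sketch: derive MTBF, uptime, and mean recovery time
# from a sorted event log of ('fail' / 'recovered') entries.
def reliability_metrics(log, horizon_s):
    """log: sorted list of (t_seconds, event), every 'fail'
    followed by a matching 'recovered'."""
    failures = [t for t, e in log if e == 'fail']
    recoveries = [t for t, e in log if e == 'recovered']
    downtime = sum(r - f for f, r in zip(failures, recoveries))
    return {
        'mtbf_s': horizon_s / len(failures) if failures else float('inf'),
        'uptime': 1.0 - downtime / horizon_s,
        'mean_recovery_s': downtime / len(recoveries) if recoveries else 0.0,
    }

log = [(100, 'fail'), (160, 'recovered'), (500, 'fail'), (530, 'recovered')]
print(reliability_metrics(log, horizon_s=1000))
```

Two systems with the same raw success rate can look very different under these metrics, which is exactly the podcast's point about failure severity and recovery.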

0 replies · 1 repost · 4 likes · 1K views
Stephen James@stepjamUK·
𝗩𝗶𝗱𝗲𝗼-𝗯𝗮𝘀𝗲𝗱 𝗿𝗼𝗯𝗼𝘁 𝗽𝗹𝗮𝗻𝗻𝗶𝗻𝗴 𝗵𝗮𝘀 𝗮 𝗽𝗲𝗿𝗰𝗲𝗽𝘁𝗶𝗼𝗻 𝗿𝗲𝗹𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗽𝗿𝗼𝗯𝗹𝗲𝗺.

When depth estimates drift or object tracking fails mid-execution, most systems have no mechanism to detect or recover from the failure. They keep executing corrupted plans until the task fails completely.

Researchers from the Boston Dynamics AI Institute, CMU, and Brown just published NovaPlan, a system that achieves 70% success on zero-shot assembly by treating video generation as a closed-loop planner with dynamic recovery.

𝗞𝗲𝘆 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗺𝗲𝗻𝘁𝘀:
• 𝟳𝟬% 𝗭𝗲𝗿𝗼-𝗦𝗵𝗼𝘁 𝗦𝘂𝗰𝗰𝗲𝘀𝘀: Completes 4-layer stacking and FMB assembly without demonstrations—purely from video rollouts and closed-loop recovery.
• 𝗛𝘆𝗯𝗿𝗶𝗱 𝗧𝗿𝗮𝗰𝗸𝗶𝗻𝗴 𝗦𝘄𝗶𝘁𝗰𝗵𝗶𝗻𝗴: When object flow becomes unstable (occlusion, rotation >45°), switches to hand pose tracking—maintaining execution stability when the target object isn't visible.
• 𝗔𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗥𝗲𝗰𝗼𝘃𝗲𝗿𝘆: VLM critic detects execution failures and synthesizes corrective actions, including non-prehensile behaviors like finger poking, discovered autonomously.
• 𝗖𝗹𝗼𝘀𝗲𝗱-𝗟𝗼𝗼𝗽 𝗩𝗲𝗿𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻: Compares actual state transitions against planned outcomes and triggers re-planning when they diverge.

Long-horizon success requires execution layers that handle perception failures gracefully, not just better video models. The paper identifies "depth estimation error," "tracking drift," and "geometric flickering" as primary bottlenecks. They built extensive compensation systems: CVD optimization to eliminate flickering, RANSAC calibration to align depth streams, and dual-anchor corrections for scale drift.

When your RGB captures at T=0, your depth sensor at T=30ms, and your joint encoders at T=50ms, the data doesn't represent a single moment in time. Tracking becomes unreliable. Depth flickers between frames. You need calibration layers just to make the data usable.

[Paper and Project Page in comments below]

At @Neuracore_AI, we solve this at the data layer. Your camera frame corresponds to the exact same moment as your depth estimate and joint state. No flickering. No calibration workarounds. No hybrid tracking strategies to compensate for unreliable modalities. When your data captures the same instant across all sensors, your policies work when you deploy them.

Congrats @jiahuifu_carol and team at @rai_inst, @CarnegieMellon, @BrownUniversity & @Penn for this great work.
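[Editor's note] The closed-loop verification bullet above — compare the observed state transition against the planned one and re-plan on divergence — reduces to a few lines. This is a minimal sketch under assumptions: states as numeric tuples, a Euclidean divergence test, and an illustrative tolerance; it is not NovaPlan's actual interface.

```python
# Hypothetical closed-loop verification: execute plan steps, compare
# each observed state with the predicted one, and hand control back
# to the planner on divergence.
def diverged(planned, actual, tol=0.05):
    """Euclidean divergence check between planned and observed state."""
    err = sum((p - a) ** 2 for p, a in zip(planned, actual)) ** 0.5
    return err > tol

def execute(plan, step_fn):
    """plan: list of (action, predicted_state). Returns the index of
    the step where re-planning is needed, or None on clean success."""
    for i, (action, predicted_state) in enumerate(plan):
        observed = step_fn(action)
        if diverged(predicted_state, observed):
            return i  # trigger re-planning here
    return None

plan = [("move", (0.0, 0.0)), ("grasp", (0.1, 0.0))]
def step_fn(action):
    return (0.0, 0.0)  # the robot never reaches the second predicted state
print(execute(plan, step_fn))  # re-planning triggered at step 1
```

The same divergence signal is what a VLM critic would refine: instead of a fixed tolerance, it judges whether the observed outcome still serves the task.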
1 reply · 10 reposts · 42 likes · 3.9K views
Stephen James@stepjamUK·
Great to see the @Neuracore_AI team up at @UniOfYork this week!
Neuracore@Neuracore_AI

𝗥𝗼𝗯𝗼𝘁 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗶𝘀 𝘁𝗮𝗸𝗶𝗻𝗴 𝗼𝗳𝗳 𝗮𝘁 𝘁𝗵𝗲 𝗨𝗻𝗶𝘃𝗲𝗿𝘀𝗶𝘁𝘆 𝗼𝗳 𝗬𝗼𝗿𝗸.

This week, our robot learning team member Ke Wang visited @jihongzh at YorRobots and gave a talk at the ISA Design and Verification Seminar on how Neuracore is built to accelerate academic research in robotics. Around 20 researchers joined in person and online.

After the talk, Ke ran a live demo of the platform, and the team have since started exploring robot learning research, using Neuracore to support it.

This is what the Neuracore Academic Program is for. Less time on infrastructure. More time on the science.

If you're a university lab working on robot learning, we'd love to hear from you. @UniOfYork

0 replies · 0 reposts · 2 likes · 163 views
Stephen James@stepjamUK·
𝗔𝘀 𝗮 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵𝗲𝗿, 𝗜 𝗰𝗮𝗻 𝗹𝗶𝘃𝗲 𝘄𝗶𝘁𝗵 𝗮 𝗳𝗮𝗶𝗹𝘂𝗿𝗲. 𝗜 𝗰𝗮𝗻𝗻𝗼𝘁 𝗹𝗶𝘃𝗲 𝘄𝗶𝘁𝗵 𝗮 𝗳𝗮𝗶𝗹𝘂𝗿𝗲 𝗜 𝗰𝗮𝗻’𝘁 𝗲𝘅𝗽𝗹𝗮𝗶𝗻.

The most dangerous variable is the one you don’t know you’re missing. For years, the robotics industry has tolerated an "Infrastructure Tax": a mess of timestamp jitters, serial latency, and clock drift that we’ve quietly tried to "train away" with bigger models. It doesn't work. You can’t solve a physics problem with a larger Transformer if the input data is messy.

Robotic data isn't just numerical; it has tangible, physical meaning. If you cannot visualize exactly how your data maps to the hardware in real time, you will overlook the very physics you are trying to master.

We decided to treat infrastructure as a first-class citizen of the research stack:
• We’ve seen a fundamental shift in how our users operate. By offloading the "Infrastructure Tax" to Neuracore, they have stopped burning engineering hours on custom logging scripts and data-wrangling.
• We’ve integrated Smart Allocation into the stack, allowing you to configure your compute resources based on the stage of your project.

We built @Neuracore_AI because if we are going to reach general-purpose robotics, we need a foundation that prioritizes empirical integrity over a polished demo.

The real world is messy. Your data foundation shouldn't be.
0 replies · 1 repost · 16 likes · 1.8K views
Stephen James@stepjamUK·
@Neuracore_AI is growing! We're looking for two engineers to join us as we scale the platform that helps robotics teams collect data, train models, and deploy faster than ever before:
-> DevOps Engineer
-> Robotics Engineer

If you or someone you know wants to build the backbone of robot learning, apply via the links below.
1 reply · 3 reposts · 9 likes · 968 views