Peter Okiokpa

3.8K posts

@peterokiokpa

Making robotics education and innovation accessible. Building @thesoftmatichq @theaibothub @roemai_io. Contrib. @cybernetic_lab.

Worldwide · Joined December 2023
196 Following · 112 Followers
Peter Okiokpa retweeted
Cybernetic Labs
Cybernetic Labs@cybernetic_lab·
Robotics has three hard problems: semantics, planning, and real-time control. Language models cracked the first one. Planning got better fast. Control in dynamic environments is still a work in progress.
0 replies · 3 reposts · 3 likes · 58 views
Peter Okiokpa retweeted
Emerson S
Emerson S@Em_Nomadic·
If you’re building a robot right now, this is for you. @tnkrdotai Build Hours. Every Tuesday. Same time. Bring your questions, your half-finished builds, your “does anyone know why this isn’t working” moments. Real feedback from builders who care. Link in comments.
1 reply · 3 reposts · 13 likes · 971 views
Peter Okiokpa retweeted
Lukas Ziegler
Lukas Ziegler@lukas_m_ziegler·
100% open-source robotic arm! 🦾
Seeed Studio released reBot-DevArm, a robotic arm project lowering the barrier to learning robotics... or, as people call it these days, physical AI.
Everything is open-sourced:
→ Hardware blueprints include sheet metal and 3D-printed parts.
→ A detailed BOM covers every screw, with purchase links.
→ Software and algorithms include a Python SDK, ROS 1/2, Isaac Sim, and LeRobot.
Robot specs: 1.5 kg payload, 650 mm max reach, 4.5 kg weight, less than 0.2 mm repeatability, and 6 DoF plus a gripper.
This is true open source for robotics. When every screw, CAD file, motor driver, and algorithm is freely available, desktop robotic arms become accessible to students, researchers, and developers worldwide.
‼️ Start your career in robotics today: github.com/Seeed-Projects…
~~
♻️ Join the weekly robotics newsletter and never miss any news → ziegler.substack.com
English
7
85
559
30.9K
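The listed specs invite a quick sanity check: how much static torque must the shoulder joint hold with the rated 1.5 kg payload at the full 650 mm reach? A back-of-envelope sketch, where the 2.0 kg moving-link mass is an assumption (the tweet only gives the 4.5 kg total arm weight):

```python
# Back-of-envelope static torque check using the listed reBot-DevArm specs
# (1.5 kg payload, 650 mm max reach). The moving-link mass is assumed.
G = 9.81            # gravitational acceleration, m/s^2

payload_kg = 1.5    # rated payload, from the spec sheet
reach_m = 0.65      # max reach, from the spec sheet
arm_mass_kg = 2.0   # ASSUMED mass of the moving links (not in the tweet)

# Worst case: arm fully extended horizontally. The payload acts at full
# reach; the links' own weight acts roughly at half reach.
payload_torque = payload_kg * G * reach_m
link_torque = arm_mass_kg * G * (reach_m / 2)
total = payload_torque + link_torque

print(f"payload torque : {payload_torque:.2f} N.m")
print(f"link torque    : {link_torque:.2f} N.m")
print(f"shoulder total : {total:.2f} N.m")
```

Roughly 16 N·m at the shoulder under these assumptions, which is the kind of number that drives the reducer and motor choices in a desktop arm of this class.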
Peter Okiokpa retweeted
Fola Aina
Fola Aina@folanski·
Dear young Nigerian, take a closer look: you may not be where you wish to be yet, but you are definitely not where you used to be. That's PROGRESS. Keep your hopes and dreams alive. You'll make it, not in your own strength, but with God by your side. Your best days are ahead of you!
4 replies · 21 reposts · 54 likes · 604 views
Peter Okiokpa retweeted
KuphDev
KuphDev@KuphDev·
@lukas_m_ziegler @peterokiokpa I’m pretty hyped about this! Pretty sure it’s gonna be my next robot arm. The SO-101 has been fun, but feels more like a “toy” than something I can do advanced projects with.
1 reply · 1 repost · 3 likes · 674 views
Peter Okiokpa retweeted
そぞら@Raspberry Pi 電子工作
A GPS experiment using the Raspberry Pi Zero 2 W. Looking forward to taking it outdoors. Power comes from a small mobile battery.
5 replies · 10 reposts · 235 likes · 9.2K views
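The tweet doesn't say which GPS module or software is used, but most hobbyist serial GPS receivers emit NMEA 0183 sentences over UART. A minimal sketch of parsing the common `$GPGGA` fix sentence in pure Python; on a Pi Zero 2 W you would read these lines from the serial port (e.g. `/dev/serial0`), while here a hard-coded sample sentence stands in:

```python
# Minimal parser for the NMEA "$GPGGA" sentence most serial GPS modules emit.
# A sample sentence stands in for the UART read you'd do on the Pi.

def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert NMEA ddmm.mmmm / dddmm.mmmm to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[: dot - 2])   # everything before the minutes field
    minutes = float(value[dot - 2 :])   # last two digits before the dot + fraction
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gga(sentence: str) -> dict:
    fields = sentence.split(",")
    return {
        "time_utc": fields[1],
        "lat": nmea_to_decimal(fields[2], fields[3]),
        "lon": nmea_to_decimal(fields[4], fields[5]),
        "fix_quality": int(fields[6]),   # 0 = no fix, 1 = GPS fix
        "satellites": int(fields[7]),
    }

sample = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
fix = parse_gga(sample)
print(fix)
```

A real reader would also verify the `*47` checksum and skip sentences without a fix; this sketch keeps only the coordinate conversion, which is the part people most often get wrong.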
Peter Okiokpa retweeted
Joe Harris
Joe Harris@_joe_harris_·
if you're new to actuators, study these companies:

strain-wave reducers (rotary transmission):
- Harmonic Drive Systems, Japan
- Leaderdrive, China
- Green Harmonic, China

planetary roller screws (linear transmission):
- Rollvis, Switzerland
- Schaeffler / Ewellix, Germany
- SKF, Sweden

frameless BLDC motors:
- Kollmorgen, US
- maxon, Switzerland
- TQ-RoboDrive, Germany
- CubeMars, China

bearings:
- THK, Japan
- IKO, Japan

torque sensors:
- ATI Industrial, US
- FUTEK, US

complete actuator modules:
- Hyundai Mobis, Korea
- Apptronik, US
- Unitree, China
- AgiBot, China

encoders:
- RLS / Renishaw, UK
- Celera Motion, US

Japan, China, Switzerland, and Germany dominate. Let me know who I'm missing.
Joe Harris@_joe_harris_

Tesla says 56% of Optimus's cost is actuators. America manufactures almost none of them. Generational opportunity for builders in this space.

36 replies · 151 reposts · 1.3K likes · 129.9K views
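Why strain-wave (harmonic drive) reducers lead that list: a single stage delivers an enormous reduction from a tiny tooth-count difference between the flexspline and the circular spline. A sketch of the standard ratio formula, with the circular spline fixed and the flexspline as output (the tooth counts below are typical illustrative values, not any vendor's part):

```python
# Strain-wave gearing: with the circular spline fixed and the flexspline as
# output, the reduction ratio is Zf / (Zc - Zf); the output turns opposite
# to the input. Zc - Zf is usually just 2 teeth, hence the huge ratios.

def strain_wave_ratio(flexspline_teeth: int, circular_spline_teeth: int) -> float:
    tooth_diff = circular_spline_teeth - flexspline_teeth
    return flexspline_teeth / tooth_diff

# Typical example: 200-tooth flexspline meshing a 202-tooth circular spline.
print(strain_wave_ratio(200, 202))  # 100:1 in a single compact stage
```

A comparable planetary gearbox needs three or four stages to reach 100:1, which is why these reducers dominate torque-dense robot joints despite their cost.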
Peter Okiokpa retweeted
Junfan Zhu 朱俊帆 ✈️ CVPR
🤖🍇 Robotics & World Model Reading Club 06 Recap (SF0502) @saturdayrobotic
Video World Models as Simulator & Policy. Keynote from @tongzhou_mu (@RhodaAI), cohosts @junfanzhu98, @aurorafeng_01.

Robotics is no longer about learning actions; it's about selecting actions from predicted futures.

🎬 Two Roles of Video World Models
- Simulator → learn physics from data, generate experience
- Policy → drive decisions via video-conditioned action generation
Goal: inject physical common sense from web-scale video into control.

🧪 I. Video Models as Learned Simulators

1) Data Synthesis (DreamGen, GR00T)
Pipeline = RAG + rollout + IDM labeling
- Prompt → retrieve similar robot videos → generate new task rollouts
- Inverse Dynamics Model (IDM) → pseudo actions → train policy
✅ Cheap, scalable, safe edge cases
❌ Open-loop → hallucination + error accumulation
⚠️ Insight: IDM is NOT the bottleneck
→ inverse mapping is easy; forward world modeling is hard
→ works with teleop / eval / random data; generalizes across robots

2) Inference-Time Planning (V-JEPA2)
- Action-conditioned video model
- Sample action sequences → roll out in latent space → score vs. goal
- Replan iteratively (receding horizon)
✅ Test-time scaling (more samples = better plans)
❌ Heavy compute vs. real-time control
👉 Pipeline: policy eval → prune → planning

3) Policy Evaluation (Veo, Ctrl-World)
- Use the video model as a simulator for scoring trajectories
- Acts as an action filter / value proxy
✅ Unlimited rollouts (vs. traditional sim limits)
❌ Less accurate than physics engines
🚨 Not real-time → offline selection before planning

🤖 II. Video Models as Policy: 4 Paradigms

(1) Joint Video + Action Generation (DreamZero, GR1/2)
- Diffusion / flow matching over video + action
- Shared denoising → cross-modal reasoning
⚠️ Open question: pretrained for video ≠ pretrained for action

(2) Representation → Action (VPP, Video Policy)
- Video model = feature extractor
- Small diffusion policy = action head
⚡ Fast inference
⚠️ Partial denoising = control-authority allocation
- none → action head decides
- full → video dominates
- partial → shared decision boundary

(3) Open-Loop Generation (UniPi)
Generate full future video → IDM → actions
✅ Uses the video model as-is
❌ Plan is fixed → no reaction → brittle

(4) Closed-Loop Generation (DVA, LingBot)
Generate → act → replace with real observation → repeat
✅ Continuous grounding → avoids hallucination
❌ Requires causal modeling + heavy infra

🔥 Core insight: the video model is not the decision maker. It proposes futures; the policy selects among those futures via translation/scoring.

🧠 System-Level Truths
- Failure ≠ video problem → usually a translation / constraint / IDM issue
- Action space is task-dependent (position vs. others)
- Closed loop = continuous alignment with reality

⚙️ Deployment Reality
- Infra > model tweaks (latency, kernel fusion, precision)
- 100 robots ↔ 1000 GPUs? Edge vs. cloud tradeoff
- Data story unclear: UMI / egocentric promising but not converged
- Perception bottleneck: camera latency, resolution, depth

⚠️ Fundamental Tensions
- Latent vs. pixel → latent is efficient but may drop task-critical info → representation choice caps capability
- RL warning → a learned simulator ≠ ground truth → RL will exploit model bias (simulator hacking)
- Tactile vs. vision → easy to add, but no internet-scale data → video dominates

🚀 Emerging Directions
- Diffusion distillation → faster generation
- Autoregressive diffusion transformers
- Video models as simulator + policy + value function + data engine
👉 A unified computational primitive

🧩 Video world models are not just better perception; they redefine control: from "predict → act" to "generate futures → select actions." The bottleneck has shifted:
❌ Not model capability
✅ Grounding decisions into real-world constraints
Junfan Zhu 朱俊帆 ✈️ CVPR@junfanzhu98

x.com/i/article/2050…

0 replies · 10 reposts · 38 likes · 3.9K views
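The inference-time planning loop the recap describes (sample action sequences, roll them out in a learned model, score against the goal, replan with a receding horizon) can be sketched with random shooting on a toy model. A 1-D point moved by bounded actions stands in for the video world model here; this is an illustrative sketch of the planning pattern, not V-JEPA2 code:

```python
import random

random.seed(0)  # deterministic sampling for reproducibility

def model_step(state, action):
    """Stand-in 'world model': imagined next state after an action."""
    return state + action

def rollout_cost(state, actions, goal):
    """Score a candidate sequence by accumulated distance to the goal."""
    total = 0.0
    for a in actions:
        state = model_step(state, a)
        total += abs(state - goal)
    return total

def plan(state, goal, horizon=5, samples=64):
    """Random shooting: sample sequences, keep the best, return its first action."""
    best = min(
        ([random.uniform(-1.0, 1.0) for _ in range(horizon)] for _ in range(samples)),
        key=lambda seq: rollout_cost(state, seq, goal),
    )
    return best[0]

# Receding horizon: execute only the first planned action, then replan.
state, goal = 0.0, 3.0
for step in range(20):
    state = model_step(state, plan(state, goal))
print(f"final state: {state:.2f} (goal {goal})")
```

The "test-time scaling" point in the recap shows up directly here: raising `samples` buys better plans at the cost of more rollouts, which is exactly the compute-vs.-control tradeoff noted for real-time use.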
Peter Okiokpa retweeted
Emerson S
Emerson S@Em_Nomadic·
The most underrated thing about what @tnkrdotai is building: every robot that gets built and connected becomes a node in a fleet. That fleet collects real-world data. That data trains better models. That's the flywheel. builders → robots → data → intelligence. Worth paying attention to.
Emerson S@Em_Nomadic

The bottleneck for most people who want to build a robot isn't motivation; it's not knowing where to start: what parts, what order, what software stack. Tnkr solves exactly that — open-source robot projects with step-by-step assembly, CAD, firmware, everything. tnkr.ai/explore

1 reply · 6 reposts · 23 likes · 1.4K views
Peter Okiokpa retweeted
Seeed Studio
Seeed Studio@seeedstudio·
A daily window into the universe. 🌌📟 Daily Cosmic Explorer by Young brings NASA’s Astronomy Picture of the Day, image credit, date, and a short cosmic explanation to your ePaper display. Built with our SenseCraft HMI, no code. Best for reTerminal E1004. Explore the design 👉 sensecraft.seeed.cc/hmi/template/1…
0 replies · 1 repost · 15 likes · 1.3K views
Peter Okiokpa retweeted
BlindVia
BlindVia@blind_via·
There should be an electronics conference that is also a rave party!
9 replies · 1 repost · 36 likes · 1.6K views
Peter Okiokpa retweeted
Arduino
Arduino@arduino·
The biggest challenge in AI today isn't building a model — it's making that model work on a device with low latency and high efficiency for real-world projects. We’ve partnered with @Qualcomm Academy to bridge that gap. Using @EdgeImpulse and Arduino UNO Q, this certification gives you the blueprint to turn raw data into a fully functional edge AI application. Ready to get certified? Enroll now to master the UNO Q's "dual-brain" architecture and develop systems that think as fast as they act: academy.qualcomm.com/course-catalog…
1 reply · 12 reposts · 58 likes · 5.7K views
Peter Okiokpa retweeted
NVIDIA Robotics
NVIDIA Robotics@NVIDIARobotics·
The Physical AI Robotics GR00T‑X Embodiment Sim dataset has surpassed 10 million downloads on @HuggingFace. 🥳 A huge shoutout to the global research and developer community exploring the future of embodied AI and robotics with this open dataset — you made this milestone possible. 📥 Try it on Hugging Face 👉 nvda.ws/3Qv64Ul
10 replies · 28 reposts · 176 likes · 15.5K views
Peter Okiokpa retweeted
Matt Hartman
Matt Hartman@MattHartman·
Really cool
Lukas Ziegler@lukas_m_ziegler

Princeton's Introduction to Robotics! 🎓
@Princeton University released its full Introduction to Robotics course publicly, with lecture videos, notes, slides, and assignments. The course covers the fundamental theoretical and algorithmic principles behind robotic systems, with hands-on experience.

Topics covered:
→ Feedback control (dynamics, PD control, the Linear Quadratic Regulator)
→ Motion planning (discrete planning with BFS/DFS, optimal planning with Dijkstra/A*)
→ State estimation, localization, and mapping (Bayes filtering, Kalman filtering, particle filtering, SLAM)
→ Vision and learning (optical flow, deep learning, convolutional networks, reinforcement learning), plus broader topics including robotics and law, ethics, and economics.

Assignments include theory, programming, and hardware implementation components. The final project has students program drones for vision-based navigation, with attached cameras transmitting real-time images.

All lecture videos, notes, slides, and assignments are freely available. Prerequisites: multivariable calculus, linear algebra, basic probability, basic differential equations, and some programming experience in Python.

‼️ GO FOR IT: irom-lab.princeton.edu/intro-to-robot…
~~
♻️ Join the weekly robotics newsletter and never miss any news → ziegler.substack.com

0 replies · 5 reposts · 32 likes · 4.5K views
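The discrete planners in the course's motion-planning unit fit in a few lines. A minimal sketch of A* on a 2-D occupancy grid with a Manhattan-distance heuristic, written from the standard algorithm rather than taken from the course's assignment code:

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of strings where '#' is an obstacle.
    Returns the shortest path length in moves, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    frontier = [(h(start), 0, start)]   # priority queue of (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):
            continue                    # stale queue entry, already improved
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

world = [
    "....",
    ".##.",
    ".#..",
    "....",
]
print(astar(world, (0, 0), (3, 3)))  # prints 6, the shortest path length
```

With the heuristic replaced by a constant 0 this degenerates to Dijkstra, and on a unit-cost grid Dijkstra in turn behaves like BFS, which is exactly the progression the course's planning lectures walk through.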