

tunct .grvt🍏

@tunct101
Even in the depths of the darkest oceans, some light always pierces through. All in @PrismaXai | @DonutAI












I just finished a little something for the @ritualnet community: a website where you can create your own Ritual Role Card.

There you'll relive your first day in Ritual and receive some "reminders" for the next level on this journey.

It's nothing too complicated, just a way for everyone to look back on their journey and see how far they've come. I made it because I really enjoy being part of this community where everyone learns, builds, and shares together.

Try creating your own Ritual card here: ritual.gjunn.xyz

And honestly, I really cherish this community ❤️

@joshsimenhoff / @Jez_Cryptoz / @0xMadScientist / @ericgudboy









Put two generations of robots in the same room and let them move. You don't need benchmarks or specs; the difference shows up on its own.

At Stanford University recently, the MIT Mini Cheetah and the Unitree Go2 were operating in the same space. No direct comparison needed, just observation. What stands out isn't better vs. worse, but how the approach has evolved:

> movement patterns feel different over time
> control systems interact with the environment differently
> the overall behavior is becoming more integrated

And more importantly, the focus is shifting beyond just the robot itself. Projects like @PrismaXai point toward a broader direction:

→ robotics + AI systems
→ real-world deployment as a baseline
→ building full-stack intelligence, not just machines

When two generations run side by side like that, it becomes clear: robotics isn't changing in leaps; it's compounding, quietly.

PrismaX has officially joined the NVIDIA Inception Program. A small milestone, but one that carries significant meaning when viewed in the bigger picture of AI for robotics.

What I find interesting isn't just the fact that they've joined a prestigious program, but the timing of it. We're entering a phase where physical AI is moving out of the lab and into the real world. And at that point, the question is no longer "how smart is the model?" but rather "how do robots operate, interact, and learn in real environments?"

From my perspective, what @PrismaXai is working on touches the hardest yet most critical part of this problem: real-world data. Over the past year, they've been running large-scale human-in-the-loop robotics systems to generate real-world interaction data. And one insight that really stood out to me is:

👉 The value of robotics data doesn't lie in its quantity, but in how the system is deployed.

More specifically:
- Robot embodiment determines the types of interactions it can perform effectively
- Sensor configuration (camera placement, depth, etc.) directly impacts how models perceive and understand the world
- Task design decides whether the data is actually learnable or just noise
- And human interaction within the loop is also a key variable

In other words, collecting more data doesn't automatically make AI better. If the setup is wrong, you're just generating a large but useless dataset (a toy sketch of these setup variables follows at the end of this post).

What's compelling is that over time, these patterns begin to form practical standards:
- How robots should be configured
- How environments should be set up
- How tasks should be designed
- Where humans should be involved in the loop

And @PrismaXai is positioning itself to define these standards.

To me, joining the NVIDIA Inception Program isn't just about access to resources (tools, infrastructure, training…), but also about plugging into a broader ecosystem, one that brings together builders, researchers, and investors to accelerate the path of physical AI into the real world.

Personally, I think this is a space worth watching closely. If software-based AI has already reshaped the world, then AI embedded in the physical world (robotics, automation) could drive even bigger transformations over the next 5-10 years.

Bullish on @PrismaXai 🚀🚀🚀
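To make the "setup determines value" point concrete, here's a minimal Python sketch. The schema, field names, and the toy gate are all hypothetical, invented purely for illustration; they are not PrismaX's actual data model.

```python
from dataclasses import dataclass

@dataclass
class CollectionSetup:
    """Deployment variables that decide whether robot data is learnable.
    All names here are hypothetical, for illustration only."""
    embodiment: str          # e.g. "6-dof-arm" or "quadruped"; bounds the interaction types
    camera_poses: list[str]  # sensor placement shapes what models can perceive
    has_depth: bool          # depth availability changes the observation space
    task_spec: str           # a well-posed task keeps data learnable instead of noisy
    human_in_loop: bool      # whether a teleoperator demonstrates or corrects

def is_learnable(setup: CollectionSetup) -> bool:
    """Toy gate: without a defined task and at least one viewpoint,
    more data is just a bigger pile of noise."""
    return bool(setup.task_spec) and len(setup.camera_poses) > 0

setup = CollectionSetup("6-dof-arm", ["wrist", "overhead"], True,
                        "pick mug, place on rack", True)
print(is_learnable(setup))  # True
```

The toy gate is the whole point in miniature: scale only helps after the setup variables are right.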


For decades, robotics has relied on one core idea: predict the future, then act accordingly.

In classical control systems like Model Predictive Control (MPC), a robot uses a simplified model of physics to simulate how it will move over the next few seconds. It then solves an optimization problem to choose the best sequence of actions, executing only the first step before repeating the process (a toy version of this loop is sketched in code below).

This approach is powerful, but it comes with limits:
- It depends on simplified (often linear) models of reality
- It requires heavy computation to run in real time
- It struggles with the complexity and unpredictability of the real world

Today, we are seeing a fundamental shift. Instead of separating modeling and control, modern systems learn them together. A single model now maps past observations + desired outcomes → actions.

This changes everything. Rather than explicitly solving equations, the robot learns from data: simulations, real-world interactions, and even human demonstrations. It doesn't just react; it anticipates. It doesn't just follow rules; it adapts.

But scaling this approach introduces a new challenge: data. Training robots in the real world is expensive and slow. Collecting millions of interactions, especially for complex tasks like cooking or assembly, is often impractical.

The breakthrough comes from learning in latent space. Instead of predicting raw video frames, modern models predict compact representations of actions, capturing concepts like grasp, move, or pick up while ignoring unnecessary visual detail. These representations are:
- Faster to compute
- Easier to generalize
- Transferable across robots and even from humans to robots

This is the foundation behind NVIDIA's GR00T architecture:
1⃣ A vision-language model interprets the scene
2⃣ A diffusion model predicts future latent actions
3⃣ A lightweight decoder converts those into motor commands

The sketches at the end of this post walk through the latent-action idea and this three-stage flow.

The result is a system that can reason about what should happen next and act on it efficiently.

We are moving:
from physics-driven control → data-driven intelligence
from carefully engineered pipelines → learned, end-to-end behavior

And this shift is what will take robotics beyond controlled environments, into kitchens, factories, and everyday life. Not just robots that can walk, but robots that can understand, adapt, and truly assist.

@PrismaXai
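To ground the MPC description above: the classic receding-horizon loop simulates candidate action sequences over a short horizon, keeps the best one, executes only its first action, and replans. Here is a minimal random-shooting version on a toy double-integrator model; the dynamics, horizon, and cost weights are all illustrative choices, not from any real controller.

```python
import numpy as np

# Toy double-integrator: state = [position, velocity], control = acceleration.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # discrete-time dynamics, dt = 0.1 s
B = np.array([0.005, 0.1])   # how control enters position and velocity

def rollout_cost(x0, u_seq, target):
    """Simulate one candidate action sequence and score it."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        x = A @ x + B * u
        cost += np.sum((x - target) ** 2) + 0.01 * u ** 2
    return cost

def mpc_step(x, target, horizon=10, samples=256, rng=np.random.default_rng(0)):
    """Random-shooting MPC: sample action sequences, keep the cheapest,
    and execute only its first action before replanning next tick."""
    candidates = rng.uniform(-1.0, 1.0, size=(samples, horizon))
    costs = [rollout_cost(x, u_seq, target) for u_seq in candidates]
    return candidates[int(np.argmin(costs))][0]  # receding horizon

x, target = np.array([0.0, 0.0]), np.array([1.0, 0.0])
for _ in range(50):
    u = mpc_step(x, target)
    x = A @ x + B * u
print(x)  # should end up near [1.0, 0.0]
```

Note the limits the post lists are visible even here: the controller is only as good as the hand-written A and B, and it re-solves a search problem at every single tick.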
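The latent-action idea can also be made concrete: instead of predicting raw frames or raw per-step commands, compress short action chunks into compact codes. Below is a minimal autoencoder sketch in PyTorch; the chunk length, DoF count, and latent size are arbitrary numbers chosen only to illustrate the shape of the idea, not taken from any production system.

```python
import torch
import torch.nn as nn

# Compress short action chunks (16 steps x 7 DoF) into 32-dim latent codes.
CHUNK, DOF, LATENT = 16, 7, 32

encoder = nn.Sequential(nn.Flatten(), nn.Linear(CHUNK * DOF, 128), nn.ReLU(),
                        nn.Linear(128, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                        nn.Linear(128, CHUNK * DOF))

actions = torch.randn(64, CHUNK, DOF)          # a batch of demonstration chunks
z = encoder(actions)                           # compact "what to do" codes
recon = decoder(z).view(64, CHUNK, DOF)        # back to raw motor commands
loss = nn.functional.mse_loss(recon, actions)  # autoencoding objective
loss.backward()                                # trains as usual
```

A policy trained to output z instead of raw commands is what makes the representation cheaper to predict and, as the post notes, transferable across robots.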
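Finally, the three-stage flow the post attributes to GR00T reads as a simple composition. The stubs below show only the data flow; none of these functions, names, or shapes are NVIDIA's actual API, and in the real system each stage is a large learned model.

```python
import numpy as np

def interpret_scene(image: np.ndarray, instruction: str) -> np.ndarray:
    """Stage 1 stand-in: a vision-language model turns pixels plus a
    text goal into a scene embedding. Here, just a placeholder vector."""
    return np.zeros(512)

def predict_latent_actions(scene: np.ndarray, steps: int = 16) -> np.ndarray:
    """Stage 2 stand-in: a diffusion model denoises toward a short
    sequence of latent actions conditioned on the scene."""
    return np.zeros((steps, 32))

def decode_to_motors(latents: np.ndarray) -> np.ndarray:
    """Stage 3 stand-in: a lightweight decoder maps latents to
    per-step motor commands."""
    return np.zeros((latents.shape[0], 7))

def act(image: np.ndarray, instruction: str) -> np.ndarray:
    scene = interpret_scene(image, instruction)  # 1⃣ understand
    latents = predict_latent_actions(scene)      # 2⃣ plan in latent space
    return decode_to_motors(latents)             # 3⃣ execute
```

The design point, per the post: planning happens in compact latent space, and only the lightweight final decoder touches raw motor commands.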

