Robora

96 posts

@UseRobora

First in Modular Robotics https://t.co/EsSbOYVGUV

Singapore · Joined July 2025
2 Following · 3.2K Followers
Robora @UseRobora
Building the Foundation for Real-World 3D Capture

We’re making strong progress on Robora’s next step: enabling people to easily capture real-world 3D data using affordable hardware.

1. Smarter Camera Setup
We’re finalizing a simple multi-camera system that works together with a depth sensor. This setup allows anyone to record high-quality, multi-angle scenes without expensive gear, making 3D data collection more accessible.

2. Hybrid Processing Pipeline
Captured footage is lightly processed on the device (to blur faces, remove license plates, and protect privacy) before being uploaded. The heavy lifting (turning that footage into detailed 3D environments) happens on powerful cloud servers using advanced reconstruction methods. This design keeps the hardware simple while still producing professional-grade visual results.

3. Data Protection & Transparency
We’re also adding a compliance layer to ensure all collected data meets privacy and legal standards. Contributors will know exactly how their data is used, how long it’s stored, and what rights or royalties they hold over it.

Why This Matters
This setup lays the groundwork for Robora’s ecosystem, where creators, developers, and robotics teams can capture and share real-world 3D data safely and efficiently. It’s a major step toward large-scale, decentralized 3D scene generation.

What’s Next
We’ll begin real-world testing of the capture kit and cloud reconstruction pipeline, fine-tuning the process before opening it up to early contributors.
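A minimal sketch of the kind of on-device pre-processing described in step 2, using OpenCV's bundled Haar cascade to blur detected faces before a frame is queued for upload. The detector choice, blur strength, and file names are illustrative assumptions, not Robora's actual implementation (which would also need to handle license plates).

```python
# Illustrative on-device privacy filter: blur detected faces before upload.
import cv2

# Haar cascade shipped with OpenCV; a production pipeline would likely use a
# stronger detector and extend this to license plates as well.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame):
    """Return a copy of the frame with detected faces Gaussian-blurred."""
    out = frame.copy()
    gray = cv2.cvtColor(out, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return out

# Example usage before handing the image to the uploader (paths are hypothetical):
# frame = cv2.imread("capture_0001.jpg")
# cv2.imwrite("capture_0001_anon.jpg", anonymize_frame(frame))
```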
25 replies · 27 reposts · 81 likes · 8.1K views

Robora @UseRobora
We’ve officially decided to keep the buy tax at 0% indefinitely to support our community and encourage continued growth. The sell tax remains at 4% as before, helping sustain the project and its future development.
20 replies · 26 reposts · 88 likes · 7.2K views

Robora @UseRobora
Buy and sell tax is now 0%. This decision follows detailed financial modeling confirming our ability to sustain operations. New revenue streams are being developed to ensure long-term stability. Removing the tax aligns incentives across holders, traders and partners as we move toward a more organic, utility-driven growth model.
42 replies · 31 reposts · 119 likes · 9.1K views

Robora @UseRobora
1. Objective
This experiment was designed as a proof of concept to show that our VLA model (Pi0) can be successfully fine-tuned for robotic tasks and that our training pipeline, including Weights & Biases (W&B) tracking, works as intended. This integration enables comprehensive tracking of training metrics, hyperparameters, and system utilization, helping ensure better reproducibility and optimization in future fine-tuning runs.

2. Why This Matters
This run proved that:
- The Pi0 VLA model learns properly on real robotic data.
- W&B analytics integration works, giving us full visibility into loss curves, gradients, GPU usage, and training behavior.
- We now have a baseline VLA fine-tuning setup that can be repeated and improved.

For more technical information, please read on👇

A. Experiment Setup
A fine-tuning session was conducted on the Pi0 model, known for its strong embodiment and visuomotor grounding capabilities. The training used the SO-ARM101 dataset and was executed on an NVIDIA H200 GPU.

Key configuration details:
- Model: Pi0 - PaliGemma backbone (3B) + Gemma-based action expert (300M)
- Dataset: SO-ARM101 (robotic manipulation tasks) for the POC
- Training Steps: 20,000
- Peak VRAM Utilization: around 35 GiB
- Optimizer: AdamW
- Learning Rate Scheduler: CosineAnnealing
- Framework: Robora VLA fine-tuning pipeline

B. Parameter-Efficient Fine-Tuning (PEFT) Integration
In addition to the baseline run, follow-up experiments are planned using PEFT techniques such as LoRA and QLoRA. These will evaluate trade-offs between VRAM efficiency and fine-tuning performance. The goal is to establish optimal configurations for efficient adaptation of embodied vision-language-action (VLA) models under limited compute conditions.

C. Observations and Analytics
The W&B analytics provided detailed visibility into multiple training aspects, including:
- Loss convergence across 20K steps
- Gradient stability and magnitude distribution
- Learning rate dynamics under the cosine annealing LR scheduler wrapper
- GPU utilization and memory efficiency
- Optimizer–scheduler interaction effects on overall training performance

These analytics will guide the selection of optimal optimizer–scheduler pairs and hyperparameter configurations for upcoming fine-tuning cycles.

D. Conclusion and Next Steps
This session successfully validated W&B analytics integration and established a strong baseline for Pi0 fine-tuning on the SO-ARM101 dataset as a proof of concept. Once our sim-data pipeline is ready, we will be able to run more robust fine-tuning sessions across different robot morphologies and tasks. The next phase will include comparative evaluations using LoRA and QLoRA to analyze efficiency and performance trade-offs. Future experiments will also track inference latency, task completion accuracy, and embodied control performance metrics.
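A minimal sketch of the training setup named above (AdamW, cosine annealing over 20k steps, W&B logging). The `model.compute_loss` call, project name, learning rate, and data loader are placeholders; this is not the actual Robora pipeline code.

```python
# Sketch of a W&B-tracked fine-tuning loop with AdamW + cosine annealing.
import torch
import wandb

def finetune(model, data_loader, total_steps=20_000, lr=2e-5, device="cuda"):
    run = wandb.init(project="pi0-finetune", config={"steps": total_steps, "lr": lr})
    model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps)

    step = 0
    while step < total_steps:
        for batch in data_loader:
            loss = model.compute_loss(batch)  # assumed policy API returning a scalar loss
            optimizer.zero_grad()
            loss.backward()
            grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
            scheduler.step()

            # Log the same quantities discussed above: loss, gradients, LR schedule.
            wandb.log({"loss": loss.item(),
                       "grad_norm": grad_norm.item(),
                       "lr": scheduler.get_last_lr()[0]}, step=step)
            step += 1
            if step >= total_steps:
                break
    run.finish()
```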
11 replies · 22 reposts · 87 likes · 5.1K views

Robora @UseRobora
We’re closing another strong month at Robora, one filled with real progress, new collaborations and innovation. To everyone who’s been part of this journey, thank you. Your support drives the mission of building a verifiable, open future for robotics. Together, we’re showing how far a community can go when driven by a shared mission. Here’s a look at what we’ve been working on this October, in our latest Medium article👇 robora.medium.com/monthly-recap-…
16 replies · 23 reposts · 80 likes · 5.3K views

Robora @UseRobora
VLA Fine-tuning Module Update

A. Fine-Tuning Pipeline Enhancements
- Added default configurations and state management for multiple optimizers, including SGD, SGD with Momentum, Adam, and AdamW, enabling end users to experiment with different optimization strategies. github.com/RoboraDev/VLA_…
- Integrated several learning rate scheduler wrappers (StepLR, CosineAnnealing, LinearDecay, and ExponentialDecay), allowing seamless selection and comparison through W&B for fine-tuning analytics. github.com/RoboraDev/VLA_…
- Default implementations for Pi0, Pi0.5, and SmolVLA support a maximum action and observation dimension of 32. For high-complexity robotic agents such as humanoids, the action encoder, state encoder, and action decoder feature dimensions will need to be scaled accordingly. This is straightforward (roughly three lines of PyTorch each) but has to be implemented from scratch: for each model, the dimensions will be specified in config.py, and the corresponding WithExpert.py file will inherit them. github.com/RoboraDev/VLA_…

B. Pi0 Policy Implementation
- Implemented the Pi0 policy architecture in PyTorch, referencing the Physical Intelligence OpenPI repository for core design principles and configurations.
- Implemented PI0Config; YAML and JSON support for custom fine-tuning configs will be added. github.com/RoboraDev/VLA_…

C. VLA Utility Helper Functions
- get_device_info, torch_device, device_name for accelerators
- JSON serialize and deserialize functions to store optimizer state on disk
- Parameter utility helpers: get device, dtype, output shape, etc. github.com/RoboraDev/VLA_…

Plan for Tomorrow:
- Support for SmolVLA and Pi0.5 will be implemented next, as these architectures share a common Vision-Language Model (VLM) architecture with minor variations in module structure and feature dimensions.
- LeRobot dataset framework integration for dataset management; an online dataset will be integrated as a proof of concept.
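A sketch of what a config-driven optimizer/scheduler selector for the options listed above could look like, mapping each name onto its closest torch.optim equivalent. The key names, hyperparameter defaults, and the mapping of LinearDecay/ExponentialDecay onto LinearLR/ExponentialLR are assumptions; the actual wrapper classes in the VLA repo may differ.

```python
# Illustrative optimizer/scheduler registry (not the repo's actual wrappers).
import torch

OPTIMIZERS = {
    "sgd":          lambda p, lr: torch.optim.SGD(p, lr=lr),
    "sgd_momentum": lambda p, lr: torch.optim.SGD(p, lr=lr, momentum=0.9),
    "adam":         lambda p, lr: torch.optim.Adam(p, lr=lr),
    "adamw":        lambda p, lr: torch.optim.AdamW(p, lr=lr, weight_decay=0.01),
}

SCHEDULERS = {
    "steplr":            lambda o, s: torch.optim.lr_scheduler.StepLR(o, step_size=s // 4, gamma=0.5),
    "cosine_annealing":  lambda o, s: torch.optim.lr_scheduler.CosineAnnealingLR(o, T_max=s),
    "linear_decay":      lambda o, s: torch.optim.lr_scheduler.LinearLR(o, start_factor=1.0, end_factor=0.1, total_iters=s),
    "exponential_decay": lambda o, s: torch.optim.lr_scheduler.ExponentialLR(o, gamma=0.9995),
}

def build_optim(params, opt="adamw", sched="cosine_annealing", lr=1e-4, total_steps=20_000):
    """Build an (optimizer, scheduler) pair from string names, as a fine-tuning config might."""
    optimizer = OPTIMIZERS[opt](params, lr)
    scheduler = SCHEDULERS[sched](optimizer, total_steps)
    return optimizer, scheduler

# Usage: optimizer, scheduler = build_optim(model.parameters(), opt="sgd_momentum", sched="steplr")
```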
25 replies · 33 reposts · 99 likes · 4.6K views

Robora @UseRobora
This week, we’ll begin pilot deployments of the capture pipeline within our simulation environment, an intermediary step before scaling to real-world data collection. These tests will validate the full end-to-end flow: from visual capture and 3D reconstruction, to data upload, fine-tuning and behavioral analysis of the VLA model.

The purpose is clear: to evaluate how well our models adapt when exposed to semi-synthetic, simulation-anchored data that mimics real-world complexity. By introducing variability in lighting, geometry and object dynamics, we can measure the model’s domain adaptation efficiency and its capacity to generalize across unseen conditions, which is a key metric for narrowing the sim-to-real gap.

We’ll also launch the first fine-tuning experiments through the VLA SDK, processing captured scenes and quantifying shifts in model behavior compared to purely simulated inputs. This will help us refine both our data pipeline and our fine-tuning methodology before scaling up to large-scale physical capture campaigns.

In parallel, work begins on the Robora Data Policy & Contributor Incentive Layer, designed to credit and reward those who supply valuable visual data. This framework will connect to Robora’s on-chain architecture, establishing verifiable proof-of-contribution and preparing for tokenized incentive mechanisms.

The ultimate goal: to transform data collection from a passive process into an active, community-driven ecosystem where every user helps teach embodied intelligence to see, understand and evolve.
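An illustrative sketch of the kind of per-episode scene variability mentioned above (lighting, geometry, object dynamics). The parameter names and ranges are assumptions for the sake of example, not Robora's actual randomization configuration.

```python
# Illustrative domain-randomization sampler: draw new scene parameters each episode.
import random
from dataclasses import dataclass

@dataclass
class SceneRandomization:
    light_intensity: float    # relative brightness multiplier
    light_azimuth_deg: float  # light direction around the scene
    object_scale: float       # uniform geometry scaling
    object_mass_kg: float     # dynamics variation
    friction: float           # surface friction coefficient

def sample_scene(rng: random.Random) -> SceneRandomization:
    return SceneRandomization(
        light_intensity=rng.uniform(0.5, 1.5),
        light_azimuth_deg=rng.uniform(0.0, 360.0),
        object_scale=rng.uniform(0.8, 1.2),
        object_mass_kg=rng.uniform(0.1, 2.0),
        friction=rng.uniform(0.3, 1.0),
    )

# Each capture / fine-tuning episode would draw a fresh sample:
# scene = sample_scene(random.Random(42))
```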
25 replies · 26 reposts · 94 likes · 5.2K views

Robora @UseRobora
It’s great to see Larry establishing the foundational partnerships that will enable Robora to thrive as a leading platform for embodied AI. These collaborations are not only laying the groundwork for future innovation but will also bring significant recognition and expanded hardware capabilities to support Robora’s long-term growth and impact. Stay tuned for more confirmed partnerships coming soon.
Larry@purewallet @LarryPureLabs

Dubai a familiar friend @BlLife_Forum for the next four days! NS Labs & @PureWalletPlus $RBR $JOS! Enjoying the weather and the networking and partnerships to come!

17 replies · 24 reposts · 85 likes · 6.8K views

Robora @UseRobora
Highlights this week:

• Lemorele P300 (R) Integration
The Lemorele P300 (R) officially joins Robora’s hardware suite, turning every user into a real-world data contributor. It enables real-time, high-definition video capture from any camera (robot, drone, or handheld) and wirelessly streams it to devices running the Robora VLA interface or data capture app. These feeds are then uploaded directly into Robora’s cloud or local VLA nodes, powering real-world model training and fine-tuning.

• Smart Contract Audits Completed
We finalized our DApp and successfully completed audits, through @SolidProof_io, of the smart contracts that will be deployed. This solidifies the foundation for our upcoming releases.

• VLA Fine-Tuning SDK Pipeline
Implemented the full code pipeline of the VLA Fine-Tuning SDK, connecting live capture, processing, and model fine-tuning into a single, automated flow.

• 3D Reconstruction & Scene Understanding
Advanced work on building complete 3D scenes from raw visual data, reconstructing geometry, texture, and spatial layout for real-world mapping and model sharing. On top of that, scene-understanding algorithms now analyze these reconstructions for object segmentation, semantic labeling, pose estimation, and environment profiling, creating structured data that robots can actually learn from. This dual process (reconstruction + understanding) is grounded in the latest robotics research showing that blending real-scene 3D data with simulation sharply reduces the sim-to-real performance gap, enabling more robust perception and adaptability in embodied AI systems.
25 replies · 26 reposts · 96 likes · 3.9K views

Robora reposted
Larry@purewallet @LarryPureLabs
There will be signs! Big news coming from @UseRobora $RBR ! Top research facility in APAC! Real hardware ✅ Real Research✅ Real Tangibles ✅ Stay Tuned!!! 👀
16 replies · 10 reposts · 49 likes · 2.8K views

Robora @UseRobora
Real-world data is the missing link between simulation and physical intelligence. With the Lemorele P300 (R), Robora is closing that gap, turning every user into a contributor to embodied AI evolution. It enables real-time, high-definition video capture from any camera, whether it’s mounted on a robot prototype, a drone, or a handheld camera. This visual feed is transmitted wirelessly and losslessly to a connected device (tablet, phone, or PC) that runs the Robora VLA interface or data capture app. The device then streams or uploads these feeds directly into Robora’s cloud or local VLA processing node. By collecting data from diverse environments and use cases, Robora gains the foundation to train and fine-tune its models on real-world sensory information, moving beyond purely simulated data. This approach is key to reducing the sim-to-real gap, the performance difference between robots trained in simulation and those operating in complex physical environments.
37 replies · 35 reposts · 152 likes · 13.6K views

Robora @UseRobora
This week, Robora made solid progress on multiple fronts as we continued to refine our robotics stack, improve performance and move closer to real-world deployment. Our focus has been on improving model adaptability, advancing physical intelligence and strengthening the hardware foundation that supports our platform.

We introduced Ege, an industrial designer from Istanbul who plays a key role in shaping Robora’s hardware systems. His background in aviation and product design helps bridge creativity and engineering, designing structural systems that bring our robots to life both in simulation and in real-world environments.

Significant strides were also made in hardware integration. We successfully addressed three out of five major challenges for vision-to-prompt integration, including signal communication, wireless connectivity and onboard SDK support. These improvements bring us closer to a fully functional hardware layer that connects directly with our software stack.

On the development side, two key components of our Physical AI architecture saw major updates. The first is the VLA Fine-Tuning and Adaptation Pipeline, which enables our models to support different robotic hardware configurations while preserving their reasoning capabilities. Using efficient fine-tuning techniques like QLoRA, we can retrain only the action layer, allowing a single VLA model to adapt to new modules such as grippers, arms or mobility units.

In parallel, we continued building a PyBullet-powered simulation environment for reinforcement learning. This platform trains low-level control policies for locomotion, balance and stability in complex and dynamic conditions, using advanced algorithms like PPO and SAC.

Our approach relies on a clear separation of control: the VLA model serves as the high-level planner, understanding visual input and task context, while RL-based controllers handle precise, real-time execution. This layered design allows our robots to combine intelligence with physical robustness and adapt to a wide range of environments and mechanical configurations.
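A minimal sketch of what "retrain only the action layer" could look like in plain PyTorch: freeze the pretrained backbone and leave only the action head trainable. The module names (`action_head`) are illustrative, and a QLoRA variant would additionally load the frozen backbone in 4-bit precision and attach low-rank adapters; this is not Robora's actual SDK code.

```python
# Illustrative action-head-only adaptation: freeze everything but the action decoder.
import torch

def prepare_for_action_head_finetuning(model: torch.nn.Module, lr: float = 1e-4):
    # Freeze the whole model first so the VLM backbone keeps its pretrained
    # multimodal representations...
    for param in model.parameters():
        param.requires_grad = False
    # ...then unfreeze only the action head (hypothetical attribute name).
    for param in model.action_head.parameters():
        param.requires_grad = True

    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=lr)
    return optimizer
```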
30 replies · 34 reposts · 118 likes · 11.3K views

Robora @UseRobora
We’re actively working on several fronts to accelerate development and ensure maximum efficiency, with the goal of reaching the market quickly. This week, we focused on the following areas:

1. VLA Fine-Tuning & Adaptation Pipeline
We are working on our VLA SDK to support open-weight models, including SmolVLA and Pi0 for now, with GrootN1.5 planned for the near future, each offering different trade-offs between model size, inference speed, and generalization. Our current focus is on implementing action-head-only fine-tuning using QLoRA (where required), a technique that allows efficient training on consumer-grade GPUs while preserving the pretrained vision-language backbone. This approach enables us to remap the model’s action space for different robotic configurations, essentially allowing a single VLA to learn how to control new hardware modules (arms, grippers, mobility units, etc.) without degrading its multimodal reasoning ability. By isolating adaptation to the action head, we maintain the core representation and generalization power of the model while making it contextually aware of our new robotic action space.

2. Reinforcement Learning for Low-Level Control Policy
In parallel to the VLA pipeline, we are setting up a PyBullet-based physics simulation environment designed for large-scale reinforcement learning experiments. This environment trains neural control policies using Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC), two state-of-the-art algorithms for continuous control. These RL policies are being trained to handle locomotion, balance and stability under dynamically changing environments and disturbances, leveraging parallel simulation for faster convergence and robustness.

3. Hierarchical Separation of Control
The key architectural principle here is hierarchical separation of control: the VLA acts as the top-level planner, interpreting natural language commands, visual input, and task context, while the RL policy serves as the low-level actuator, executing smooth, stable movements in real time at higher action frequencies. This separation allows the system to combine semantic intelligence with physical resilience, making our robots adaptable to diverse terrains, mechanical modules and environmental uncertainties, far beyond what conventional PID or trajectory-based controllers can achieve.

Together, these two components form the backbone of our Physical AI stack, a system designed to reason, adapt, and act seamlessly across our robotics stack.
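A sketch of the hierarchical separation of control described in point 3: a VLA planner proposes sub-goals at a low rate while an RL policy issues motor commands at a higher control rate. The `vla_planner`, `rl_policy`, and `env` interfaces are placeholders for whatever Robora's stack actually exposes.

```python
# Illustrative hierarchical control loop: slow VLA planner + fast RL policy.
def run_episode(vla_planner, rl_policy, env, instruction,
                planner_period=20, max_steps=1000):
    obs = env.reset()
    subgoal = None
    for step in range(max_steps):
        # High-level planner: re-plan only every `planner_period` low-level steps.
        if step % planner_period == 0:
            subgoal = vla_planner.plan(image=obs["image"],
                                       state=obs["state"],
                                       instruction=instruction)
        # Low-level RL policy: dense, high-frequency motor commands toward the sub-goal.
        action = rl_policy.act(obs["state"], subgoal)
        obs, reward, done, info = env.step(action)
        if done:
            break
    return info
```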
37 replies · 35 reposts · 145 likes · 14.8K views

Robora reposted
Larry@purewallet @LarryPureLabs
$RBR is providing physical hardware integration that is usable and available now! We have checked off 3/5 main challenges for vision-to-prompt integration via signal, wireless integration, and onboard SDK! @UseRobora
14 replies · 12 reposts · 55 likes · 10.6K views

Robora @UseRobora
Last week, we introduced Quy as part of our ongoing effort to share more about the people behind Robora, as we uphold transparency in how we work and who we are. This week, we’re excited to continue that journey by introducing another key team member: Ege.

Meet Ege, an industrial designer from Istanbul shaping the hardware behind Robora’s robotics platform. Starting his journey in aviation, he explored how shape, balance, and materials can push the limits of movement and efficiency. At Robora, he combines creativity and engineering to design the structural systems that bring our robots to life both in our simulation environment and in the real world.

In this video, he shares more about himself and his role within Robora.
31 replies · 31 reposts · 150 likes · 11.4K views

Robora @UseRobora
This week, Robora continued to evolve from an ambitious idea into a functioning ecosystem. Our focus was on building out the core technology, improving tools for developers and laying the groundwork for the next stage of growth. Step by step, the project is becoming a foundation for Physical AI, where robotics, intelligence and blockchain connect into a single verifiable system.

We shared a clear look at what’s coming next as Robora enters a pivotal phase. The updated roadmap highlights several major goals ahead, including the release of Whitepaper V2, the launch of our Dapp and 3D Builder, new technical videos and deeper development of key modules. All of this is part of our mission to create a participatory robotics ecosystem where actions, data and contributions are transparent and verifiable on-chain.

Transparency also remained a priority this week as we introduced Quy, one of the engineers behind Robora’s Vision Module. With strong experience in computer vision, multi-object tracking and 3D reconstruction, he plays an important role in building the perception layer that allows robots to understand and interact with their surroundings.

We also presented Robora Sim, a new simulation environment built on PyBullet that helps robots learn and adapt before they’re deployed in the real world. By combining VLA-based planning with motion control, it supports imitation and reinforcement learning, domain randomization and synthetic data generation, all aimed at closing the gap between simulation and reality.
32 replies · 38 reposts · 149 likes · 14.1K views

Robora reposted
Larry@purewallet @LarryPureLabs
East Meets West: Robotics! Follow the money!
By @LarryHashpowerX
Powered by @UseRobora

In September 2025, the humanoid robot world exploded with liquidity: over 21 financing deals racked up more than $1.7 billion (10 billion yuan) in disclosed funding, smashing the single-month record according to the Humanoid Robot Scene Application Alliance. It's wild how capital is piling into the big dogs, speeding up industry shake-ups as tech and commercial momentum hit key turning points. Case in point: US startup Figure AI snagged over $1 billion (about 7.1 billion yuan) all by itself, cornering 70% of the month's total. That underscores how investors are laser-focused on top players.

Here's a quick rundown of the standout deals:

- Figure AI wrapped up a massive Series C on Sept 16, pulling in >$1B led by Parkway Venture Capital. Big names like Nvidia, Intel Ventures, LG Tech, Qualcomm, Salesforce, T-Mobile, Alien Ventures, Tamarack Global, Brookfield, and Macquarie jumped in. Post-round, they're valued at $39B (~277.4B yuan). (Check out: Figure breaks records with $1B raise, nearing $40B val!)

- Variable Robot scored nearly 1B yuan in an A+ round on Sept 8, led by Alibaba Cloud and Guoke Investment, with CDB Financial, Sequoia China, and Strategy Capital following. Old backers like United Investments, Lenovo Star, and Jun Lian Capital doubled down. Cool note: Alibaba Cloud's first dip into embodied intelligence. Cash goes to model training and hardware tweaks. (More: Alibaba Cloud leads; Variable grabs nearly 1B yuan A+!)

- Dyna Robotics, that Silicon Valley gem, closed $120M on Sept 16, led by Robostrategy, CRV, and First Round Capital. Investors included Salesforce Ventures, Nvidia, Amazon, Samsung, and LG Technology Ventures. Now valued over $600M (~4.27B yuan). (Deets: Nvidia, Amazon, and Samsung bet big; upstart hits $600M val!)

- One Star Robot nailed a seed round worth hundreds of millions of yuan, blending investors like BV Baidu Venture Capital and Tongchuang Weiye (market pros), Galaxy General and Landai Technology (industry folks), plus China New Group. Founded by Li Xingxing, with Pan Yunbin as chairman.

- Starborn Intelligent Robot raised 200M yuan in an angel round from Zhongke Chuangxing, Hillhouse, Yuanhe Origin, Yuansheng Venture Capital, and industrial players like Zhiyuan Robot, Solenoid Capital, SAIC Capital, and Zhongli Shiqiao. Born from Beijing Zhiyuan Research Institute (launched Aug 1, 2025), they're all about multi-modal spatial smarts and building a universal embodied brain. Funds fuel R&D and real-world rollouts.

- First Shape Technology (aka AheadForm) finished a multi-billion yuan round on Sept 29, their third this year! Led by Ant Group, co-invested by Jinqiu Fund, with Heavy Snow Capital, Honghui Fund, and Pengcheng Vision Fund in the mix. Oldies like Shun Capital, China Merchants Venture Capital, and Taihill oversubscribed; Deep Blue Capital advised. Started in 2024, they bridge human-machine interaction gaps. Money for empathy model upgrades and app expansions.

- Lewin Intelligent Technology (Enjoy Tech) grabbed 200M yuan in an angel++ round on Sept 28, their third in nine months, pushing total angel funding near 500M yuan. Zhongding Capital led, IDG Capital followed. Consumer-focused embodied intel outfit; funds hit core parts, robot bodies, motion control, and bionic model speed-ups for mass rollout.

This whole funding frenzy ties right into bigger trends, like crypto meeting robotics.
Take @UseRobora—a modular robotics platform with on-chain ownership and $RBR token—it's surfing the vertical wave as a bridge builder. Its market cap jumped from $3.87M to $13.6M in just over a week, fueled by sector excitement from monster raises in capital. With so much regained momentum flowing into Robora's token ecosystem, $RBR's uptrend gives everyday people liquid access to the trillion-dollar robotics boom. DYOR!
8 replies · 15 reposts · 44 likes · 5.5K views

Robora @UseRobora
Robora Sim: A PyBullet-Powered Environment for Learning Robotic Physical Intelligence

We are currently building the Robora simulation environment for sim-based learning, leveraging PyBullet, an industry-standard physics engine widely used in AI-driven robotics research and development. The environment is optimized with GPU-accelerated learning algorithms, enabling high-speed imitation learning and reinforcement learning within a safe and controlled virtual setup before anything ships out to the real world.

This simulation platform allows our models to learn, adapt, and generalize across different robot morphologies, terrain types and task objectives, all before deployment to the real world. At its core, the system combines a VLA-powered high-level planner with low-level motion control algorithms, working cohesively to produce emergent, physically intelligent behaviors. This synergy between simulation, learning, and real-world transfer marks a major step forward in our pursuit of adaptive and intelligent robotic systems.

Through advanced domain randomization and synthetic data generation, the Robora Simulation Environment ensures that policies trained in simulation transfer effectively to real-world robots, minimizing the sim-to-real gap. Moreover, users will be able to test and integrate their own hardware kits within selected simulation environments in the Robora Dapp, ensuring seamless compatibility and safer real-world implementation.
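A minimal PyBullet setup illustrating the kind of headless, physics-based training environment described above. The robot URDF (PyBullet's bundled r2d2 model), time step, and rollout structure are placeholders, not the actual Robora Sim configuration.

```python
# Illustrative headless PyBullet environment for RL-style rollouts.
import pybullet as p
import pybullet_data

def make_sim(gui=False, time_step=1.0 / 240.0):
    client = p.connect(p.GUI if gui else p.DIRECT)  # DIRECT = headless, suits cluster training
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    p.setTimeStep(time_step)
    p.loadURDF("plane.urdf")
    robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])  # stand-in morphology
    return client, robot

def rollout(steps=1000):
    client, robot = make_sim()
    for _ in range(steps):
        # An RL policy (e.g. PPO or SAC) would set joint commands here via
        # p.setJointMotorControlArray before stepping the physics.
        p.stepSimulation()
    p.disconnect(client)
```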
50 replies · 59 reposts · 171 likes · 23.4K views

Robora @UseRobora
We promised more transparency. Now it’s time to meet another brilliant mind behind Robora.

Quy has been contributing to Robora since 2025, bringing his expertise in computer vision, AI, and robotics to the development of the Vision Module, a core component that powers the platform’s intelligent visual perception. With a Master’s degree in Artificial Intelligence from Seoul National University of Science and Technology, South Korea, he combines strong academic foundations with hands-on engineering experience.

At Robora, he focuses on integrating object detection, tracking, and scene understanding into the robotics framework, drawing on his deep experience in multi-object tracking, 3D reconstruction, and camera calibration to design systems that are both accurate and efficient. He also works closely with other engineers to ensure the vision infrastructure is scalable and aligned with Robora’s broader technical goals.

Before joining Robora, Quy worked as an AI Engineer at EM&AI, where he developed speech-to-text, named entity recognition and sentiment analysis systems. Additionally, he served as a researcher at MINT Lab (mint-lab.github.io), publishing papers in computer vision and AI. His research on camera calibration, 3D motion tracking, and edge AI has been recognized at major conferences and journals. MINT Lab is a research group focused on perception, intelligence, and actions for mobile agents and devices such as robots, cars, drones, and smartphones. It belongs to the Computer Science and Engineering Department at Seoul National University of Science and Technology (SEOULTECH).

Quy’s publications include:
- Cong Quy Nguyen, Sunglok Choi, "MINT Camera Calibration Toolbox," Korea Robotics Society Annual Conference (KRoC), 2025
- Cong Quy Nguyen, Sunglok Choi, "Generalized Camera Calibration: Camera Model Selection and Calibration with Effective Image Sampling," IEEE Sensors Journal, Vol. 25, No. 15, 2025

GitHub: github.com/ncquy

With a strong background in machine learning, signal processing, and decentralized AI, Quy continues to bridge the gap between AI research and robotics applications, helping advance Robora’s mission to build intelligent, autonomous systems.
41 replies · 43 reposts · 173 likes · 11K views

Robora @UseRobora
We’re entering a defining phase at $RBR. Robora is evolving into a verifiable robotics ecosystem, an open platform where anyone can contribute, build, and earn. By merging AI, robotics, and blockchain, we’re creating a new layer of Physical-AI infrastructure where every robot’s action, dataset, and contribution is transparent, traceable, and rewarded on-chain.

Our vision is simple: turn the robotics lifecycle into a participatory economy.

Here's what's ahead in the coming weeks:
- Whitepaper V2 – A transparent blueprint of our next development stage.
- Tech Videos – Showcasing key elements of Robora's Framework.
- Partnerships & Onboardings – Expanding our global builder and research ecosystem.
- Full transparency from the team – Including detailed profiles and personal video introductions from core members.
- Dapp Launch + 3D Builder – Letting anyone visualize, design, and interact with robots directly on-chain.
- Further development on the Modules – Each module functions independently yet connects into a unified verifiable framework.
- 3D Reconstruction Toolkit – A collaborative project that lets anyone contribute to the future of real-world simulation environments and earn royalties in return.
68 replies · 43 reposts · 183 likes · 17.5K views