Ryan Mclaughlin

2.3K posts

@vtecthis01

Grand Prairie, TX · Joined October 2021
300 Following · 113 Followers
Ryan Mclaughlin reposted
Robora @UseRobora
We’ve officially decided to keep the buy tax at 0% indefinitely to support our community and encourage continued growth. The sell tax remains at 4% as normal, helping sustain the project and its future developments.
[media] · 20 replies · 24 reposts · 90 likes · 7.6K views
Ryan Mclaughlin reposted
Robora @UseRobora
Buy and sell taxes are now 0%. This decision follows detailed financial modeling confirming our ability to sustain operations. New revenue streams are being developed to ensure long-term stability. Removing the tax aligns incentives across holders, traders and partners as we move toward a more organic, utility-driven growth model.
[media] · 42 replies · 30 reposts · 121 likes · 9.4K views
Ryan Mclaughlin reposted
Robora @UseRobora
We’re closing another strong month at Robora, one filled with real progress, new collaborations and innovation. To everyone who’s been part of this journey, thank you. Your support drives the mission of building a verifiable, open future for robotics. Together, we’re showing how far a community can go when driven by a shared mission. Here’s a look at what we’ve been working on this October, in our latest Medium article👇 robora.medium.com/monthly-recap-…
[media] · 16 replies · 23 reposts · 81 likes · 5.4K views
Ryan Mclaughlin reposted
Robora @UseRobora
VLA Fine-tuning Module Update

A. Fine-Tuning Pipeline Enhancements
- Added default configurations and state management for multiple optimizers, including SGD, SGD with Momentum, Adam, and AdamW, enabling end users to experiment with different optimization strategies. github.com/RoboraDev/VLA_…
- Integrated several learning-rate scheduler wrappers (StepLR, CosineAnnealing, LinearDecay, and ExponentialDecay), allowing seamless selection and comparison through Wandb for fine-tuning analytics. github.com/RoboraDev/VLA_…
- Default implementations for Pi0, Pi0.5, and SmolVLA support a maximum action and observation dimension of 32. For high-complexity robotic agents such as humanoids, the action encoder, state encoder, and action decoder feature dimensions need to be scaled accordingly. This is straightforward (about three lines of PyTorch per model) but has to be implemented from scratch; for each model, the values are specified inside config.py and inherited by the corresponding WithExpert.py file. github.com/RoboraDev/VLA_…

B. Pi0 Policy Implementation
- Implemented the Pi0 policy architecture in PyTorch, referencing the Physical Intelligence OpenPI repository for core design principles and configurations.
- Implemented PI0Config; YAML and JSON support for custom fine-tuning configs will also be added. github.com/RoboraDev/VLA_…

C. VLA Util Helper Functions
- get_device_info, torch_device, and device_name for accelerators.
- JSON serialize and deserialize functions to store optimizer state on disk.
- Parameter utility helpers: get device, dtype, output shape, etc. github.com/RoboraDev/VLA_…

Plan for tomorrow:
- Support for SmolVLA and Pi0.5 will be implemented next, as these architectures share a common Vision-Language Model (VLM) backbone with minor variations in module structure and feature dimensions.
- LeRobot dataset framework integration for dataset management; an online dataset will be integrated as a proof of concept.
[media] · 24 replies · 32 reposts · 99 likes · 4.7K views
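A minimal sketch of what a config-driven optimizer and scheduler setup like the one described above can look like in plain PyTorch. The function names and config keys here are illustrative assumptions, not code from the RoboraDev repository; scheduler choices map to PyTorch's built-in classes rather than the SDK's wrappers.

```python
# Minimal sketch (not the Robora SDK): build an optimizer + LR scheduler from
# a plain config dict, so different combinations can be swapped and compared.
import torch
from torch import nn
from torch.optim import SGD, Adam, AdamW
from torch.optim.lr_scheduler import StepLR, CosineAnnealingLR, LinearLR, ExponentialLR

def build_optimizer(params, cfg: dict) -> torch.optim.Optimizer:
    name = cfg.get("optimizer", "adamw").lower()
    lr = cfg.get("lr", 1e-4)
    if name == "sgd":
        return SGD(params, lr=lr)
    if name == "sgd_momentum":
        return SGD(params, lr=lr, momentum=cfg.get("momentum", 0.9))
    if name == "adam":
        return Adam(params, lr=lr)
    if name == "adamw":
        return AdamW(params, lr=lr, weight_decay=cfg.get("weight_decay", 0.01))
    raise ValueError(f"unknown optimizer: {name}")

def build_scheduler(optimizer, cfg: dict):
    name = cfg.get("scheduler", "cosine").lower()
    if name == "step":
        return StepLR(optimizer, step_size=cfg.get("step_size", 1000), gamma=0.1)
    if name == "cosine":
        return CosineAnnealingLR(optimizer, T_max=cfg.get("max_steps", 10_000))
    if name == "linear":
        return LinearLR(optimizer, start_factor=1.0, end_factor=0.0,
                        total_iters=cfg.get("max_steps", 10_000))
    if name == "exponential":
        return ExponentialLR(optimizer, gamma=cfg.get("gamma", 0.999))
    raise ValueError(f"unknown scheduler: {name}")

# Usage: the chosen optimizer/scheduler names and the per-step learning rate
# can then be logged (e.g. to Weights & Biases) for side-by-side comparison.
model = nn.Linear(32, 32)   # stand-in for an action head with dimension 32
opt = build_optimizer(model.parameters(), {"optimizer": "adamw", "lr": 3e-4})
sched = build_scheduler(opt, {"scheduler": "cosine", "max_steps": 10_000})
```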
Ryan Mclaughlin reposted
Robora @UseRobora
This week, we’ll begin pilot deployments of the capture pipeline within our simulation environment, an intermediary step before scaling to real-world data collection. These tests will validate the full end-to-end flow: from visual capture and 3D reconstruction, to data upload, fine-tuning and behavioral analysis of the VLA model.

The purpose is clear: to evaluate how well our models adapt when exposed to semi-synthetic, simulation-anchored data that mimics real-world complexity. By introducing variability in lighting, geometry and object dynamics, we can measure the model’s domain adaptation efficiency and its capacity to generalize across unseen conditions, which is a key metric for narrowing the sim-to-real gap.

We’ll also launch the first fine-tuning experiments through the VLA SDK, processing captured scenes and quantifying shifts in model behavior compared to purely simulated inputs. This will help us refine both our data pipeline and our fine-tuning methodology before scaling up to large-scale physical capture campaigns.

In parallel, work begins on the Robora Data Policy & Contributor Incentive Layer, designed to credit and reward those who supply valuable visual data. This framework will connect to Robora’s on-chain architecture, establishing verifiable proof-of-contribution and preparing for tokenized incentive mechanisms.

The ultimate goal: to transform data collection from a passive process into an active, community-driven ecosystem where every user helps teach embodied intelligence to see, understand and evolve.
[media] · 24 replies · 23 reposts · 93 likes · 5.3K views
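The post above describes varying lighting, geometry and object dynamics in simulation to measure domain adaptation. Below is a minimal sketch of what such domain randomization can look like in PyBullet; the parameter ranges, assets and helper name are illustrative assumptions, not Robora's actual capture pipeline.

```python
# Illustrative domain-randomization sketch in PyBullet (not Robora code).
# Each scene gets randomized lighting, object pose, scale and friction, so
# captured frames mimic the variability of semi-synthetic real-world data.
import random
import pybullet as p
import pybullet_data

def randomized_scene():
    p.resetSimulation()
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    p.loadURDF("plane.urdf")

    # Randomize object geometry/pose: position, yaw and global scale.
    pos = [random.uniform(-0.3, 0.3), random.uniform(-0.3, 0.3), 0.1]
    yaw = random.uniform(-3.14, 3.14)
    scale = random.uniform(0.8, 1.2)
    cube = p.loadURDF("cube_small.urdf", pos,
                      p.getQuaternionFromEuler([0, 0, yaw]),
                      globalScaling=scale)

    # Randomize dynamics so behavior does not overfit to one configuration.
    p.changeDynamics(cube, -1, lateralFriction=random.uniform(0.4, 1.0))

    # Randomize the light direction used when rendering camera images.
    light_dir = [random.uniform(-1, 1), random.uniform(-1, 1), 1]
    width, height, rgb, depth, seg = p.getCameraImage(
        224, 224, lightDirection=light_dir)
    return rgb, depth, seg

p.connect(p.DIRECT)          # headless; use p.GUI to visualize
for episode in range(5):
    rgb, depth, seg = randomized_scene()
```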
Ryan Mclaughlin reposted
Robora @UseRobora
Highlights this week:

• Lemorele P300 (R) Integration
The Lemorele P300 (R) officially joins Robora’s hardware suite, turning every user into a real-world data contributor. It enables real-time, high-definition video capture from any camera (robot, drone, or handheld) and wirelessly streams it to devices running the Robora VLA interface or data capture app. These feeds are then uploaded directly into Robora’s cloud or local VLA nodes, powering real-world model training and fine-tuning.

• Smart Contract Audits Completed
We finalized our DApp and completed audits, through @SolidProof_io, of the smart contracts that will be deployed. This solidifies the foundation for our upcoming releases.

• VLA Fine-Tuning SDK Pipeline
Implemented the full code pipeline of the VLA Fine-Tuning SDK, connecting live capture, processing, and model fine-tuning into a single, automated flow.

• 3D Reconstruction & Scene Understanding
Advanced work on building complete 3D scenes from raw visual data, reconstructing geometry, texture, and spatial layout for real-world mapping and model sharing. On top of that, scene-understanding algorithms now analyze these reconstructions for object segmentation, semantic labeling, pose estimation, and environment profiling, creating structured data that robots can actually learn from. This dual process (reconstruction + understanding) is grounded in the latest robotics research showing that blending real-scene 3D data with simulation sharply reduces the sim-to-real performance gap, enabling more robust perception and adaptability in embodied AI systems.
[media] · 24 replies · 26 reposts · 94 likes · 3.9K views
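As a rough illustration of the capture-and-upload flow in the first highlight above, here is a generic OpenCV sketch. The stream address, ingest endpoint and payload format are hypothetical placeholders, not the actual Robora data capture app or VLA node API.

```python
# Illustrative capture-and-upload loop (not the Robora data capture app).
# Reads frames from a network video stream (e.g. an RTSP feed from a camera
# transmitter) and posts JPEG-encoded frames to an ingestion endpoint.
import cv2
import requests

STREAM_URL = "rtsp://192.168.1.50:8554/live"        # placeholder stream address
UPLOAD_URL = "https://example.invalid/api/frames"   # placeholder ingest endpoint

cap = cv2.VideoCapture(STREAM_URL)
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Upload every 30th frame to keep bandwidth modest.
    if frame_idx % 30 == 0:
        ok_jpg, jpg = cv2.imencode(".jpg", frame)
        if ok_jpg:
            requests.post(UPLOAD_URL,
                          files={"frame": (f"{frame_idx}.jpg", jpg.tobytes(),
                                           "image/jpeg")})
    frame_idx += 1
cap.release()
```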
Ryan Mclaughlin reposted
Robora @UseRobora
This week, Robora made solid progress on multiple fronts as we continued to refine our robotics stack, improve performance and move closer to real-world deployment. Our focus has been on improving model adaptability, advancing physical intelligence and strengthening the hardware foundation that supports our platform.

We introduced Ege, an industrial designer from Istanbul who plays a key role in shaping Robora’s hardware systems. His background in aviation and product design helps bridge creativity and engineering, designing structural systems that bring our robots to life both in simulation and in real-world environments.

Significant strides were also made in hardware integration. We successfully addressed three out of five major challenges for vision-to-prompt integration, including signal communication, wireless connectivity and onboard SDK support. These improvements bring us closer to a fully functional hardware layer that connects directly with our software stack.

On the development side, two key components of our Physical AI architecture saw major updates. The first is the VLA Fine-Tuning and Adaptation Pipeline, which enables our models to support different robotic hardware configurations while preserving their reasoning capabilities. Using efficient fine-tuning techniques like QLoRA, we can retrain only the action layer, allowing a single VLA model to adapt to new modules such as grippers, arms or mobility units.

In parallel, we continued building a PyBullet-powered simulation environment for reinforcement learning. This platform trains low-level control policies for locomotion, balance and stability in complex and dynamic conditions, using advanced algorithms like PPO and SAC.

Our approach relies on a clear separation of control: the VLA model serves as the high-level planner, understanding visual input and task context, while RL-based controllers handle precise, real-time execution. This layered design allows our robots to combine intelligence with physical robustness and adapt to a wide range of environments and mechanical configurations.
[media] · 29 replies · 31 reposts · 100 likes · 11.3K views
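To make the "retrain only the action layer" idea above concrete, here is a minimal PyTorch sketch of freezing a pretrained backbone and training only a new action head. The module names and dimensions are illustrative; the actual SDK is described as applying QLoRA adapters where required, which this sketch does not show.

```python
# Minimal sketch of action-head-only adaptation (illustrative, not the Robora SDK).
# The pretrained VLA backbone is frozen; only a small new action head, remapping
# backbone features to the new hardware's action space, receives gradients.
import torch
from torch import nn

class ActionHead(nn.Module):
    """Maps backbone features to actions for a specific hardware module."""
    def __init__(self, feature_dim: int = 1024, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 256),
            nn.GELU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

# `backbone` stands in for a pretrained vision-language model; it is only a
# placeholder module here so the example runs end to end.
backbone = nn.Sequential(nn.Linear(512, 1024), nn.GELU())
for param in backbone.parameters():
    param.requires_grad = False          # keep multimodal reasoning intact

head = ActionHead(feature_dim=1024, action_dim=7)   # e.g. a 7-DoF arm
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)

# One illustrative training step on random tensors standing in for real data.
obs = torch.randn(8, 512)
target_actions = torch.randn(8, 7)
with torch.no_grad():
    features = backbone(obs)             # frozen forward pass
optimizer.zero_grad()
loss = nn.functional.mse_loss(head(features), target_actions)
loss.backward()
optimizer.step()
```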
Ryan Mclaughlin reposted
Robora @UseRobora
We’re actively working on several fronts to accelerate development and ensure maximum efficiency, with the goal of reaching the market quickly. This week, we focused on the following areas:

1. VLA Fine-Tuning & Adaptation Pipeline
We are working on our VLA SDK to support open-weight models, including SmolVLA and Pi0 for now, with GrootN1.5 planned for the near future, each offering different trade-offs between model size, inference speed, and generalization. Our current focus is on implementing action-head-only fine-tuning using QLoRA (where required), a technique that allows efficient training on consumer-grade GPUs while preserving the pretrained vision-language backbone. This approach enables us to remap the model’s action space for different robotic configurations, essentially allowing a single VLA to learn how to control new hardware modules (arms, grippers, mobility units, etc.) without degrading its multimodal reasoning ability. By isolating adaptation to the action head, we maintain the core representation and generalization power of the model while making it contextually aware of our new robotic action space.

2. Reinforcement Learning for Low-Level Control Policy
In parallel to the VLA pipeline, we are setting up a PyBullet-based physics simulation environment designed for large-scale reinforcement learning experiments. This environment trains neural control policies using Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC), two state-of-the-art algorithms for continuous control. These RL policies are being trained to handle locomotion, balance and stability under dynamically changing environments and disturbances, leveraging parallel simulation for faster convergence and robustness.

3. Hierarchical Separation of Control
The key architectural principle here is a hierarchical separation of control: the VLA acts as the top-level planner, interpreting natural language commands, visual input, and task context, while the RL policy serves as the low-level actuator, executing smooth, stable movements in real time at higher action frequencies. This separation allows the system to combine semantic intelligence with physical resilience, making our robots adaptable to diverse terrains, mechanical modules and environmental uncertainties, far beyond what conventional PID or trajectory-based controllers can achieve.

Together, these two components form the backbone of our Physical AI stack, a system designed to reason, adapt, and act seamlessly across our robotics stack.
[media] · 36 replies · 32 reposts · 128 likes · 14.9K views
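For the low-level control policy described in section 2 above, a common way to run PPO against a PyBullet-style continuous-control task is via Stable-Baselines3. The sketch below is generic and hedged: the environment id, hyperparameters and file names are stand-ins, since the Robora Sim environment itself is not public.

```python
# Illustrative PPO training setup with Stable-Baselines3 (not Robora's code).
# A PyBullet locomotion task (e.g. a humanoid or quadruped) would be the closer
# analogue; Pendulum-v1 is used here only so the example is self-contained.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env():
    return gym.make("Pendulum-v1")   # stand-in continuous-control environment

if __name__ == "__main__":
    # Parallel environments speed up convergence, as the post above notes.
    env = SubprocVecEnv([make_env for _ in range(8)])
    model = PPO("MlpPolicy", env, learning_rate=3e-4, n_steps=2048,
                batch_size=256, verbose=1)
    model.learn(total_timesteps=1_000_000)
    model.save("low_level_controller_ppo")
```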
Ryan Mclaughlin reposted
Kito @cryptologyKito
$RBR held decently during this last wipeout. I can definitely see it reaching these levels in the next 20-30 days and building back to ATH levels.
[media] · 50 replies · 29 reposts · 115 likes · 4.8K views
Ryan Mclaughlin reposted
Robora @UseRobora
This week, Robora continued to evolve from an ambitious idea into a functioning ecosystem. Our focus was on building out the core technology, improving tools for developers and laying the groundwork for the next stage of growth. Step by step, the project is becoming a foundation for Physical AI where robotics, intelligence and blockchain connect into a single verifiable system.

We shared a clear look at what’s coming next as Robora enters a pivotal phase. The updated roadmap highlights several major goals ahead, including the release of Whitepaper V2, the launch of our Dapp and 3D Builder, new technical videos and deeper development of key modules. All of this is part of our mission to create a participatory robotics ecosystem where actions, data and contributions are transparent and verifiable on-chain.

Transparency also remained a priority this week as we introduced Quy, one of the engineers behind Robora’s Vision Module. With strong experience in computer vision, multi-object tracking and 3D reconstruction, he plays an important role in building the perception layer that allows robots to understand and interact with their surroundings.

We also presented Robora Sim, a new simulation environment built on PyBullet that helps robots learn and adapt before they’re deployed in the real world. By combining VLA-based planning with motion control, it supports imitation and reinforcement learning, domain randomization and synthetic data generation, all aimed at closing the gap between simulation and reality.
[media] · 31 replies · 37 reposts · 125 likes · 14.1K views
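The planner/controller split mentioned above, a VLA model issuing high-level commands while a faster controller executes them, can be sketched as a simple two-rate control loop. Everything below (class names, rates, interfaces) is a hypothetical outline, not Robora Sim code.

```python
# Hypothetical two-rate control loop illustrating the planner/controller split:
# a slow VLA planner produces subgoals from images + language, while a fast
# low-level policy turns subgoals + proprioception into joint commands.
from dataclasses import dataclass
import numpy as np

@dataclass
class Subgoal:
    target_pose: np.ndarray      # e.g. desired end-effector pose
    description: str             # the planner's natural-language intent

class VLAPlanner:
    """Stand-in for the high-level VLA model (runs at a low rate, e.g. ~2 Hz)."""
    def plan(self, image: np.ndarray, instruction: str) -> Subgoal:
        return Subgoal(target_pose=np.zeros(6), description=instruction)

class LowLevelPolicy:
    """Stand-in for the RL controller (runs at a high rate, e.g. ~100 Hz)."""
    def act(self, subgoal: Subgoal, proprio: np.ndarray) -> np.ndarray:
        # Simple proportional step toward the subgoal, clipped to actuator limits.
        return np.clip(subgoal.target_pose[:proprio.shape[0]] - proprio, -1, 1)

planner, policy = VLAPlanner(), LowLevelPolicy()
PLANNER_EVERY_N_STEPS = 50                   # 100 Hz control, ~2 Hz replanning
subgoal = None
for step in range(200):
    image = np.zeros((224, 224, 3))          # placeholder camera frame
    proprio = np.zeros(6)                    # placeholder joint state
    if step % PLANNER_EVERY_N_STEPS == 0:
        subgoal = planner.plan(image, "pick up the red cube")
    command = policy.act(subgoal, proprio)   # would be sent to the actuators
```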
Ryan Mclaughlin reposted
0xdaveeee @0xdaveeee__
@cryptolimbo You should also add some $RBR, chart screaming new highs soon. Lots of catalysts still to be unveiled, robotics far from over. Sub-6M is a good level to bid if you missed the first leg. x.com/UseRobora/stat…
Robora @UseRobora

We’re entering a defining phase at $RBR. Robora is evolving into a verifiable robotics ecosystem, an open platform where anyone can contribute, build, and earn. By merging AI, robotics, and blockchain, we’re creating a new layer of Physical-AI infrastructure where every robot’s action, dataset, and contribution is transparent, traceable, and rewarded on-chain. Our vision is simple: turn the robotics lifecycle into a participatory economy.

Here's what's ahead in the coming weeks:
• Whitepaper V2 – a transparent blueprint of our next development stage.
• Tech Videos – showcasing key elements of Robora's Framework.
• Partnerships & Onboardings – expanding our global builder and research ecosystem.
• Full transparency from the team – including detailed profiles and personal video introductions from core members.
• Dapp Launch + 3D Builder – letting anyone visualize, design, and interact with robots directly on-chain.
• Further development on the Modules – where each module functions independently yet connects into a unified, verifiable framework.
• 3D Reconstruction Toolkit – a collaborative project that lets anyone contribute to the future of real-world simulation environments and earn royalties in return.

7 replies · 6 reposts · 8 likes · 79 views
EllioTrades @elliotrades
Today is October 6. If you believe in the 4-year cycle, this should be the Bitcoin TOP. Are you selling it all right now, or are you doubling down for Alt Season?
[media] · 251 replies · 105 reposts · 1.4K likes · 164.7K views