Vangrid

15 posts


@vangrid_io

Building the decentralized perception grid for sovereign defense and autonomous physical agents.

Joined April 2026
0 Following · 1.2K Followers
Vangrid@vangrid_io·
$100,000 Rewards Program is Live. Vangrid is building the decentralized perception grid for sovereign defense and autonomous physical agents. Get In Early: hub.vangrid.io
Vangrid@vangrid_io·
Before an autonomous drone flies in the real world, it trains in a simulation. That simulation is only as good as its spatial constraints. We provide the 3D physics of reality.
Vangrid tweet media
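The spatial-constraints point above can be made concrete with a toy example: before a simulated drone accepts a waypoint, it checks the point against a 3D occupancy grid of obstacles. This is a minimal sketch under assumed names and data — `grid` and `is_free` are hypothetical, not Vangrid code.

```python
import numpy as np

# Toy 3D occupancy grid: True marks an obstacle voxel (hypothetical data).
grid = np.zeros((10, 10, 10), dtype=bool)
grid[4:6, 4:6, 0:3] = True  # a small "building" near the center

def is_free(grid: np.ndarray, point: tuple) -> bool:
    """Return True if the waypoint falls inside an unoccupied voxel."""
    idx = tuple(int(c) for c in point)
    if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
        return False  # outside the mapped volume counts as invalid
    return not grid[idx]

print(is_free(grid, (1.0, 1.0, 1.0)))  # open airspace -> True
print(is_free(grid, (4.5, 4.5, 1.0)))  # inside the building -> False
```

The simulation is "only as good as its spatial constraints" in the sense that this check is only as trustworthy as the occupancy grid behind it.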
Vangrid@vangrid_io·
Human motion is a mispriced asset class. We turn civilian walking into verified, RLHF-ready spatial data, settled instantly on-chain. The GDP of the physical world, digitized. 📈
Vangrid tweet media
Vangrid@vangrid_io·
Dashcams only see roads. Humanoid robots don't walk on highways. They navigate sidewalks, stairs, and alleys. If you want to train autonomous agents, you need pedestrian-level spatial data. We have it.
Vangrid tweet media
Vangrid@vangrid_io·
Billions of civilian sensors. Edge-computed ML blurring faces in real time. Verified spatial geometry routed on-chain. We don’t collect PII; we extract the static infrastructure layer.
Vangrid tweet media
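As a rough illustration of the on-device blurring step described above: once a detector returns a face bounding box, the pixels inside it are averaged away before anything leaves the phone. Everything here is a hypothetical sketch — the synthetic frame, the hard-coded box, and `blur_region` stand in for a real detector-plus-blur pipeline.

```python
import numpy as np

def blur_region(img: np.ndarray, box: tuple, k: int = 5) -> np.ndarray:
    """Return a copy of img with a k x k mean blur applied inside box=(y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    out = img.copy()
    region = img[y0:y1, x0:x1].astype(float)
    padded = np.pad(region, k // 2, mode="edge")  # replicate edges for the border windows
    blurred = np.empty_like(region)
    h, w = region.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    out[y0:y1, x0:x1] = blurred
    return out

# Synthetic grayscale 'frame' with a high-contrast patch standing in for a face.
frame = np.zeros((20, 20))
frame[5:10, 5:10] = 255.0
anon = blur_region(frame, (4, 11, 4, 11))
# The blur reduces contrast inside the box; pixels outside it are untouched.
```

A production pipeline would replace the brute-force loop with a separable filter and get the box from a face detector, but the privacy property is the same: the identifying detail is destroyed at the edge.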
Vangrid@vangrid_io·
Google has created the perfect "brain" for the physical world. But this brain has a fundamental problem: it lacks relevant data beyond roads (Street View) and greenhouse labs. A brilliant brain in a blind body is utterly useless. For ER 1.6 and similar models to work in a real city, they require dense, multi-view spatial geometry. Vangrid is the only mathematically scalable (zero-CapEx) way to digitize this reality through smartphones. DeepMind built the algorithm. Vangrid supplies it with ground-truth data. We are the Spatial Cortex. ⚙️
Lukas Ziegler@lukas_m_ziegler

JUST IN: @GoogleDeepMind launches Gemini Robotics ER 1.6! 🧠

GDM introduced Gemini Robotics-ER 1.6, a reasoning-first model that enables robots to understand environments through spatial reasoning and multi-view understanding. The model specializes in visual and spatial understanding, task planning, and success detection. It acts as the high-level reasoning model for robots, capable of calling tools like Google Search, vision-language-action models, or any third-party user-defined functions.

New capabilities include instrument reading — enabling robots to read complex gauges and sight glasses, discovered through collaboration with Boston Dynamics — plus precision object detection and counting, relational logic, motion reasoning, and constraint compliance. The model uses points as intermediate steps to reason about complex tasks, and it enables agents to intelligently choose between retrying failed attempts or progressing to the next stage.

The model also advances multi-view reasoning, understanding multiple camera streams and the relationships between them even in dynamic or occluded environments.

The safety improvements are especially important: superior compliance with safety policies, better adherence to physical safety constraints (safer decisions about which objects can be manipulated), and improved hazard identification. 🚧

So: high-level planning that calls lower-level execution models, versus the end-to-end visuomotor control approach of models like π0 and GEN-1. It's getting interesting! 🔥

More details here: deepmind.google/blog/gemini-ro…

~~ ♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com

Vangrid@vangrid_io·
Satellites provide top-down maps. Vangrid provides tactical ground truth. When GPS is denied and skies are compromised, our decentralized perception grid sees what orbital sensors cannot. 🛰️
Vangrid tweet media
Vangrid@vangrid_io·
Autonomous systems are starving for ground-truth data beyond the curb. Stairs, alleys, and dynamic obstacles require a live spatial cortex. pcmag.com/news/delivery-…
Vangrid@vangrid_io·
Deploying fleets of corporate vehicles to map the world is a capital allocation failure. There are 3 billion smartphones with advanced photogrammetry already in pockets. We just turn them on.
Vangrid tweet media
Vangrid@vangrid_io·
You can’t train embodied AI on 2D images and Wikipedia. World Models require continuous, high-fidelity 3D geometry. Vangrid is the spatial cortex for physical AI.
Vangrid tweet media
Vangrid@vangrid_io·
Corporate fleets are too slow and too expensive. The data wall breaks when you turn 3 billion civilian pockets into the ultimate sensor swarm. Good to read: forbes.com/sites/sabbirra…
Vangrid@vangrid_io·
The infrastructure for Physical AI cannot be built in a centralized lab. You can't simulate the noise of reality. The decentralized perception grid is inevitable.
Natasha Malpani 👁@natashamalpani

there is no hugging face for robotics data. no standardized pipeline for collecting, labeling, versioning, training on real-world robot data at scale. no tooling that handles contact dynamics and material deformation well enough for industrial manipulation. no teleoperation infrastructure where human supervisor intervention automatically becomes training data. no vertical-specific manipulation datasets for any specific industrial task.

the actual bottleneck in physical AI is the data and the infrastructure to generate it. and this is a structural problem. for language AI, training data was the internet: abundant, cheap, already labeled by human intent. for robotics, the gap between where foundation models are and where they need to be cannot be closed by deploying more robots.

three bets are being made right now:

simulation-first works brilliantly for locomotion. domain randomization has essentially solved quadruped walking in unstructured terrain. but it breaks down completely for manipulation. simulated cameras have no noise, blur, or friction error. real cameras and grippers have all of it. cable insertion, fabric folding, and dexterous assembly are exactly where simulation fails.

teleoperation as data collection is the second move. deploy semi-autonomous robots, capture human-guided trajectories, iterate. theoretically sound. but the capital math is brutal and the execution evidence isn't there yet.

human video as proxy is the third. if robots could learn from watching humans, you tap unlimited data. the problem: human hand geometry and force feedback don't map onto robot actuators. you're learning the shape of motion without the physics that make it work.

what's actually working today is locomotion. narrow manipulation in structured environments. inspection and sensing. quadrupeds doing thermal inspection. no general-purpose manipulation required.

the hardware race is loud, capital-intensive, winner-take-few. but the data infrastructure race is quiet, undercapitalized, wide open.

Vangrid@vangrid_io·
OpenAI read the entire internet. Now what? LLMs hit the text wall. The next multi-trillion-dollar race is Physical AI, and it’s starving for ground-truth spatial data. We are building the pipeline.
Vangrid tweet media
Vangrid@vangrid_io·
LeCun’s $1B seed proves one thing: the LLM era is over. World Models are the new frontier. But algorithms don't generate their own training data. Without decentralized, zero-CapEx spatial capture, physical AI remains blind. Vangrid is building the supply chain for AMI.
Ricardo@Ric_RTP

The man who INVENTED modern AI just made a billion dollar bet that ChatGPT, Claude, and every AI company on earth is building the wrong technology.

Yann LeCun won the Turing Award in 2018 for creating the neural networks that made AI possible. He spent a decade running AI research at Meta. Oversaw the creation of Llama and PyTorch, the tools that half the AI industry runs on.

Then he quit. And raised $1.03 billion in a seed round. The LARGEST seed round in European history. $3.5 billion valuation before generating a single dollar of revenue. Bezos wrote the check. So did Nvidia. Samsung. Toyota. Temasek. Eric Schmidt. Mark Cuban. Tim Berners-Lee (the guy who invented the internet).

His new company is called AMI Labs. And it's built on one thesis: every AI company spending billions on large language models is wasting their money.

ChatGPT, Claude, Gemini, Grok. They all work the same way. They predict the next word in a sequence. See "the cat sat on the" and predict "mat." Scale that to trillions of words and you get something that sounds intelligent. But LeCun says it doesn't UNDERSTAND anything. It can't reason. It can't plan. It can't predict what happens when you push a glass off a table. A two year old can do that. GPT-5 cannot. That's why AI hallucinates. It doesn't have a model of how the world actually works. It just predicts words.

His solution? Something called JEPA. Instead of predicting words, it learns how the PHYSICAL WORLD works. Abstract representations of reality. Not language but physics.

Think about what that means. Current AI can write your emails. LeCun's AI could design a car, run a factory, operate a robot, or diagnose a patient without hallucinating and killing someone. The CEO of AMI said it perfectly: "Factories, hospitals, and robots need AI that grasps reality. Predicting tokens doesn't cut it."

And here's what's really crazy to me... LeCun isn't some outsider throwing rocks. He literally built the foundations that ChatGPT runs on. He knows exactly how these systems work because he helped create them. And after watching the entire industry sprint in one direction for three years, he raised a billion dollars to run the OPPOSITE way.

No product. No revenue. No timeline. Just pure research. He told investors it could take YEARS to produce anything commercial. But they funded it anyway in just four months.

Meanwhile OpenAI just raised $120 billion and still can't stop their models from making things up. Anthropic is building AI so dangerous they're afraid to release it. Google is burning billions trying to catch up. And the guy who started it all says they're all solving the wrong problem.

Two Turing Award winners raised $2 billion in three weeks betting AGAINST the entire LLM approach. LeCun at AMI. Fei-Fei Li at World Labs. The smartest people in AI are quietly building the exit from the technology everyone else is betting their future on.

Either they're wrong and the trillion dollar LLM industry keeps printing. Or they're right and every AI company on earth just built on a foundation that's about to crack.

Vangrid@vangrid_io·
Hey, it's Vangrid. The Spatial Cortex for Physical AI. We are building the decentralized perception grid for sovereign defense and autonomous systems.
Vangrid tweet media