Yuncong Yang

Yuncong Yang
@YuncongYY
Second-year CS PhD student at UMass Amherst, advised by @gan_chuang | Intern @MSFTResearch

AI’s next frontier is Spatial Intelligence, a technology that will turn seeing into reasoning, perception into action, and imagination into creation. But what is it? Why does it matter? How do we build it? And how can we use it? In the essay below, I share my thoughts on building and using world models to unlock spatial intelligence. 1/n

Test-time scaling nailed code & math—next stop: the real 3D world. 🌍 MindJourney pairs any VLM with a video-diffusion World Model, letting it explore an imagined scene before answering. One frame becomes a tour—and the tour leads to new SOTA in spatial reasoning. 🚀 🧵1/
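The explore-then-answer loop described above can be sketched as a beam search over imagined trajectories: a world model rolls candidate camera moves forward into imagined views, a VLM scores how useful each view is, and the best views feed the final answer. This is a hypothetical illustration of the idea, not the MindJourney implementation — `StubWorldModel`, `StubVLM`, the action set, and the scoring heuristic are stand-ins invented for the sketch.

```python
# Hedged sketch of test-time exploration with a world model.
# All classes and heuristics here are illustrative stand-ins,
# not code from the actual MindJourney system.

ACTIONS = ["move_forward", "turn_left", "turn_right"]

class StubWorldModel:
    """Stand-in for a video-diffusion world model: (frame, action) -> imagined frame."""
    def imagine(self, frame, action):
        # A real model would render a new view; here we just record the trajectory.
        return f"{frame}->{action}"

class StubVLM:
    """Stand-in for a VLM that scores imagined views and answers from them."""
    def score(self, frame, question):
        # Toy heuristic: favor trajectories that turn to look around.
        return frame.count("turn") + 0.1 * frame.count("->")

    def answer(self, frames, question):
        return f"answer based on {len(frames)} views"

def mind_journey(frame, question, wm, vlm, beam=2, steps=3):
    """Beam-search over imagined trajectories, then answer from the best one."""
    trajectories = [[frame]]
    for _ in range(steps):
        candidates = []
        for traj in trajectories:
            for action in ACTIONS:
                candidates.append(traj + [wm.imagine(traj[-1], action)])
        # Keep the `beam` most promising trajectories, as judged by the VLM.
        candidates.sort(key=lambda t: vlm.score(t[-1], question), reverse=True)
        trajectories = candidates[:beam]
    best = trajectories[0]
    return vlm.answer(best, question)
```

The key design point the tweet describes is that no extra training is needed: compute is spent at test time imagining views, and any VLM can act as the scorer and answerer.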

We raised a $28M seed from Threshold Ventures, AIX Ventures, and NVentures (Nvidia's venture capital arm), alongside 10+ unicorn founders and top AI researchers, to build reasoning models that generate real-time simulations and games. Today's models are bottlenecked by the scarcity of practical simulations that can act as Reinforcement Learning environments, and human self-expression is bounded by the tools that let us create alternate realities. At Moonlake, we are building a future where anyone can create interactive worlds, bring their child-like wonder to life, learn within them, and, most importantly, share experiences with the people they care about. More in 🧵



MindJourney Test-Time Scaling with World Models for Spatial Reasoning

World Simulator, reimagined — now alive with humans, robots, and their vibrant society unfolding in 3D real-world geospatial scenes across the globe! 🚀

One day soon, humans and robots will co-exist in the same world. To prepare, we must address:
1️⃣ How can robots cooperate or compete intelligently?
2️⃣ How do humans build social bonds and communities?
3️⃣ How can both co-exist in an open, dynamic world?

Announcing the Virtual Community Project — a social-physical world simulator where human characters and robotic agents can interact, grow, and co-evolve within open-world societies, stretching from London to New York and beyond!

Key features include:
✅ Unified multi-agent physics simulation for rich social and physical interactions between humans and robots
✅ Massive auto-generated 3D scenes grounded in real-world geospatial data
✅ Agent communities populated by robots and LLM-driven human characters with rich appearances, personalities, and social ties

🌍 Enter our Virtual Community, an open world to study embodied AI at scale — one social-physical world model at a time!
🔗 Project: virtual-community-ai.github.io
💻 Code: github.com/UMass-Embodied…
Paper: virtual-community-ai.github.io/paper.pdf
1/n