Palatial
@PalatialSim

305 posts
Physically accurate assets and scenes for robot simulation training at scale

Redwood City, CA · Joined April 2022
91 Following · 400 Followers

Pinned Tweet
Palatial @PalatialSim
A child consumes more data in 1 month than any LLM has ever seen. Embodied agents learn by doing, but the data that teaches them is tactile, sensory, and causal. Such data does not exist. To make physical AGI possible, we need to generate this new data at industrial scale. Enter Palatial: automated infrastructure that converts raw data into sensory-rich playgrounds for robots to learn in. Today, we're unveiling Palatial PhysReady, the first automated sim asset generator (try it ⬇️) [1/5]
Krish Mehta @djkesu1
We’ve built this foundation @PalatialSim and we have some big plans :)
Matthias Niessner @MattNiessner

Large foundation models have made enormous progress in modeling language, images, and video. These systems can generate highly realistic outputs and capture complex statistical structure in data. However, they still operate on projections of the world (text sequences and 2D pixel grids) rather than the world itself. The real world is not a sequence of text tokens or frames; it is inherently anchored in 3D metric space and in dynamics across time. Objects occupy space and persist over time. They interact according to physical laws. Any model that aims to support real-world intelligence, e.g., for robotics, simulation, design, or spatial computing, must capture this structure.

This is where current approaches fall short. While most video models can generate visually plausible frames, they often lack a consistent notion of the underlying scene due to limited context windows. As a result, geometry drifts, scale is ambiguous, objects appear and disappear, and interactions are not physically grounded. The model produces superficial appearance without a persistent world representation. For many downstream applications, this is not enough.

The first step toward addressing this is modeling 3D space and keeping it consistent. A model should recover a coherent spatial representation of the scene, including layout, geometry, and scale. This not only allows the environment to be rendered from new viewpoints but also, more critically, reasoned about in metric space. If a model cannot produce a stable 3D representation, it is not grounded in the physical world, and it will fail to model the world because of its inefficient contextual memory.

However, 3D is only the beginning. A truly useful world model must also be temporally and physically consistent. It should not only reconstruct a scene but also simulate it: predicting how it evolves, how objects interact, and what happens under intervention. Eventually this requires moving beyond static representations toward models that capture dynamics and causality.

I believe generative approaches are highly compelling in this context, as they can be trained on large-scale data in a self-supervised fashion. In particular, comprehensive 3D world modeling is a highly promising path forward, since richer environmental representations directly enable deeper and more effective learning of physical reality. Crucially, such generation enforces consistency: to generate a scene across viewpoints, a model must implicitly recover its underlying 3D structure; to generate it over time, it must capture its dynamics. This forces the model to internalize the latent state of the world, including geometry, scale, materials, motion, and physical behavior.

This also highlights a limitation of purely abstract representations. High-level embeddings or action-centric models can be effective for specific tasks, but without the ability to model and simulate the world, they will remain incomplete. They compress observations but do not fully model the underlying process that generates them.

The next generation of AI systems should therefore move beyond text and pixels, toward physically grounded world models: models that represent space, maintain consistency over time, and enable simulation and interaction. This is the missing layer between the physical and digital world, and it will ultimately enable AI systems not just to observe the world, but to understand and operate within it.

Palatial @PalatialSim
@hollympeck Thanks Holly! We're definitely pushing hard to get PhysReady assets into more hands. The pipeline improvements have been keeping us busy 🚀
Palatial @PalatialSim
Scalable sim data is a foundational bottleneck for robotics, and its building blocks are physically accurate 3D assets. Today we're releasing the Palatial Library (library.palatial.cloud), the fastest-growing collection of sim-ready assets for robotics, powered by our automated generation pipeline. A new Palatial asset is created every 60 seconds, 24/7. Our goal: 1 asset per second, and 1 million assets generated this year.
Palatial @PalatialSim
@naeem0414 Thanks Naeem! We're excited to build the definitive physics-ready asset library for robotics. Your support means a lot as we work toward making simulation setup effortless for everyone 🚀
Naeem Akram @naeem0414
@PalatialSim Really amazing work by the team, can’t wait to see this library grow and become the standard for robotics simulations 💯
Palatial @PalatialSim
@mklausme Thanks Michael! The automation scaling is definitely where things get exciting - being able to go from concept to physics-ready assets without manual intervention opens up so many possibilities 🚀
Palatial @PalatialSim
@AlexanderSu18 Thanks! We're pretty excited about how it's going so far 🚀
Palatial @PalatialSim
@emily_yu Thanks Emily! That's exactly what we're aiming for — making simulation so accessible that more teams can build robots without the usual 3D asset headaches 🤖
Palatial @PalatialSim
@djkesu1 Thanks so much! Really excited to see what roboticists will build with physics-ready assets that just work out of the box 🤖
Krish Mehta @djkesu1
@PalatialSim Nice work by the team on this one! Super useful for roboticists!
Palatial @PalatialSim
We review each asset for accurate collisions, articulation, and physics, and ensure they work out of the box with both @nvidia Isaac Sim and MuJoCo. We will dramatically drop the cost of creating simulations, make accurate sim workflows accessible to any team, and enable orgs to scale up the massive datasets required to train general-purpose robots. We've made all assets free for this week; try them out at library.palatial.cloud!
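For context on what "sim-ready" means in MuJoCo terms, a minimal MJCF scene wrapping a downloaded asset might look like the sketch below. This is purely illustrative, not a Palatial file: the model name, mesh filenames, and mass are hypothetical, and a real sim-ready asset would additionally ship tuned inertial, friction, and (where relevant) joint/articulation definitions.

```xml
<!-- Hypothetical MJCF wrapper for a mesh asset; all names and values are
     illustrative, not taken from the Palatial Library. -->
<mujoco model="palatial_asset_demo">
  <asset>
    <!-- Separate collision and visual meshes, a common sim-ready convention -->
    <mesh name="chair_collision" file="chair_collision.obj"/>
    <mesh name="chair_visual"    file="chair_visual.obj"/>
  </asset>
  <worldbody>
    <geom type="plane" size="2 2 0.1"/>
    <body name="chair" pos="0 0 0.5">
      <freejoint/>
      <!-- Collision geom carries the physics (mass in kg, assumed) -->
      <geom type="mesh" mesh="chair_collision" mass="6.0"/>
      <!-- Visual-only geom: contype/conaffinity 0 disables contacts -->
      <geom type="mesh" mesh="chair_visual" contype="0" conaffinity="0"/>
    </body>
  </worldbody>
</mujoco>
```

The collision/visual split shown here is one reason hand-building such assets is slow: the collision mesh must be simplified and physically tuned separately from the render mesh.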
Riccardo Feingold @riccardorion
Just got back from the Innate Hackathon at @ycombinator in SF. Still buzzing. Built an autonomous solar farm inspection robot with an incredible team in 48hrs. Leakage detection, valve control, ACT policy — the full stack. Hard problems. Zero sleep. Zero giving up.
Palatial @PalatialSim
@thatroboticsgrl One PhysReady brick wall, coming right up! ⚙️ Standard · 🎮 Isaac Sim · 🔴 HQ mesh · 🧩 Parts segmentation ON · 📦 USD export ⏳ Generating now — reply to this tweet to request changes!
Palatial @PalatialSim
@thatroboticsgrl One PhysReady cobblestone wall, coming right up! ⚙️ Standard · 🎮 Isaac Sim · 🔴 HQ mesh · 🧩 Parts segmentation ON · 📦 USD export ⏳ Generating now — reply to this tweet to request changes!
lililili @lililiziranjuan
@PalatialSim Generate a conference room office chair with a metal frame, a leather backrest, and a leather seat cushion.
Palatial @PalatialSim
@lililiziranjuan One PhysReady conference room office chair, coming right up! 🔗 Articulated · 🎮 Isaac Sim · 🔴 HQ mesh · 🔗 Articulation ON · 📦 USD export ⏳ Generating now — reply to this tweet to request changes!