Jan Held

81 posts


@janheld14

💻 Building 3D worlds at @SpAItial_AI 🤖 Prev projects: Triangle Splatting & MeshSplatting & 3D Convex Splatting & more

Joined December 2024
190 Following · 381 Followers
Pinned Tweet
Jan Held @janheld14
🚀 I’m excited to share my final work as a PhD student: 𝙈𝙚𝙨𝙝𝙎𝙥𝙡𝙖𝙩𝙩𝙞𝙣𝙜: 𝘿𝙞𝙛𝙛𝙚𝙧𝙚𝙣𝙩𝙞𝙖𝙗𝙡𝙚 𝙍𝙚𝙣𝙙𝙚𝙧𝙞𝙣𝙜 𝙬𝙞𝙩𝙝 𝙊𝙥𝙖𝙦𝙪𝙚 𝙈𝙚𝙨𝙝𝙚𝙨 - Arxiv: arxiv.org/abs/2512.06818 - Code: github.com/meshsplatting/… - Project page: meshsplatting.github.io
Replies: 15 · Reposts: 138 · Likes: 890 · Views: 77.7K
Jan Held retweeted
Matthias Niessner @MattNiessner
Many 3D generators output Gaussian Splats (3DGS) for fast rendering, flexible deployment, and high visual fidelity. Static 3DGS aren't world models (no dynamics/semantics), but a true world model must allow distilling 3D-consistent representations for any given time step (3DGS/meshes). This post-distillation serves a dual purpose: 1) it validates the physical consistency of the model, and 2) extracting explicit representations avoids continuously running a heavy generator, which saves compute and facilitates real-time interaction.
Replies: 5 · Reposts: 32 · Likes: 285 · Views: 19.3K
GitHub Projects Community @GithubProjects
Turn a single photograph into a full 3D Gaussian Splatting model
Replies: 6 · Reposts: 44 · Likes: 552 · Views: 30.1K
Jan Held retweeted
Matthias Niessner @MattNiessner
The concept of creating an exact digital replica of the physical world has always fascinated me: environments that look and behave exactly like our everyday reality, precisely captured in the digital domain.

This is the essence of 𝐖𝐨𝐫𝐥𝐝 𝐌𝐨𝐝𝐞𝐥𝐬, simulated realities indistinguishable from our own. Generating these models is the core mission behind what we are building at @SpAItial_AI.

True World Models must capture both photorealistic appearance and underlying physics, spatially-consistent across the environment. For static scenes, current models already deliver impressive results, unlocking downstream applications from gaming to 3D design.

However, the true frontier lies in modeling dynamics, which will enable the training of AI agents whose learned behaviors can bridge the sim-to-real gap, thus unlocking countless real-world applications.
Replies: 6 · Reposts: 23 · Likes: 197 · Views: 20.3K
Jan Held @janheld14
🚀 We’re hiring Research Scientists in Munich or London. Check out some of our recent work here 👉 spaitial.ai/gallery Feel free to message me if you have questions or would like to chat 🙂
Matthias Niessner @MattNiessner

🚀🚀Want to build 𝐖𝐨𝐫𝐥𝐝 𝐒𝐢𝐦𝐮𝐥𝐚𝐭𝐨𝐫𝐬?🚀🚀 We're hiring in Munich or London! Check it out: spaitial.ai/careers

SpAItial is pioneering the next generation of World Models, pushing the boundaries of generative AI, computer vision, and the simulation of reality. We are moving beyond 2D pixels to build models that natively understand the physics and geometry of our world. Our mission is to redefine how industries, from robotics and AR/VR to gaming and cinema, generate and interact with physically-grounded 3D environments.

We’re looking for individuals who are bold, innovative, and driven by a passion for pushing the boundaries of what’s possible. You should thrive in an environment where creativity meets challenge and be fearless in tackling complex problems. Our team is built on a foundation of dedication and a shared commitment to excellence, so we value people who take immense pride in their work and place the collective goals of the team above personal ambition.

As a part of SpAItial, you’ll be at the forefront of the AI revolution in generative AI technology, and we want you to be excited about shaping the future of this dynamic field. If you’re ready to make an impact, embrace the unknown, and collaborate with a talented group of visionaries, we want to hear from you.

#worldmodels #GenAI #3D #spatialintelligence

Replies: 1 · Reposts: 0 · Likes: 6 · Views: 609
Jan Held @janheld14
Excited to co-organize this year’s CVPR 3DMV Workshop on Learning 3D with Multi-View Supervision 🚀 Paper submissions are now open! If you’re working on 3D vision, multi-view learning, or related topics, this is a great opportunity to share your research with the community!
Abdullah Hamdi @Eng_Hemdi

The paper submissions for the @CVPR Third Workshop for Learning 3D with Multi-View Supervision (3DMV) are now OPEN. Accepted papers will be archived in the CVPR proceedings. Topics, deadlines, a fantastic lineup of speakers, and a tentative schedule below 🚀 details: 3dmv.org/2026/

Replies: 0 · Reposts: 0 · Likes: 3 · Views: 576
Jan Held @janheld14
We are organizing the 3rd Workshop on Learning 3D with Multi-View Supervision (3DMV) at CVPR 2026. Mark it in your calendar so you don't miss it. Looking forward to great talks and papers!
Abdullah Hamdi @Eng_Hemdi

Happy to share that the THIRD Workshop for Learning 3D with Multi-View Supervision (3DMV) is coming to @CVPR #CVPR2026 in Colorado this summer! The workshop will feature great speakers, posters, discussions, and papers published in the CVPR proceedings! 3dmv.org/2026/

Replies: 0 · Reposts: 0 · Likes: 6 · Views: 470
Jan Held @janheld14
🚀Check out our latest paper: Nexels: Neurally-Textured Surfels for Real-Time Novel View Synthesis with Sparse Geometries Nexels decouples geometry from appearance, enabling real-time rendering with far fewer primitives, while maintaining high visual quality.
Victor Rong @victor__rong

Introducing Nexels: Neurally-Textured Surfels for Real-Time Novel View Synthesis with Sparse Geometries Nexels render in real-time at high quality without needing millions of primitives. Site: lessvrong.com/cs/nexels/ Paper: arxiv.org/pdf/2512.13796 Code: github.com/victor-rong/ne…

Replies: 1 · Reposts: 1 · Likes: 34 · Views: 2.9K
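The geometry/appearance decoupling described above can be sketched in a few lines: geometry is a sparse set of surfels, and appearance comes from a small neural texture queried per surfel. Everything below (array shapes, the per-surfel codes, the `shade` helper, the tiny MLP) is illustrative and not taken from the Nexels codebase:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # number of surfels: deliberately sparse

# Geometry: each surfel is a center, a normal, and a radius.
centers = rng.normal(size=(N, 3))
normals = rng.normal(size=(N, 3))
radii = np.full(N, 0.1)

# Appearance: a per-surfel code plus one tiny shared MLP acting as a
# neural texture. In the real method these weights would be trained
# through a differentiable renderer; here they are random placeholders.
EMB, HID = 16, 32
codes = rng.normal(scale=0.1, size=(N, EMB))
W1 = rng.normal(scale=0.1, size=(EMB + 2, HID))
W2 = rng.normal(scale=0.1, size=(HID, 3))

def shade(surfel_id: int, uv: np.ndarray) -> np.ndarray:
    """Query the neural texture of one surfel at local coords uv in [-1, 1]^2."""
    x = np.concatenate([codes[surfel_id], uv])
    h = np.maximum(x @ W1, 0.0)                 # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid -> RGB in (0, 1)

# Appearance detail comes from the texture query, not from adding primitives:
color = shade(3, np.array([0.2, -0.5]))
print(color.shape)  # (3,)
```

The point of the decoupling: sharpening appearance means evaluating the texture at more sample points, not densifying geometry, which is why the primitive count can stay small.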
Jan Held @janheld14
🔥 Big congrats to the SpAItial AI team on launching Echo. Excited to join the team in January and work with such amazing people. 🚀
SpAItial AI @SpAItial_AI

🚀 Announcing Echo — our new frontier model for 3D world generation.

Echo turns a simple text prompt or image into a fully explorable, 3D-consistent world. Instead of disconnected views, the result is a single, coherent spatial representation you can move through freely.

This is part of a bigger shift in AI: from generating pixels and tokens to generating spaces. Echo predicts a geometry-grounded 3D scene at metric scale, meaning every novel view, depth map, and interaction comes from the same underlying world — not independent hallucinations.

Once generated, the world is interactive in real time. You control the camera, explore from any angle, and render instantly — even on low-end hardware, directly in the browser. High-quality 3D world exploration is no longer gated by expensive equipment.

Under the hood, Echo infers a physically grounded 3D representation and converts it into a renderable format. For our web demo, we use 3D Gaussian Splatting (3DGS) for fast, GPU-friendly rendering — but the representation itself is flexible and can be easily adapted.

Why this matters: consistent 3D worlds unlock real workflows — digital twins, 3D design, game environments, robotics simulation, and more. From a single photo or a line of text, Echo builds worlds that are reliable, editable, and spatially faithful.

Echo also enables scene editing and restyling. Change materials, remove or add objects, explore design variations — all while preserving global 3D consistency. Editing no longer breaks the world.

This is only the beginning. Echo is the foundation for future world models with dynamics, physical reasoning, and richer interaction — environments that don’t just look right, but behave right.

Explore the generated worlds on our website and sign up for the closed beta. The era of spatial intelligence starts here. 🌍

#Echo #WorldModels #SpatialAI #3DFoundationModels Check it out: spaitial.ai

Replies: 2 · Reposts: 1 · Likes: 32 · Views: 5.4K
Jan Held @janheld14
@rak87501112 Good question, we never investigated that direction.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 39
rak @rak87501112
@janheld14 Cool work. How does rendering objects with hair compare to using Gaussian Splatting?
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 143
Jan Held @janheld14
@xmagher Those specific cases are definitely a challenge for fully opaque triangles. I think a better appearance model might be able to fix them, but with SH (spherical harmonics) the model can't perfectly reconstruct transparent objects. It's still decent, though.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 33
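For context on the SH limitation mentioned above: 3DGS-style renderers store per-primitive spherical-harmonics coefficients and evaluate a view-dependent surface color from them, so there is no term that can express refraction or see-through behavior. A minimal degree-1 evaluation as a sketch (the basis constants are the standard real-SH ones; the coefficients are made up):

```python
import numpy as np

# Standard real-SH constants for degrees 0 and 1 (as used in 3DGS renderers).
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def sh_color(coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate a degree-1 SH color for one primitive.

    coeffs: (4, 3) coefficients per RGB channel; view_dir: unit vector.
    The output varies with viewing direction, but nothing here can model
    transparency or refraction - it is only a direction-dependent surface color.
    """
    x, y, z = view_dir
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])
    return np.clip(basis @ coeffs + 0.5, 0.0, 1.0)  # +0.5 offset as in 3DGS

# Made-up coefficients: a flat gray primitive (only the degree-0 term is set).
coeffs = np.zeros((4, 3))
coeffs[0] = 0.7
front = sh_color(coeffs, np.array([0.0, 0.0, 1.0]))
print(front)
```

Because the primitive is fully opaque, a glass object can at best mimic its background color from some viewpoints; there is no alpha or transmission term to blend with what lies behind it.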
Xavier Magher @xmagher
@janheld14 In this step, how are the non-opaque triangles affected? Like a glass vase within an opaque scene. Other than that, this method looks incredible!
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 25
Jan Held @janheld14
@krz9000 We never compared them to Reality Scan :)
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 320
krz9000 @krz9000
@janheld14 This puts this field into useful territory. How does it compare to other photogrammetry tools like Reality Scan from Epic?
Replies: 2 · Reposts: 0 · Likes: 1 · Views: 355
Jan Held retweeted
Jan Held @janheld14
With fully opaque triangles, object extraction becomes simple and straightforward.
Replies: 1 · Reposts: 1 · Likes: 17 · Views: 1.2K
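Why full opacity makes extraction simple: without semi-transparent blending, every triangle belongs entirely to one object, so extracting an object reduces to filtering faces by a per-triangle label and reindexing. A hypothetical sketch (the mesh, the labels, and the `extract_object` helper are all illustrative, not from the MeshSplatting code):

```python
import numpy as np

# Toy mesh: vertices plus triangles, each triangle carrying an object label
# (e.g. from a segmentation pass). All values here are illustrative.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                     [1, 1, 0], [2, 0, 0]], dtype=float)
faces = np.array([[0, 1, 2], [1, 3, 2], [1, 4, 3]])
labels = np.array([7, 7, 2])

def extract_object(obj_id: int):
    """Keep only the triangles labeled obj_id and reindex their vertices."""
    kept = faces[labels == obj_id]                             # boolean face filter
    used, inverse = np.unique(kept.ravel(), return_inverse=True)
    return vertices[used], inverse.reshape(kept.shape)         # compact submesh

v, f = extract_object(7)
print(v.shape, f.shape)  # (4, 3) (2, 3)
```

With per-primitive alpha values, the same operation would leave fuzzy, partially transparent boundaries; with binary opacity the cut is exact.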