Or Hirschorn
@Or_Hirsch
PhD Student @ Tel Aviv University
38 posts · Joined January 2018
86 Following · 47 Followers
Or Hirschorn reposted
Inbar Huberman-Spiegelglas
🎉Splatent has been accepted to #CVPR2026 Get sharp, high-quality reconstructions from diffusion latent space. Research by Amazon Prime Video. Paper: arxiv.org/abs/2512.09923 project page: orhir.github.io/Splatent/ Huge thanks to the amazing team @Or_Hirsch @FritzLior @omeriko_
Or Hirschorn@Or_Hirsch

Happy to share Splatent, new research done during my internship at Amazon @PrimeVideo! 🎬 We tackle a key issue in 3D generation: getting sharp reconstructions directly from diffusion latent space. 📄 Paper: arxiv.org/abs/2512.09923 🌐 Page: orhir.github.io/Splatent/

0 replies · 3 reposts · 14 likes · 769 views
Or Hirschorn
Or Hirschorn@Or_Hirsch·
Splatent is accepted to #CVPR2026!! 🎉🚀 Huge thanks to the incredible collaborators who made this happen: @FritzLior, @inbarhub, and the rest of the team.
1 reply · 8 reposts · 43 likes · 4.3K views
Or Hirschorn reposted
Nimrod Shabtay
Nimrod Shabtay@NimrodShabtay·
Excited to share CLIMP - the first fully Mamba-based contrastive vision-language model. Unlike CLIP's ViTs, Mamba's state-space formulation favors locality & smoothness—better retrieval and OOD robustness. with @ItamarZimerman, @Eli_Schwartz and @RGiryes arxiv.org/abs/2601.06891
1 reply · 12 reposts · 14 likes · 676 views
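The CLIMP tweet above describes a CLIP-style contrastive vision-language model with Mamba encoders. A minimal numpy sketch of the symmetric contrastive (InfoNCE) objective that CLIP-style models train with is below; the encoders, the temperature value, and the assumption that CLIMP keeps this exact loss are illustrative, not taken from the paper.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used by CLIP-style models.
    img_emb, txt_emb: (N, D) paired embeddings; row i of each is a match."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (N, N) cosine similarities
    labels = np.arange(len(logits))           # matched pairs on the diagonal

    def xent(l):                              # row-wise softmax cross-entropy
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # Average of image-to-text and text-to-image directions.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(1)
e = rng.normal(size=(4, 8))
loss_matched = clip_contrastive_loss(e, e)                    # perfect pairs
loss_random = clip_contrastive_loss(e, rng.normal(size=(4, 8)))
assert loss_matched < loss_random
```

Perfectly matched pairs put all the probability mass on the diagonal and drive the loss toward zero; random pairings sit near the chance level of log N.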
Or Hirschorn reposted
Kwang Moo Yi
Kwang Moo Yi@kwangmoo_yi·
Segre and Hirschorn et al., "Multi-View Foundation Models" Train an adapter for 2D foundation models like DINO/SAM/CLIP that turns them into "multi-view" versions. More reliable results when doing multi-view tasks.
[image attached]
3 replies · 10 reposts · 89 likes · 6.7K views
Or Hirschorn
Or Hirschorn@Or_Hirsch·
What can you do with it? 1️⃣ Robust Matching: Features lock onto 3D points across drastic view changes. 2️⃣ Segmentation: Click once, and MV-SAM segments across all views. 3️⃣ Geometry: Estimate globally consistent surface normals directly from features.
[image attached]
1 reply · 0 reposts · 0 likes · 62 views
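The "click once, segment across all views" capability above rests on matching a query feature across views. A toy numpy sketch of cosine-similarity matching over per-view feature maps follows; the shapes, the nearest-neighbor rule, and the function name are assumptions for illustration, not the actual MV-SAM mechanism.

```python
import numpy as np

def match_across_views(feats, view, y, x):
    """Given per-view feature maps (V, H, W, C), find the best-matching
    pixel in every view for a clicked point via cosine similarity."""
    q = feats[view, y, x]
    q = q / np.linalg.norm(q)
    V, H, W, C = feats.shape
    flat = feats.reshape(V, H * W, C)
    flat = flat / np.linalg.norm(flat, axis=-1, keepdims=True)
    sims = flat @ q                       # (V, H*W) cosine similarities
    idx = sims.argmax(axis=1)             # best-matching pixel per view
    return [(i // W, i % W) for i in idx]

# Toy check: plant the same feature vector at known spots in two views,
# mimicking one 3D point observed from two cameras.
feats = np.random.default_rng(0).normal(size=(2, 8, 8, 16))
feats[1, 5, 3] = feats[0, 2, 2]           # same "3D point" seen in view 1
matches = match_across_views(feats, view=0, y=2, x=2)
print(matches[1])                         # → (5, 3)
```

With view-consistent features, the clicked point's feature is (near-)identical wherever the same 3D point is visible, so a simple argmax recovers it in every other view.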
Or Hirschorn
Or Hirschorn@Or_Hirsch·
We achieve SOTA results on sparse and dense benchmarks! 🏆 Big thanks to my team for the guidance. We hope this aids future efficient 3D pipelines. #3DGS #GenerativeAI
0 replies · 0 reposts · 1 like · 92 views
Or Hirschorn
Or Hirschorn@Or_Hirsch·
💡 The Solution: Instead of forcing perfect 3D geometry, Splatent uses 3DGS for structure but offloads fine detail recovery to a 2D multi-view refinement step. By fixing artifacts in 2D renders rather than 3D space, we bypass VAE inconsistencies. 🧠
[image attached]
1 reply · 0 reposts · 1 like · 143 views
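The design above keeps 3D reconstruction coarse and pushes fine-detail recovery into a 2D per-view step. A caricature in numpy: per-view unsharp masking stands in for the actual multi-view refinement network, purely to show that the refinement operates on rendered images, never on the 3D representation. Every function here is a hypothetical placeholder.

```python
import numpy as np

def box_blur(img, k=3):
    # Separable-free k×k box blur; stands in for the low-frequency
    # content of a coarse render.
    pad = k // 2
    p = np.pad(img, ((pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def refine_views(renders, amount=1.0):
    # Unsharp masking per rendered view: boost high frequencies in 2D
    # image space, leaving the 3D representation untouched.
    return [r + amount * (r - box_blur(r)) for r in renders]

rng = np.random.default_rng(0)
renders = [rng.normal(size=(16, 16)) for _ in range(3)]
refined = refine_views(renders)
assert all(r.shape == (16, 16) for r in refined)
```

The point of the sketch: each view is fixed independently in 2D, so VAE-induced 3D inconsistencies never need to be reconciled in the 3D representation itself.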
Or Hirschorn reposted
Guy Yariv
Guy Yariv@guy_yariv·
[1/8] Recent work has shown impressive Image-to-Video (I2V) generation results. However, accurately articulating multiple interacting objects and complex motions remains challenging. In our new work, we take a step toward addressing this challenge.
7 replies · 26 reposts · 80 likes · 9.2K views
Or Hirschorn
Or Hirschorn@Or_Hirsch·
4/ We hope EdgeCape inspires new ideas in category-agnostic pose estimation research. 🙌 Feel free to reach out if you have questions or are interested in collaborations! I'd love to hear your thoughts.
0 replies · 0 reposts · 0 likes · 66 views
Or Hirschorn
Or Hirschorn@Or_Hirsch·
3/ 📊 Results: EdgeCape outperforms state-of-the-art methods on the MP-100 benchmark, achieving: ✅New SOTA in 1-shot settings ✅Superior performance among similar-sized methods in 5-shot settings. We also shine in cross-category and occlusion scenarios! 💪
1 reply · 0 reposts · 0 likes · 94 views