Chen Feng
@simbaforrest

72 posts

Institute Associate Professor at NYU | Co-Director of NYU CREO | Amazon Scholar @Amazon FAR

Manhattan, NY · Joined March 2011
197 Following · 751 Followers

Pinned Tweet
Chen Feng @simbaforrest
🤖We are hiring multiple Summer'26 Research Interns at @amazon FAR to work on open-world navigation and robot foundation models, especially in neural rendering & simulation/predictive world models/reasoning & agency/real-world evaluation/long-term autonomy!
[3 media attachments]
13 replies · 17 reposts · 391 likes · 36.6K views
Chen Feng retweeted
Yunsheng Tian @yunshengtian
We are hiring Summer 2026 Research Interns at @amazon Frontier AI & Robotics (FAR) to work on open-world navigation in the era of robot foundation models and Internet-scale data! We are especially interested in candidates with strengths in robotic navigation, multi-modal foundation models, reasoning and agency, neural rendering, or real-world evaluation. If interested, please email me (ystian@amazon.com) and @simbaforrest (nycfeng@amazon.com) with the following subject line: [FAR Intern - Navigation] Your full name + School. Attach your CV, state your maximum availability window, and include a brief note about your background and interests.
6 replies · 33 reposts · 514 likes · 41.4K views
Chen Feng @simbaforrest
@amazon If excited, please email both @YunshengTian (ystian@amazon.com) and me (nycfeng@amazon.com) with the subject line: [FAR Intern - Navigation] Your full name + School. Do attach your CV, your maximum availability window, and a brief note about your background and interests.
1 reply · 0 reposts · 19 likes · 1.7K views
Chen Feng @simbaforrest
@amazon A robot that can navigate and vacuum your living room may still be lost in the real world. True open-world navigation is not just a problem of geometry: it requires spatial understanding, memory, and reasoning at scale.
1 reply · 0 reposts · 3 likes · 1.8K views
Chen Feng @simbaforrest
Have you ever been lost in a new place and wished a robot could reliably guide you? Or waited for a sidewalk delivery bot, only to learn that it got confused, stuck, or gave up halfway?
0 replies · 0 reposts · 7 likes · 2.2K views
Chen Feng retweeted
DailyPapers @HuggingPapers
EgoPush: a learning framework that enables mobile robots to perform long-horizon multi-object rearrangement using only egocentric vision, with no global maps or external tracking needed. It uses object-centric latent states and a privileged RL teacher distilled into a visual student, with zero-shot sim-to-real transfer.
1 reply · 8 reposts · 73 likes · 5.6K views
Chen Feng @simbaforrest
To advance embodied AI, we need more high-quality photo-realistic and geometry-realistic simulation environments, especially for reproducible closed-loop evaluation. Wanderland is our first step to address this urgent need. Join us in this open-source effort and scan more places!

Quoting Xinhao Liu @xinhao6iu:
🌍 Is YouTube + VGGT + 3DGS enough for simulative environments? 🧩 What’s missing in Real2Sim in 2026? 🔁 How do we evaluate embodied agents in a reliable closed loop? 🔮 Meet Wanderland: a Real2Sim framework, benchmark, dataset, and environment for mobile agents! [1/n]

6 replies · 1 repost · 7 likes · 462 views
Chen Feng @simbaforrest
Whether you are with NYU or not, please feel free to contact us and join this fun open-source effort! It will pull you off your couch and get you walking around your city🚶‍♂️‍➡️, so let's scan more places to advance robotics and embodied AI together!
0 replies · 0 reposts · 0 likes · 118 views
Chen Feng @simbaforrest
And many thanks to SkylandX & Dr. Jie PAN for supporting us with the amazing hand-held scanning devices, MetaCam!
0 replies · 0 reposts · 0 likes · 102 views
Chen Feng @simbaforrest
Our amazing students and researchers at the NYU Center for Robotics and Embodied Intelligence (CREO) and our collaborators scanned 500+ outdoor & indoor places with high fidelity, large scene diversity, and scale, covering a total of about 4 million square meters and 400K+ frames.
0 replies · 0 reposts · 0 likes · 89 views
Chen Feng @simbaforrest
@chrisoffner3d @ducha_aiki @jwjung2317 @QianqianWang5 It should. Depending on how much the view of such frames overlaps with other frames, the fixed threshold might need to be adjusted. If there are no overlaps (i.e., no covisibility), then it should work out of the box. Worth trying more, for sure!
0 replies · 0 reposts · 1 like · 47 views
Chris Offner @chrisoffner3d
@ducha_aiki @jwjung2317 @QianqianWang5 @simbaforrest Does this work with “distractor” images that aren’t from a totally different environment? The interesting practical use case would be to filter images from the same sequence/scene but with low/no covisibility with the rest.
3 replies · 0 reposts · 1 like · 328 views
Chen Feng retweeted
Yiming Li @YimingLi9702
🤔Visual-spatial reasoning requires a shift from a disembodied, passive paradigm to an embodied, active one: 🤖Grounding V* in humanoid agents! 🚀Introducing H*: a dataset, benchmark, and baseline to enable human-like visual search in real 360° environments! 🧵👇[1/n]
[media attachment]
2 replies · 26 reposts · 164 likes · 53K views
Chen Feng retweeted
Xiangyu Han @HanXiangyu615
🚀Presenting the EUVS Benchmark (ai4ce.github.io/EUVS-Benchmark/) at Poster #386 (Thu, afternoon session) at #ICCV2025! 🚗🚙🚕 We introduce a novel, extensive real-world benchmark for quantitatively and qualitatively evaluating extrapolated novel view synthesis in large-scale urban scenes.
[media attachment]
0 replies · 3 reposts · 7 likes · 960 views
Chen Feng retweeted
Zhenfei (Jeremy) Yin @9LdROhjZE56jSh9
🚨 We’re thrilled to announce our ICCV 2025 Workshop: MMRAgI – Multi-Modal Reasoning for Agentic Intelligence! 🚨
🌐 Homepage: agent-intelligence.github.io/agent-intellig…
📥 Submit: openreview.net/group?id=thecv…
🗓️ Submission Deadline (Proceedings Track): June 24th 2025 23:59 AoE
🗓️ Submission Deadline (Non-Proceedings Track): July 24th 2025 23:59 AoE
AI agents are evolving fast, but true intelligence needs reasoning across modalities. Vision, language, audio… it’s time to unify them. From digital and virtual agents to wearable and physical embodiments, agentic intelligence is reshaping how AI interacts with the world. As agents increasingly engage in 3D perception and geo-centric reasoning, bridging modalities with spatial understanding is more critical than ever.
💡 Join us to explore the frontiers of multi-modal agents:
• Reasoning with MFM-powered agents
• Applications in OS copilots, Scientific Agents, Digital Agents, Virtual Agents, Wearable Agents, and Embodied Agents!
• Challenges in alignment, evaluation, efficiency, and robustness
📝 Call for Papers is now OPEN!
📅 Workshop: Oct 19–20 2025
Whether you work on models, methods, or applications, we want to hear from you!
#ICCV2025 #MMRAgI #MultimodalAI #AIagents #LLM #MFM #EmbodiedAI #3DVision
[media attachment]
0 replies · 9 reposts · 23 likes · 14.5K views
Chen Feng retweeted
Animesh Garg @animesh_garg
A post on indirect cost is making a lot of rounds. For all the people who think this is a rip-off, let us play a simple game: could you perform the same research with the same amount of money (in direct cost) starting out in your garage? Think through all the pieces that are needed beyond the core tech: comms, space, equipment, HR, admin, ...
This is a reason why deep-tech startups need so much more money and time to even get on par with university labs. And even then they operate with for-profit pressures, limiting the topics they can study and the length of time they have before calling it quits!
Yes, there are pieces of the administration process that could be improved, but by and large American universities have a tremendous impact per dollar on the economy and society.
@elonmusk, you have been a role model for a generation of tech folks. Of all people, we had hoped you would encourage, not dismantle, American research leadership based out of non-profit universities. Much of the deep tech at your companies was and is built by university types! AI, robotics, energy: all are built on decades of non-profit science before the tech was mature enough to even be iterated upon in product-focused settings.
PS: This post is causing anxiety in academic circles! Note that the 60% is not taken from the net award but computed on the total requested direct costs. A lot of academics tried making nuanced arguments, but changing opinions with informed debate here may be tricky!

Quoting Elon Musk @elonmusk:
Can you believe that universities with tens of billions in endowments were siphoning off 60% of research award money for “overhead”? What a ripoff!

4 replies · 3 reposts · 27 likes · 7.9K views
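The arithmetic point in the PS above can be made concrete. A minimal sketch, using hypothetical dollar amounts: a 60% indirect-cost rate is charged on top of the requested direct costs rather than deducted from the total award, so overhead's share of the total award is well under 60%.

```python
# Hypothetical numbers: a 60% indirect-cost rate applied to direct costs.
direct = 100_000.0        # requested direct costs (hypothetical figure)
rate = 0.60               # 60% indirect-cost rate
indirect = direct * rate  # overhead, charged on top of direct costs
total = direct + indirect # total award = direct + indirect

print(indirect)           # 60000.0
print(total)              # 160000.0
# Overhead as a share of the *total* award: 60k / 160k = 37.5%, not 60%.
print(indirect / total)   # 0.375
```

With these assumed figures, the 60% rate translates to 37.5% of the total award, which is the distinction the post says was being missed.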