Jie-Ying Lee 李杰穎
@jayinnn
CS @ NYCU | SWE @ Google
Taiwan · Joined August 2014
272 Following · 209 Followers

17 posts
Jie-Ying Lee 李杰穎 retweeted
Zhenjun Zhao@zhenjun_zhao·
🚀 Thrilled to share our survey paper: Advances in Global Solvers for 3D Vision
The FIRST systematic survey unifying global optimization for 3D vision, covering 400+ papers across 60+ years (1960–2025)
3 paradigms × 10 tasks × global solutions
📄 Paper: arxiv.org/abs/2602.14662
💻 Paper List & Tutorial Code: github.com/ericzzj1989/Aw…
1/7
Jie-Ying Lee 李杰穎 retweeted
Zhenjun Zhao@zhenjun_zhao·
🎉 Thrilled to share our ICLR 2026 paper:
🔹 NPC Neural Predictor-Corrector: Solving Homotopy Problems with Reinforcement Learning
🚀 The first unified framework that reveals robust optimization, global optimization, polynomial root-finding, and sampling all share a common predictor-corrector structure
📄 Paper: arxiv.org/abs/2602.03086
💻 Code: [Coming soon]
1/
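The predictor-corrector structure the tweet refers to is a classical numerical idea. A minimal scalar sketch for homotopy root-finding (my own illustration of the generic method, not the paper's NPC framework, which learns these steps with RL; the cubic and step counts are arbitrary choices):

```python
# Track a root of f along the linear homotopy
#   H(x, t) = (1 - t) * (x - x0) + t * f(x)
# from the trivial root x0 at t = 0 to a root of f at t = 1.
def track_root(f, df, x0=1.0, steps=100):
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        t = k * dt
        Hx = (1 - t) + t * df(x)       # dH/dx
        Ht = f(x) - (x - x0)           # dH/dt
        x -= dt * Ht / Hx              # Euler predictor along dx/dt = -Ht/Hx
        t += dt
        for _ in range(3):             # Newton corrector back onto H(., t) = 0
            H = (1 - t) * (x - x0) + t * f(x)
            x -= H / ((1 - t) + t * df(x))
    return x

f = lambda x: x**3 - 2*x - 5           # Wallis's classic cubic
df = lambda x: 3*x**2 - 2
root = track_root(f, df)
print(root)  # ≈ 2.0945514815
```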
Jie-Ying Lee 李杰穎 retweeted
Robert Youssef@rryssf_·
Holy shit… this might be the most unreal academic-writing upgrade I’ve ever seen 🤯
A team from NUS just dropped PaperDebugger: an in-editor, multi-agent system that lives inside Overleaf and rewrites your paper with you in real time. Not copy-paste. Not a sidebar chatbot. Actual agentic editing inside your LaTeX editor.
Here’s why this is insane 👇
→ You highlight a messy paragraph, and it launches a full critique + rewrite pipeline
→ Returns clean before–after diffs like Git, then patches your document instantly
→ Runs Reviewer, Enhancer, Scoring, and Researcher agents in parallel
→ Uses Kubernetes pods to scale multi-agent reasoning inside the editor
→ Taps an MCP toolchain for literature search, reference lookup, and section-level enhancement
Deep research mode is even crazier: it pulls relevant arXiv papers, summarizes them, compares your method against them, and generates citation-ready tables… all inline while you're writing.
It’s basically a mini committee of reviewers embedded in your document: rewriting, critiquing, sourcing, and polishing without ever breaking flow.
If this scales, Overleaf stops being an editor… and becomes a full AI-assisted research environment.
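PaperDebugger's internals aren't shown in this thread, but the "clean before-after diffs like Git" step can be sketched with Python's standard library. The filenames and paragraph text below are made up for illustration:

```python
import difflib

# Hypothetical before/after versions of one highlighted paragraph.
before = ["We propose a method that is very good and fast.\n"]
after = [
    "We propose a method that improves accuracy while\n",
    "halving inference latency.\n",
]

# Git-style unified diff, ready to show the author before patching.
diff = "".join(difflib.unified_diff(before, after,
                                    fromfile="draft.tex", tofile="revised.tex"))
print(diff)
```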
Daniel Lichy@daniel_lichy·
@jayinnn @mkturkcan @janusch_patas The paper mentions satelliteSfM and MoGe depth. Did you run these, and does the repo include them? I’d love to try this on my own data, but I didn’t see any instructions for how to do that.
Jie-Ying Lee 李杰穎@jayinnn·
@mkturkcan @janusch_patas Oh great! We are using an appearance embedding, so it needs to be fused into the actual SH color. Yes, it should be fine to run save_fused_ply directly after training; I will add this to the training scripts and README. Thanks again for playing around!
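The "fuse the embedding into actual SH color" step can be sketched roughly. Everything below (array sizes, the linear decoder `W`) is a hypothetical stand-in for the trained model, not the repo's actual `save_fused_ply`:

```python
import numpy as np

rng = np.random.default_rng(0)
n_gaussians, embed_dim = 1000, 32                # hypothetical sizes
sh_dc = rng.normal(size=(n_gaussians, 3))        # per-gaussian SH degree-0 (base color)
embedding = rng.normal(size=(embed_dim,))        # one appearance embedding
W = rng.normal(scale=0.01, size=(embed_dim, 3))  # stand-in for the trained decoder

# "Fusing" bakes the embedding's color offset into the SH DC term, so a
# standard splat viewer can render the saved PLY without the embedding.
fused_dc = sh_dc + embedding @ W
```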
Jie-Ying Lee 李杰穎@jayinnn·
@mkturkcan @janusch_patas Hi @mkturkcan! I am the author of this project. Thanks for your interest in our project. Would it be possible to share your Columbia dataset? I am very curious why the output looks oversaturated. Also, our method requires the z-axis to be perpendicular to the ground plane.
Mehmet Kerem Turkcan@mkturkcan·
@janusch_patas I've run it on a dataset I have for Columbia Morningside campus: poly.cam/capture/d655bd… It's not really matching their output quality, and I also needed to adapt their repo to work with Ampere GPUs. Maybe it needs extremely clean datasets instead of COLMAP output.
Jie-Ying Lee 李杰穎@jayinnn·
🛰️ Excited to share Skyfall-GS - the FIRST method to create real-time navigable 3D cities from satellite imagery alone!
We transform multi-view satellite images into immersive 3D scenes you can freely fly through! 🚁✨
🌐 Project Page: skyfall-gs.jayinnn.dev
1/5
Jie-Ying Lee 李杰穎@jayinnn·
Thanks Bilawal for sharing Skyfall-GS!
Bilawal Sidhu@bilawalsidhu

So these researchers figured out you can basically hallucinate 3D cities into existence using just satellite photos & a diffusion model.

The problem's pretty straightforward: satellites only see rooftops. Building facades? Invisible. Street-level detail? Doesn't exist. But people want flyable 3D environments, which means you need all that occluded geometry.

When I worked on google maps photogrammetry, we could only use satellite-based 3D for isolated stuff like the pyramids - anything city-scale required airplane flyovers. Which is fine until you hit aerial-denied regions where you literally can't fly. Huge chunks of the world just unavailable.

Their trick is honestly kind of beautiful. They train gaussian splats on satellite views, but as the camera descends toward ground level, the renders turn to absolute garbage - artifacts everywhere. Instead of fighting this, they just treat those nightmare renders as the input to a diffusion model. Basically - "hey FLUX, fix this mess."

Then here's where it gets clever: they generate multiple diffusion samples per view instead of committing to one. Because any single denoising path is probably wrong in 3D space, but if you generate a couple and let the GS optimization find consensus across them, you get actual geometric consistency.

They do this in episodes, curriculum style - start high, gradually descend (hence the name Skyfall-GS!). With each iteration the ground-level views get less fucked. By the end you've got real-time flyable cities that look surprisingly real, and the geometry still matches the satellite input.

No 3D training data. No street-level photos. Just satellites + diffusion doing what it does best - filling in the blanks. It's like neural scene completion but actually practical, and it unlocks basically the entire world.
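The "consensus across multiple samples" idea can be illustrated in miniature. The colors below are made-up stand-ins for diffusion outputs, not Skyfall-GS code: each sample is plausible but wrong in a different direction, and a least-squares fit against all of them lands on their mean, cancelling the disagreements.

```python
import numpy as np

true_color = np.array([0.6, 0.3, 0.1])   # hypothetical ground-truth pixel color

# Stand-ins for four diffusion samples of the same low-altitude view.
samples = np.array([
    [0.8, 0.2, 0.0],
    [0.5, 0.5, 0.2],
    [0.4, 0.2, 0.1],
    [0.7, 0.3, 0.1],
])

# Fitting the splats to all samples at once: the least-squares consensus
# is the per-pixel mean of the samples.
consensus = samples.mean(axis=0)
print(np.allclose(consensus, true_color))  # → True (exact here by construction)
```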

Jie-Ying Lee 李杰穎 retweeted
MrNeRF@janusch_patas·
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
TL;DR: Skyfall-GS converts satellite images to explorable 3D urban scenes using diffusion models, with real-time rendering performance.
Contributions:
• We introduce Skyfall-GS, the first method to synthesize immersive, real-time, free-flight navigable 3D urban scenes solely from multi-view satellite imagery using generative refinement.
• An open-domain refinement approach leverages pre-trained text-to-image diffusion models without domain-specific training.
• A curriculum-learning-based iterative refinement strategy progressively enhances reconstruction quality from higher to lower viewpoints, significantly improving visual fidelity in occluded areas.
Jie-Ying Lee 李杰穎@jayinnn·
🎯 Why this matters: Skyfall-GS enables SCALABLE 3D city generation with applications in:
- AR/VR experiences
- Autonomous driving simulation
- Robotics training
- Virtual entertainment
Real-time rendering makes it practical for deployment! 🚀
4/5
Jie-Ying Lee 李杰穎 retweeted
Maths Ed@MathsEdIdeas·
Q: How many seconds will have passed this year by the end of today, after exactly 6 weeks of the year have passed?
A: 10!
6 weeks = 6×7 days = 6×7×24 hours = 6×7×24×60 mins = 6×7×24×60×60 seconds
= 6×7×(8×3)×(3×10×2)×(5×4×3) seconds
= 10×9×8×7×6×5×4×3×2 seconds
= 10! seconds
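The arithmetic checks out and is a one-liner to verify:

```python
from math import factorial

seconds = 6 * 7 * 24 * 60 * 60   # 6 weeks in seconds
print(seconds)                   # → 3628800
print(seconds == factorial(10))  # → True
```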