

Ismini Lourentzou
250 posts

@Ismini_L
Assistant Prof. @iSchoolUI Multimodal ML, Computer Vision, NLP Research. Previously @VT_CS @SanghaniCtrVT @IBMResearch. PhD @illinoisCDS. Opinions are my own.



🚀🚀My new work: 𝐃𝐫𝐞𝐚𝐦𝐏𝐚𝐫𝐭𝐆𝐞𝐧 is out! A semantically grounded framework for part-level text-to-3D generation that jointly models part geometry, appearance, and inter-part relations through collaborative latent denoising. ⭐️Synchronized relational + geometric latents for part-aware reasoning ⭐️PartRel3D: 300K relational triplets across 175 object categories ⭐️More coherent, controllable, and interpretable 3D generation from language ⭐️Enables fine-grained part editing, articulated object generation, and mini-scene synthesis 🌐 plan-lab.github.io/projects/dream… 🤗 huggingface.co/papers/2603.19… #3DGen #TextTo3D #ComputerVision #AIResearch

🚀 Call for Papers & Challenge: CV4CHL @CVPR 2026 in Denver! 🏔️ We’re bridging the gap in Computer Vision for children’s health, education & development. Join us! 👶💻 🏆 Children's Gait Competition: Win an NVIDIA RTX 5080 GPU or Latest AR Glasses! 🎮🔥 📅 Deadlines (AoE): • Proceeding Papers: Mar 10 • Non-Proceeding: May 10 • Challenge Submission: May 10 🔗 Details & Submission: pediamedai.com/cv4chl + A fantastic keynote speaker lineup! 🤩 #CVPR2026 #ComputerVision #AIforGood #NVIDIA #RTX5080 #MedAI #CV4CHL


"Part²GS: Part-aware Modeling of Articulated Objects using 3D Gaussian Splatting" TL;DR: introduces a part-aware Gaussian splatting representation with physics-guided motion constraints to model articulated digital twins with high fidelity and coherent part motion.







🚀Introducing UniDFlow, a unified discrete flow-matching model for multimodal understanding, generation, and instruction-based editing under a single probabilistic interface. 🤗: huggingface.co/papers/2602.12… 🌐: plan-lab.github.io/projects/unidf… #Multimodal #Diffusion #ImageGeneration #VLM



🚀🚀Reasoning Meets 3D! CoRe3D is now released! 🤩🤩A collaborative reasoning 3D framework that unifies understanding, generation, and editing 💯 It can synthesize from complex or indirect prompts.😮⬇️ 📄arxiv.org/abs/2512.12768 🌐plan-lab.github.io/projects/core3…


Excited to share our #AAAI2026 Oral paper Hierarchical Dataset Selection for High-Quality Data Sharing! 🤔 How do you decide which datasets to train on when data comes from many noisy, heterogeneous sources? 🧵

SpatialReasoner-R1 is now released! 🚀 Can Large Language Models truly understand and navigate the physical world? While LLMs have mastered logic and coding, complex spatial reasoning—interpreting 3D structures, relative orientations, and navigation—remains a bottleneck.

🚨 MOCHA is now released. ☕️️ A new benchmark for evaluating code LLM safety under multi-turn attacks. Can your model resist malware requests when the intent is hidden across seemingly harmless steps? 🧵⬇️

🚀 PLAN Lab refresh! New website + new logo are here! 🌐 plan-lab.github.io 👩‍💻 Hiring interns? Check our member profiles + projects: plan-lab.github.io/index.html#team
🔥Interested in collaborating with PLAN Lab? We are actively pursuing industry collaborations in multimodal AI (VLMs, 3D, generative, reasoning) and are recruiting students to join us!