Cristián Llull

28 posts


@cllullt

PhD student in Computer Science, U. of Chile. Passionate about learning the workings of the world and interacting with the environment through computers.

Joined November 2022
298 Following · 12 Followers
Cristián Llull retweeted
Computer Graphics arXiv
Computer Graphics arXiv@Animation·
VoroUDF: Meshing Unsigned Distance Fields with Voronoi Optimization. Ningna Wang, Zilong Wang, Xiana Carrera, Xiaohu Guo, Silvia Sellán. arxiv.org/abs/2602.02907 [cs.GR]
Computer Graphics arXiv tweet media
Cristián Llull
Cristián Llull@cllullt·
Great news! Really excited to continue the research and academic travel :) --- Excellent news! Grateful for the support I've received, and fully motivated to keep giving my best. --- My most sincere thanks to everyone who has helped me along this journey. Onwards and upwards!
Shape Vision Lab UChile@ShapeVisionLab

Congratulations to our student Cristián Llull for successfully passing his doctoral qualification exam! This achievement marks an important first step on his path toward earning a Ph.D. We are confident he will continue to demonstrate dedication, research excellence, and academic rigor.

Cristián Llull retweeted
Shape Vision Lab UChile
Shape Vision Lab UChile@ShapeVisionLab·
Congratulations to our student Cristián Llull for successfully passing his doctoral qualification exam! This achievement marks an important first step on his path toward earning a Ph.D. We are confident he will continue to demonstrate dedication, research excellence, and academic rigor.
Cristián Llull retweeted
Gabriele Berton
Gabriele Berton@gabriberton·
Quite literally all of 3D vision. This includes VGGT, Dust3r, and friends, which are trained on COLMAP-generated data. Also, Gaussian splatting and NeRFs in most cases use COLMAP-generated poses.
Gabriele Berton tweet media
Sven Elflein
Sven Elflein@s_elflein·
🚀 Exciting news! We’re introducing VGG-T³: a scalable model for offline feed-forward 3D reconstruction that finally tackles the "quadratic bottleneck." Ever wanted to have VGGT reconstruct a 1,000-image scene in seconds instead of 10 minutes and use it for visual localization?
GIF
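The "quadratic bottleneck" mentioned above comes from joint self-attention over all image tokens at once. A back-of-envelope sketch (the token count of 576 per image is an illustrative ViT-style figure, not VGG-T³'s actual number):

```python
# Back-of-envelope sketch of the "quadratic bottleneck": with N input
# images and T tokens per image, joint self-attention over all tokens
# scales with (N*T)^2, so 10x the images costs 100x the attention work.
# T=576 is an illustrative ViT-style token count, not VGG-T³'s figure.

def attn_pairs(n_images, tokens_per_image=576):
    seq_len = n_images * tokens_per_image
    return seq_len * seq_len  # pairwise token interactions

growth = attn_pairs(1000) // attn_pairs(100)  # 10x images -> 100x cost
```

This is why scaling a feed-forward reconstructor from 100 to 1,000 images is far more than 10x the work unless the quadratic term is tamed.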
Cristián Llull
Cristián Llull@cllullt·
@s_elflein Awesome project! We are hosting the SHREC'26 Track on 3D Reconstruction, and your work could be a valuable addition to the track. Challenge: reconstruct objects from 2D images; afterwards, we'll evaluate their quality using a novel feature-aware metric. See: shapevision.dcc.uchile.cl/cllull-shrec20…
Ruilong Li
Ruilong Li@ruilong_li·
With this you can run reconstruction on 1,000 images with a single GPU in 1 minute🔥. It also allows an arbitrary tradeoff between runtime and VRAM, so you never have to suffer an OOM even with more images. Or throw in more GPUs for further scaling/speedup!
Sven Elflein@s_elflein

🚀 Exciting news! We’re introducing VGG-T³: a scalable model for offline feed-forward 3D reconstruction that finally tackles the "quadratic bottleneck." Ever wanted to have VGGT reconstruct a 1,000-image scene in seconds instead of 10 minutes and use it for visual localization?

Cristián Llull
Cristián Llull@cllullt·
SECOND CALL: SHREC'26 Challenge on 3D Reconstruction. Our dataset features intricate geometries, ideal for benchmarking high-frequency detail recovery. All participants will co-author a joint paper submitted to Computers & Graphics. Track details: shapevision.dcc.uchile.cl/cllull-shrec20…
Cristián Llull@cllullt

SHREC 2026: reconstruct high-frequency geometry from 90 views (COLMAP poses). Dataset out now. Registration → cllull@dcc.uchile.cl. Submissions due Apr 3, 2026. Details: shapevision.dcc.uchile.cl/cllull-shrec20… #ComputerVision #3DReconstruction #Photogrammetry

Cristián Llull retweeted
rsasaki0109
rsasaki0109@rsasaki0109·
[SIGGRAPH Asia 2025 - TOG] Official implementation of MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction github.com/Anttwo/MILo

Our method introduces a novel differentiable mesh extraction framework that operates during the optimization of 3D Gaussian Splatting representations. At every training iteration, we differentiably extract a mesh, including both vertex locations and connectivity, from the Gaussian parameters alone. This enables gradient flow from the mesh to the Gaussians, allowing us to promote bidirectional consistency between the volumetric (Gaussians) and surface (extracted mesh) representations.

This approach guides Gaussians toward configurations better suited for surface reconstruction, resulting in higher-quality meshes with significantly fewer vertices. Our framework can be plugged into any Gaussian splatting representation, increasing performance while generating an order of magnitude fewer mesh vertices. MILo makes the reconstructions more practical for downstream applications like physics simulations and animation.
rsasaki0109 tweet media
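The mesh-in-the-loop wiring can be sketched in one dimension: if a mesh vertex is a differentiable function of the Gaussian parameters, a loss on the mesh sends gradients back to the Gaussians. This toy (vertex = mean of scalar "centers", hand-derived gradient) only illustrates that gradient flow; real MILo differentiably extracts vertices and connectivity from full 3D Gaussians.

```python
# Toy illustration of "mesh in the loop": a single mesh vertex is a
# differentiable function of Gaussian centers (their mean), so a loss
# on the mesh sends gradients back to the Gaussians. This 1-D sketch
# only shows the gradient-flow wiring, not MILo's actual extraction.

def extract_vertex(centers):
    # stand-in for differentiable mesh extraction
    return sum(centers) / len(centers)

def train(centers, target, lr=0.5, steps=50):
    for _ in range(steps):
        v = extract_vertex(centers)                 # mesh from Gaussians
        # d(loss)/d(center_i) for loss = (v - target)^2, by chain rule
        grad = 2.0 * (v - target) / len(centers)
        centers = [c - lr * grad for c in centers]  # update the Gaussians
    return centers

centers = train([0.0, 1.0, 4.0], target=2.0)  # vertex converges to 2.0
```

The point is the direction of the arrows: the loss is measured on the extracted surface, but the parameters being optimized are the Gaussians behind it.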
Alexandre Morgand
Alexandre Morgand@Almorgand·
"Reflect3r: Single-View 3D Stereo Reconstruction Aided by Mirror Reflections" TL;DR: uses mirror reflections as auxiliary virtual views to generate stereo cues and improve 3D reconstruction from a single image.
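The geometric core of the mirror-as-virtual-view idea: a planar mirror with unit normal n satisfying n·x = d reflects the real camera center c to a virtual center c' = c - 2(n·c - d)n, which acts as the second stereo viewpoint. A minimal sketch (values illustrative, not Reflect3r's code):

```python
# Reflect a camera center across a planar mirror {x : n.x = d} to get
# the virtual camera seen in the mirror. n must be a unit normal.
# Illustrative sketch of the idea, not Reflect3r's implementation.

def reflect_point(c, n, d):
    s = sum(ci * ni for ci, ni in zip(c, n)) - d   # signed distance to mirror
    return [ci - 2.0 * s * ni for ci, ni in zip(c, n)]

# camera 3 units in front of the mirror plane z = 0
virtual_cam = reflect_point([0.0, 0.0, 3.0], n=[0.0, 0.0, 1.0], d=0.0)
```

The real and virtual cameras then form a stereo pair with a known baseline, which is where the single-image stereo cues come from.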
Andrea Tagliasacchi 🇨🇦
📢📢📢 New paper alert: Mesh Splatting meshsplatting.github.io Yet another step closer to classical computer graphics for differentiable rendering. No need to write custom shaders… just a classical mesh renderer! Great job @janheld14! Was a pleasure to have you at SFU.
Jan Held@janheld14

🚀 I'm excited to share my final work as a PhD student: MeshSplatting: Differentiable Rendering with Opaque Meshes
- arXiv: arxiv.org/abs/2512.06818
- Code: github.com/meshsplatting/…
- Project page: meshsplatting.github.io

Cristián Llull retweeted
Jackson Atkins
Jackson Atkins@JacksonAtkinsX·
My brain broke when I read this paper.

A tiny 7-million-parameter model just beat DeepSeek-R1, Gemini 2.5 Pro, and o3-mini at reasoning on both ARC-AGI 1 and ARC-AGI 2. It's called Tiny Recursive Model (TRM), from Samsung.

How can a model 10,000x smaller be smarter? Here's how it works:

1. Draft an initial answer: Unlike an LLM that writes word-by-word, TRM first generates a quick, complete "draft" of the solution. Think of this as its first rough guess.
2. Create a "scratchpad": It then creates a separate space for its internal thoughts, a latent reasoning "scratchpad." This is where the real magic happens.
3. Intensely self-critique: The model enters an intense inner loop. It compares its draft answer to the original problem and refines its reasoning on the scratchpad over and over (6 times in a row), asking itself, "Does my logic hold up? Where are the errors?"
4. Revise the answer: After this focused "thinking," it uses the improved logic from its scratchpad to create a brand-new, much better draft of the final answer.
5. Repeat until confident: The entire process (draft, think, revise) is repeated up to 16 times. Each cycle pushes the model closer to a correct, logically sound solution.

Why this matters:

- Business leaders: This is what algorithmic advantage looks like. While competitors are paying massive inference costs for brute-force scale, a smarter, more efficient model can deliver superior performance for a tiny fraction of the cost.
- Researchers: This is a major validation for neuro-symbolic ideas. The model's ability to recursively "think" before "acting" demonstrates that architecture, not just scale, can be a primary driver of reasoning ability.
- Practitioners: SOTA reasoning is no longer gated behind billion-dollar GPU clusters. This paper provides a highly efficient, parameter-light blueprint for building specialized reasoners that can run anywhere.

This isn't just scaling down; it's a completely different, more deliberate way of solving problems.
Jackson Atkins tweet media
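The draft → think → revise control flow described in the thread can be sketched with a toy stand-in network. Everything here is illustrative (the names and the arithmetic are not the paper's API; TRM's core is a small neural network), but the loop structure, 6 inner "thinking" updates per up-to-16 outer cycles with a persistent latent state, matches the description above:

```python
# Toy stand-in for TRM's tiny network: it refines a numeric guess toward
# the input x, purely to make the draft/think/revise control flow
# concrete. All names and operations are illustrative, not the paper's.
class ToyNet:
    def draft(self, x):
        return 0.0                  # step 1: rough first complete guess
    def init_scratchpad(self, x):
        return 0.0                  # step 2: latent reasoning state
    def think(self, x, y, z):
        # step 3: self-critique, nudge the state toward the residual error
        return z + 0.5 * (x - y - z)
    def revise(self, y, z):
        return y + z                # step 4: fold the state into the answer

def trm_solve(net, x, n_cycles=16, n_think=6):
    y = net.draft(x)                # initial complete draft
    z = net.init_scratchpad(x)      # scratchpad persists across cycles
    for _ in range(n_cycles):       # step 5: repeat up to 16 times
        for _ in range(n_think):    # inner loop: 6 "thinking" updates
            z = net.think(x, y, z)
        y = net.revise(y, z)        # rewrite the draft from the scratchpad
    return y
```

With this toy dynamics the answer converges to the input over the outer cycles, which mirrors the claim that each draft/think/revise cycle pushes the model closer to a consistent solution.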
Cristián Llull retweeted
Web3Aible
Web3Aible@Web3Aible·
@Alibaba_Qwen I keep looking at this and wondering how selfless Qwen has been, open-sourcing everything. Everyone is guilty of not supporting Qwen. A SOTA open-source model every single time! May all the people behind Qwen live for many years.
Web3Aible tweet media
Cristián Llull retweeted
Sully
Sully@SullyOmarr·
DeepSeek + web search has basically replaced Perplexity for me. Probably the best coding + search product available rn, and it's free.
Sully tweet media