Shape Vision Lab UChile
@ShapeVisionLab

Research on 3D Vision, Shape Analysis, and Generative Models. Computer Science Department, University of Chile. We study Geometry, Symmetry, and real-world Applications.

Santiago, Chile · Joined October 2025
30 Following · 1 Follower
Shape Vision Lab UChile @ShapeVisionLab ·
Congratulations to our student Cristián Llull for successfully passing his doctoral qualification exam! This achievement marks an important first step on his path toward earning a Ph.D. We are confident he will continue to demonstrate dedication, research excellence, and academic rigor.
Shape Vision Lab UChile reposted
机器之心 JIQIZHIXIN @jiqizhixin ·
What if we could model vision like a wave moving through space? Researchers from Peking and Tsinghua Universities present WaveFormer. They treat image features as signals governed by a wave equation, explicitly controlling how low-to-high frequency details evolve across network layers. This new Wave Propagation Operator outperforms standard Vision Transformers in image classification, detection, and segmentation, achieving up to 1.6x higher throughput with 30% fewer computations.

WaveFormer: Frequency-Time Decoupled Vision Modeling with Wave Equation
Paper: arxiv.org/abs/2601.08602
Code: github.com/ZishanShu/Wave…
Our report: mp.weixin.qq.com/s/xFoj94IIG4xj…
📬 #PapersAccepted by Jiqizhixin
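The tweet doesn't show how a wave equation acts on features, so here is a minimal sketch of the general idea: one explicit leapfrog update of the 2D wave equation applied to a feature map. In a wave equation each spatial frequency evolves at its own temporal rate, which is the intuition behind "frequency-time decoupled" modeling. The function names, the fixed wave speed `c`, and the overall structure are illustrative assumptions, not WaveFormer's actual learned operator.

```python
import torch
import torch.nn.functional as F

def laplacian(u):
    """Discrete 2D Laplacian via a fixed 3x3 stencil, applied per channel."""
    k = torch.tensor([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]], device=u.device)
    k = k.view(1, 1, 3, 3).repeat(u.shape[1], 1, 1, 1)  # depthwise kernel
    return F.conv2d(u, k, padding=1, groups=u.shape[1])

def wave_step(u_curr, u_prev, c=0.5, dt=1.0):
    """One explicit leapfrog step of u_tt = c^2 * (u_xx + u_yy)
    on a feature map of shape (B, C, H, W).

    Hypothetical stand-in for a learned propagation operator: a model
    like WaveFormer would presumably learn the coefficients instead of
    fixing c. Returns (u_next, u_curr) so the caller can iterate.
    """
    u_next = 2 * u_curr - u_prev + (c * dt) ** 2 * laplacian(u_curr)
    return u_next, u_curr

# Usage: propagate random "features" for a few steps.
u = torch.randn(1, 8, 32, 32)
u_prev = u.clone()
for _ in range(4):
    u, u_prev = wave_step(u, u_prev)
print(u.shape)  # torch.Size([1, 8, 32, 32])
```

With `c * dt = 0.5` the explicit scheme satisfies the usual CFL stability bound on a unit grid, so repeated steps disperse features without blowing up.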
Shape Vision Lab UChile reposted
alphaXiv @askalphaxiv ·
DeepSeek just dropped a banger paper to wrap up 2025: "mHC: Manifold-Constrained Hyper-Connections."

Hyper-Connections turn the single residual "highway" in transformers into n parallel lanes, and each layer learns how to shuffle and share signal between lanes. But if each layer can arbitrarily amplify or shrink lanes, the product of those shuffles across depth makes signals and gradients blow up or fade out.

So they force each shuffle to be mass-conserving: a doubly stochastic matrix (nonnegative, every row and column sums to 1). Each layer can only redistribute signal across lanes, not create or destroy it, so the deep skip-path stays stable while features still mix.

With n=4 it adds ~6.7% training time, but cuts final loss by ~0.02 and keeps worst-case backward gain at ~1.6 (vs ~3000 without the constraint), with consistent benchmark wins across the board.
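As a rough illustration of the mass-conserving shuffle, here is a minimal sketch that projects a learnable matrix onto (approximately) the set of doubly stochastic matrices via Sinkhorn-Knopp normalization and uses it to mix residual lanes. The function names, tensor shapes, and parameterization are assumptions for illustration, not DeepSeek's actual implementation.

```python
import torch

def sinkhorn_doubly_stochastic(logits, n_iters=20):
    """Map an n x n matrix of logits to (approximately) a doubly
    stochastic matrix: nonnegative, rows and columns sum to 1.
    Alternating row/column normalization (Sinkhorn-Knopp).
    """
    m = logits.exp()  # ensure positivity
    for _ in range(n_iters):
        m = m / m.sum(dim=1, keepdim=True)  # rows sum to 1
        m = m / m.sum(dim=0, keepdim=True)  # columns sum to 1
    return m

def mix_lanes(lanes, mix_logits):
    """Redistribute signal across n residual lanes with a doubly
    stochastic matrix. Because each column sums to 1, the total
    "mass" summed over lanes is conserved, so stacking this mixing
    across depth cannot blow signals up or fade them out.

    lanes: (n, B, D) -- n parallel residual streams
    mix_logits: (n, n) -- learnable parameters (hypothetical shape)
    """
    M = sinkhorn_doubly_stochastic(mix_logits)
    # new_lane_i = sum_j M[i, j] * lane_j
    return torch.einsum('ij,jbd->ibd', M, lanes)

# Usage: the sum over lanes is preserved by the mixing step.
n, B, D = 4, 2, 16
lanes = torch.randn(n, B, D)
mixed = mix_lanes(lanes, torch.randn(n, n))
print(torch.allclose(lanes.sum(0), mixed.sum(0), atol=1e-4))  # True
```

The conservation check follows directly from the column constraint: summing the mixed lanes over i gives sum_j (sum_i M[i,j]) * lane_j = sum_j lane_j.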