
The #UPC stops publishing on X in order to keep its communication in environments that guarantee the quality and truthfulness of information. A decision taken by consensus by the #ConsellGovernUPC on 19 February. 🔗upc.edu/ca/sala-de-pre…
Xavi Giró

@DocXavi
Applied scientist at @amazonscience Barcelona, Catalonia. Made at @la_upc & @columbia. Promoting @dlbcnai. Opinions my own.


As you write your #CVPR2026 rebuttal, please note the policies below. Good luck ✍️

🌟NEW PAPER🌟 Did you know that changing a visual marker from red to blue can completely reorder VLM leaderboards? In our most recent work, we explore the fragility of visually prompted benchmarks. lisadunlap.github.io/vpbench/

Google DeepMind 🤝 @BostonDynamics Our new research partnership will bring our advances in Gemini Robotics' foundational capabilities to their new Atlas® humanoids. 🦾 Find out more → goo.gle/49paguA


🚀 Cut Your Image Review Costs with Smart AutoQA! ✨ The magic formula: As long as your AutoQA precision beats your GenAI accuracy, you're saving money and time. arxiv.org/abs/2510.16179
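The post doesn't spell out its "magic formula", so here is a toy cost model of the idea (all parameters, names, and the cost structure are illustrative assumptions, not taken from the paper): humans either review every generated image, or review only the images AutoQA flags, paying a penalty for bad images AutoQA misses.

```python
# Toy cost model (all numbers hypothetical, not from arxiv.org/abs/2510.16179):
# compare "humans review everything" against "humans review only AutoQA flags".
def review_costs(n_images, genai_accuracy, autoqa_precision,
                 autoqa_recall, cost_human_review, cost_shipped_bad):
    n_bad = n_images * (1 - genai_accuracy)
    caught = n_bad * autoqa_recall           # true positives flagged by AutoQA
    flagged = caught / autoqa_precision      # TP + FP, all sent to humans
    missed = n_bad - caught                  # bad images that ship unreviewed
    baseline = n_images * cost_human_review  # humans review every image
    with_autoqa = flagged * cost_human_review + missed * cost_shipped_bad
    return baseline, with_autoqa

baseline, with_autoqa = review_costs(
    n_images=10_000, genai_accuracy=0.90,
    autoqa_precision=0.80, autoqa_recall=0.95,
    cost_human_review=1.0, cost_shipped_bad=5.0)
print(baseline, with_autoqa)
```

Under these made-up numbers the AutoQA policy is far cheaper; the precise break-even condition is the paper's contribution, not derived here.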



DeepSeek just dropped a banger paper to wrap up 2025: "mHC: Manifold-Constrained Hyper-Connections".

Hyper-Connections turn the single residual "highway" in transformers into n parallel lanes, and each layer learns how to shuffle and share signal between lanes. But if each layer can arbitrarily amplify or shrink lanes, the product of those shuffles across depth makes signals and gradients blow up or fade out.

So they force each shuffle to be mass-conserving: a doubly stochastic matrix (nonnegative, every row and column sums to 1). Each layer can only redistribute signal across lanes, not create or destroy it, so the deep skip path stays stable while features still mix.

With n=4 it adds ~6.7% training time, but cuts final loss by ~0.02 and keeps the worst-case backward gain at ~1.6 (vs ~3000 without the constraint), with consistent benchmark wins across the board.
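The post doesn't show how mHC parameterizes the doubly stochastic constraint; one standard way to push an arbitrary matrix toward that set is Sinkhorn normalization. A minimal NumPy sketch (n=4 lanes; all names here are hypothetical, not from the paper):

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    """Approximately project exp(logits) onto the set of doubly
    stochastic matrices by alternately normalizing rows and columns."""
    M = np.exp(logits)
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

rng = np.random.default_rng(0)
n = 4  # number of residual lanes
W = sinkhorn(rng.normal(size=(n, n)))

# Mixing lanes with W conserves the total "mass" of the residual
# stream per feature: W can redistribute, not create or destroy.
lanes = rng.normal(size=(n, 8))  # n lanes, hidden dim 8
mixed = W @ lanes
print(W.sum(axis=0), W.sum(axis=1))
```

Because every column of `W` sums to 1, `mixed.sum(axis=0)` equals `lanes.sum(axis=0)`, which is the stability property the post describes for the deep skip path.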

