Visual-Intelligence

432 posts

@VI_Journal_CSIG

Official journal of the China Society of Image and Graphics (CSIG). The journal is published by Springer and sponsored by CSIG. E-ISSN 2731-9008.

Beijing · Joined February 2025
2.5K Following · 175 Followers
Visual-Intelligence@VI_Journal_CSIG·
Prof. Hao Su delivered a keynote speech at China3DV 2026 on April 17, arguing that the key to next-generation technologies is developing intervenable interactions so that AI systems can act successfully in unknown physical environments. #HaoSu #ai #machineintelligence
Visual-Intelligence@VI_Journal_CSIG·
🎉🎉🎉 Visual Intelligence is now in ESCI! A huge thank you to our authors, reviewers, and readers for your support. We are excited to reach this major milestone! CiteScoreTracker 2025: 10.6. All papers since 2023 will be indexed! link.springer.com/journal/44267/…
Visual-Intelligence reposted
Hezhen Hu @ CVPR2026@AlexHu0212·
Join the 1st Workshop on Generative AI for Sign Language (GenSign) at CVPR 2026 @CVPR. 🚀 Paper submissions are NOW OPEN! We welcome papers on sign language processing, human-centric generative models, datasets/benchmarks, and ethics. 📌 Proceedings track DDL: Mar 14, 2026 (AoE) 📌 Non-proceedings track DDL: Apr 4, 2026 (AoE) 🌐 genai4sl.github.io ✉️ gensign.workshop@gmail.com #CVPR2026 #Sign_language #Human_centric #Benchmarks #Workshop
Visual-Intelligence@VI_Journal_CSIG·
HDVS online! The authors present a novel framework called heterogeneous dual-branch voting supervision (HDVS), designed to enhance the reliability of pseudo-labels and mitigate the issues arising from pseudo-labeling. link.springer.com/article/10.100… @SpringerEng
Felix@felix1987_·
✈️ To ICLR'25 in Singapore. @JinaAI_ has two works (JinaCLIP-v2 and ReaderLM-v2) to present at the SCI-FM workshop. Looking forward to meeting you all and discussing efficient (small) LLMs and MLLMs at #ICLR25
Shoubin Yu@shoubin621·
On compositional video reasoning, CREMA is highly efficient (with ~97% fewer trainable parameters) yet outperforms strong MLLM baselines on both 3D-associated (Table 1) and audio-associated (Table 2) video reasoning.
Shoubin Yu@shoubin621·
How to efficiently & flexibly inject diverse new modalities (RGB+audio/3D/depth/flow/...) to improve Video Reasoning? Introducing ☕️ CREMA, a substantially efficient (~3% trainable params) & modular fusion framework (w/o complex architecture changes) arxiv.org/abs/2402.05889 🧵
Visual-Intelligence@VI_Journal_CSIG·
Cover Story: The research team proposed DriveMLM, an LLM-based autonomous driving framework that can perform closed-loop autonomous driving in realistic simulators. Visual Intelligence, 2025, Volume 3, Article no. 22. Learn more here: link.springer.com/article/10.100… @SpringerEng