Conference on Parsimony and Learning (CPAL)

279 posts


@CPALconf

CPAL is a new annual research conference focused on the parsimonious, low-dimensional structures that prevail in ML, signal processing, optimization, and beyond.

Joined May 2023
1.6K Following · 964 Followers
Pinned Tweet
Conference on Parsimony and Learning (CPAL)
Calling all parsimony and learning researchers 🚨🚨 The 3rd annual CPAL will be held in Tübingen, Germany, March 23–26, 2026! Check out this year's website for all the details: cpal.cc
Conference on Parsimony and Learning (CPAL) retweeted
Dong Wang @DongWan06935465
Arriving in Tübingen for CPAL 2026! We'll be presenting 2 papers there:
🚀 FOSL: A Foldable Sparse-and-Low-Rank Method for Efficient LLM Pre-training
🚀 GRAIL: Post-hoc Compensation by Linear Reconstruction for Compressed Networks
If you're here, ping me to grab a ☕️ and chat.
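For context on the sparse-plus-low-rank idea a title like FOSL refers to, here is a minimal sketch (not the paper's method; the module name, shapes, rank, and density are made-up assumptions): a weight matrix is parameterized as a masked sparse component plus a low-rank factorization and folded into one dense matrix in the forward pass.

```python
# Minimal sketch of a sparse-plus-low-rank weight parameterization,
# W = S * mask + U @ V, folded into one dense matrix in the forward pass.
# Purely illustrative: names, shapes, rank, and density are made up and
# this is not the FOSL or GRAIL algorithm.
import torch
import torch.nn as nn

class SparsePlusLowRankLinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8, density=0.05):
        super().__init__()
        # Low-rank factors U (d_out x rank) and V (rank x d_in).
        self.U = nn.Parameter(0.02 * torch.randn(d_out, rank))
        self.V = nn.Parameter(0.02 * torch.randn(rank, d_in))
        # Sparse component: dense parameter gated by a fixed random mask.
        self.S = nn.Parameter(torch.zeros(d_out, d_in))
        self.register_buffer("mask", (torch.rand(d_out, d_in) < density).float())

    def forward(self, x):
        W = self.S * self.mask + self.U @ self.V    # fold sparse + low-rank into one matrix
        return x @ W.t()

layer = SparsePlusLowRankLinear(512, 512)
out = layer(torch.randn(4, 512))                    # -> shape (4, 512)
```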
Conference on Parsimony and Learning (CPAL) retweeted
Yi Ma @YiMaTweets
Attending CPAL in Tübingen, Germany this week.
Conference on Parsimony and Learning (CPAL) retweeted
Yann Traonmilin @YTraonmilin
We got the best paper award at CPAL. I am sure Ali will do a great job presenting this work. There are some quite interesting insights in it.
Yann Traonmilin @YTraonmilin

Good week! Our paper with A. Joundi and J.-F. Aujol, "From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent" (hal.science/hal-05401157v1), has been accepted to the Conference on Parsimony and Learning @CPALconf

Conference on Parsimony and Learning (CPAL) retweeted
Red Hat AI @RedHat_AI
Two Red Hat AI research papers accepted to major 2026 conferences.
1️⃣ Panza: Design and Analysis of a Fully-Local Personalized Text Writing Assistant (Accepted to @CPALconf 2026)
• A fully local, personalized writing assistant that learns your style and runs entirely on-device.
• LoRA-style fine-tuning + quantization
• Built for sensitive data
• Includes a Gmail plugin
• Paper: arxiv.org/abs/2407.10994
• Code: github.com/IST-DASLab/Pan…
Congrats to Red Hat AI teams @_EldarKurtic, @il_markov, @nir_shavit, and @DAlistarh.
2️⃣ Bridging the Gap Between Promise and Performance for Microscaling FP4 Quantization (Accepted to @iclr_conf 2026)
• New FP4 approach for MXFP and NVFP
• Optimized for latest AMD and NVIDIA GPUs, including DGX B200
• Introduces Micro-Rotated-GPTQ with block-wise Hadamard transforms
• Paired with QuTLASS kernels for fast inference with negligible overhead
• Paper: arxiv.org/abs/2509.23202
Congrats to Red Hat AI teams @RobertoL_Castro, @_EldarKurtic, Shubhra Pandit, Alexandre Marques, @markurtz_, and @DAlistarh.
Fully local AI. Cutting-edge quantization. Real performance work shipping upstream.
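For intuition on the block-wise Hadamard transforms mentioned above, here is a generic sketch (not the Micro-Rotated-GPTQ or QuTLASS implementation; the block size and the simple per-block symmetric quantizer are assumptions): rotating each block of weights with an orthonormal Hadamard matrix spreads outliers across the block, which typically lowers the error of coarse, roughly 4-bit, round-to-nearest quantization for heavy-tailed weights.

```python
# Generic sketch: rotate weight blocks with a Hadamard matrix before coarse
# quantization. Not the Micro-Rotated-GPTQ / QuTLASS implementation; block
# size and the per-block symmetric quantizer are illustrative assumptions.
import numpy as np
from scipy.linalg import hadamard

def quantize_symmetric(x, n_levels=16):
    # Per-block round-to-nearest symmetric quantizer (roughly 4-bit).
    scale = np.abs(x).max(axis=-1, keepdims=True) / (n_levels / 2 - 1) + 1e-12
    return np.round(x / scale) * scale

block = 32
H = hadamard(block) / np.sqrt(block)                 # orthonormal Hadamard matrix
w = np.random.laplace(size=(128, block))             # heavy-tailed toy weights

plain_q = quantize_symmetric(w)                      # quantize directly
rot_q = quantize_symmetric(w @ H) @ H.T              # quantize in rotated basis, undo rotation

print("plain MSE:  ", np.mean((w - plain_q) ** 2))
print("rotated MSE:", np.mean((w - rot_q) ** 2))     # typically lower for outlier-heavy w
```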
Conference on Parsimony and Learning (CPAL) retweeted
Yann Traonmilin @YTraonmilin
Good week! Our paper with A. Joundi and J.-F. Aujol, "From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent" (hal.science/hal-05401157v1), has been accepted to the Conference on Parsimony and Learning @CPALconf
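For readers outside the sparse-recovery area, the "generalized projected gradient descent" in the title refers to a family of algorithms that alternate a gradient step on the data-fidelity term with a projection onto a low-dimensional model set. Below is a minimal sketch of the simplest instance, a hard-thresholding projection (iterative hard thresholding); the problem sizes and step size are arbitrary, and the paper's generalized setting allows learned or plug-and-play projections instead.

```python
# Minimal sketch of projected gradient descent for sparse recovery
# (iterative hard thresholding). Illustrative only: the paper studies a
# generalized setting where the projection can be a plug-and-play prior.
import numpy as np

def iht(A, y, k, n_iter=200):
    """Recover a k-sparse x from y = A @ x by projected gradient descent."""
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from the spectral norm of A
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)            # gradient step on 0.5 * ||y - A x||^2
        support = np.argsort(np.abs(x))[-k:]        # projection: keep the k largest entries
        x_proj = np.zeros(n)
        x_proj[support] = x[support]
        x = x_proj
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, k=5)
print(np.linalg.norm(x_hat - x_true))               # should be close to zero
```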
Conference on Parsimony and Learning (CPAL) retweeted
Nityanand Mathur @nityanandmathur
Where does pronunciation live in a large language model (LLM)-based text-to-speech (TTS) system, and how can we surgically modify it for specific texts while preserving all other model behavior? We've been trying to answer this question for a while, and after a lot of failed ideas, debugging, and rethinking assumptions, we finally hit a breakthrough. I am excited to share that our work has been accepted to @CPALconf 2026. @A_a_yush @harshan2002 @AkshatMandloi10 @kamath_sutra
feliwang @feliwang177207
@CPALconf Hello! I noticed that there is no registration button on the CPAL website. Will registration open in the next few weeks?
Conference on Parsimony and Learning (CPAL)
🚨 The deadline for CPAL has been extended by a week: it is now December 12th, 2025 (AOE) 🚨 If you've been frustrated by your experience submitting to ICLR or other big conferences, you still have TIME to submit to CPAL
Conference on Parsimony and Learning (CPAL)
Submitting to CPAL:
- You get high-quality reviewers in your field
- You get non-overwhelmed Area Chairs carefully evaluating the reviews
- You get to be part of a small, interdisciplinary community that cares about your work
Ariel @redtachyon

Submitting to ICLR:
- You get some AI-generated reviews
- You review AI-generated papers
- You write a rebuttal that gets ignored
- You get doxxed
Submitting to arXiv:
- You get to post on x dot com
- You might get posted by others
- You get to argue with idiots in comments

Conference on Parsimony and Learning (CPAL) retweeted
Shiwei Liu @Shiwei_Liu66
Love seeing @OpenAI highlight sparse circuits: sparsity is finally getting the attention it deserves. In our earlier work, we showed how sparse training can unlock robustness, efficiency, and better scaling: ICML'21 • NeurIPS'21 • ICLR'22 • ICML'24 • ICLR'23. Many great papers have come out of @VITAGroupUT.
🔗 In-time over-parameterization: proceedings.neurips.cc/paper/2021/fil…
🔗 GraNet: proceedings.mlr.press/v139/liu21y/li…
🔗 Outlier-weighed LLM pruning: arxiv.org/abs/2310.05175
🔗 Random sparse training: openreview.net/forum?id=VBZJ_…
🔗 Sparsity May Cry: arxiv.org/abs/2303.02141
The future is sparse. #Sparsity #DeepLearning
Leo Gao @nabla_theta

Excited to share our latest work on untangling language models by training them with extremely sparse weights! We can isolate tiny circuits inside the model responsible for various simple behaviors and understand them unprecedentedly well. openai.com/index/understa…

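As a toy illustration of the sparse-training theme running through these links, here is a generic magnitude-based prune-and-regrow step in the spirit of dynamic sparse training (not the algorithm of any specific paper above; the tensor sizes, density, and random regrowth criterion are assumptions).

```python
# Toy prune-and-regrow step in the spirit of dynamic sparse training.
# Generic illustration only, not the algorithm of any specific paper above;
# tensor sizes, density, and random regrowth are assumptions.
import torch

def prune_and_regrow(weight, mask, drop_frac=0.1):
    """Drop the smallest-magnitude active weights, regrow as many previously inactive ones."""
    flat_mask = mask.view(-1)
    active = flat_mask.bool()
    n_drop = int(drop_frac * active.sum().item())
    if n_drop == 0:
        return mask
    inactive_idx = (~active).nonzero(as_tuple=True)[0]            # regrowth candidates
    # Prune: deactivate the n_drop active weights with smallest magnitude.
    w_mag = torch.where(active, weight.abs().view(-1), torch.full_like(flat_mask, float("inf")))
    drop_idx = torch.topk(w_mag, n_drop, largest=False).indices
    flat_mask[drop_idx] = 0.0
    # Regrow: reactivate n_drop randomly chosen previously inactive positions.
    grow_idx = inactive_idx[torch.randperm(inactive_idx.numel())[:n_drop]]
    flat_mask[grow_idx] = 1.0
    return mask

w = torch.randn(256, 256)
mask = (torch.rand_like(w) < 0.10).float()           # start at roughly 10% density
mask = prune_and_regrow(w, mask)
print(mask.mean())                                   # density is unchanged by the update
```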