Carolin Holtermann
@CarolinHolterm
21 posts
PhD Student at the Data Science Chair in Hamburg

Hamburg, Germany · Joined December 2021
73 Following · 67 Followers
Pinned Tweet
Carolin Holtermann @CarolinHolterm
🧠 Which knowledge module do I need and how much of it ⁉️ 🌟 Check out our latest paper, "What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition", which has been officially accepted for Findings of the EACL Conference! arxiv.org/pdf/2401.12756…
[image]
4 replies · 1 repost · 11 likes · 1.1K views
Carolin Holtermann @CarolinHolterm
🚨 New paper alert: A new generation of LLMs can now process speech natively. This could expand access for millions excluded by text interfaces, but our research shows a cost: demographic cues in speaker voice can trigger stereotypical model responses. 🎙️⚖️
[image]
3 replies · 6 reposts · 19 likes · 3.5K views
Carolin Holtermann @CarolinHolterm
🚨The result is a self-reinforcing gap: systems optimize for frequent users, while growing more distant from those they were supposed to reach. Fairness and accessibility have to be tackled together.
0 replies · 0 reposts · 1 like · 41 views
Carolin Holtermann @CarolinHolterm
🛠️ Study 3: Mitigation exists. Discrimination tracks continuously with vocal pitch. Systematically lowering pitch reduced bias to statistically insignificant levels across all tested models.
[image]
1 reply · 0 reposts · 1 like · 53 views
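The tweet doesn't give the pipeline, but as a rough illustration of the kind of systematic pitch lowering described above, here is a minimal Python sketch assuming librosa and soundfile; the file name and the 2-semitone shift are illustrative assumptions, not the paper's actual settings:

```python
# Hedged sketch: lower the pitch of a speaker recording before
# prompting a speech LLM. "speaker.wav" and the -2 semitone shift
# are illustrative assumptions, not the paper's settings.
import librosa
import soundfile as sf

# Load the original recording (y: waveform, sr: sample rate).
y, sr = librosa.load("speaker.wav", sr=None)

# Shift pitch down by 2 semitones; duration stays unchanged.
y_low = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2.0)

# Save the modified audio for re-running the evaluation.
sf.write("speaker_lowered.wav", y_low, sr)
```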
Carolin Holtermann @CarolinHolterm
📣 CFP: Eval4SD @ KONVENS 2026, Hamburg (Sept 14–17). 1st workshop on evaluating LLMs for specialized domains. We invite work on benchmarking, replication, and evaluation methodology across diverse fields. Deadline: Jul 3, 2026, 23:59 CEST. eval4sd.github.io
0 replies · 0 reposts · 6 likes · 40 views
Carolin Holtermann @CarolinHolterm
✨Preprint Alert ✨ Thrilled to share that our paper has been accepted at the #EMNLP2025 main conference! 🚨 We uncover harmful stereotypical biases in LLMs toward German dialect speakers. 🚨 Check out the paper: arxiv.org/abs/2509.13835 Many thanks to all the great authors! 🙌🏼
Quoting NALA @NALACUJGU:

"You speak Bavarian? Then you must be uneducated and closed-minded!" 🤯 Not your opinion? Good. But it might be your LLM's! 🧵 In our #EMNLP2025 paper we uncover concerning dialect bias in recent LLMs - including GPT-5. #AI #Bias #Dialect #Fairness #LLM #NLProc #Safety

0 replies · 0 reposts · 4 likes · 70 views
Carolin Holtermann retweeted
Gregor Geigle @GregorGeigle
Want to train a *multilingual* LVLM but not sure how? Or looking for a strong model to use? Presenting "Centurio: On Drivers of Multilingual Ability of Large Vision-Language Models"! arXiv: arxiv.org/abs/2501.05122 HF Collection: huggingface.co/collections/Wu…
2 replies · 4 reposts · 14 likes · 3.8K views
Carolin Holtermann retweeted
Trustworthy AI Lab @TrustAI_lab
📢Hi X Community! We are the #datascience research group led by @anne_lauscher in Hamburg! We are doing #NLProc research relating to various topics such as #LLMs, multimodality, #EthNLP, etc. Follow our new account if you are interested in latest results, insights, and more! 📢
0 replies · 4 reposts · 9 likes · 857 views
Carolin Holtermann retweeted
Markus Frohmann @FrohmannM
🎉 Our paper "ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale" has been accepted to #ACL2024 findings! Congrats to my amazing co-authors @CarolinHolterm, Shahed Masoudian, @anne_lauscher, Navid Rekabsaz!
Quoting Markus Frohmann @FrohmannM:

Preprint alert! 🚨 📢 Announcing… ScaLearn! Redefining the landscape of Multi-task Learning (MTL) in NLP. Dive in to see how our ScaLearn enables highly efficient task transfer using adapters! 🚀 Spoiler: 8 parameters may be all you need. 🤯

0 replies · 2 reposts · 7 likes · 455 views
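For flavor, a toy PyTorch sketch of the learning-to-scale idea behind ScaLearn: frozen source-task adapters are combined through a handful of learned scalar coefficients, which are the only trainable parameters. Module names and shapes are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn as nn

class ScaleCombine(nn.Module):
    """Toy learning-to-scale head: weight frozen adapters by learned scalars."""
    def __init__(self, adapters: nn.ModuleList):
        super().__init__()
        self.adapters = adapters
        for p in self.adapters.parameters():
            p.requires_grad = False          # source adapters stay frozen
        # One learnable scalar per adapter -- the only trainable parameters.
        self.scales = nn.Parameter(torch.full((len(adapters),), 1.0 / len(adapters)))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        outs = torch.stack([a(hidden) for a in self.adapters])   # (n, batch, d)
        weights = self.scales.view(-1, *([1] * hidden.dim()))    # (n, 1, 1)
        return (weights * outs).sum(dim=0)

# Example: combine three toy "adapters" for a hidden size of 16.
combo = ScaleCombine(nn.ModuleList(nn.Linear(16, 16) for _ in range(3)))
print(combo(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```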
Carolin Holtermann @CarolinHolterm
@rasbt Very cool collection! We have a paper at Findings of the EACL 2024 where we present a unified framework for zero-shot composition of knowledge modules using parameter averaging and output ensembling: arxiv.org/pdf/2401.12756…
0 replies · 0 reposts · 1 like · 16 views
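As a rough sketch of the output-ensembling half of that framework (a weight-averaging counterpart is sketched after the next tweet), here is a minimal Python example; the 0.5/0.5 weighting and the vocabulary size are illustrative assumptions, not the paper's benchmarked settings:

```python
# Hedged sketch of zero-shot output ensembling: combine two models'
# next-token distributions instead of merging their parameters.
# The 0.5/0.5 weighting and vocab size 100 are illustrative assumptions.
import torch

def ensemble_logits(logits_a: torch.Tensor,
                    logits_b: torch.Tensor,
                    weight_a: float = 0.5) -> torch.Tensor:
    """Average two models' output distributions in probability space."""
    probs = (weight_a * torch.softmax(logits_a, dim=-1)
             + (1.0 - weight_a) * torch.softmax(logits_b, dim=-1))
    return torch.log(probs)  # back to log space for decoding

# Example with random "vocabulary" logits from two domain experts.
la, lb = torch.randn(1, 100), torch.randn(1, 100)
print(ensemble_logits(la, lb).shape)  # torch.Size([1, 100])
```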
Sebastian Raschka @rasbt
Weight averaging and model merging for LLMs seem to be the most interesting themes in 2024 so far. What are the benefits? Combining multiple models (or checkpoints) into a single one can improve training convergence, overall performance, and also robustness. I will probably do a deeper dive in the upcoming weeks, but here are at least 3 interesting papers. To get started, here's a selection of papers on this learning trajectory (no pun intended).
[image]
21 replies · 155 reposts · 994 likes · 157.2K views
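For readers new to the theme, a minimal sketch of the simplest variant, uniform weight averaging of checkpoints that share an architecture; this is the standard PyTorch state-dict pattern, not any particular paper's recipe, and the toy linear layers stand in for real checkpoints:

```python
import torch

def average_state_dicts(state_dicts):
    """Uniformly average a list of architecture-compatible state dicts."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Example: merge three toy linear layers in place of real checkpoints.
models = [torch.nn.Linear(4, 2) for _ in range(3)]
merged = torch.nn.Linear(4, 2)
merged.load_state_dict(average_state_dicts([m.state_dict() for m in models]))
```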
Carolin Holtermann @CarolinHolterm
🧠 Which knowledge module do I need and how much of it ⁉️ 🌟 Check out our latest paper, "What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition", which has been officially accepted for Findings of the EACL Conference! arxiv.org/pdf/2401.12756…
[image]
4 replies · 1 repost · 11 likes · 1.1K views
Carolin Holtermann @CarolinHolterm
(2/2) Focusing on the scenario of domain knowledge and adapter layers, our framework provides a systematic unification of concepts, allowing us to conduct the first comprehensive benchmarking study of various zero-shot knowledge composition strategies.
0 replies · 0 reposts · 0 likes · 92 views
Carolin Holtermann @CarolinHolterm
(1/2)🔍In this work, we delve into the possibilities of combining knowledge by proposing a novel framework for zero-shot module composition, which encompasses existing and some novel variations for selecting, weighting, and combining parameter modules under a unified notion.
0 replies · 0 reposts · 0 likes · 88 views
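To make the selecting/weighting/combining steps concrete, a toy sketch under the assumption that each domain module comes with a relevance score (for instance from domain similarity); the scores, top-k cutoff, and single-matrix modules are illustrative placeholders, not the paper's benchmarked strategies:

```python
# Hedged sketch of zero-shot module composition: select the top-k
# modules by relevance score, softmax the scores into weights, and
# weight-average the module parameters. All values are placeholders.
import torch

def compose_modules(modules, scores, top_k=2):
    ranked = sorted(zip(scores, modules), key=lambda x: -x[0])[:top_k]    # select
    weights = torch.softmax(torch.tensor([s for s, _ in ranked]), dim=0)  # weight
    return {  # combine via weighted parameter averaging
        key: sum(w * m[key] for w, (_, m) in zip(weights, ranked))
        for key in ranked[0][1]
    }

# Example: three toy "adapter" modules holding one weight matrix each.
mods = [{"W": torch.randn(8, 8)} for _ in range(3)]
print(compose_modules(mods, scores=[0.2, 0.7, 0.1])["W"].shape)  # torch.Size([8, 8])
```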