

AdapterHub

@AdapterHub
A central repository for pre-trained adapter modules in transformers! Active maintainers: @clifapt @h_sterz @LeonEnglaender @timo_imhof @PfeiffJo

📢 New preprint 🎉 We introduce "M2QA: Multi-domain Multilingual Question Answering", a benchmark for evaluating joint language and domain transfer. We present 5 key findings - one of them: Current transfer methods are insufficient, even for LLMs! 📜arxiv.org/abs/2407.01091 🧵👇

🎉Adapters 1.0 is here!🚀 Our open-source library for modular and parameter-efficient fine-tuning got a major upgrade! v1.0 is packed with new features (ReFT, Adapter Merging, QLoRA, ...), new models & improvements! Blog: adapterhub.ml/blog/2024/08/a… Highlights in the thread! 🧵👇
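
A minimal sketch of the modular fine-tuning workflow the release announces, assuming the adapters v1.0 Python API; the checkpoint and adapter names here are illustrative:

import adapters
from transformers import AutoModelForSequenceClassification

# Load a standard Hugging Face model and attach adapter support to it.
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)
adapters.init(model)

# Add a LoRA adapter and train only its parameters; the base model stays frozen.
model.add_adapter("my_task", config="lora")
model.train_adapter("my_task")
model.set_active_adapters("my_task")

# Adapter merging (hedged: check the v1.0 docs for the exact signature), e.g.:
# model.average_adapter("merged", ["task_a", "task_b"], weights=[0.5, 0.5])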

New paper! 🫡 We introduce Representation Finetuning (ReFT), a framework for powerful, efficient, and interpretable finetuning of LMs by learning interventions on representations. We match/surpass PEFTs on commonsense, math, instruct-tuning, and NLU with 10–50× fewer parameters.
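
A minimal PyTorch sketch of the low-rank intervention (LoReFT) the paper describes, phi(h) = h + R^T(Wh + b - Rh) with R constrained to orthonormal rows; class and dimension choices are illustrative, not the official implementation:

import torch
import torch.nn as nn

class LoReftIntervention(nn.Module):
    """Edits a hidden state h only inside a learned rank-r subspace."""
    def __init__(self, hidden_size: int, rank: int):
        super().__init__()
        # R: rank x hidden projection, constrained to have orthonormal rows.
        self.R = nn.utils.parametrizations.orthogonal(
            nn.Linear(hidden_size, rank, bias=False)
        )
        # W, b: learned linear map producing the target subspace values.
        self.W = nn.Linear(hidden_size, rank)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h + R^T (W h + b - R h): move the subspace component of h
        # toward the learned target; the rest of h is left untouched.
        return h + (self.W(h) - self.R(h)) @ self.R.weight

h = torch.randn(2, 16, 768)             # (batch, seq_len, hidden_size)
reft = LoReftIntervention(768, rank=4)  # rank << hidden keeps params tiny
print(reft(h).shape)                    # torch.Size([2, 16, 768])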


