

Giorgos Vernikos
126 posts

@gvernikos
ML SWE @googledeepmind Gemini in 🇨🇭. Prev @EPFL, @Google, @AmazonScience and @ecentua



Are you interested in joining @GoogleDeepMind as a student researcher (for PhD students)? I am hiring for a fun project on reward modeling for world models! If you are interested in video understanding/generation and reward models/critiquing, get in touch and apply below! 👇

🚀 Introducing INCLUDE 🌍: A multilingual LLM evaluation benchmark spanning 44 languages! Contains *newly collected* data, prioritizing *regional knowledge*. Setting the stage for truly global AI evaluation. Ready to see how your model measures up? #AI #Multilingual #LLM #NLProc



📢📢 Call for papers #Repl4NLP @naaclmeeting Consider submitting your work -- full papers, extended abstracts, or cross-submissions! ✨ Direct paper submission deadline: Jan 30, 2025 ✨ ARR commitment deadline: Feb 20, 2025 More details on our website: sites.google.com/view/repl4nlp2…



We have released Global-MMLU-lite 🔥 It is designed to run more efficiently while still giving a good estimate of overall performance, and it balances culturally sensitive and culturally agnostic examples. 🎉 huggingface.co/datasets/Coher…

Today, we’re excited to share Global-MMLU 🌍: a multilingual LLM benchmark covering MMLU translations in 42 languages, with improved quality through human curation and extensive metadata on which questions are culturally sensitive 🗽
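For context, a minimal sketch of how a benchmark like this is typically loaded and split by its cultural-sensitivity metadata with the Hugging Face datasets library. The dataset ID, config name, and column values below are assumptions for illustration (the link above is truncated); check the dataset card for the actual schema.

from datasets import load_dataset

# Dataset ID, config, and metadata column are assumptions, not confirmed
# specifics from the (truncated) link above.
ds = load_dataset("CohereForAI/Global-MMLU", "en", split="test")

# Split on the (assumed) cultural-sensitivity metadata column.
sensitive = [row for row in ds if row["cultural_sensitivity_label"] == "CS"]
agnostic = [row for row in ds if row["cultural_sensitivity_label"] == "CA"]
print(f"{len(sensitive)} culturally sensitive / {len(agnostic)} culturally agnostic")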



Can we achieve effective zero-shot summarization by combining language & task information from existing PEFT adapters with weight arithmetic? Our paper, accepted at the @mrl2024_emnlp workshop #EMNLP2024, explores this question! A 🧵:
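In spirit, the weight-arithmetic idea looks like the sketch below: an element-wise combination of a task adapter (e.g., summarization trained in a high-resource language) and a language adapter (trained on unlabeled target-language text). This is a hedged illustration, not the paper's exact recipe; the function, scaling factor, and adapter names are assumptions.

import torch

def combine_adapters(task_adapter, lang_adapter, scale=1.0):
    """Element-wise weight arithmetic over two PEFT (e.g., LoRA) state dicts.

    Adds the language adapter's weights to the task adapter's, a simple
    instance of the task-vector idea; the paper's actual combination rule
    may differ (this is a sketch).
    """
    combined = {}
    for name, task_w in task_adapter.items():
        lang_w = lang_adapter.get(name)
        combined[name] = task_w + scale * lang_w if lang_w is not None else task_w
    return combined

# Hypothetical usage: fuse a summarization adapter trained in English with a
# language adapter trained on monolingual target-language text, then load the
# result for zero-shot summarization in that language.
# zero_shot_adapter = combine_adapters(summarization_en, lang_de, scale=1.0)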


Happy to share our latest preprint with @apopescubelis: "Don't Rank, Combine! Combining Machine Translation Hypotheses Using Quality Estimation" 📝preprint: arxiv.org/abs/2401.06688
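The core contrast with reranking can be sketched as follows: a QE-reranking baseline picks the single highest-scoring hypothesis, while a fusion step splices material across hypotheses and keeps whatever the QE model prefers. Here qe_score is a placeholder for any reference-free QE metric, and the greedy word-level splice is a toy stand-in for the paper's actual combination method.

def qe_rerank(source, hypotheses, qe_score):
    """Baseline: rank candidate translations by QE and return the best one."""
    return max(hypotheses, key=lambda hyp: qe_score(source, hyp))

def qe_fuse(source, hypotheses, qe_score):
    """Toy fusion: greedily splice words from other hypotheses into the
    current best one whenever the splice improves the QE score. The paper's
    algorithm is more sophisticated; this only illustrates the idea of
    combining rather than ranking."""
    best = qe_rerank(source, hypotheses, qe_score)
    best_score = qe_score(source, best)
    for hyp in hypotheses:
        words = best.split()
        for i, alt in enumerate(hyp.split()[: len(words)]):
            candidate = " ".join(words[:i] + [alt] + words[i + 1:])
            score = qe_score(source, candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best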

Congratulations to Dr. Katerina (@katemargatina) who passed her PhD viva today with no corrections!! 🎉🥳 Katerina has made very important contributions to active learning for NLP, including contrastive example acquisition!



What do different contrastive learning (CL) losses actually optimize for? In our #ICML2024 paper, we provide a theoretical analysis and propose two loss functions that outperform conventional CL losses. Full paper here: arxiv.org/abs/2405.18045 w/@gbouritsas A thread 🧵
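For reference, here is the standard InfoNCE loss that analyses like this typically start from, sketched in PyTorch; the paper's theoretical results and proposed loss functions are variants not reproduced here.

import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    """Standard InfoNCE contrastive loss.

    anchors, positives: (batch, dim) embeddings where row i of each tensor
    forms a positive pair; all other rows act as in-batch negatives.
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    logits = anchors @ positives.T / temperature   # pairwise cosine similarities
    labels = torch.arange(anchors.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings:
# loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))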

