Iñigo Alonso

33 posts


@alonsonlp

Research Associate at @EdinburghNLP | Former Ph.D. student at @Hitz_zentroa | NLP | Multimodality | Table Understanding

Edinburgh, Scotland · Joined October 2023
128 Following · 81 Followers
Iñigo Alonso @alonsonlp
I’ll be presenting TABLET at #ICLR2026 this week! Our 4-million-example Visual Table Understanding dataset with original table visualisations! Come and say hi! :D We’ll have actual Scottish tablets 👀 See you all on Friday at 7:15 PM, Pavilion 4, P4-#3606 alonsoapp.me/tablet/
1 reply · 6 reposts · 11 likes · 449 views
Iñigo Alonso retweeted
Irina Saparina @irisaparina
Reasoning models are powerful, but they burn thousands of tokens on potentially wrong interpretations for ambiguous requests! 👉 We teach models to think about intent first and provide all interpretations and answers in a single response via RL with dual reward. 🧵1/6
1 reply · 12 reposts · 35 likes · 2.7K views
Iñigo Alonso retweeted
Ashutosh Adhikari @aadhikariii
Excited to share my first work as a PhD student at @EdinburghNLP, which I will be presenting at EMNLP! RQ1: Can we achieve scalable oversight across modalities via debate? Yes! We show that debating VLMs leads to better-quality answers on reasoning tasks.
1 reply · 7 reposts · 14 likes · 1.3K views
Iñigo Alonso retweeted
HiTZ zentroa (EHU) @Hitz_zentroa
Yesterday and today we let the #IEB2024 participants choose the best among the Basque language models. Many thanks to everyone!! Besides the commercial systems, we also tried out the newest, smallest Latxa 🐑 that HiTZ is developing, 8B in size. It is based on Llama 3.1. And the winner is… 🥁
(translated from Basque)
1 reply · 15 reposts · 15 likes · 3.7K views
Iñigo Alonso retweeted
Masaru Isonuma @m_isonuma
New paper: "What's New in My Data? Novelty Exploration via Contrastive Generation," w/ @iatitov at @EdinburghNLP arxiv.org/abs/2410.14765 With no access to the data, but only access to the pre-trained and fine-tuned model, we can reveal novel aspects of the fine-tuning dataset.
2 replies · 6 reposts · 26 likes · 4K views
Iñigo Alonso retweeted
HiTZ zentroa (EHU) @Hitz_zentroa
WE ARE HIRING!! The HiTZ Center (hitz.eus) at the University of the Basque Country (UPV/EHU) invites applications for several funded research engineering and pre/postdoctoral positions in Natural Language and Speech Processing. hitz.eus/job-offers
0 replies · 19 reposts · 19 likes · 1.7K views
Agostina Calabrese 🦋 @agostina_cal
(Sadly?) on my way back from my solo trip in Southern Asia. I have biked in the rice fields, swum with elephants in the north of Thailand and encountered wild orangutans in the Bornean jungle. I'm basically Julia Roberts now. Any cool pictures from your post-#ACL2024NLP trip? 👀
2 replies · 0 reposts · 22 likes · 1.8K views
Iñigo Alonso retweeted
EdinburghNLP @EdinburghNLP
Represent!!! 🚀🚀🚀🚀🚀
0 replies · 5 reposts · 59 likes · 13.3K views
Iñigo Alonso @alonsonlp
@jparag123 Parag! I didn’t get to see you yet! We share the same poster session, but I’ll try to pass by yours :D
0 replies · 0 reposts · 1 like · 40 views
Iñigo Alonso retweeted
Julen Etxaniz @juletxara
Today I will be presenting Latxa at @aclmeeting In-Person Poster Session 3 at 16:00-17:30. Come if you want to hear about our work. I will be giving out Latxa stickers too! #ACL2024NLP
Quoting Julen Etxaniz @juletxara:
In our new paper, we introduce Latxa, a family of LLMs for Basque from 7 to 70B parameters that outperform open models and GPT3.5. Models and datasets @huggingface hf.co/collections/Hi… Code: github.com/hitz-zentroa/l… Blog: hitz.eus/en/node/343 Paper: arxiv.org/abs/2403.20266

0 replies · 10 reposts · 27 likes · 3.5K views
Iñigo Alonso @alonsonlp
Hey! I will be presenting our work, PixT3: Pixel-based Table-To-Text Generation, tomorrow at Poster Session 1 at 11:00AM during #ACL2024NLP Come and say hi! :D
Quoting Iñigo Alonso @alonsonlp:
Reimagining table representation! In our new #ACL2024NLP paper we introduce PixT3: a family of image-based Table-to-Text Generation models that scale better at generating text from large tables, outperforming traditional text-based baselines. arxiv.org/abs/2311.09808

0 replies · 4 reposts · 22 likes · 1.6K views
Iñigo Alonso @alonsonlp
Conclusions: (i) The overall performance of LLMs, with or without RAG, still has large room for improvement; (ii) performance for non-English languages is substantially lower, stressing the urgent need to advance Medical QA in languages other than English.
0 replies · 1 repost · 1 like · 167 views
Iñigo Alonso @alonsonlp
Highlights: (i) MedExpQA: the first multilingual benchmark for Medical QA including gold reference explanations; (ii) an exhaustive comparison of gold reference explanations against medical knowledge automatically retrieved with state-of-the-art RAG techniques.
1 reply · 1 repost · 2 likes · 179 views
Iñigo Alonso retweeted
Mustafa Shukor @MustafaShukor1
LLMs are the de facto building block in recent multimodal models. Yet, it is still unclear why text-only models can generalize to multimodal inputs! We investigate why, and provide insights with practical implications on performance, efficiency and safety problems. (1/10)
2 replies · 32 reposts · 137 likes · 16.2K views