Joemon Jose

516 posts

@joemonj

IR researcher and academic

Glasgow, Scotland · Joined March 2009
257 Following · 601 Followers
Joemon Jose retweeted
Ioannis Arapakis@iarapakis·
Big congrats to Ángela López Cardona & Sebastian Idesis on our latest IJCNN publication! We review how cognitive signals, esp. eye tracking, enhance language & image understanding in LLMs & VQA. Preprint: lnkd.in/erHm9NUx #AI #NLP #eyetracking #LLMs #IJCNN #TID
Joemon Jose retweeted
Python Coding@clcoding·
Computer Languages and their founders
Joemon Jose retweeted
Andrew Ng@AndrewYNg·
New short course: Attention in Transformers: Concepts and Code in PyTorch. Last week we released a course on how LLM transformers work. This week, go deeper and learn about the technical ideas behind the attention mechanism, and see how to code it in PyTorch. This course is built with @joshuastarmer, Founder and CEO of StatQuest.

The attention mechanism was a breakthrough that led to transformers, the architecture powering large language models like ChatGPT. Transformers, introduced in the 2017 paper "Attention Is All You Need" by Vaswani et al., took off because of their highly scalable design. In this course, you'll learn how the attention mechanism, a key element of transformer-based LLMs, works, and implement it in PyTorch. You'll develop deep intuition about building reliable, functional, and scalable AI applications.

What you will do:
- Understand the evolution of the attention mechanism, a key breakthrough that led to transformers.
- Learn the relationships between word embeddings, positional embeddings, and attention.
- Learn about the Query, Key, and Value matrices, and how to produce and use them in attention.
- Walk through the math required to calculate self-attention and masked self-attention to learn why and how they work.
- Understand the difference between self-attention and masked self-attention: one is used in the encoder to build context-aware embeddings, the other in the decoder for generative outputs.
- Learn the details of the encoder-decoder architecture, cross-attention, and multi-head attention, and how they are all incorporated into a transformer.
- Use PyTorch to code a class that implements self-attention, masked self-attention, and multi-head attention.

There are lots of exciting technical details in this course. Please sign up here: deeplearning.ai/short-courses/…
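As a taste of the material, here is a minimal sketch of scaled dot-product self-attention in PyTorch, with an optional causal mask for the masked variant. This is an illustrative sketch, not the course's actual code; the class and parameter names are assumptions.

```python
# Minimal sketch of self-attention and masked self-attention in PyTorch.
# Illustrative only; names here are assumptions, not the course's code.
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        # Linear maps that produce the Query, Key, and Value matrices.
        self.W_q = nn.Linear(d_model, d_model, bias=False)
        self.W_k = nn.Linear(d_model, d_model, bias=False)
        self.W_v = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor, causal: bool = False) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        # Similarity scores, scaled by sqrt(d_k) to keep the softmax stable.
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
        if causal:
            # Masked self-attention: hide future positions (decoder-style).
            seq_len = x.size(1)
            mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                         device=x.device), diagonal=1)
            scores = scores.masked_fill(mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return weights @ v  # context-aware embeddings

# Example: 2 sequences of 5 tokens with 16-dim embeddings.
attn = SelfAttention(d_model=16)
out = attn(torch.randn(2, 5, 16), causal=True)
print(out.shape)  # torch.Size([2, 5, 16])
```

Multi-head attention, also covered in the course, runs several such attention operations in parallel over lower-dimensional projections and concatenates the results.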
Joemon Jose retweeted
SIGIR 2025@SIGIRConf·
📣 📣 The #sigir2025 LiveRAG Challenge is open at: sigir2025.dei.unipd.it/live-rag-chall…
Prizes:
- First Prize: $5000
- Second Prize: $3000
- Third Prize: $2000
Deadlines:
- Application submission deadline: February 24, 2025
More info? Ask sigir2025-liverag-gen@tii.ae.
Joemon Jose retweeted
Ioannis Arapakis@iarapakis·
Beyond a successful submission, it has also been an adrenaline-fueled research sprint positioned at the state of the art (SoA) of Reinforcement Learning-based Recommender Systems with Large Language Models, with @sasxbd, @joemonj and @alexk_z. Our paper is available here: arxiv.org/abs/2403.16948
Alexandros@alexk_z

"Reinforcement Learning-based Recommender Systems with LLM for State Reward and Action Modeling" has been just accepted at #SIGIR2024 LLM (LE) simulates the user interactions generating state actions and rewards that are used to train the model with RL. arxiv.org/abs/2403.16948

Joemon Jose retweeted
Alexandros@alexk_z·
"Reinforcement Learning-based Recommender Systems with LLM for State Reward and Action Modeling" has been just accepted at #SIGIR2024 LLM (LE) simulates the user interactions generating state actions and rewards that are used to train the model with RL. arxiv.org/abs/2403.16948
Joemon Jose retweeted
Mounia Lalmas@mounialalmas·
I am really honoured to join this amazing group of IR scholars. SIGIR has been a big part of my growth as a researcher, so a big thank you. It feels extra special to be welcomed into the SIGIR Academy with @IR_oldie; we both started working on IR around the same time in Glasgow.
Mark Sanderson@IR_oldie

It means a great deal to me to join the 2024 class of the @ACMSIGIR Academy. SIGIR is a community I have been delighted to be part of for over 30 years. Sharing the honor with Donna Harman, @mounialalmas, @841io, & Yiqun Liu makes this all the more special. sigir.org/awards/sigir-a…

Joemon Jose retweeted
Junchen Fu@ron_junchen_fu·
Our paper, "IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT," has been accepted at #SIGIR2024 as a full paper!🎉🎉🎉 Thrilled to share this collaboration with @Solis_xurige, @joemonj, @alexk_z, & @iarapakis. 👏👏👏
Joemon Jose retweeted
Zijun Long@zijunlong·
Glad to share that our paper "CFIR: Fast and Effective Long-Text to Image Retrieval for Large Corpora" has been accepted at #SIGIR2024 as a full paper, a joint work with @Solis_xurige, @richardm_, and @joemonj.
Joemon Jose retweeted
Hideo Joho@hideo_joho·
Delighted to share that I'll be spending part of my sabbatical at the University of Glasgow until March 2025. Grateful to @joemonj for hosting me. Looking forward to meeting former colleagues and new friends.