Rahaf Aljundi

21 posts

@AljundiRahaf

Research scientist at Toyota Motor Europe

Louvain, Belgium · Joined November 2016
203 Following · 195 Followers
Rahaf Aljundi@AljundiRahaf·
This fall, during a Dagstuhl seminar on continual learning, we discussed the roadmap for continual learning with researchers from across the field. We converged on one view: modular memory is the key to continual learning agents, as outlined here: arxiv.org/pdf/2603.01761
Rahaf Aljundi retweeted
Gunshi Gupta@GunshiGupta·
How do you give an RL agent useful long-term memory when it needs to act over thousands of steps? Storing everything in-context is expensive, text summaries lose detail, and plain recurrence struggles with long horizons. Our NeurIPS Spotlight paper explores a simple idea 🧵:
#CVPR2026@CVPR·
@BinTrash Also oral/poster decisions have NOT been released.
Mona Jalal @ cvpr2025@MonaJalal_·
What are the start and end dates of CVPR, including all workshops, the expo, tutorials, and the main conference? Google says June 10-17 and the CVPR website says June 11-15. Please RT and help me @CVPR #CVPR2025 #cvpr
Rahaf Aljundi@AljundiRahaf·
If you are at @eccvconf and interested in a postdoc position on efficient multimodal models, please ping me for a chat.
Rahaf Aljundi@AljundiRahaf·
If you are working on continual learning with foundation models and you have something cool to share, please consider submitting to our NeurIPS workshop sites.google.com/u/0/d/1VBppSNb…
Rahaf Aljundi@AljundiRahaf·
Are you interested in efficient & continual few-shot updates for VLMs with zero replay and close to zero forgetting? Check our work on PEFT of VLMs: 2407.16526 (arxiv.org)
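The tweet does not spell out the method, so as a rough illustration of what parameter-efficient few-shot updating without replay can look like, here is a minimal LoRA-style adapter sketch in PyTorch. The class, parameter names, and training loop are generic assumptions for illustration, not the recipe from arXiv:2407.16526.

# Hedged sketch: LoRA-style parameter-efficient few-shot update.
# NOTE: this is NOT the method from arXiv:2407.16526; it only illustrates the
# general PEFT idea of training a tiny set of extra weights while the base
# model stays frozen (so there is nothing to replay).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a small trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # base weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # The low-rank delta starts at zero because lora_b is zero-initialised.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

def few_shot_update(model: nn.Module, support_x, support_y, steps: int = 20):
    """Adapt only the LoRA parameters on a handful of labelled examples."""
    lora_params = [p for n, p in model.named_parameters() if "lora_" in n]
    opt = torch.optim.AdamW(lora_params, lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(support_x), support_y)
        loss.backward()
        opt.step()

Because only the adapter weights move, the frozen base model naturally limits forgetting, which is the usual motivation for PEFT in a continual setting.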
Rahaf Aljundi@AljundiRahaf·
Did you know that you can significantly improve the performance of VLMs such as LLaVA and MiniGPT2 by updating only the vision encoder separately and plugging it back into the corresponding VLM? 2407.16526 (arxiv.org)
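A rough sketch of the "update the vision encoder separately, then plug it back in" workflow in PyTorch. The attribute names (vision_encoder, language_model) and the stand-alone training objective are hypothetical placeholders, not the actual LLaVA/MiniGPT interfaces or the exact procedure from arXiv:2407.16526.

# Hedged sketch: tune only the vision encoder of a VLM, then swap it back in.
# Attribute names below are hypothetical placeholders, not real library APIs.
import copy
import torch
import torch.nn as nn

def tune_vision_encoder(vlm: nn.Module, image_batches, loss_fn, lr: float = 1e-5):
    """Fine-tune a detached copy of the vision encoder on image-side data only."""
    encoder = copy.deepcopy(vlm.vision_encoder)          # trained separately from the VLM
    opt = torch.optim.AdamW(encoder.parameters(), lr=lr)
    for images, targets in image_batches:
        opt.zero_grad()
        loss = loss_fn(encoder(images), targets)         # caller-supplied image-side objective
        loss.backward()
        opt.step()
    return encoder

def plug_back(vlm: nn.Module, tuned_encoder: nn.Module) -> nn.Module:
    """Swap the tuned encoder into the VLM; the LLM and the rest stay untouched."""
    vlm.vision_encoder = tuned_encoder
    for p in vlm.language_model.parameters():
        p.requires_grad = False
    return vlm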
Rahaf Aljundi@AljundiRahaf·
This is why we need continual learning ;)
Rahaf Aljundi@AljundiRahaf·
We are looking for excellent students for research internships on Continual Learning at Toyota Motor Europe. Please share and contact me if you are interested. social.icims.com/viewjob/pt1669…
Rahaf Aljundi retweeted
Thang Doan@doan_tl·
We [EXCEPTIONALLY] extend the deadline of our workshop on "Theory of Continual Learning" (ICML'21) to June 11th (AoE) more information at: sites.google.com/view/cl-theory…
Rahaf Aljundi@AljundiRahaf·
Our new continual learning survey with @matt_dl96 and @mmasana is now on arxiv. We compare 10 SOTA continual learning methods on 2 benchmarks. We investigate the effect of model capacity, regularization, task ordering and so much more. Check it out at: arxiv.org/abs/1909.08383
Rahaf Aljundi retweeted
Massimo Caccia@MassCaccia·
Most continual learning approaches assume that training on a new task will cause equal interference over previously learned tasks. With "Maximally Interfered Retrieval" we relax this assumption by automatically rehearsing on tasks currently undergoing the most forgetting.
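The tweet describes the retrieval criterion but not the code, so here is a minimal sketch of rehearsal by interference in PyTorch, assuming a classification setting and a simple tensor replay buffer (mem_x, mem_y). Function names and hyperparameters are illustrative, not the authors' implementation.

# Hedged sketch: pick replay samples that would be most hurt ("maximally
# interfered") by the pending update, instead of sampling the buffer at random.
import copy
import torch
import torch.nn.functional as F

def maximally_interfered_retrieval(model, new_x, new_y, mem_x, mem_y,
                                   k: int = 32, lr: float = 0.1):
    """Return the k stored examples whose loss grows most under a virtual update."""
    # 1) Virtual update: one SGD step on the incoming batch, on a throwaway copy.
    virtual = copy.deepcopy(model)
    opt = torch.optim.SGD(virtual.parameters(), lr=lr)
    opt.zero_grad()
    F.cross_entropy(virtual(new_x), new_y).backward()
    opt.step()

    # 2) Score each memory sample by its loss increase (the "interference").
    with torch.no_grad():
        before = F.cross_entropy(model(mem_x), mem_y, reduction="none")
        after = F.cross_entropy(virtual(mem_x), mem_y, reduction="none")
        interference = after - before

    # 3) Rehearse the most-interfered memories alongside the new batch.
    top = torch.topk(interference, k=min(k, mem_x.size(0))).indices
    return mem_x[top], mem_y[top]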
Rahaf Aljundi retweeted
Massimo Caccia@MassCaccia·
Interested in Continual (Lifelong) Learning? Come to the Workshop on Multi-Task and Lifelong Reinforcement Learning tomorrow (Saturday) @ ICML for posters and an oral on how to rehearse on older tasks efficiently!
Rahaf Aljundi@AljundiRahaf·
@mikarv @lilianedwards I guess telling the network not to forget important things is different from asking it to forget something
Rahaf Aljundi@AljundiRahaf·
Given limited model capacity and unlimited, evolving data, it is not possible to preserve all the knowledge an agent has previously learned. How about learning what (not) to forget instead? That's what we humans do, and it's what our method aims at: arxiv.org/abs/1711.09601
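For readers unfamiliar with the idea, here is a minimal sketch of importance-weighted regularization in the spirit of the linked paper (Memory Aware Synapses): estimate how sensitive the model's output is to each parameter, then penalize changing the important ones while learning new data. The code is simplified for illustration (single penalty strength, summed squared output norm), not the paper's full recipe.

# Hedged sketch in the spirit of Memory Aware Synapses (arXiv:1711.09601):
# importance = average sensitivity of the squared output norm to each weight;
# changes to important weights are penalised when training on new data.
import torch
import torch.nn as nn

def estimate_importance(model: nn.Module, data_loader):
    """Average |d ||f(x)||^2 / d theta| over (unlabelled) data batches."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in data_loader:
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()          # sensitivity of the squared output norm
        for n, p in model.named_parameters():
            if p.grad is not None:
                omega[n] += p.grad.abs()
        n_batches += 1
    return {n: w / max(n_batches, 1) for n, w in omega.items()}

def importance_penalty(model: nn.Module, old_params, omega, lam: float = 1.0):
    """Quadratic penalty on moving important parameters away from their old values."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (omega[n] * (p - old_params[n]).pow(2)).sum()
    return lam * penalty

# When training on new data, the total loss would be something like:
#   loss = task_loss(model(x), y) + importance_penalty(model, old_params, omega)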