Quentin Delfosse

19 posts


@liimeleemon

PhD student in NeuroSymbolic Reinforcement Learning (AI)

Darmstadt/Paris · Joined March 2018
62 Following · 98 Followers
Quentin Delfosse reposted

Georgia Chalvatzaki @GeorgiaChal
🚀 Hiring PhDs & Postdocs in Structured Robot Learning & Embodied AI @TUDarmstadt (PEARL Lab) 🤖
We study how structure in the robot–environment system can be exploited to learn robust, adaptive, and generalizable behaviors, beyond black-box policies 🔬
Topics:
• Grounding (language → perception → action)
• Structured world models
• VLA + control + memory
• RL (credit assignment, offline→online)
• Whole-body & bimanual mobile manipulation
🇪🇺 New EU Lighthouse Project on Generative AI for Robotics + ERC StG SIREN
👉 Full call: tinyurl.com/pearlgenai
Please repost 🙏
#RobotLearning #EmbodiedAI #Robotics #MachineLearning #PhDPositions #Postdoc
3 replies · 30 reposts · 131 likes · 12.3K views
Quentin Delfosse reposted

Kevin Patrick Murphy @sirbayes
I am pleased to announce our new paper, which provides an extremely sample-efficient way to create an agent that performs well in multi-agent, partially observed, symbolic environments. The key idea is to use LLM-powered code synthesis to learn a code world model (in the form of Python code) from a small dataset of (observation, action) trajectories plus some background information in text form. This induced world model, together with the observation history, is then passed to an existing solver, such as (information-set) MCTS, to choose the next action.
[media attached]

17 replies · 101 reposts · 820 likes · 75.6K views
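The code-world-model idea in the tweet above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: in the paper an LLM synthesizes the world model as Python code from (observation, action) trajectories, whereas here `transition` is hand-written for an invented 1-D gridworld, and exhaustive rollout search stands in for a real solver such as (information-set) MCTS.

```python
# Toy sketch of planning with an induced code world model (illustrative;
# `transition` stands in for LLM-synthesized code, and exhaustive search
# stands in for a solver such as MCTS).

def transition(obs: int, action: int) -> tuple[int, float]:
    """Induced world model: (observation, action) -> (next_obs, reward)."""
    next_obs = max(0, min(4, obs + action))  # 5 states, actions are -1/+1
    reward = 1.0 if next_obs == 4 else 0.0   # reward for reaching the goal
    return next_obs, reward

def plan(obs: int, horizon: int = 3) -> int:
    """Pick the action with the best simulated return under the code WM."""
    def value(o: int, d: int) -> float:
        if d == 0:
            return 0.0
        returns = []
        for a in (-1, 1):
            nxt, r = transition(o, a)
            returns.append(r + value(nxt, d - 1))
        return max(returns)
    scores = {}
    for a in (-1, 1):
        nxt, r = transition(obs, a)
        scores[a] = r + value(nxt, horizon - 1)
    return max(scores, key=scores.get)

print(plan(2))  # → 1 (moving right leads toward the rewarding state)
```

The appeal of the approach is that the world model is ordinary, inspectable code: if the induced `transition` is wrong, it can be read, tested, and re-synthesized, unlike a learned neural dynamics model.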
Quentin Delfosse reposted

Axel Darmouni @ADarmouni
Making actually explainable RL agents: an experiment on Atari games 🧵📖

Read of the day, season 3, day 2: Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents, by @liimeleemon, Sztwiertnia et al. from TU Darmstadt's computer science department.

The authors' premise is that current methods for interpreting the behavior of RL agents are flawed. Their main evidence is shortcut learning, which has been observed at least in Atari games. For instance, in Pong, because the opposing AI follows the ball's trajectory, an RL model can learn to focus on the opponent's paddle rather than the ball when deciding whether to move up or down.

This misalignment of the agent's decision process is a problem, all the more so because it is hard to interpret. Hence the authors' work in this paper 👇
[media attached]

2 replies · 6 reposts · 9 likes · 757 views
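The concept-bottleneck idea from the thread above can be illustrated with a toy sketch (not the paper's code; the observation fields `ball_y`, `player_y`, `enemy_y` and the extractor are invented for this example). The agent first maps the raw observation to named, human-readable concepts and then acts on those concepts alone, so a shortcut such as tracking the opponent's paddle becomes visible, and removable, at the bottleneck.

```python
# Toy concept-bottleneck policy for a Pong-like setting (illustrative
# only; all field names are invented for this sketch).

def extract_concepts(obs: dict) -> dict:
    """Bottleneck: map the raw observation to human-readable concepts.
    `enemy_y` is deliberately excluded, which rules out the
    'track the opponent's paddle' shortcut described above."""
    return {"ball_y": obs["ball_y"], "player_y": obs["player_y"]}

def policy(concepts: dict) -> str:
    """Transparent policy that sees only the named concepts."""
    if concepts["ball_y"] < concepts["player_y"]:
        return "UP"    # ball is above the paddle (smaller y = higher)
    if concepts["ball_y"] > concepts["player_y"]:
        return "DOWN"
    return "NOOP"

obs = {"ball_y": 40, "player_y": 60, "enemy_y": 10}
print(policy(extract_concepts(obs)))  # → UP
```

Because every input to the policy is a named concept, a human can audit exactly which quantities drive each action and intervene on the concept set when the agent has latched onto the wrong one.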
Quentin Delfosse reposted

Wolfgang Stammer @WolfStammer
How can we identify and mitigate shortcuts and misalignment in RL with concept bottlenecks? We'll tell you at our poster 3411 at #NeurIPS2024
[media attached]

1 reply · 2 reposts · 25 likes · 636 views
Quentin Delfosse @liimeleemon
So happy that our paper Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents (arxiv.org/pdf/2401.05821) has been accepted at NeurIPS 2024! 🎉 If you are wondering why RL agents cannot generalize to new scenarios and how to mitigate it, check it out!
[media attached]

3 replies · 11 reposts · 59 likes · 3.7K views
Quentin Delfosse reposted

Hector Kohler @kohler_hector
Please have a look at the accepted papers at the Interpretable RL @InterppolRL workshop. Stay tuned if you attend @RL_Conference for the workshop program happening on August 9. openreview.net/group?id=rl-co…
0 replies · 1 repost · 6 likes · 237 views
Quentin Delfosse reposted

Martin Mundt @mundt_martin
If you're @iclr_conf #ICLR2024 then don't miss out on Quentin's spotlight✨for our work "Adaptive Rational Activations to Boost Deep RL" - Fri May 10th at 10:30-12:30 in Hall B poster 148! Come & learn how to easily equip your NN with more plasticity with plug&play activations!
Quoted: Martin Mundt @mundt_martin

Super happy that our work “Adaptive Rational Activations to Boost Deep Reinforcement Learning” got accepted as a spotlight to #ICLR2024 -> Use rationals as parameter efficient plug&play activations to promote neural plasticity! Congrats @liimeleemon! openreview.net/forum?id=g90ys…

0 replies · 3 reposts · 21 likes · 3.1K views
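The rational activations announced above can be sketched as a learnable ratio of polynomials. This is a minimal NumPy illustration, not the paper's implementation; the denominator uses a "safe" Padé-style form with an absolute value, a common choice to keep it bounded away from zero, and the starting coefficients below are picked only for demonstration.

```python
import numpy as np

# Minimal sketch of a rational activation y = P(x) / Q(x) with learnable
# coefficients (illustrative; not the paper's code). The denominator
# Q(x) = 1 + |b1*x + b2*x^2 + ...| can never reach zero. Because p and q
# are trained along with the network weights, the activation's shape can
# keep adapting during training, which is the plasticity argument above.

def rational(x, p, q):
    """p: numerator coefficients (constant term first);
    q: denominator coefficients for degrees >= 1."""
    x = np.asarray(x, dtype=float)
    num = sum(c * x**i for i, c in enumerate(p))
    den = 1.0 + np.abs(sum(c * x**(j + 1) for j, c in enumerate(q)))
    return num / den

# With p = [0, 1] and q = [0], the activation starts as the identity;
# training would then bend it toward whatever shape helps the policy.
x = np.array([-2.0, 0.0, 3.0])
print(rational(x, p=[0.0, 1.0], q=[0.0]))  # → [-2.  0.  3.]
```

The plug-and-play claim in the tweet corresponds to swapping a fixed nonlinearity (e.g. ReLU) for `rational` and adding `p` and `q` to the optimizer's parameter list; the handful of extra coefficients per layer is why it is described as parameter-efficient.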
Quentin Delfosse reposted

InterpPol Workshop @InterppolRL
Deadline reminder for the InterpPol workshop @RL_Conference. Submit published or original work before April 26 AoE. Topics of interest:
- Interpretable/Explainable RL
- Policy Distillation
- Formal Verification and RL
- Real-world applications of RL
openreview.net/group?id=rl-co…
0 replies · 4 reposts · 6 likes · 1.7K views
Quentin Delfosse reposted

Hector Kohler @kohler_hector
The workshop on Interpretable Policies in Reinforcement Learning (InterpPol) has been accepted @RL_Conference: "Good diversity. Good list of organizers. Good line up of speakers. Good topic."! Stay tuned for the upcoming twitter account, website, and call for papers.
1 reply · 6 reposts · 15 likes · 1.5K views
Quentin Delfosse reposted

Martin Mundt @mundt_martin
Super happy that our work “Adaptive Rational Activations to Boost Deep Reinforcement Learning” got accepted as a spotlight to #ICLR2024 -> Use rationals as parameter efficient plug&play activations to promote neural plasticity! Congrats @liimeleemon! openreview.net/forum?id=g90ys…
[media attached]

0 replies · 3 reposts · 34 likes · 5.9K views