Learning Agents Research Group
@utlarg
Learning Agents Research Group, Department of Computer Science, The University of Texas at Austin
Joined June 2021
87 Following · 475 Followers
150 posts
Learning Agents Research Group retweeted
Caroline Wang @CarolineWang98
[1/n] Just wrapped up 7 months interning with @pcastr at DeepMind and I'm so excited to share our work: arxiv.org/abs/2602.10324. TLDR: We used LLM-powered program synthesis to automatically model and discover differences between human and LLM strategic behavior
8 replies · 46 reposts · 317 likes · 27.1K views
Learning Agents Research Group retweeted
Peter Stone @PeterStone_TX
After 6 years as director of Texas Robotics, I'm delighted to hand over the reins to the capable hands of Prof. Jose Millan. I'm honored to continue supporting Jose in my new role of "Founding Director" as I turn my main focus to acting as Chair of the UT Computer Science Dept
4 replies · 6 reposts · 85 likes · 5.8K views
Learning Agents Research Group retweeted
Brett Barkley @bebark99
(1/n) With over 1,300 citations, MBPO is often cited as proof that model-based RL beats model-free methods. In arxiv.org/pdf/2412.14312 we showed it often completely fails in DeepMind Control. In our new work, Fixing That Free Lunch (FTFL), we explain why and make it succeed.
2 replies · 4 reposts · 19 likes · 5K views
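For context on the method under discussion: MBPO's core mechanism is to branch short imagined rollouts from real states using a learned dynamics model, so that model error has little room to compound. The sketch below is a hypothetical toy (made-up 1-D dynamics and a proportional-controller "policy"), not the paper's implementation:

```python
def mbpo_rollouts(real_states, model_step, policy, k=5):
    """Branch k-step imagined rollouts from real states (MBPO's core idea).

    Short rollouts expand the training data cheaply while limiting how far
    model error can compound; the tuples would feed a model-free learner.
    """
    synthetic = []
    for s in real_states:
        for _ in range(k):
            a = policy(s)
            s_next, r = model_step(s, a)
            synthetic.append((s, a, r, s_next))
            s = s_next  # continue the imagined branch
    return synthetic

# Toy stand-ins: state drifts by the action; reward favors staying near 0.
model = lambda s, a: (s + a, -abs(s + a))
policy = lambda s: -0.5 * s
data = mbpo_rollouts([2.0, -1.0], model, policy, k=3)
```

Each real state spawns its own short branch, so `data` here holds 2 × 3 transitions.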
Learning Agents Research Group retweeted
Peter Stone @PeterStone_TX
UTCS is hiring for several positions this year. Please share with anyone who may be interested! cs.utexas.edu/faculty/recrui…
0 replies · 8 reposts · 26 likes · 3.9K views
Learning Agents Research Group retweeted
Youngju Yoo @yyoungju_yoo
Humans learn new manipulation skills from examples and improve as they see more examples. How can we endow robots with the same ability? 🤖
🚀 We introduce RoboSSM, scalable in-context imitation learning that enables robots to learn and improve at test time—robots can improve with more examples without returning to the GPU for fine-tuning. 🧵
Paper: arxiv.org/abs/2509.19658
Video: youtube.com/watch?v=YR4m21…
7 replies · 29 reposts · 126 likes · 15.3K views
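The "improve with more examples, no fine-tuning" principle behind in-context imitation learning can be illustrated with a deliberately minimal sketch: a policy that conditions on demonstration pairs at test time. This nearest-neighbor lookup is a hypothetical stand-in, not RoboSSM's state-space-model architecture:

```python
def in_context_policy(demos, obs):
    """Act by imitating the demo whose observation is closest to `obs`.

    `demos` is a list of (observation, action) pairs supplied at test time.
    Appending more demos improves coverage with zero gradient updates --
    the in-context learning property the tweet describes.
    """
    nearest = min(demos, key=lambda d: abs(d[0] - obs))
    return nearest[1]

# Hypothetical 1-D observations mapped to named actions.
demos = [(0.0, "stay"), (1.0, "push"), (2.0, "lift")]
action = in_context_policy(demos, 0.9)  # imitates the closest demo
```

Adding a fourth demo, say `(3.0, "place")`, extends the policy's competence immediately, which is the contrast with fine-tuning-based adaptation.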
Learning Agents Research Group retweeted
Michael Munje @michaelmunje
[1/8] New social navigation paper + benchmark: SocialNav-SUB 🚶🤖 Recent work puts VLMs on robots for navigation, but can they really interpret scenes and extract key details for social navigation? 🔎 larg.github.io/socialnav-sub
8 replies · 3 reposts · 9 likes · 3K views
Learning Agents Research Group retweeted
Peter Stone @PeterStone_TX
Congrats to Jiaxun Cui, the 29th Ph.D. graduate from my lab, on the defense of her dissertation entitled "Communication and Generalization in Multi-Agent Learning". Pictured here with me and the rest of her thesis committee (Amy Zhang, Yuke Zhu, Sandeep Chinchali, Yuandong Tian)
6 replies · 4 reposts · 97 likes · 78.5K views
Learning Agents Research Group retweeted
Cameron Angliss @cameronangliss
Introducing VGC-Bench, a new plug-and-play benchmark made to support research efforts in the domain of Pokémon VGC! Read on for the open source code, the paper, and everything you need to dive in. [1/9]
8 replies · 23 reposts · 161 likes · 25.9K views
Learning Agents Research Group retweeted
Jiaxun Cui 🐿️ @cuijiaxun
[1/n] Can LLM Agents learn to communicate and coordinate in natural language in driving scenarios 🚗 through self-play interactions? Our recent research shows the potential for such learning; interestingly, there is evidence that they develop concise protocols for collaboration! A distilled version of the agents' model could generalize to multiple scenarios, communicate at 250 bytes per message, and make decisions within 500 ms while maintaining the original (large) model's performance.
💬 Natural Language Communication among Autonomous Agents
✨ Multi-agent Gymnasium for Policy Learning
👀 Partial Observation and Negotiation Tasks
More videos & analysis: talking-vehicles.github.io
arXiv: arxiv.org/abs/2505.18334
with @ChenTangMark, Jarrett Holtz, Janice Nguyen, @aleallievi, @HangQiu, @PeterStone_TX
Open to discussion & collaboration! @utlarg @texas_robotics #LLM #multiagent
3 replies · 9 reposts · 31 likes · 5K views
Learning Agents Research Group retweeted
Jiaheng Hu @JiahengHu1
Real-world RL, where robots learn directly from physical interactions, is extremely challenging — especially for high-DoF systems like mobile manipulators.
1⃣ Long-horizon tasks and large action spaces lead to difficult policy optimization.
2⃣ Real-world exploration with whole-body contact raises serious safety concerns.
🚀 Introducing SLAC, a framework that brings safety and efficiency to whole-body real-world RL.
Paper: arxiv.org/abs/2506.04147
Video: youtube.com/watch?v=bj5Ghj… 🧵
5 replies · 49 reposts · 295 likes · 36.1K views
Learning Agents Research Group retweeted
Peter Stone @PeterStone_TX
Congratulations to my Ph.D. student, Jiaheng Hu - co-advised with Roberto Martin-Martin. cs.utexas.edu/news/2025/jiah…
0 replies · 1 repost · 21 likes · 3.3K views
Learning Agents Research Group retweeted
Zizhao Wang @duke_zzwang
In multi-object environments, why do most Unsupervised Skill Discovery methods fail to learn complex skills like tool use? Because they simply maximize state coverage. Introducing our solution SkiLD: Skill Discovery Guided by Factor Interactions (NeurIPS24) wangzizhao.github.io/SkiLD/
1 reply · 12 reposts · 63 likes · 8.5K views
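To make the "simply maximize state coverage" critique concrete, here is a minimal sketch of a count-based coverage objective (a generic novelty bonus, assumed for illustration; not SkiLD's interaction-guided objective). It rewards rarely visited states, but nothing about it ever rewards one object *interacting* with another:

```python
import math
from collections import Counter

def coverage_reward(visited_states, state):
    """Count-based novelty bonus: rarer states earn higher reward.

    This is the kind of pure state-coverage objective the tweet critiques;
    it drives an agent to spread over the state space, not to discover
    structured interactions such as tool use.
    """
    counts = Counter(visited_states)
    return 1.0 / math.sqrt(counts[state] + 1)
```

Under this objective, a never-seen state gets the maximum bonus of 1.0, while a state visited twice gets only 1/√3, regardless of whether reaching either state involved any object interaction.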
Learning Agents Research Group retweeted
Jiaheng Hu @JiahengHu1
🚀Unsupervised RL can learn skills purely from reward-free interactions with an environment. But what form of skills can facilitate efficient downstream hierarchical learning? Introducing DUSDi: Disentangled Unsupervised Skill Discovery (NeurIPS24). jiahenghu.github.io/DUSDi-site 🧵
2 replies · 14 reposts · 75 likes · 5.9K views