UC Santa Barbara NLP Group

284 posts


@ucsbNLP

NLP and AI Researchers @ucsantabarbara. Profs. @xwang_lk, @WilliamWangNLP, @CodeTerminator, Xifeng Yan, Simon Todd, @WenboGuo4.

Santa Barbara, CA · Joined July 2021
616 Following · 2.3K Followers
UC Santa Barbara NLP Group retweeted
Xin Eric Wang @xwang_lk:
It seems the AI agents are discussing our Group-Evolving Agents paper on moltbook and rethinking how they should evolve together. lmao. The paper is here: arxiv.org/abs/2602.04837
UC Santa Barbara NLP Group
While the GPUs keep working, we took a break to roll some strikes on Friday.
UC Santa Barbara NLP Group
Exploration in long-horizon RL is a hard problem. Nice post on how simulator structure can help.
Gurusha Juneja @GurushaJuneja:

Recently I've been thinking about why long-horizon RL is so hard to get working, even in simulation. The standard answer is "sparse rewards" and "sample inefficiency," but I think that says very little about the actual problem. The problem is exploration: standard exploration strategies aren't equipped to search a combinatorially large space. With horizon H and action space |A|, the trajectory space grows as |A|^H. Random exploration, epsilon-greedy, and even curiosity-driven methods cover a vanishing fraction of this space. Curiosity-based methods (ICM, RND) saturate on early-task states and never reach the late-task states where meaningful reward is actually available.

The good news is that we can leverage properties of the simulation environment itself. Simulators can expose things the real world doesn't give us: ground-truth state, arbitrary resets, contact forces, internal predicates. Most RL formulations ignore all of this and treat the environment as a black box. Environment-aware exploration algorithms can really help here. Asymmetric actor-critic passes the full simulator state to the critic for better value estimates, lower-variance gradients, and tractable credit assignment. Backward curricula exploit arbitrary resets to keep the effective training horizon short. HER relabels failed trajectories using simulator state, converting zero-reward rollouts into valid training data (see the sketch below). Asymmetric actor-critic also transfers cleanly, since the critic is discarded at deployment.

How much simulator privilege a policy can absorb while still transferring to the real world remains an open question. But the broader point stands: long-horizon RL should leverage the full simulator state for exploration rather than treating it as a black box.
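The HER point is concrete enough to sketch. Below is a minimal, hypothetical illustration of hindsight relabeling, assuming a simple goal-conditioned setup with binary rewards; the Transition container and the "future" sampling strategy follow the standard HER recipe, but none of the names here come from the post.

```python
import random
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple       # privileged simulator state (poses, contacts, ...)
    action: int
    next_state: tuple
    goal: tuple        # goal the policy was conditioned on
    reward: float      # 1.0 if next_state achieves goal, else 0.0

def her_relabel(trajectory: list[Transition], k: int = 4) -> list[Transition]:
    """Hindsight relabeling: pretend states actually reached later in the
    rollout were the goal, so an all-zero-reward trajectory still yields
    positive examples for a goal-conditioned policy."""
    out = []
    for i, t in enumerate(trajectory):
        out.append(t)  # keep the original transition
        future = trajectory[i:]  # the 'future' strategy from the HER paper
        for f in random.sample(future, min(k, len(future))):
            g = f.next_state  # an achieved state, reused as a hindsight goal
            out.append(Transition(t.state, t.action, t.next_state, g,
                                  1.0 if t.next_state == g else 0.0))
    return out
```

The relabeled copies are what turn a failed rollout into learning signal: the policy gets dense positive examples of "how to reach the states you did reach," which only works because the simulator exposes the achieved state exactly.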

UC Santa Barbara NLP Group retweeted
Nurvai - The Data Layer for Physical AI
This week as our #NurvaiResearcherOfTheWeek we'd like to highlight @ZhaotianWeng and the team behind VQA-Causal and VCR-Causal (EACL 2026 Oral). Really interesting work probing whether vision-language models actually understand causal relationships in visual scenes. By introducing benchmarks that remove common shortcuts, the authors show that many VLMs struggle with causal order reasoning, often performing near random when superficial cues are removed. This suggests that current VLM performance may rely heavily on dataset biases and correlations rather than true causal understanding of events. One takeaway for us is that building datasets that explicitly target causal structure, rather than just recognition or description, could be a powerful lever for improving multimodal reasoning and making model performance more robust.
Zhaotian Weng @WengZhaoti39773:

Can VLMs really understand causal relationships in visual scenes? We introduce VQA-Causal and VCR-Causal, and show that VLMs struggle with causal order reasoning, often near random when shortcuts are removed. Check our EACL 2026 Oral Paper🎉👇 aclanthology.org/2026.eacl-long…

UC Santa Barbara NLP Group retweeted
Qianqi "Jackie" Yan
Qianqi "Jackie" Yan@qianqi_yan·
🚀 Excited to share our new work: 𝗢𝗺𝗻𝗶𝗧𝗿𝗮𝗰𝗲: A Unified Framework for Generation-Time Attribution in Omni-Modal LLMs

Multimodal LLMs can process text 📝, images 🖼️, audio 🎧, and video 🎬 together, but when they generate a response, 𝘄𝗵𝗶𝗰𝗵 𝗶𝗻𝗽𝘂𝘁 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝘀𝘂𝗽𝗽𝗼𝗿𝘁𝗲𝗱 𝗲𝗮𝗰𝗵 𝗰𝗹𝗮𝗶𝗺? OmniTrace traces every generated span back to its multimodal sources 𝗱𝘂𝗿𝗶𝗻𝗴 𝗱𝗲𝗰𝗼𝗱𝗶𝗻𝗴 across text, image, audio, and video. No retraining needed. Fully plug-and-play. 🔌

📄 Paper: github.com/eric-ai-lab/Om…
💻 Code: github.com/eric-ai-lab/Om…
🌐 Project: jackie-2000.github.io/omnitrace.gith…
📦 pip install omnitrace
🧵👇
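The tweet doesn't detail the mechanism, and the following is not the OmniTrace API; it is only a generic sketch of one common way to do generation-time attribution, pooling decoder cross-attention from a generated span over per-modality source segments. All names, shapes, and the toy segment boundaries are assumptions.

```python
import torch

def attribute_span(attn, span, segments):
    """attn: (layers, heads, tgt_len, src_len) decoder cross-attention weights.
    span: (start, end) target-token range of one generated claim.
    segments: {"text": (0, 120), "image": (120, 700), ...} source-token ranges.
    Returns each modality's normalized share of the span's attention mass."""
    s, e = span
    pooled = attn.mean(dim=(0, 1))[s:e].mean(dim=0)  # average down to (src_len,)
    raw = {m: pooled[a:b].sum().item() for m, (a, b) in segments.items()}
    total = sum(raw.values()) or 1.0
    return {m: v / total for m, v in raw.items()}

# Toy usage with random weights standing in for a real model's attentions:
attn = torch.rand(4, 8, 50, 700).softmax(dim=-1)
print(attribute_span(attn, (10, 20), {"text": (0, 120), "image": (120, 700)}))
```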
Qianqi "Jackie" Yan tweet media
English
1
6
17
1.7K
UC Santa Barbara NLP Group retweeted
Xin Eric Wang @xwang_lk:
🎉 Introducing PARE: a new framework for evaluating proactive AI agents.

Today’s agents are reactive. The next wave? Proactive agents that anticipate your needs, like adding “soap” to your shopping list when your roommate texts you.

🚧 The challenge: you can’t evaluate this with static benchmarks.
🍐 PARE: active user simulation with realistic mobile interactions
📱 Asymmetric design: agent ≠ user view (just like real life)
👀 Observe → Execute: assist only when it matters
📋 PARE-Bench: 143 tasks, 9 apps, real-world complexity
📊 Result: even top models hit just 42% success

Built on Meta’s ARE, PARE brings scalable, realistic evaluation to proactive AI.
Deepak Nathani @deepaknathani11:

🎉 Excited to share 🍐 PARE and PARE-Bench - a framework and benchmark for evaluating proactive assistants through active user simulation in mobile environments.

Current LM agents are reactive: they wait for you to tell them what to do. Proactive agents flip this. They observe what you're doing and figure out how to help. Imagine your assistant notices you got a text from your roommate saying "we're out of soap" while you're editing your shopping list, and adds soap to your list.

🚧 Evaluating these agents is challenging because they must observe realistic user behavior to infer goals. You can't do this with static benchmarks or passive users.

Our key contributions:
🍐 PARE: an active user simulation framework where users navigate apps through Finite State Machine (FSM) based stateful interfaces, just like on a real phone (see the sketch below)
📱 Asymmetric design: users and assistants observe different information and interact through different interfaces, matching real-world deployment
👀 Observe-Execute architecture: lightweight observer monitors continuously, executor acts only after user approval
📋 PARE-Bench: 143 tasks across 9 app categories testing goal inference, intervention timing, and multi-app orchestration
📊 Evaluation of 7 LLMs reveals that even frontier models achieve only 42% success rate

PARE is built on top of Meta's Agent Research Environment (ARE) and enables scalable, repeatable evaluation of proactive agents. In PARE, the simulated user goes about their day on the phone: accomplishing goals, navigating between apps, and responding to notifications. The proactive agent watches all of this unfold and uses the user's actions and environment signals to build context about what the user might need help with.

Huge thanks to my advisors @xwang_lk @WilliamWangNLP and my amazing collaborators @JasonZ118707 @HuanCC2002 Jiaming Shan @yinfeiy Alkesh Patel @zhegan4 @m2saxon 🙏
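For intuition, here is a toy, hypothetical version of an FSM-based stateful app interface of the kind the thread describes: screens are states, UI actions are transitions, and the action log is the trace a proactive agent could observe. The app names and transitions are invented for illustration and are not from the PARE codebase.

```python
from dataclasses import dataclass, field

@dataclass
class AppFSM:
    """A tiny finite state machine: screens are states, UI actions are edges."""
    state: str = "home"
    transitions: dict = field(default_factory=lambda: {
        ("home", "open_list"): "shopping_list",
        ("shopping_list", "add_item"): "shopping_list",
        ("shopping_list", "back"): "home",
        ("home", "open_messages"): "messages",
        ("messages", "back"): "home",
    })
    log: list = field(default_factory=list)  # the trace an observer can watch

    def act(self, action: str) -> str:
        key = (self.state, action)
        if key not in self.transitions:
            raise ValueError(f"action {action!r} invalid in state {self.state!r}")
        self.log.append(key)  # the proactive agent builds context from this
        self.state = self.transitions[key]
        return self.state

# A simulated user session unfolding on the "phone":
app = AppFSM()
app.act("open_messages")  # user reads "we're out of soap"
app.act("back")
app.act("open_list")      # user starts editing the shopping list
```

Statefulness is the point of the FSM framing: only actions valid on the current screen are possible, so the simulated user's behavior stays realistic and the observing agent sees a coherent trajectory rather than arbitrary API calls.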

UC Santa Barbara NLP Group retweeted
Tengxiao Liu @TengxiaoLiu:
Auto research is on 🔥 We give algorithmic problems (like circle packing) to general coding agents and let them run overnight. 🌙 Agents reach SoTA. But more importantly: we analyze 100+ hours of trajectories to understand how they get there 🧵
UC Santa Barbara NLP Group retweeted
Saaket Agashe @saa1605:
How do you teach a model to reason in domains where it can't even get started?

RLVR needs successful rollouts to learn from. But if a model has never seen a domain (say, a niche programming language) or needs a new reasoning pattern, it just keeps failing with barely any learning signal.

Our answer: In-Context Learning! How? Introducing Context Bootstrapped Reinforcement Learning (CBRL) 🧵👇
🔗 arxiv.org/abs/2603.18953
🌐 context-bootstrapped-rl.github.io
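A minimal sketch of the general bootstrapping idea as we read the tweet, not the CBRL method itself: seed rollouts with in-context demonstrations so a model that would otherwise always fail can produce some successful trajectories for RLVR to learn from. Every interface here (policy.generate, task.verify, task.domain) is an invented stand-in.

```python
def rollout(policy, task, demos=None):
    prompt = task.prompt
    if demos:  # bootstrap: prepend worked examples from the unfamiliar domain
        prompt = "\n\n".join(demos) + "\n\n" + prompt
    completion = policy.generate(prompt)
    return completion, task.verify(completion)  # verifiable reward: 0 or 1

def collect_batch(policy, tasks, demo_bank, n=8):
    """Try plain rollouts first; if a task never succeeds, retry with
    in-context demos so RL always has positive trajectories to learn from."""
    batch = []
    for task in tasks:
        rollouts = [rollout(policy, task) for _ in range(n)]
        if not any(reward for _, reward in rollouts):
            rollouts += [rollout(policy, task, demo_bank[task.domain])
                         for _ in range(n)]
        batch += [(task, completion, reward) for completion, reward in rollouts]
    return batch
```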
UC Santa Barbara NLP Group retweeted
Chuhan Li @_Chuhan_Li:
Human perception is inherently situated: we understand the world relative to our own body, viewpoint, and motion. To deploy multimodal foundation models in embodied settings, we ask: “Can these models reason in the same observer-centric way?”

We study this through SAW-Bench, a novel benchmark for observer-centric situated awareness:
- 786 real-world egocentric videos
- 2,071 human-annotated QA pairs

Across all tasks, we evaluate 24 state-of-the-art MFMs:
📉 Best model: 53.9%
🧑 Humans: 91.6%

Models systematically:
❌ Confuse head rotation with physical movement
❌ Collapse under multi-turn trajectories
❌ Fail to maintain persistent world-state memory

👉 We see that maintaining a stable observer-centric representation remains challenging. As MFMs are increasingly integrated into embodied agents, situated awareness becomes essential for reliable real-world interaction. We release SAW-Bench and encourage further research toward improving observer-centric reasoning in multimodal foundation models.
UC Santa Barbara NLP Group retweeted
Zhen Zhang @zhenzhangzz:
AI agents are evolving beyond simple tasks to complex, multi-turn and multi-step interactions. But how do we train them with RL when verifiable rewards don't exist for open-ended conversations and building execution environments for thousands of tools is unscalable?

Introducing 🛠️ CM2: RL with Checklist Rewards for Multi-Turn and Multi-Step Agentic Tool Use [arxiv.org/abs/2602.12268]

Core Contributions:
🔄 Multi-turn and multi-step tool-use scenario
✅ Checklist Rewards: replaces vague scalar scores with fine-grained, evidence-based binary criteria (see the sketch below)
🛠️ Scalable Tool Simulation: trains on 5,000+ tools using a hybrid LLM simulator, removing the need for manual API engineering
👍 SOTA Performance: achieves +8-12 point gains on τ^2-Bench, BFCL-V4 & ToolSandbox, surpassing larger open-source models
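The checklist idea is easy to make concrete. Below is a minimal, hypothetical sketch: the checklist items and the judge interface are invented for illustration and are not from the CM2 paper; the point is only that each criterion contributes one binary, evidence-checkable bit instead of one fuzzy scalar.

```python
# Hypothetical checklist for a travel-booking tool-use task (illustrative only).
CHECKLIST = [
    "Did the agent call the flight-search tool with the user's stated dates?",
    "Did the agent confirm the booking with the user before paying?",
    "Is the final answer grounded in a tool response (no fabricated fields)?",
]

def checklist_reward(trajectory: str, judge) -> float:
    """Score a multi-turn trajectory as the fraction of binary criteria met.
    `judge(question, trajectory)` is an assumed interface (e.g., an LLM judge)
    returning True/False for one criterion given the full trajectory."""
    passed = [bool(judge(item, trajectory)) for item in CHECKLIST]
    return sum(passed) / len(passed)  # reward in [0, 1], one bit per criterion
```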
UC Santa Barbara NLP Group retweeted
Kaijie Zhu @KaijieZhu07:
[1/n] 🚨 Coding ≠ Software Engineering! Are AI agents ready to replace Software Engineers?

🔥 Introducing DevOps-Gym: the first end-to-end benchmark for the complete software cycle (UCSB, NUS, Berkeley, Google). We tested SOTA agents on 700+ real-world DevOps tasks. The result? They struggle. 📉

🔄 Full DevOps coverage:
🔧 Build: fix dependency hell & migrate systems (Maven→Gradle)
📊 Monitor: detect leaks using ONLY CLI tools (top/iostat)
🐛 Fix: resolve bugs in compiled langs (harder than Python!)
✅ Test: gen regression tests from runtime behavior
☠️ The ultimate killer: end-to-end pipelines (Build → Monitor → Fix → Test). Success rate: 0.00%. NO agent could complete the full loop.

🔗 Check out the full research & dataset: devops-gym.com
📄 Paper: arxiv.org/abs/2601.20882
UC Santa Barbara NLP Group retweeted
Xin Eric Wang @xwang_lk:
𝐑𝐞𝐥𝐢𝐚𝐛𝐢𝐥𝐢𝐭𝐲 𝐢𝐬 𝐭𝐡𝐞 𝐟𝐮𝐧𝐝𝐚𝐦𝐞𝐧𝐭𝐚𝐥 𝐛𝐨𝐭𝐭𝐥𝐞𝐧𝐞𝐜𝐤 𝐟𝐨𝐫 𝐆𝐔𝐈 𝐚𝐠𝐞𝐧𝐭𝐬.⚠️ One wrong click can trigger irreversible, costly actions 💥

Introducing 𝐒𝐚𝐟𝐞𝐆𝐫𝐨𝐮𝐧𝐝🛡️: an uncertainty-calibrated framework that knows when not to act, enabling risk-aware GUI grounding with statistical guarantees 📊

𝐊𝐞𝐲 𝐢𝐝𝐞𝐚: the real danger is 𝐬𝐢𝐥𝐞𝐧𝐭 𝐟𝐚𝐢𝐥𝐮𝐫𝐞 🤫 Most GUI grounding models always output a coordinate, even when they’re unsure ❌📍 Instead, SafeGround:
📐 𝘌𝘴𝘵𝘪𝘮𝘢𝘵𝘦𝘴 𝘴𝘱𝘢𝘵𝘪𝘢𝘭 𝘶𝘯𝘤𝘦𝘳𝘵𝘢𝘪𝘯𝘵𝘺 𝘧𝘳𝘰𝘮 𝘱𝘳𝘦𝘥𝘪𝘤𝘵𝘪𝘰𝘯 𝘷𝘢𝘳𝘪𝘢𝘣𝘪𝘭𝘪𝘵𝘺;
🎯 𝘊𝘢𝘭𝘪𝘣𝘳𝘢𝘵𝘦𝘴 𝘢 𝘥𝘦𝘤𝘪𝘴𝘪𝘰𝘯 𝘵𝘩𝘳𝘦𝘴𝘩𝘰𝘭𝘥 𝘸𝘪𝘵𝘩 𝘴𝘵𝘢𝘵𝘪𝘴𝘵𝘪𝘤𝘢𝘭 𝘨𝘶𝘢𝘳𝘢𝘯𝘵𝘦𝘦𝘴;
🛑 𝘈𝘣𝘴𝘵𝘢𝘪𝘯𝘴 𝘰𝘳 𝘥𝘦𝘧𝘦𝘳𝘴 𝘩𝘪𝘨𝘩-𝘳𝘪𝘴𝘬 𝘢𝘤𝘵𝘪𝘰𝘯𝘴, 𝘦𝘯𝘢𝘣𝘭𝘪𝘯𝘨 𝘳𝘪𝘴𝘬-𝘤𝘰𝘯𝘵𝘳𝘰𝘭𝘭𝘦𝘥 𝘎𝘜𝘐 𝘪𝘯𝘵𝘦𝘳𝘢𝘤𝘵𝘪𝘰𝘯, 𝘦𝘷𝘦𝘯 𝘧𝘰𝘳 𝘣𝘭𝘢𝘤𝘬-𝘣𝘰𝘹 𝘮𝘰𝘥𝘦𝘭𝘴.🔒🤖
Qingni Wang @Ceeqnn:

🚨 New paper alert 🚨
📌 How can we make GUI grounding models reliable in real-world interactions?

We introduce 🚀 SafeGround: Know When to Trust GUI Grounding Models via Uncertainty Calibration

In GUI agents, a single wrong click isn’t just an error; it can trigger costly or irreversible actions (e.g., unintended payments 💸 or deleting important files 🗑️). The real danger is silent failure: most GUI grounding models always output a coordinate, even when they’re unsure.

Instead of trusting a single predicted point, SafeGround (see the sketch below):
• estimates spatial uncertainty from prediction variability
• calibrates a decision threshold with statistical guarantees
• enables risk-controlled GUI actions, even with black-box models

💻 Code: github.com/Cece1031/SAFEG…
📄 Paper: arxiv.org/pdf/2602.02419
🧵1/6 #Agents #GUI
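A minimal, hypothetical sketch of the recipe the thread outlines: sample-based spatial uncertainty plus a threshold calibrated on held-out data, abstaining above it. This is not the SafeGround code, and the simple empirical-risk calibration below stands in for their statistical guarantee; all function names are invented.

```python
import numpy as np

def spatial_uncertainty(samples: np.ndarray) -> float:
    """samples: (K, 2) click coordinates from K stochastic forward passes.
    Mean distance to the centroid as a simple variability score."""
    centroid = samples.mean(axis=0)
    return float(np.linalg.norm(samples - centroid, axis=1).mean())

def calibrate_threshold(scores: np.ndarray, correct: np.ndarray,
                        alpha: float = 0.1) -> float:
    """On a held-out calibration set, pick the largest uncertainty threshold
    whose accepted predictions have empirical error rate <= alpha."""
    order = np.argsort(scores)
    scores, correct = scores[order], correct[order].astype(bool)
    err_rate = np.cumsum(~correct) / np.arange(1, len(scores) + 1)
    ok = err_rate <= alpha
    return float(scores[ok][-1]) if ok.any() else -np.inf

def act_or_abstain(samples: np.ndarray, tau: float):
    """Click at the mean prediction if uncertainty is low; otherwise defer."""
    u = spatial_uncertainty(samples)
    return ("click", tuple(samples.mean(axis=0))) if u <= tau else ("abstain", None)
```

The abstain branch is what eliminates silent failure: instead of always emitting a coordinate, the agent defers high-risk actions whenever the sampled predictions disagree too much.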
