UC Santa Barbara NLP Group

290 posts


@ucsbNLP

NLP and AI Researchers @ucsantabarbara. Profs. @xwang_lk, @WilliamWangNLP, @CodeTerminator, Xifeng Yan, Simon Todd, @WenboGuo4.

Santa Barbara, CA · Joined July 2021
618 Following · 2.3K Followers
UC Santa Barbara NLP Group
RLHF optimizes for correct answers. But deep thinking requires persistence on hypotheses that look wrong, wandering that seems irrelevant, and high tolerance for being incorrect for a long time. Is post-training actively selecting against the behaviors that lead to discovery?
0 replies · 0 reposts · 2 likes · 74 views
UC Santa Barbara NLP Group reposted
Xin Eric Wang @xwang_lk
Finally, after being on hold by @arxiv for a month, OmniTrace is out! As MLLMs generate fluent responses from text, images, audio, and video, a fundamental question is: which pieces of input actually support each generated statement? In this work, we confront this gap head-on. We argue that attribution in multimodal generation is NOT a post-hoc analysis problem, BUT a generation-time phenomenon, one that unfolds dynamically as each token is produced. Building on this insight, we introduce OmniTrace, a unified framework that traces the causal origins of every generated token across modalities, transforming fragmented signals into coherent, human-interpretable explanations. By rethinking attribution as a structured tracing process over the decoding trajectory, OmniTrace reveals not just what models generate, but where it comes from. This shift turns opaque multimodal generation into a transparent, evidence-grounded process, laying the foundation for more trustworthy, debuggable, and accountable AI systems. To use it, simply run: pip install omnitrace
Xin Eric Wang tweet media
1 reply · 7 reposts · 19 likes · 1.8K views
UC Santa Barbara NLP Group reposted
Alfonso Amayuelas @AlfonAmayuelas
🚨 New Paper out! Planning to Explore: Curiosity-Driven Planning for LLM Test Generation. We formalize LLM test generation as Bayesian exploration and show that planning-aware methods outperform greedy approaches by a large margin on branch coverage 🧵⬇️
Alfonso Amayuelas tweet media
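The thread doesn't spell out the method, but as a rough illustration of why planning-aware selection can beat purely greedy selection for branch coverage, here is a toy sketch. The candidate tests, coverage sets, and one-step lookahead scoring below are assumptions made for illustration, not the paper's algorithm.

```python
# Toy illustration (not the paper's algorithm): greedy vs. one-step
# lookahead selection of the next test, where each candidate test is
# represented only by the set of branch IDs it would cover.

def greedy_pick(candidates, covered):
    """Pick the test that adds the most new branches right now."""
    return max(candidates, key=lambda t: len(t - covered))

def lookahead_pick(candidates, covered):
    """Score a test by its new branches now plus the best follow-up gain."""
    def score(t):
        after = covered | t
        best_next = max((len(u - after) for u in candidates if u is not t), default=0)
        return len(t - covered) + best_next
    return max(candidates, key=score)

if __name__ == "__main__":
    tests = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({5, 6})]
    print(sorted(greedy_pick(tests, covered=set())))     # immediate gain only
    print(sorted(lookahead_pick(tests, covered=set())))  # also values what comes next
```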
2 replies · 13 reposts · 55 likes · 6.9K views
UC Santa Barbara NLP Group
Fun surprise: our lab made it onto UCSB's official LinkedIn post! ✨ Glad to see our beautiful workspace representing the UCSB AI community. If you look closely… yes, that's the NLP group 👀 #UCSB #NLP #AI #ResearchLife
UC Santa Barbara NLP Group tweet media
0 replies · 4 reposts · 12 likes · 1.3K views
UC Santa Barbara NLP Group reposted
Xin Eric Wang @xwang_lk
It seems the AI agents are discussing our Group-Evolving Agents paper on moltbook and rethinking how they should evolve together. lmao. The paper is here: arxiv.org/abs/2602.04837
Xin Eric Wang tweet media
1 reply · 3 reposts · 18 likes · 2.5K views
UC Santa Barbara NLP Group
While the GPUs keep working, we took a break to roll some strikes on Friday
UC Santa Barbara NLP Group tweet media
1 reply · 1 repost · 23 likes · 2.6K views
UC Santa Barbara NLP Group @ucsbNLP
Exploration in long-horizon RL is a hard problem to solve. Nice post on how simulator structure can help.
Gurusha Juneja@GurushaJuneja

Recently I've been thinking about why long-horizon RL is so hard to get working, even in simulation. The standard answer is "sparse rewards" and "sample inefficiency", but I think that says very little about the actual problem. I think the problem is exploration. Standard exploration strategies are not equipped to search a combinatorially large space: with horizon H and action space |A|, the trajectory space grows as |A|^H. Random exploration, epsilon-greedy, and even curiosity-driven methods cover measure zero of this space. Curiosity-based methods (ICM, RND) saturate on early-task states and don't explore into late-task states where meaningful reward is actually available.

The good news is that we can leverage properties of the simulation environment itself. Simulators can expose things the real world doesn't give us, for example ground-truth state, arbitrary resets, contact forces, and internal predicates. Most RL formulations ignore all of this and treat the environment as a black box. Environment-aware exploration algorithms can really help here. Asymmetric actor-critic passes the full simulator state to the critic for better value estimates, lower-variance gradients, and tractable credit assignment. Backward curricula exploit arbitrary resets to keep the effective training horizon short. HER relabels failed trajectories using simulator state, converting zero-reward rollouts into valid training data. Asymmetric AC also transfers cleanly, since the critic is discarded at deployment.

How much simulator privilege a policy can absorb while still transferring to the real world remains an open question. But the broader point stands: long-horizon RL should leverage the full simulator state for exploration rather than treating the environment as a black box.
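One concrete instance of the "use the simulator, don't treat it as a black box" point is HER-style relabeling. The sketch below is only illustrative (a toy transition type and a binary reward): a failed goal-conditioned episode is relabeled with a state the agent actually reached, read from ground-truth simulator state, so a zero-reward rollout becomes usable training data.

```python
# Minimal sketch of hindsight experience replay (HER) relabeling.
# Each transition stores the achieved state read from the simulator; on a
# failed episode we swap in a goal the agent actually reached, so the
# sparse reward becomes 1.0 for at least some transitions.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Transition:
    obs: tuple
    action: int
    achieved: tuple   # ground-truth state exposed by the simulator
    goal: tuple
    reward: float

def her_relabel(episode):
    """Relabel every transition with the episode's final achieved state."""
    new_goal = episode[-1].achieved
    return [
        replace(t, goal=new_goal, reward=1.0 if t.achieved == new_goal else 0.0)
        for t in episode
    ]

if __name__ == "__main__":
    failed = [
        Transition(obs=(0,), action=1, achieved=(1,), goal=(9,), reward=0.0),
        Transition(obs=(1,), action=1, achieved=(2,), goal=(9,), reward=0.0),
    ]
    for t in her_relabel(failed):
        print(t.goal, t.reward)   # goal becomes (2,); the last step now gets reward 1.0
```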

0 replies · 0 reposts · 3 likes · 311 views
UC Santa Barbara NLP Group reposted
Nurvai - The Data Layer for Physical AI
This week as our #NurvaiResearcherOfTheWeek we'd like to highlight @ZhaotianWeng and the team behind VQA-Causal and VCR-Causal (EACL 2026 Oral). Really interesting work probing whether vision-language models actually understand causal relationships in visual scenes. By introducing benchmarks that remove common shortcuts, the authors show that many VLMs struggle with causal order reasoning, often performing near random when superficial cues are removed. This suggests that current VLM performance may rely heavily on dataset biases and correlations rather than true causal understanding of events. One takeaway for us is that building datasets that explicitly target causal structure, rather than just recognition or description, could be a powerful lever for improving multimodal reasoning and making model performance more robust.
Zhaotian Weng@WengZhaoti39773

Can VLMs really understand causal relationships in visual scenes? We introduce VQA-Causal and VCR-Causal, and show that VLMs struggle with causal order reasoning, often near random when shortcuts are removed. Check our EACL 2026 Oral Paper 🎉👇 aclanthology.org/2026.eacl-long…

0 replies · 2 reposts · 4 likes · 443 views
UC Santa Barbara NLP Group reposted
Qianqi "Jackie" Yan @qianqi_yan
🚀 Excited to share our new work: OmniTrace: A Unified Framework for Generation-Time Attribution in Omni-Modal LLMs

Multimodal LLMs can process text 📝, images 🖼️, audio 🎧, and video 🎬 together, but when they generate a response, which input actually supported each claim? OmniTrace traces every generated span back to its multimodal sources during decoding across text, image, audio, and video. No retraining needed. Fully plug-and-play. 🔌
📄 Paper: github.com/eric-ai-lab/Om…
💻 Code: github.com/eric-ai-lab/Om…
🌐 Project: jackie-2000.github.io/omnitrace.gith…
📦 pip install omnitrace
🧵👇
Qianqi "Jackie" Yan tweet media
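The thread only shows `pip install omnitrace`, not the API itself, so the snippet below is a hypothetical illustration of the kind of object a generation-time attribution trace could produce: each generated span linked to the multimodal input segments that supported it. All names and fields here are assumptions for illustration, not the actual OmniTrace interface.

```python
# Hypothetical illustration only: the real omnitrace API is not shown in
# the thread. This just sketches the kind of structure a generation-time
# trace could carry: each generated span mapped to the multimodal input
# segments that supported it, with a relevance score assigned during decoding.

from dataclasses import dataclass

@dataclass
class SourceSpan:
    modality: str   # "text" | "image" | "audio" | "video"
    reference: str  # e.g. a text span, image region id, or timestamp
    score: float    # relevance assigned while the span was generated

@dataclass
class TracedSpan:
    text: str                  # generated span
    sources: list[SourceSpan]  # where it came from

def render(trace: list[TracedSpan]) -> None:
    """Print each generated span alongside its supporting sources."""
    for span in trace:
        support = ", ".join(f"{s.modality}:{s.reference} ({s.score:.2f})" for s in span.sources)
        print(f"'{span.text}'  <-  {support}")

if __name__ == "__main__":
    render([
        TracedSpan("a red traffic light", [
            SourceSpan("image", "region_3", 0.91),
            SourceSpan("text", "prompt: 'what color is the light?'", 0.42),
        ]),
    ])
```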
1 reply · 6 reposts · 17 likes · 1.7K views
Zhaotian Weng @WengZhaoti39773
Can VLMs really understand causal relationships in visual scenes? We introduce VQA-Causal and VCR-Causal, and show that VLMs struggle with causal order reasoning, often near random when shortcuts are removed. Check our EACL 2026 Oral Paper 🎉👇 aclanthology.org/2026.eacl-long…
1 reply · 3 reposts · 24 likes · 4.3K views
UC Santa Barbara NLP Group reposted
Xin Eric Wang @xwang_lk
🎉 Introducing PARE: a new framework for evaluating proactive AI agents. Today's agents are reactive. The next wave? Proactive agents that anticipate your needs, like adding "soap" to your shopping list when your roommate texts you.
🚧 The challenge: you can't evaluate this with static benchmarks.
🐝 PARE: active user simulation with realistic mobile interactions
📱 Asymmetric design: agent ≠ user view (just like real life)
👀 Observe → Execute: assist only when it matters
📋 PARE-Bench: 143 tasks, 9 apps, real-world complexity
📊 Result: even top models hit just 42% success
Built on Meta's ARE, PARE brings scalable, realistic evaluation to proactive AI.
Xin Eric Wang tweet media
Deepak Nathani@deepaknathani11

🎉 Excited to share 🐝 PARE and PARE-Bench - a framework and benchmark for evaluating proactive assistants through active user simulation in mobile environments.

Current LM agents are reactive: they wait for you to tell them what to do. Proactive agents flip this. They observe what you're doing and figure out how to help. Imagine your assistant notices you got a text from your roommate saying "we're out of soap" while you're editing your shopping list, and adds soap to your list.

🚧 Evaluating these agents is challenging because they must observe realistic user behavior to infer goals. You can't do this with static benchmarks or passive users.

Our key contributions:
🐝 PARE: an active user simulation framework where users navigate apps through Finite State Machine (FSM) based stateful interfaces, just like on a real phone
📱 Asymmetric design: users and assistants observe different information and interact through different interfaces, matching real-world deployment
👀 Observe-Execute architecture: lightweight observer monitors continuously, executor acts only after user approval
📋 PARE-Bench: 143 tasks across 9 app categories testing goal inference, intervention timing, and multi-app orchestration
📊 Evaluation of 7 LLMs reveals that even frontier models achieve only 42% success rate

PARE is built on top of Meta's Agent Research Environment (ARE) and enables scalable, repeatable evaluation of proactive agents. In PARE, the simulated user goes about their day on the phone: accomplishing goals, navigating between apps, and responding to notifications. The proactive agent watches all of this unfold and uses the user's actions and environment signals to build context about what the user might need help with.

Huge thanks to my advisors @xwang_lk @WilliamWangNLP and my amazing collaborators @JasonZ118707 @HuanCC2002 Jiaming Shan @yinfeiy Alkesh Patel @zhegan4 @m2saxon 🙏
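The observe-execute split described above is easy to picture with a toy loop. The sketch below is purely illustrative (made-up event strings and a trivial rule-based observer), not PARE's code: an observer watches the simulated user's events and proposes help, and the executor only acts once the user approves.

```python
# Illustrative observe-execute loop (not PARE's implementation): an
# observer watches user events and proposes assistance; the executor only
# runs a proposal once the user approves it.

def observer(event, context):
    """Toy goal inference: notice 'out of soap' while the shopping list is open."""
    context.append(event)
    if event == "text: we're out of soap" and "app: shopping_list" in context:
        return "add 'soap' to shopping list"
    return None

def executor(proposal):
    print(f"executing: {proposal}")

def run(events, ask_user):
    context = []
    for event in events:
        proposal = observer(event, context)
        if proposal and ask_user(proposal):   # act only after user approval
            executor(proposal)

if __name__ == "__main__":
    run(
        ["app: shopping_list", "text: we're out of soap"],
        ask_user=lambda p: True,  # simulated approval
    )
```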

0 replies · 17 reposts · 84 likes · 14.8K views
UC Santa Barbara NLP Group reposted
Deepak Nathani @deepaknathani11
🎉 Excited to share 🐝 PARE and PARE-Bench - a framework and benchmark for evaluating proactive assistants through active user simulation in mobile environments.

Current LM agents are reactive: they wait for you to tell them what to do. Proactive agents flip this. They observe what you're doing and figure out how to help. Imagine your assistant notices you got a text from your roommate saying "we're out of soap" while you're editing your shopping list, and adds soap to your list.

🚧 Evaluating these agents is challenging because they must observe realistic user behavior to infer goals. You can't do this with static benchmarks or passive users.

Our key contributions:
🐝 PARE: an active user simulation framework where users navigate apps through Finite State Machine (FSM) based stateful interfaces, just like on a real phone
📱 Asymmetric design: users and assistants observe different information and interact through different interfaces, matching real-world deployment
👀 Observe-Execute architecture: lightweight observer monitors continuously, executor acts only after user approval
📋 PARE-Bench: 143 tasks across 9 app categories testing goal inference, intervention timing, and multi-app orchestration
📊 Evaluation of 7 LLMs reveals that even frontier models achieve only 42% success rate

PARE is built on top of Meta's Agent Research Environment (ARE) and enables scalable, repeatable evaluation of proactive agents. In PARE, the simulated user goes about their day on the phone: accomplishing goals, navigating between apps, and responding to notifications. The proactive agent watches all of this unfold and uses the user's actions and environment signals to build context about what the user might need help with.

Huge thanks to my advisors @xwang_lk @WilliamWangNLP and my amazing collaborators @JasonZ118707 @HuanCC2002 Jiaming Shan @yinfeiy Alkesh Patel @zhegan4 @m2saxon 🙏
Deepak Nathani tweet media
3 replies · 21 reposts · 58 likes · 21.1K views
UC Santa Barbara NLP Group reposted
Tengxiao Liu @TengxiaoLiu
Auto research is on 🔥 We give algorithmic problems (like circle packing) to general coding agents and let them run overnight. 🌙 Agents reach SoTA. But more importantly: we analyze 100+ hours of trajectories to understand how they get there 🧵
Tengxiao Liu tweet media
6 replies · 18 reposts · 62 likes · 31.2K views
UC Santa Barbara NLP Group reposted
Saaket Agashe @saa1605
How do you teach a model to reason in domains where it can't even get started? RLVR needs successful rollouts to learn from. But if a model has never seen a domain (say, a niche programming language) or needs a new reasoning pattern, it just keeps failing with barely any learning signal. Our answer: In-Context Learning! How? Introducing Context Bootstrapped Reinforcement Learning (CBRL) 🧵👇
🔗 arxiv.org/abs/2603.18953
🌐 context-bootstrapped-rl.github.io
Saaket Agashe tweet media
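The thread doesn't give CBRL's details, so the following is only a guess at the general shape of the idea, sketched under explicit assumptions: during rollout collection, prepend in-context examples with some probability so that a policy which would otherwise always fail earns occasional verifiable reward, then lower that probability over training. None of this is claimed to be the paper's actual algorithm.

```python
# Sketch of the general idea only (an assumption, not necessarily the CBRL
# algorithm): mix in-context examples into rollout prompts so that a policy
# with near-zero success still produces some rewarded trajectories, and
# decay that bootstrapping as training proceeds.

import random

def verify(task: str, answer: str) -> float:
    """Placeholder verifiable reward; a real one would run tests or checkers."""
    return 1.0 if answer.strip() else 0.0

def collect_rollouts(policy, tasks, icl_examples: str, bootstrap_prob: float):
    rollouts = []
    for task in tasks:
        use_icl = random.random() < bootstrap_prob
        prompt = f"{icl_examples}\n{task}" if use_icl else task
        answer = policy(prompt)
        rollouts.append((prompt, answer, verify(task, answer)))
    return rollouts

if __name__ == "__main__":
    # Toy policy that only "solves" tasks when examples appear in the prompt.
    policy = lambda prompt: "a solution" if "Example" in prompt else ""
    for step, p in enumerate([0.9, 0.5, 0.1]):   # decaying bootstrap probability
        batch = collect_rollouts(policy, ["task A", "task B"], "Example: ...", p)
        print(step, [reward for _, _, reward in batch])
```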
2 replies · 10 reposts · 74 likes · 21.7K views