uclanlp
@uclanlp
355 posts
UCLA Natural Language Processing Research #NLProc
Joined March 2021
124 Following · 1.3K Followers
uclanlp @uclanlp
For the UCLA NLP seminar talk this Friday, we are thrilled to host Prof. Christopher Potts @ChrisGPotts from Stanford @stanfordnlp! Title: “The Archai of Palimpsestic Memorization” When: 2–3 PM (PST), Friday, Jan 23 Registration: ucla.zoom.us/meeting/regist…
0 replies · 6 reposts · 22 likes · 10.1K views
uclanlp retweeted
Himanshu Kumar @codewithimanshu
@kaiwei_chang Wow, 5 papers at NeurIPS, Kai-Wei! That's impressive, but is AI for Math really the future, or just hype?
0 replies · 1 repost · 0 likes · 518 views
uclanlp @uclanlp
For this week’s NLP seminar talk, we are thrilled to host Prof. Arman Cohan @armancohan from Yale @Yale and Ai2 on LLM evaluation and alignment: Time: 2-3PM, Friday, Nov 14th (PST) Registration: ucla.zoom.us/meeting/regist…
0 replies · 3 reposts · 15 likes · 6.7K views
uclanlp retweeted
Violet Peng @VioletNPeng
It’s a wrap! I hope y’all enjoyed #EMNLP25 as much as I did! Big shoutout to the photography team! All the photos in this post are taken from their website. To all attendees: you should check them out if you haven’t already!!
8 replies · 4 reposts · 128 likes · 7.8K views
uclanlp @uclanlp
@VioletNPeng served as the Program Co-Chair for #EMNLP25, one of the largest NLP conferences, which received over 8,000 submissions and drew 6,000 participants.
0 replies · 1 repost · 22 likes · 12.1K views
uclanlp retweeted
Haoyi Qiu @HaoyiQiu
🤖💬AI agents can be easily persuaded (like Anthropic’s Claudius often giving discounts). 🤔Previous studies of persuasion have focused exclusively on the text-only modality. We wonder: are AI agents more susceptible when presented with multimodal content? Introducing MMPersuade, a comprehensive multimodal benchmark that assesses AI agents’ susceptibility to established persuasion principles, covering commercial, subjective and behavioral, and adversarial contexts.
11 replies · 26 reposts · 131 likes · 29.3K views
uclanlp @uclanlp
For this week’s UCLA NLP seminar, we are thrilled to invite Prof. Rose Yu @yuqirose from UCSD @UCSanDiego: Talk Title: “Towards AI Co-Scientists: Agentic Foundation Models for Physical Universe” Time: 2-3PM, Friday, October 31st (PDT) Registration: ucla.zoom.us/meeting/regist…
1 reply · 0 reposts · 10 likes · 4.2K views
uclanlp retweeted
Shayne Longpre @ShayneRedford
📢Thrilled to introduce ATLAS 🗺️: scaling laws beyond English, for pretraining, finetuning, and the curse of multilinguality. The largest public, multilingual scaling study to date—we ran 774 exps (10M-8B params, 400+ languages) to answer: 🌍Are scaling laws different by language? 🧙‍♂️Can we model the curse of multilinguality? ⚖️Pretrain from scratch or finetune from multilingual checkpoint? 🔀Cross-lingual transfer scores for 1444 lang pairs? 1/🧵
7 replies · 42 reposts · 154 likes · 24.1K views
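The "scaling laws" here fit a parametric curve of evaluation loss against model size. A minimal sketch of such a fit, assuming a Chinchilla-style form L(N) = E + A/N^alpha and entirely invented data points (this is not the ATLAS fitting code):

```python
# Illustrative only: fit a scaling law L(N) = E + A / N**alpha to
# (model size, eval loss) points for one language. Data are made up.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n, E, A, alpha):
    """Predicted loss as a function of normalized model size n."""
    return E + A / n**alpha

sizes = np.array([1e7, 1e8, 1e9, 8e9])    # 10M to 8B parameters
losses = np.array([4.1, 3.4, 2.9, 2.6])   # hypothetical eval losses

x = sizes / sizes.min()                    # normalize for better conditioning
(E, A, alpha), _ = curve_fit(scaling_law, x, losses, p0=[2.0, 2.0, 0.3])
print(f"irreducible loss E={E:.2f}, exponent alpha={alpha:.2f}")
print("extrapolated loss at 70B params:", scaling_law(7e10 / sizes.min(), E, A, alpha))
```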
uclanlp @uclanlp
We’ve been running the UCLA NLP Seminar for a while now and realized it’s a waste not to share these amazing talks more broadly. So here’s our YouTube channel now! 🎥 Watch and subscribe to our channel for past and upcoming sessions: 👉 youtube.com/@uclanlp-plus #AI #UCLANLP
3 replies · 20 reposts · 105 likes · 20.8K views
uclanlp retweeted
Wenbo Hu @gordonhu608
🤔How to maintain a long-term memory for a 3D embodied AI agent across dynamic spatial-temporal environment changes in complex tasks? 🚀Introducing 3DLLM-Mem, a memory-enhanced 3D embodied agent that incrementally builds and maintains a task-relevant long-term memory while it explores and incorporates feedback from the environment. More demos on our website. Project: 3dllm-mem.github.io Paper: arxiv.org/abs/2505.22657 #LLMs #VLMs #Multimodal #3D #memory #AgenticAI
5 replies · 25 reposts · 83 likes · 46.4K views
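A toy sketch of the idea in the tweet above: an agent that incrementally writes observation embeddings into a long-term store and retrieves only task-relevant entries. The `EpisodicMemory` class and cosine-similarity retrieval are illustrative assumptions, not the 3DLLM-Mem implementation:

```python
# Hypothetical sketch of incremental long-term memory for an agent.
import torch

class EpisodicMemory:
    def __init__(self):
        self.keys, self.values = [], []   # embedding key -> observation payload

    def write(self, embedding, observation):
        """Store a new observation as the agent explores."""
        self.keys.append(embedding)
        self.values.append(observation)

    def retrieve(self, query, top_k=3):
        """Return the most task-relevant entries by cosine similarity."""
        keys = torch.stack(self.keys)                                  # (n, d)
        sims = torch.nn.functional.cosine_similarity(keys, query.unsqueeze(0))
        idx = sims.topk(min(top_k, len(self.values))).indices
        return [self.values[int(i)] for i in idx]

mem = EpisodicMemory()
for step in range(5):
    mem.write(torch.randn(32), f"observation at step {step}")
print(mem.retrieve(torch.randn(32), top_k=2))
```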
uclanlp retweeted
Hritik Bansal @hbXNov
New paper 📢 Most powerful vision-language (VL) reasoning datasets remain proprietary 🔒, hindering efforts to study their principles and develop similarly effective datasets in the open 🔓. Thus, we introduce HoneyBee, a 2.5M-example dataset created through careful data curation. It trains VLM reasoners that outperform InternVL2.5/3-Instruct and Qwen2.5-VL-Instruct across model scales (e.g., an 8% MathVerse improvement over QwenVL at the 3B scale). 🧵👇 Work done during my internship at @AIatMeta w/ 🤝 @ramakanth1729, @Devendr06654102, @scottyih, @gargighosh, @adityagrover_, and @kaiwei_chang.
5 replies · 44 reposts · 221 likes · 61K views
uclanlp @uclanlp
.@kaiwei_chang is getting a full house for his talk on “mathematical reasoning in visual context” at the Towards Comprehensive Reasoning in Vision-Language Models tutorial at #ICCV2025. Still time to come and engage in room 318A!
0 replies · 10 reposts · 49 likes · 7.2K views
uclanlp retweeted
Lucas Bandarkar @LucasBandarkar
Multilingual Routing in Mixture-of-Experts LLMs We present (1) an in-depth analysis of how MoE LLMs route multilingual texts, with very clear patterns + (2) a router intervention (steering) method that leads to consistent multilingual improvements! 🧵1/4 arxiv.org/pdf/2510.04694
1 reply · 9 reposts · 27 likes · 2.5K views
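Roughly, MoE routing picks the top-k experts per token from a router's logits, and a steering intervention can bias those logits toward chosen experts. A minimal sketch under those assumptions (the plain linear router and `steer_bias` term are simplifications, not the paper's method):

```python
# Hypothetical sketch of top-k MoE routing plus a router intervention.
import torch

def route(hidden, router_weight, top_k=2, steer_experts=None, steer_bias=1.0):
    """Return top-k expert indices and mixing weights for each token.

    hidden:        (num_tokens, d_model) token representations
    router_weight: (num_experts, d_model) linear router
    steer_experts: optional list of expert ids to bias toward
    """
    logits = hidden @ router_weight.T                     # (tokens, experts)
    if steer_experts is not None:
        logits[:, steer_experts] += steer_bias            # the steering step
    probs = torch.softmax(logits, dim=-1)
    weights, experts = torch.topk(probs, top_k, dim=-1)   # pick top-k experts
    weights = weights / weights.sum(dim=-1, keepdim=True) # renormalize
    return experts, weights

tokens = torch.randn(4, 16)
router = torch.randn(8, 16)
print(route(tokens, router, steer_experts=[0, 3]))
```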
uclanlp @uclanlp
Check out SPHERE, an energy-regularized method that keeps weights uniformly distributed on hyperspheres throughout sequential editing. This perspective helps explain and mitigate the performance degradation caused by sequential editing.
Jia-Chen Gu @Jiachen_Gu

🚨Model editing in practice often collapses with catastrophic forgetting! Meet SPHERE🌐: an energy-regularized method that keeps weights uniformly distributed on hyperspheres, making sequential editing stable. Paper: arxiv.org/abs/2510.01172 Code: github.com/PlusLabNLP/SPH…

0 replies · 0 reposts · 2 likes · 495 views
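One standard way to encourage uniformity on a hypersphere is to penalize a pairwise Riesz-style energy of the normalized weight rows. A minimal sketch of such a regularizer, assuming this simple form rather than SPHERE's exact loss:

```python
# Illustrative regularizer, not the SPHERE paper's exact objective.
import torch

def hyperspherical_energy(weight, eps=1e-8):
    """Pairwise Riesz-1 energy of the rows of `weight` on the unit sphere.

    Lower energy <=> rows are more uniformly spread out.
    """
    w = torch.nn.functional.normalize(weight, dim=-1)   # project rows onto sphere
    dists = torch.pdist(w)                              # pairwise Euclidean distances
    return (1.0 / (dists + eps)).mean()                 # penalize near-duplicate rows

w = torch.randn(32, 64, requires_grad=True)
loss = hyperspherical_energy(w)   # would be added to the editing objective
loss.backward()
print(loss.item())
```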
uclanlp @uclanlp
Check out TemMed-Bench, the first multimodal benchmark that challenges LVLMs to reason over temporal medical images by letting LVLMs analyze changes in patients’ conditions between different clinical visits.
Jia-Chen Gu @Jiachen_Gu

🏥We introduce TemMed-Bench, the first multimodal benchmark that challenges LVLMs to reason over temporal medical images by letting LVLMs analyze changes in patients’ conditions between different clinical visits. 📰Paper: arxiv.org/abs/2509.25143 💻Project: temmedbench.github.io

0 replies · 0 reposts · 13 likes · 1K views