Berkeley AI Research

1.4K posts


@berkeley_ai

We're graduate students, postdocs, faculty and scientists at the cutting edge of artificial intelligence research.

Berkeley, CA · Joined July 2017
448 Following · 265K Followers
Berkeley AI Research retweeted
Zhe Ye
Zhe Ye@0xlf_·
1/ 🧵Introducing: VeriSpecGen 🚀
Formal verification is a principled way to guarantee code correctness, but writing high-quality specifications remains expensive and expertise-intensive. What if LLMs could reliably synthesize intent-aligned formal specs directly from natural language?
Introducing VeriSpecGen, a framework that improves formal specification synthesis in Lean at both inference and training time! 🛠️✨
📈 SOTA Results: VeriSpecGen achieves 86.6% on the VERINA SpecGen task using Claude Opus 4.5. The framework improves over baselines by up to 31.8pt across different model families and scales, proving robust regardless of base model characteristics.
📄 Paper: arxiv.org/pdf/2604.10392
🔗 Website: spec.verina.io
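For readers unfamiliar with the target language, here is a hypothetical toy example of the kind of Lean specification such a framework aims to synthesize from an intent like "return the larger of two naturals". Both `myMax` and `myMax_spec` are invented names for illustration, not artifacts from the paper:

```lean
-- Implementation: the larger of two natural numbers.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- Intent-aligned spec: the result bounds both inputs
-- and is one of them.
theorem myMax_spec (a b : Nat) :
    a ≤ myMax a b ∧ b ≤ myMax a b ∧
    (myMax a b = a ∨ myMax a b = b) := by
  unfold myMax
  split <;> omega
```

The spec, not the implementation, is the expensive artifact here: it pins down the natural-language intent precisely enough that any verified implementation is trustworthy.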
Berkeley AI Research retweeted
Roei Herzig
Roei Herzig@roeiherzig·
We recently held BAIR’s annual robotics workshop at @berkeley_ai, and among many fantastic talks, @trevordarrell and I gave a short one on my research journey over the past three years. Excited to share what we’ve learned about using structural priors and inductive biases to build more capable Physical AI models. This is just the beginning. 🤖✨ youtube.com/watch?v=YdvUQu…
Berkeley AI Research retweeted
Joey Gonzalez
Joey Gonzalez@profjoeyg·
Should nations be negotiating with AI instead of people? Probably? What would happen? We set up AI and humans in a battle for world domination playing the game Risk, with all the negotiation and side-channel discussion that make games of Risk end in tears. I was pleasantly surprised by the results and what they might mean for the future of humanity. More seriously, I am really excited to understand how AI performs in competitive settings and how we can find welfare-maximizing correlated equilibria by introducing AI diplomats ... future work.
Abby O'Neill@abby_k_oneill

Would you trust an AI agent to negotiate on your country's behalf at the G20? Real coordination is long-horizon, asymmetric, and non-binding; current multi-agent evaluations miss this. We build Cooperate to Compete (C2C): a testbed for LM agents coordinating with rivals. 🤝🔪🎭

Berkeley AI Research retweeted
Abby O'Neill
Abby O'Neill@abby_k_oneill·
Would you trust an AI agent to negotiate on your country's behalf at the G20? Real coordination is long-horizon, asymmetric, and non-binding; current multi-agent evaluations miss this. We build Cooperate to Compete (C2C): a testbed for LM agents coordinating with rivals. 🤝🔪🎭
Berkeley AI Research retweeted
Kevin Zakka
Kevin Zakka@kevin_zakka·
Heterogeneous simulation (different mesh per world) is now fully supported in mjlab. If you were hesitant about using mjlab for manipulation, now is a great time to switch over 🙂 mujocolab.github.io/mjlab/main/sou…
Berkeley AI Research retweeted
Kevin Zakka
Kevin Zakka@kevin_zakka·
I trained a grasping policy on objects of various shapes and sizes using the new mjlab feature and this really cool pivot grasp strategy emerges for large flat objects that are larger than the gripper aperture. So cool and beautiful to see!
Berkeley AI Research retweeted
Haocheng Xi
Haocheng Xi@HaochengXiUCB·
🎥 Video generation is hitting the memory wall. As videos get longer, the KV cache quietly explodes — and long-horizon consistency starts to break.
We built Quant VideoGen: a training-free KV cache compression method for auto-regressive video diffusion. Instead of storing every KV in high precision, QVG exploits video’s spatiotemporal redundancy with semantic-aware smoothing + progressive residual quantization.
🚀 Up to 7× KV memory reduction
⚡ <4% overhead
✅ Strong long-video quality
🕹️ Deploy HYWorldPlay on your own RTX 5090 locally
KV compression is becoming a core scaling primitive — not just for LLMs, but for video generation too.
Paper: arxiv.org/abs/2602.02958
Code: github.com/svg-project/Qu…
(1/5)
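A minimal NumPy sketch of the progressive-residual-quantization idea the thread names: quantize coarsely, then quantize the leftover residual at its own (smaller) scale. This illustrates the general technique only, not QVG's actual algorithm (which also uses semantic-aware smoothing); every name below is invented.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization to the given bit width.
    Returns integer codes and the per-tensor scale."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return np.zeros_like(x), 0.0
    q = np.clip(np.round(x / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q, scale

def progressive_residual_quantize(x, stages=(4, 4)):
    """Each stage quantizes the residual left over by the previous
    stages, so later stages refine with progressively finer scales."""
    residual = x.astype(np.float64)
    recon = np.zeros_like(residual)
    for bits in stages:
        q, scale = quantize(residual, bits)
        step = q * scale
        recon += step
        residual -= step
    return recon

rng = np.random.default_rng(0)
kv = rng.normal(size=(8, 64))  # stand-in for a KV-cache slice
recon = progressive_residual_quantize(kv, stages=(4, 4))

q1, s1 = quantize(kv, 4)
err_one = np.abs(kv - q1 * s1).mean()   # single 4-bit pass
err_two = np.abs(kv - recon).mean()     # two residual stages
assert err_two < err_one
```

The point of the residual pass: the first stage's quantization error is itself a tensor with much smaller magnitude, so re-quantizing it recovers precision at the cost of one extra low-bit code per element.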
Berkeley AI Research retweeted
Negar Arabzadeh
Negar Arabzadeh@NegarEmpr·
1/ "Can QPP Choose the Right Query Variant?" has been accepted at #SIGIR2026!🇦🇺 You can easily over-generate multiple query variants at low cost, but running RAG for all of them is expensive! Can we pick the winner query before paying the generation cost? arxiv.org/abs/2604.22661
Berkeley AI Research retweeted
Kevin Zakka
Kevin Zakka@kevin_zakka·
Gave my PhD dissertation talk on Friday! It's been an incredible journey made possible by the best advisor who believed in me and gave me the freedom and support to explore. Thank you @pabbeel! And thank you to everyone who came to support and share this milestone with me 🙏
Berkeley AI Research retweeted
Marwa Abdulhai
Marwa Abdulhai@marwaabdulhai·
I am in Rio for #ICLR 2026! I am excited to be presenting 3 posters at the following workshops:
⚖️ Hierarchical Agenda Reasoning for Strategic Multi-Turn Dialogue Agents
📍 04/26, 11:15 AM–12 PM or 4:10–5 PM | DATA-FM, Room 203 A+B
📍 04/27, 11:35 AM–12:20 PM & 2:30–3:10 PM | SPOT Workshop
✏️ How LLMs Distort Our Written Language
📍 04/26, 3–3:50 PM | AIWILD Workshop, Room 204 A/B
🤖 Evaluating and Reducing Deceptive Dialogue from Language Models with Multi-Turn RL
📍 04/27, 10–11 AM | Trustworthy AI Workshop, Room 204 A+B
If you're interested in dialogue agents, multi-turn RL, AI deception, and/or preserving human agency in AI interactions, I'd love to chat. Feel free to reach out!
Berkeley AI Research retweeted
Sergey Levine
Sergey Levine@svlevine·
Tomorrow in the Workshop on World Models at ICLR in Rio (10:30 am) I’ll talk about a… different take on what might make for a good world model. Come find out, 10:30 in Room 202A at ICLR sites.google.com/view/iclr-2026…
Berkeley AI Research retweeted
Sewon Min
Sewon Min@sewon__min·
I will give two talks at ICLR workshops!! 🇧🇷
Sunday 9:40–10:10: "LLMs for Distributed Data Use" @ Workshop on Data Problems in Foundation Models (Room 203 A/B)
Monday 15:30–16:05: "Are Mixture-of-Experts Modular? Why It Matters and How to Fix It" @ ICBINB Workshop (Room 201 C)
Both happen to be related to MoEs, but tackle two completely different questions → come say hi!
Berkeley AI Research retweeted
Sergey Levine
Sergey Levine@svlevine·
I'll give a talk about lifelong learning tmrw (Sun), 9:30 am (Brazil time) in the lifelong agents workshop, about how we can get robot foundation models to improve with RL: lifelongagent.github.io Then at 11:30 am, I'll talk about how generative models can drive self-improvement here: recursive-workshop.github.io
Berkeley AI Research retweeted
Seohong Park
Seohong Park@seohong_park·
What if we represent a state as a "list" of similarities to all other states? In our recent ICLR paper, we studied this "dual" representation. Come visit our poster at #4608, 10:30a–1p on Fri (morning, 2nd day)!
Paper: arxiv.org/abs/2510.06714
Blog post: seohong.me/blog/dual-repr…
Seohong Park@seohong_park

Introducing *dual representations*! tl;dr: We represent a state by the "set of similarities" to all other states. This dual perspective has lots of nice properties and practical benefits in RL. Blog post: seohong.me/blog/dual-repr… Paper: arxiv.org/abs/2510.06714
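A toy sketch of the idea under an assumed Gaussian-kernel similarity; the paper defines its own similarity measure, and the 1-D state space and `dual_representation` name here are illustrative only:

```python
import numpy as np

# Hypothetical toy state space: 5 states on a line.
states = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

def dual_representation(states, temperature=1.0):
    """Represent each state by its vector of similarities
    (here a Gaussian kernel) to every state in the space."""
    diffs = states[:, None] - states[None, :]
    return np.exp(-(diffs ** 2) / (2 * temperature ** 2))

D = dual_representation(states)
# Row i is the representation of state i; nearby states get
# similar rows, so the geometry of the space is carried by D.
assert D.shape == (5, 5)
assert np.allclose(np.diag(D), 1.0)  # each state is maximally similar to itself
```

With this view, a state's identity is entirely relational: two states are interchangeable exactly when they relate to the rest of the space in the same way, which is the property the thread calls "dual".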

Berkeley AI Research retweeted
Lakshya A Agrawal
Lakshya A Agrawal@LakshyAAAgrawal·
Thrilled to present GEPA as an Oral Talk and Poster at ICLR 2026 this Friday in Rio! 🇧🇷
Apr 24 Oral Session 3A (Agents), 10:30 AM BRT, Amphitheater
Poster Session 4, 3:15 PM, Pavilion 3
x.com/LakshyAAAgrawa…
Let's recap what's happened since we released GEPA last year 🧵
Lakshya A Agrawal@LakshyAAAgrawal

How does prompt optimization compare to RL algos like GRPO? GRPO needs 1000s of rollouts, but humans can learn from a few trials—by reflecting on what worked & what didn't. Meet GEPA: a reflective prompt optimizer that can outperform GRPO by up to 20% with 35x fewer rollouts!🧵

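To make the "learn by reflecting on what worked" contrast with rollout-hungry RL concrete, here is a deliberately tiny sketch of a reflective search loop. The scoring task, `toy_reflect`, and the keyword setup are invented stand-ins for GEPA's LM-driven reflection, not its actual algorithm:

```python
import random

def reflective_prompt_search(score, seed_prompt, reflect, budget=8):
    """Toy reflective search: score the current best prompt, ask a
    'reflection' step for an edited candidate, keep it if it wins.
    `score` and `reflect` stand in for task evaluation and an LM call."""
    best, best_score = seed_prompt, score(seed_prompt)
    for _ in range(budget):
        candidate = reflect(best)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Hypothetical toy task: prompts score higher for each instruction they keep.
KEYWORDS = ["be concise", "cite sources", "show steps"]

def toy_score(prompt):
    return sum(kw in prompt for kw in KEYWORDS)

def toy_reflect(prompt):
    # "Reflection": notice which instructions are missing and add one back.
    missing = [kw for kw in KEYWORDS if kw not in prompt]
    return prompt + " " + random.choice(missing) if missing else prompt

random.seed(0)
best, s = reflective_prompt_search(toy_score, "Answer the question.", toy_reflect)
assert s == len(KEYWORDS)  # a budget of 8 trials recovers all three
```

The sample-efficiency argument lives in the `reflect` step: each candidate is an informed edit rather than a random perturbation, so far fewer scored rollouts are needed than with gradient-style policy updates.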
Berkeley AI Research retweeted