Yonathan Arbel

3.7K posts


@ProfArbel

Let's build safe AI! Law prof @ Alabama. Contracts, Defamation, Legal NLP, & AI Safety

Tuscaloosa, AL · Joined July 2014
1.5K Following · 1.6K Followers
Pinned Tweet
Yonathan Arbel @ProfArbel
Why do workers have to wait 2-4 weeks to be paid, in the same economy where online transactions clear quickly and securely? A new draft, Payday (forthcoming @WashULaw), proposes that they shouldn't. Daily, or at least weekly, pay can be a reality. papers.ssrn.com/sol3/papers.cf…
Yonathan Arbel retweeted
Peter Hase @peterbhase
New Schmidt Sciences RFP on AI Interpretability: we need new tools for detecting and mitigating deceptive behaviors exhibited by LLMs. Funding for $300k-$1M projects. Deadline: May 26th, AoE. RFP: schmidtsciences.smapply.io/prog/2026_inte… Please share with anyone who may be interested!
Yonathan Arbel @ProfArbel
@liron I'm sorry that this is the response you received. Unexpectedly bad.
Liron Shapira @liron
Was going to respond in thread, but I got blocked, so: This is a matter of personal safety for my loved ones, of how to make Israel safe for Israelis. Yet I'm able to discuss a war without thinking I hold some kind of trump card that makes the age-old concept of "war tradeoffs" irrelevant.
Yonathan Arbel @ProfArbel
Hi law profs/fellows: interested in the big social implications of AI? Want a funded summer fellowship in SF with some of the best in the business? @CAIS has a call for summer fellowships that let you pursue your research tracks with close feedback loops.
Yonathan Arbel @ProfArbel
We need a German word for the feeling when you click send on an email and then panic about whether you removed the "sure, here's a draft email that sounds sincere"
Yonathan Arbel @ProfArbel
@ARozenshtein maybe it's the time pressure, but the writing in the complaint is all over the place
Alan Rozenshtein @ARozenshtein
Anthropic's N.D. Cal. complaint.
Gus Hurwitz @GusHurwitz
OK folks. I'm calling it. SSRN is dead. Too many clicks and too much delay to just load a single paper.
Yonathan Arbel retweeted
Bartosz Naskręcki @nasqret
It finally happened: my personal move 37, or more. I am deeply impressed. The solution is very nice, clean, and feels almost human. While testing new models over the last few weeks, I felt this coming, but it's an eerie feeling to see an algorithm solve a task one has curated for about 20 years. But at least I have gained a tool that understands my ideas on par with the top experts in the field. And I am now working on a completely new level. My singularity has just happened… and there is life on the other side, off to infinity!
Epoch AI @EpochAIResearch

We ran GPT-5.4 (xhigh) an additional ten times on Tier 4 to get a pass@10 score. This was 38%. In one of these runs, it solved another problem no model had solved before. This problem was by @nasqret.

Andrea Tosato @Andrea_Tosato
This is really a fantastic paper. There is lots of shallow talk about "Agentic AI," but very limited private law insight. Congrats to @ProfArbel, @petersalib and Simon Goldstein.
Lawrence Solum @lsolum

Arbel, Goldstein, & Salib on Individuation of AI Agents, legaltheoryblog.com/2026/03/02/arb… - Yonathan A. Arbel (University of Alabama – School of Law), Simon Goldstein (University of Hong Kong), & Peter Salib (University of Houston Law Center) have posted How to Count AIs: Individuation and Liability for AI Agents on SSRN.

Yonathan Arbel @ProfArbel
An interesting take on the speed challenge to A-corp selection mechanisms. It raises an even more general question about the speed of resource acquisition by super-capable AI systems.
Adrian Lerer @LererAdrian

Thank you for featuring this, Professor Solum (@lsolum). The timing is good: I had been working on a comment on Arbel (@ProfArbel), Goldstein (@simongold) & Salib from the perspective of law as extended phenotype and multilevel evolutionary game theory, and just published it on Zenodo this week.

The framework raises two questions I try to work through in the comment.

The first: the A-corp's selection mechanism maps cleanly onto EGT replicator dynamics, and that mapping reveals the core structural problem. Selection converges on well-governed A-corps over years; a misconfigured agent accumulates harm in hours. The ratio between those timescales is around ten to the seventh power. The mechanism prices harm retrospectively. It does not prevent it.

The second concerns RLVR-trained agents specifically. Daniel Dennett's intentionality taxonomy distinguishes systems by their capacity for recursive social reasoning:
* Level 0-1 systems optimize against objective functions;
* Level 3 systems model others' beliefs about their own beliefs, feel shame, honor commitments.

Legal standards presuppose Level 3 throughout. RLVR-trained agents do not operate there stably: during goal interpretation and configuration they reason at Level 2-3, but during execution, where most harmful actions occur, they run Level 0-1 optimization loops. The A-corp's incentives only reach the high-intentionality phases. I formalize this as Dynamic Classification Failure, the sixth mode of the Generalized Intentionality Mismatch Theorem.

The constructive alternative is the Responsibility Ramp: dynamic intentionality classification by task phase, strict liability for execution-phase harms, and attribution tracing the configuration chain to the decision that mattered.

Full Zenodo paper: doi.org/10.5281/zenodo…

Yonathan Arbel retweeted
Peter N. Salib @petersalib
Suppose you want to give AIs rights or duties, or do deals with them, or protect their well-being (should they have it). First, you need to be able to distinguish between AIs--to count them. This is a hard problem b/c AIs have no distinct physical bodies. They can split, copy,
Yonathan Arbel @ProfArbel
🚨Very soon, billions of AI agents will swarm the economy—copying, splitting, merging at will. Just as soon, someone will get hurt. Is law ready for this moment? We think not quite. But we have a corporate-law solution.