Tim Schnabel

218 posts

@TimSchnabel

President, @LawReformInst; previously executive director @uniformlaws, attorney @StateDept

McLean, Virginia · Joined January 2016
571 Following · 122 Followers
Pinned Tweet
Tim Schnabel@TimSchnabel·
Will be interesting to see where this goes.
0 replies · 0 retweets · 0 likes · 10 views
Tim Schnabel retweeted
Jack Clark@jackclarkSF·
I've spent the past few weeks reading 100s of public data sources about AI development. I now believe that recursive self-improvement has a 60% chance of happening by the end of 2028. In other words, AI systems might soon be capable of building themselves.
193 replies · 319 retweets · 2.5K likes · 894.3K views
Tim Schnabel@TimSchnabel·
First time I've seen these two team up on an op-ed. Usually, their joint appearances focus more on where they disagree. Thus, this one counts as a must-read.
Dean W. Ball@deanwball

Today, @BuchananBen and I co-author a piece in the New York Times with a simple message: While we disagree on plenty, we believe AI has national security implications which deserve a careful and bipartisan government response. We can (and should) have partisan fights about all manner of AI issues, but catastrophic risk from AI shouldn’t be one of them.

0 replies · 0 retweets · 5 likes · 238 views
Tim Schnabel retweeted
Dean W. Ball@deanwball·
The technologies that will characterize the coming decades will dissolve heretofore ancient constraints on human conduct and human affairs. It seems sure we will want to impose new artificial constraints on ourselves, but no one knows which, or how to impose them.
T. Greer@Scholars_Stage

I am very weary of this whole scene. We are at the point where we need to think hard about how to mitigate the costs and manage the trade-offs of this next round of technological revolution. This is a serious task with serious stakes, and the people who should be doing this are instead tweeting out "rah rah technology is great, don't give an inch to the luddites" memes.

7 replies · 2 retweets · 92 likes · 15.9K views
Tim Schnabel retweeted
Andy Hall@ahall_research·
How do we train AI to help represent us politically? In my class this quarter @StanfordGSB, we're running some wild experiments to figure this out. Each student built a personal AI representative which we tested against a ground truth set of votes. Then, we built a legislature and unleashed the students' agents to make deals and pass proposals.

Some early learnings:
(1) AI is a very cool new way to elicit our political preferences. Students innovated some fascinating ways to teach their agents about their views that go way beyond basic surveys.
(2) Agents had trouble understanding deeper values, especially where we want them to be Burkean agents that help make the best decisions on policy issues that their humans haven't thought much about.
(3) Agents didn't do a great job legislating together: they aren't naturally good at legislative bargaining, prioritizing, or logrolling.
(4) We need a science for how to simulate agentic legislatures: there are many ways to set them up, and the rules matter for what the simulation produces.
(5) I'm convinced that live experiments like these are going to be essential for understanding how to build AI governance and political superintelligence.

So many things happened that surprised me, and the students developed fascinating ideas I never would have thought of. Please read the full debrief, linked below! Honored to be running this simultaneously in @PoetsAndQuants. This was a joint project with my amazing course TAs, Piper Fleming and Madeleine Mayhew.
5 replies · 20 retweets · 94 likes · 11.6K views
Tim Schnabel retweeted
Séb Krier@sebkrier·
If there's one thing I think we can all (maybe) agree on it's that it would be great to have more standardized ways of (a) evaluating models (b) on shared capabilities/threat models definitions (c) while controlling for pure model capabilities vs scaffolding/harness gains. All of which can be done without slowing down or gating releases *and* while continuously empowering defenders and democracies.
7 replies · 6 retweets · 48 likes · 3.8K views
Tim Schnabel@TimSchnabel·
Nice article by @aarontmak about adversarial distillation, featuring analysis from my LRI colleague Joe Khawam.
1 reply · 0 retweets · 2 likes · 129 views
Tim Schnabel@TimSchnabel·
@blainedilli I think there's already quite a bit of innovation in the ADR space. AAA has been working on an AI arbitrator, and there are lots of options for expedited procedural rules. More experimentation is good, of course! adr.org/ai-arbitrator/
0 replies · 0 retweets · 1 like · 40 views
Blaine Dillingham@blainedilli·
I meant something like: it seems arbitration has its own problems, so one thing we might want to do is copy quite a bit of the court system, but make specific changes to leverage AI. Things like:
- in a complaint/ntc of removal and on a civil cover sheet, you can check a box for jury demand, or you can demand a jury trial but consent to an AI jury (and if both parties consent, that's what happens)
- shorten timelines for oppositions and replies
1 reply · 0 retweets · 1 like · 18 views
Blaine Dillingham@blainedilli·
I expect AI will speed up litigants more than courts, pushing many claims to arbitration. Thoughts on what meta-rules we need for arbitration? Perhaps we should model it after traditional courts in terms of case law and discovery rules and such, but with procedural changes to allow for AI?
1 reply · 0 retweets · 0 likes · 72 views
Tim Schnabel@TimSchnabel·
@mentalgeorge Presumably making use of such a mechanism would require waiving user privacy protections, right?
0 replies · 0 retweets · 0 likes · 134 views
Tom Reed@mentalgeorge·
Wacky but secretly-good idea that follows from taking the whole "AI constitutions" premise seriously: there should be "courts" to make rulings on how the Constitution/Spec applies to contentious instances of model behaviour. Users should be able to report and contest costly refusals through a button in the app. Most of these are of course auto-resolved with AI, but the interesting edge cases bubble up through a hierarchy of courts, with the most contentious of all being resolved by a "supreme court". Many advantages to this approach:

1. Claude's ability to apply the constitution to a given situation will be massively improved by having a rich body of precedent to contemplate and refer to. Such a body of precedent makes the constitution a lot "thicker". It could also be open-sourced to improve the (currently somewhat disappointing) transparency into the Constitution, allowing users to know what to expect from Claude. I imagine that Amanda, Joe et al. currently produce such "precedent" for Claude with synthetic data and galaxy-brain theorycrafting, but I'd happily bet that the real world is a better, richer source of edge cases than anything even they could come up with.

2. This could be a really, *really* interesting way to get the "democratic input" into AI constitutions that everyone keeps clamouring for. Usually these proposals end up as uninspired calls for using surveys and focus groups in the drafting stage of the Constitution, which I think is a fairly limited way to think about the strengths of liberal democracy. On this "courts for the spec" proposal, you could imagine open-sourcing stages of the judicial process in a wide variety of ways. One thing you could do is to crowdsource amicus briefs (or perhaps even the whole case!) for petitioner or respondent. I feel like there's a promising Pettit angle to this.

3. Discursive and argumentative traditions (and not just surveys or isolated technocrat drafting) have a good track record as the means by which we as humans resolve these kinds of problems, so it just makes sense to get this going for AI. I particularly think that debates are likely to be much richer when they are about *specific* instances of model refusals than vague, open-ended discussions about how AIs should be governed. This approach is also predisposed to iteratively co-evolve with the pace of technological change far better than any one-and-done philosophising would.

4. There may even be some kind of "separation of powers" argument to make here, insofar as the "legislature" drafting the spec/constitution is distinct from the "court" ruling on how it applies to a particular case. More applicable to OAI than Anthropic (which is more generally comfortable with moving past traditional liberal principles), of course.

5. Finally, and most obviously, there *should* be a way to contest model refusals! If we all take seriously the idea that agents will become some large percentage of the human economy, then an individual unjust refusal could be absurdly costly. I think there should be a transparent, participatory way to contest this such that we are not at the mercy of the AI conglomerates. I cannot think of a better way than to borrow from the law. Perhaps I'm being too Anglobrained with this, but I just think the law and courts pareto-mog almost all other forms of value-resolution we homo sapiens have ever come up with.
6 replies · 10 retweets · 85 likes · 8.4K views
Tim Schnabel@TimSchnabel·
With 5.5 out now, it seems one still can't (1) use Pro for scheduled tasks or (2) update an old scheduled task to use the newer model.
0 replies · 0 retweets · 1 like · 13 views
Tim Schnabel@TimSchnabel·
I'm tired of having to delete and recreate scheduled tasks when new models roll out.
1 reply · 0 retweets · 0 likes · 25 views
Tim Schnabel@TimSchnabel·
Does anyone know how to update the model used by scheduled tasks in @ChatGPTapp? I have a bunch of scheduled tasks that use 5.2 Thinking and the editing option doesn't seem to let one switch the model to 5.4. (Also need to be able to schedule tasks for Pro!)
Neel Ajjarapu@neelajj

A hidden, but powerful feature -- you can create scheduled tasks with ChatGPT Agent! Agent can, in the background, regularly search the web or your connectors and take action on the web, including on authenticated sites

1 reply · 0 retweets · 1 like · 130 views
Tim Schnabel@TimSchnabel·
@deanwball At some point, you're going to have to child-proof those tallest few stacks, before they become tempting targets for small hands to pull down...
0 replies · 0 retweets · 1 like · 195 views
Dean W. Ball@deanwball·
The Project is on
Dean W. Ball@deanwball

I have news to share: I am writing a book, currently untitled, set to be published by Penguin Press next year.

I began my public writing career with the thesis that AI and related technologies will both necessitate and enable major changes to the institutional configuration of Western society. I believe it is possible for individual liberty, dignity, and property to survive the coming transformation, but that their survival will take serious collective effort. This book is a contribution to that effort. It is an attempt to describe a positive vision for the future of the American republic in particular, and all ordered liberty worldwide. But it will simultaneously be a diagnosis of a crisis: open societies have drifted from the principles they once embodied, and those principles must be re-imagined for modern ears if we are to embody them in the future. This will therefore be a work of political theory just as much as it is an "AI book."

There is not much more I can say about the project (yet), other than that it is, by far, the most ambitious work I have ever undertaken. My research and writing will not replace regular writing on Hyperdimensional, but the cadence of publications is likely to remain lower than it once was. I anticipate writing essays every 2-3 weeks for the foreseeable future. I expect my output to shift a bit more toward mini-essays posted on X, which is a format I have been using more anyway.

I am immensely grateful for my readers' support over these last two years. Thank you to everyone who has taken the time to read my work, and to the countless people who have lent me a hand along the way. More to come soon.

18 replies · 6 retweets · 280 likes · 21.8K views