Blaine Dillingham
@blainedilli
AI Policy @joinFAI
23 posts
Joined September 2021
102 Following · 179 Followers
Dave Banerjee@DaveRBanerjee·
In 1962, the Executive Branch spent roughly $544 for every $1 the Legislative Branch spent. By 2024, that ratio had nearly doubled, to $984 to $1. I am increasingly concerned about worlds where the legislative and judicial branches are unable to oversee an AI-accelerated executive branch. It seems more important than ever to figure out how to empower Congress and the courts with AI tools. And it's important to figure out how Congress and the courts should oversee the executive branch once a significant fraction of the USG is automated.
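As a quick sanity check on the figures quoted above (taking the tweet's numbers at face value, without independently verifying them), the ratio grew by a factor of roughly 1.8 — closer to "nearly doubled" than "doubled":

```python
# Spending-ratio figures as quoted in the tweet (not independently verified):
# executive dollars spent per legislative dollar spent.
ratio_1962 = 544
ratio_2024 = 984

# Growth factor of the ratio between 1962 and 2024.
growth = ratio_2024 / ratio_1962
print(f"growth factor: {growth:.2f}x")  # ~1.81x, i.e. "nearly doubled"
```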
Blaine Dillingham@blainedilli·
Agreed, we should choose definitions that capture what matters. “AGI” doesn’t seem like the most helpful term.
Dean W. Ball@deanwball

For a moment, substitute the notion of “believing in short AGI timelines” for: “acknowledging the idea of AGI as an ill-defined thing that will nonetheless probably exist within a strategically relevant timeframe, the pursuit of which will produce importantly capable artifacts along the way, whose arrival will be even sooner than the so-called ‘AGI,’ and so we don’t really need to quibble all that much about exact AGI definitions and timelines, because the mega-capable artifacts already kinda resemble ‘AGI,’ have national security implications, and seem like they’re going to keep improving rapidly, so functionally we just have to accept that we live in AGI world now, regardless of whether one’s personal definition of AGI is satisfied in 2028, 2035, or, indeed, 2026.”

If this was your view—and it is mine—then it is not so much about short timelines to AGI as it is “short timelines to the importantly useful artifacts produced along the path to AGI, so capable that maybe in some ways they blend into AGI.” Thus “Mythos” or “the latest frontier model” can be substituted for “AGI” in many debates about timelines.

The rhetoric of “we should sell China more chips” or “AI is the next internet platform business and it should be regulated exactly like prior waves of internet platform businesses, which is to say ‘basically not at all’” pivots not so much on short timelines to AGI but instead on short timelines to models that *matter,* national-security-wise. The fatal flaw in the 2024-era accelerationist view, epitomized by Jensen, was that models would never matter in this way; or at least, that you should not think so much about the world in which models mattered in that way. It is much easier to justify “doing what we have been doing” if you don’t believe neural networks will ever truly matter to national security.
Basically all AI debates hinge not so much on “AGI timelines” but on “will LLMs ever matter, really, to national security.” The near-term existence of LLMs with national-security-relevant capabilities can therefore be thought of as, to borrow a phrase, an inconvenient truth.

Blaine Dillingham@blainedilli·
Power concentration is a huge motivating concern for me in AI policy, but a lot of potential checks and balances seem politically difficult. This bill seems like a great tangible lever we can pull thefai.org/posts/the-gove…
Blaine Dillingham@blainedilli·
@anton_d_leicht I wonder if strict liability + an insurance mandate could be a good way for the public to see government as “holding big tech accountable,” while taking a market-based approach rather than a heavy-handed one requiring huge state involvement
Anton Leicht@anton_d_leicht·
Accelerationist AI policy is losing ground, and the current strategy does not give moderates a pro-AI case for 2028. Without it, they'll get pulled apart by anti-AI sentiment. Today, I argue accelerationism needs to change, or its defeat will make AI policy worse for everyone.
Blaine Dillingham@blainedilli·
Bad 702 defense: "An emergency exception to a warrant requirement wouldn’t make a difference in this case. There’s no obvious urgencyy to finding out more about Joe. But a delay could in retrospect prove to have been a missed opportunity to thwart a terrorist plot." Yeah, this is just 4th Amendment law, always? If there's no apparent urgency and no probable cause, it seems like you shouldn't search an American's sensitive data... lawfaremedia.org/article/fisa-s…
Blaine Dillingham@blainedilli·
Evals and auditing are important, but so is cybersecurity. Instead of increasing the risk of model weight theft, Congress should work to accelerate cyber defense. I, for one, will be etching my secrets into stone tablets from now on. Enjoyed working with @hamandcheese doing evals and auditing of this bill
Samuel Hammond 🦉@hamandcheese

@blainedilli @MarshaBlackburn On model evaluation, the Act proposes creating a new program at DOE for predeployment testing. Beyond being duplicative of CAISI's eval capacity, requiring companies to transfer proprietary data and model weights to a gov't agency introduces serious security risks.

Blaine Dillingham@blainedilli·
WH: “Congress should prevent the United States government from coercing AI providers” Blackburn: give us your model weights or pay $1 million / day Below, @hamandcheese and I analyze what is truly one of the AI bills of all time thefai.org/posts/the-trum…
Charlie Bullock@CharlieBull0ck·
I think I just came across a new contender for “worst state AI bill of all time.” Move over, Colorado. This Illinois “AI Safety” bill would give AI companies immunity from liability for catastrophes caused by their model in exchange for the company publishing a safety protocol online. I’m not joking. That’s actually the proposal. If you negligently design an unsafe model that kills a million people, you can’t be sued. Because you did the thing that SB 53 already required you to do.
Blaine Dillingham@blainedilli·
Has anyone proposed strict liability for cyber breaches but with cyber defense investment being tax deductible?
Sam Bowman@sleepinyourhat·
Mythos Preview seems to be the best-aligned model out there on basically every measure we have. But it also likely poses more misalignment risk than any model we’ve used: Its new capabilities significantly increase the risk from any bad behavior. 🧵
Blaine Dillingham retweeted
Samuel Hammond 🦉@hamandcheese·
We're hunting for someone to lead our AI x Cybersecurity policy work, with a near-term emphasis on agentic security and cyber defense. thefai.org/posts/job-list…
Blaine Dillingham@blainedilli·
@RishiBommasani @sebkrier I expect a lot of skepticism can be explained by people not using the most advanced models and definitely not investing the bit of upfront effort it takes to elevate the models from “often helpful” to “cracked.” So in a sense, they do inhabit a different world
rishi@RishiBommasani·
For many ardent skeptics, I feel they must either inhabit a different world from the one we inhabit or have a weird mix of epistemic inhumility and unabashed commitment to their ideology that clouds their sincerity. I don't consider myself particularly bullish on frontier AI, but it feels hard to take seriously people dismissing what imo is unequivocally the most acute technological progress of the 21st century.
Séb Krier@sebkrier·
I occasionally have my doubts about the Bay Area flavoured monoculture of AI hyper-bullishness, but occasionally I look at what the smarmy skeptics are offering and remind myself the alternative is even bleaker. All the confidence, none of the imagination.
nature@Nature

Book review 📚 Artificial-intelligence models will supposedly take over the world, but AI innovator Luc Julia tells Nature that they’re little more than glorified pocket calculators go.nature.com/4lPpuPd
