Eva Behrens
@_ebehrens_
27 posts

AI Policy in London. Nobody knows how to build controllable AGI - so let's not do it!

Joined January 2016
151 Following · 104 Followers
Eva Behrens @_ebehrens_
sad but true
Eva Behrens @_ebehrens_
Geneva is fun: On the one hand, I had a great time moderating a closed-door breakout session on the future of global AI governance at the @AIforGood Summit with ministers, UN officials, and private sector leaders. On the other hand, a mediocre plate of Falafel cost me 21 CHF.
Eva Behrens @_ebehrens_
Here are 5 policy recommendations for the upcoming AI Safety Summit in Seoul, from me and my colleagues at ICFG. In Bletchley, world leaders discussed major risks of frontier AI development. In Seoul, they should agree on concrete next steps to address them.
Eva Behrens @_ebehrens_
To reduce extinction risk from powerful AI, we need strong international cooperation. A costly signal by one country, a unilateral leap of faith, can start that process. The snag is that in AI, taking a leap of faith will only become harder over time.
Eva Behrens @_ebehrens_
Labeling text created by AI likely won't keep users from anthropomorphising AI systems, making them more vulnerable to manipulation by AI. And combine persuasive AI with improving deepfake tech and the over 60 elections coming up this year... 2024 is going to be a wild ride.
Project Syndicate @ProSyn

In the not-so-distant future, generative AI could enable the creation of new user interfaces that can persuade on behalf of any person or entity with the means to establish such a system, predict @Exp_Mark, Josh Entsminger, and @Terencecmtse. bit.ly/4aFjXVr

Eva Behrens @_ebehrens_
@jasoncrawford Yes indeed, it might not make sense to put liability only on foundation models in all cases. But they definitely shouldn't be exempt from the AI Act.
Jason Crawford @jasoncrawford
@_ebehrens_ This is not an obvious conclusion to me. In some cases it might make sense for liability to be on the foundation models, but not all
Eva Behrens @_ebehrens_
Great thread. Large orgs with safety engineering resources that can address the root causes of risks are best positioned to make their technologies safe. Hence, the EU AI Act should regulate foundation model developers instead of putting regulatory burden on deployers.
Jason Crawford @jasoncrawford

One specific lesson from that history: It's better if liability rests with whoever can address the *root causes* of risks, and if it rests with larger organizations that have the resources to invest in safety engineering, as opposed to with small businesses or individuals.

Eva Behrens @_ebehrens_
Useful thread summarising the variety of voices speaking out against France's and Germany's recent efforts to exempt foundation models from the EU AI Act:
Future of Life Institute @FLI_org

As the #AIAct reaches the year's final trilogue on Dec. 6, a chorus of voices is speaking out against the exemption of foundation models. They cite the irreparable harm it will do to #EU innovation and how it will put Big Tech profits ahead of safety. Some examples🇪🇺👇 🧵1/16
