Daniel Privitera
@privitera_
231 posts
Making AI go well | Founder @kira_center_ai

Joined April 2022
1.5K Following · 715 Followers
Pinned Tweet
Daniel Privitera @privitera_
Should we choose AI progress or AI safety? Address present-day impacts of AI or potential future risks? In our op-ed for @TIME , Yoshua Bengio and I argue that these are false dilemmas. And we propose a “Beneficial AI Roadmap” (BAIR). 1/n
[image]
Daniel Privitera @privitera_
New report by @MonikaSchnitzer, my @kira_center_ai colleague @philip_fox_, and me: Germany already has very little AI compute, things are on track to get dramatically worse, and urgent, decisive action is needed to reverse this trend.

Germany currently has less than 2% of global AI compute. And my impression is that in Berlin it has not yet fully sunk in how bad things are about to get if nothing changes. One or two potential new German “AI Gigafactories” are the most ambitious things being discussed, but one such Gigafactory would still be at least 50x smaller than the biggest clusters announced internationally. Germany's projected compute capacity (existing plus announced) does not come anywhere near closing the expected gap in demand in the coming years.

So what should the Merz government aim for instead? We outline three possible approaches. Which one is appropriate depends on how seriously the government really takes AI and how ambitious it really wants to be, beyond talking points and buzzwords.

- Low ambition: aim for enough compute to deploy AI in selected sectors
- Medium ambition: aim for enough compute to deploy AI across society
- High ambition: aim for enough compute to deploy AI across society and develop frontier models

Even the low-ambition approach could not be realized, by far, with the currently announced clusters. And all three approaches will require proportionally decisive action beyond just building data centers. So urgent action is needed.

We make six recommendations that make sense irrespective of the ambition level with which the Merz government wants to approach AI:

- Expand AI compute capacity according to strategic priorities
- Create political enablers, e.g. centralized steering
- Secure cheap, reliable energy
- Accelerate planning and permitting
- Ramp up data center security
- Strengthen the domestic AI ecosystem

Feedback is very welcome – what do you agree or disagree with?
[image]
KIRA Center @kira_center_ai

New KIRA Report by @philip_fox_ @MonikaSchnitzer (LMU) & @privitera_ Germany is lagging behind on AI infrastructure: Its share of global AI compute (<2%) will shrink even further if current trends continue. But with decisive action, the Merz government could reverse this trend.🧵

Daniel Privitera retweeted
Anton Leicht @anton_d_leicht
Very excited to announce I've joined @CarnegieEndow as a Visiting Scholar with the Tech & IA team! This means I get to take my work on AI & political economy full-time: I'll keep writing ~weekly blog posts on fast AI progress, slow institutions and thorny incentives. (1/3)
[image]
Daniel Privitera @privitera_
Most frontier AI companies, including @AnthropicAI, @Google/@GoogleDeepMind, @Microsoft, @MistralAI, @OpenAI, and @xai, have signed the Safety & Security Chapter (which I co-authored alongside 8 great co-chairs) of the Code of Practice. I think this is great news for a number of reasons:

- It allows these companies to comply with the EU AI Act in a streamlined way, provided that they actually adhere to the Chapter in full.
- It gives the regulator (the Commission) greater visibility into how these companies will comply with the Act. I hope this transparency can contribute to increased mutual trust and a productive relationship between signatories and the Commission.
- By signing and committing to the Chapter, these companies also give the public more visibility into how they will approach risk management for cyber, CBRN, and other large-scale risks. This is valuable, and it will be interesting to see whether frontier AI companies that don’t sign the Safety & Security Chapter will offer a similar level of transparency to the public.

The objective here was to create a practical tool for streamlined compliance with the AI Act that is of real use to frontier AI companies, while also being effective at meeting the AI Act’s “systemic risk” obligations. I think we achieved this.

This would not have been possible without the thoughtful engagement from companies, civil society, and academics throughout the process – and without the Commission taking the somewhat bold decision to outsource this work to external experts. Thank you to everyone involved!

This is only a starting point, of course. Much more needs to happen for the EU's regulatory regime for frontier AI to go well in practice. My co-chairs and I shared some thoughts on this here: digital-strategy.ec.europa.eu/en/policies/co…
Yoshua Bengio @Yoshua_Bengio

I’ve been thrilled to see the support for the Safety & Security Chapter of the Code of Practice. Most frontier AI companies have now signed on to it: @AnthropicAI, @Google, @MistralAI, @OpenAI, @xAI Why this is important: 🧵 1/6

Daniel Privitera retweeted
Yoshua Bengio @Yoshua_Bengio
I’ve been thrilled to see the support for the Safety & Security Chapter of the Code of Practice. Most frontier AI companies have now signed on to it: @AnthropicAI, @Google, @MistralAI, @OpenAI, @xAI Why this is important: 🧵 1/6
Daniel Privitera retweeted
Markus Anderljung @Manderljung
The EU's Code of Practice for General-Purpose AI is out. As one of the co-chairs who drafted the Safety & Security Chapter, focused on frontier AI, I'm proud of what we've put together. It’s a lean but effective framework for frontier AI companies to comply with the AI Act.
[image]
Daniel Privitera retweeted
Yoshua Bengio @Yoshua_Bengio
The Code of Practice is out. I co-wrote the Safety & Security Chapter, which is an implementation tool to help frontier AI companies comply with the EU AI Act in a lean but effective way. I am proud of the result! 1/3
[image]
Daniel Privitera @privitera_
🗞️ @MonikaSchnitzer and I have written an op-ed for @faznet about AGI: AI that solves practically every cognitive task as well as or better than a human.

We are concerned that there is no serious engagement with AGI in German politics. The world's most-cited AI researchers, the leading minds in industry, and a growing number of politicians in other countries consider AGI possible within the next four years and are preparing for this scenario. In Germany, by contrast, nothing is happening. AGI is the elephant in the room that is either ignored or not even seen – while in D.C., Beijing, San Francisco, and London the possibility that AGI will be reached soon is taken very seriously, and people are preparing for this scenario.

In the op-ed we describe what the core elements of a serious AGI strategy for the federal government should be: expanding data centers, playing Germany's AI trump cards wisely, and increasing the state's capacity to act.
[image]
FAZ Wirtschaft @FAZ_Wirtschaft

When intelligence becomes available, copyable, and scalable on an unimaginable scale, work and the economy will change dramatically. We need to be better prepared, write @MonikaSchnitzer and @privitera_ in an op-ed for @faznet: faz.net/aktuell/wirtsc…

Shayne Longpre @ShayneRedford
4/ I hope this report provides a balanced starting point for discussion on AI impact and safety. Thank you to the leads @Yoshua_Bengio @privitera_ and Sören Mindermann, and advisors on my section @DIGoldfarb and Lee Tiedrich!
Shayne Longpre @ShayneRedford
1/ Last week, we published the International AI Safety Report, supported by 30 nations plus the OECD, UN, and EU. Over 100 independent experts contributed. I’m thankful to have played a small writing role, focusing on “Risks of Copyright.” 🔗 bit.ly/40Vm7Mu
Daniel Privitera @privitera_
The 1st International AI Safety Report is out today! Being the Lead Writer and collaborating with 100 leading AI experts (including Nobel laureates, Turing Award winners, etc) has been an honor. I look forward to it being read and discussed by policymakers and governments around the world. If you have thoughts on how to improve the report next time, please share them with us!
[image]
Yoshua Bengio @Yoshua_Bengio

Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU. It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks. 🧵 Link to full Report: assets.publishing.service.gov.uk/media/679a0c48… 1/16

Daniel Privitera retweeted
Arvind Narayanan @random_walker
I appreciate that the authors of the report solicited and incorporated feedback from experts with a diverse range of views on AI Safety, including me. The report is stronger for it.
Quoting Yoshua Bengio @Yoshua_Bengio's announcement of the International AI Safety Report (quoted in full above)
Danielle Goldfarb @DIGoldfarb
This report represents a crucial shift in how we as a society assess AI safety, moving to evidence-based scientific analysis. It was great to be part of the writing group with the role of making the scientific evidence more relevant to AI policymakers.
Quoting Yoshua Bengio @Yoshua_Bengio's announcement of the International AI Safety Report (quoted in full above)
Dr. Chinasa T. Okolo @ChinasaTOkolo
I'm incredibly proud to be part of this monumental effort towards the International AI Safety Report, the first comprehensive global assessment of the state of the science on AI capabilities and risks. Thank you to Dr. Bengio for your leadership!
Quoting Yoshua Bengio @Yoshua_Bengio's announcement of the International AI Safety Report (quoted in full above)
Daniel Privitera retweeted
Sayash Kapoor @sayashk
More than 60 countries held elections this year. Many researchers and journalists claimed AI misinformation would destabilize democracies. What impact did AI really have? We analyzed every instance of political AI use this year collected by WIRED. New essay w/@random_walker: 🧵
[image]
Daniel Privitera retweeted
Markus Anderljung @Manderljung
The first draft of the EU's General-Purpose AI Code of Practice was just published. I was one of the vice-chairs involved in drafting it. As we've got many more drafts to go until May 2025, I'd be keen on folks' input on how it can be improved.