Alix Pham 🔸

271 posts

@alix_ph

Strategic Programs Associate @ Simon Institute | AI & Biosecurity & Policy | 🔸 10% Pledger

Switzerland · Joined November 2015
178 Following · 73 Followers
Alix Pham 🔸 reposted
Simon Institute for Longterm Governance
Open-source and open-weight AI are high on the policy agenda. But what do these terms actually mean, why do they matter for governance and competition, and where is the debate heading? Check out our explainer series to find out. Link in comments 👇
1 reply · 3 reposts · 5 likes · 344 views
Alix Pham 🔸 reposted
Will Saunter @willsaunter
Our biosecurity course is back at @bluedotimpact We need better defences to prevent, detect and respond to pandemic threats. If you want to identify where you can contribute and get funded to start building, this course is the place to start.
2 replies · 8 reposts · 11 likes · 781 views
Alix Pham 🔸 reposted
Tyler Johnston @TylerJnstn
I, too, made the mistake of *checks notes* taking OpenAI's charitable mission seriously and literally. In return, got a knock at my door in Oklahoma with a demand for every text/email/document that, in the "broadest sense permitted," relates to OpenAI's governance and investors.
Nathan Calvin @_NathanCalvin

One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened: 🧵

178 replies · 970 reposts · 5.1K likes · 4.2M views
Alix Pham 🔸 reposted
Nathan Calvin @_NathanCalvin
One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened: 🧵
310 replies · 1.2K reposts · 6.3K likes · 6.7M views
Alix Pham 🔸 reposted
Ajeya Cotra @ajeya_cotra
If you're a generalist EA who's working on AI safety or policy because it's the most important problem, I'd consider switching to biosecurity. AI is more important but bio seems a lot more neglected and there's straightforward object level work that could help a lot.
Rob Wiblin @robertwiblin

Conventional wisdom is that bioweapons are humanity's greatest weakness – 100x cheaper to make than to defend against. Andrew Snyder-Beattie thinks conventional wisdom is likely wrong. He has a plan cheap enough to do without government, useful even in worst-case scenarios like mirror bacteria, and effective enough to save most people. In one of my all-time fav interviews he lays out a low-tech 4-step approach developed by his research team at Open Philanthropy to fix a problem most have thought unsolvable. ASB is hiring for many roles in this project, from logistics to biotech to manufacturing, and has $100s of millions to deploy. Enjoy, links below!

2:10 How bad it could get
9:19 The worst-case scenario: mirror bacteria
18:14 Why low-tech
25:30 Prevention
31:21 The "4 pillars" plan
33:09 ASB is hiring now to make this happen
35:11 Everyone was wrong: biorisks are defence dominant
40:23 Pillar 1: Lungs
55:53 Pillar 2: Biohardening
1:15:19 Pillar 3: Detection
1:28:40 Pillar 4: The wrench hypothesis
1:40:12 The plan's biggest weaknesses
1:44:44 Would chaos make this impossible to pull off?
1:51:50 Would rogue AI make bioweapons?
1:57:57 We can feed the world even if all the plants die
2:07:03 Could a bioweapon make the Earth uninhabitable?
2:09:35 What ASB is hiring for
2:30:27 How to protect yourself and your family

(On the 80,000 Hours Podcast, available anywhere you get podcasts.)

8 replies · 18 reposts · 249 likes · 26.1K views
Alix Pham 🔸 reposted
Simon Institute for Longterm Governance
As world leaders meet in New York for #UNGA80, we wrote up some thoughts on the UN in the age of AI – AI's impact on its core mission, the UN's role in AI governance, and whether it can keep pace with transformative change. Link in comments 👇
1 reply · 2 reposts · 4 likes · 586 views
Alix Pham 🔸 reposted
Charbel-Raphael @CRSegerie
The time for AI self-regulation is over. 200 Nobel laureates, former heads of state, and industry experts just signed a statement: "We urgently call for international red lines to prevent unacceptable AI risks" The call was presented at the UN General Assembly today by Maria Ressa, Nobel Peace Prize laureate:
85 replies · 419 reposts · 1.3K likes · 483.8K views
Alix Pham 🔸 reposted
Simon Institute for Longterm Governance
How might AI governance draw inspiration from the IAEA's nuclear verification regime? Guest author Christina Krawec analyzed 24 of the IAEA's tools and highlights three promising examples for AI. Link in comments 👇
1 reply · 2 reposts · 4 likes · 326 views
Alix Pham 🔸 reposted
chiara maharani ✧ @chiaragerosa
Last week we celebrated the launch of Fractal University Geneva! 🌟🌟🌟 There are loads of fun courses to (unfortunately?) choose between. Our first ever semester will comprise...
5 replies · 6 reposts · 142 likes · 16.6K views
Alix Pham 🔸 reposted
Maxime Fournes⏸️ @FournesMaxime
⚠ When imposture reaches the highest levels of the state: the Luc Julia affair (the man who did NOT invent Siri). A few months ago, I debated Luc Julia about AI. I already found that his remarks revealed serious technical incompetence on the subject. But the video Monsieur Phi has just published reveals a far more serious reality. For context, Luc Julia is presented everywhere as THE great French AI expert, and was recently heard by the Senate in that capacity. The facts:
🔴 He distorts scientific studies: before the senators, he completely invents the content of a study to support his arguments. The study tested a few hundred reasoning questions (Julia fabricates "millions of facts") on an obsolete model from 2022 (!!); he claims it proves that all current AIs are systematically wrong.
🔴 He invents OpenAI studies that do not exist. Words fail me to describe such intellectual dishonesty.
🔴 He insults real experts: Geoffrey Hinton and Yoshua Bengio (Turing Award winners, among the most cited scientists in the world) have, according to him, "blown a gasket" or "smoked something" because they warn about the dangers of AI.
🔴 He does not understand the basics of AI: for example, he systematically confuses parameters and data, two fundamental concepts.
🔴 He takes credit for creating Siri even though he only joined Apple AFTER its launch, and stayed just 10 months (before being shown the door?).
What deeply worries me is not so much the individual as the systemic failure he reveals. How can someone who insults Turing Award winners and distorts scientific studies:
- Sit on the government-appointed AI Committee?
- Be heard as a "world expert" by the Senate?
- Influence our public policies on AI?
This affair raises a major question of public interest: how many other self-proclaimed "experts" occupy strategic positions without the required skills? At a time when AI is transforming our society, we cannot afford to have our decisions guided by mythomaniac impostors. Thanks to Monsieur Phi for this rigorous fact-checking work. The full video (58 minutes of relentless demonstration) is in the comments. This video is in the public interest, share it quickly.
148 replies · 769 reposts · 2.3K likes · 289.1K views
Alix Pham 🔸 reposted
David Manheim @davidmanheim
I'm not a fan of open-sourcing frontier LLMs, but this seems to have been done as responsibly as possible; a very low bar. That is, it seems unlikely to be marginally more useful than what is available and unmonitored from other providers, which can already enable bioterrorism.
Greg Brockman @gdb

Just released gpt-oss: state-of-the-art open-weight language models that deliver strong real-world performance. Runs locally on a laptop!

3 replies · 2 reposts · 19 likes · 2.3K views
Alix Pham 🔸 reposted
Michael Aird @michael__aird
🚀Come join my team at RAND! We’re looking for research leads, researchers, & project managers for our compute, US AI policy, Europe, & talent management teams. All teams have urgent, important work to do & broad options for the future. Some roles close July 27⏰
13 replies · 12 reposts · 76 likes · 7.8K views
Alix Pham 🔸 reposted
Simon Institute for Longterm Governance
🌐 Does AI need its own "CERN"? 🌐 Join us & @TheGCSP this Thursday, July 10th, on the sidelines of ITU's @AIforGood Summit, for a Geneva Security Debate on the subject. Registration link below 👇
1 reply · 5 reposts · 10 likes · 658 views