Madhulika

8.9K posts

@Madhusrikumar

Head of Safety @partnershipAI | AI governance | prev @Harvard_Law, @NewAmerica @orfcyber

Joined November 2008
740 Following · 1.1K Followers
Madhulika retweeted
Sayash Kapoor @sayashk
Folks in San Francisco: I'm doing a book talk on AI Snake Oil with @msurman. Come hear us discuss the future of AI, why AI isn't an existential risk, building AI in the public, and what goes into writing a book. November 18, 5:30pm. RSVP: forms.gle/m9BGAY6ALCXSjv…
Madhulika retweeted
Haydn Belfield @HaydnBelfield
Big big job ad: 🔥Director of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge🔥🏛️ You get to lead this lovely team of researchers and shape the world's first masters in global catastrophic risk - link in next tweet
Madhulika retweeted
Iason Gabriel @IasonGabriel
Are you interested in exploring questions at the ethical frontier of AI research? If so, then take a look at this new opening in the humanity, ethics and alignment research team: boards.greenhouse.io/deepmind/jobs/… HEART conducts interdisciplinary research to advance safe & beneficial AI.
Elissa M. Redmiles, Ph.D. @eredmil1
I’m honored to have been awarded the @TheOfficialACM SIGSAC Outstanding Early-Career Researcher Award for “pioneering research contributions in the area of sociotechnical security research” today at @acm_ccs #ccs24
Madhulika retweeted
Centre for the Study of Existential Risk
🌍 Applications are now open for CSER's MPhil in Global Risk and Resilience! If you're interested in learning more, be sure to register for our virtual open day on 4th November 2024. #cambridge #mphil
Markus Anderljung @Manderljung
Excited to be joining @MarietjeSchaake and @AnkaReuel as a vice-chair for Working Group 4 for the EU AI Act Code of Practice. We’ll focus on what risk management procedures providers of systems like GPT-4 should put in place.
Sabrina Küspert @SabrinaKuespert

Important months ahead of us to shape the first #GeneralPurposeAI Code of Practice! Can’t wait to work with our brilliant Working Group (Vice) #Chairs 🇪🇺🤍 Our team at #AIOffice is facilitating By April 2025, transparency, risk assessment & mitigation will be detailed out ⬇️

Michael Littman @mlittmancs
I got to help shape this document, providing guidance about how AI researchers collaborate globally. It was unveiled at the UN General Assembly yesterday by the Secretary of State.
Madhulika retweeted
AS @agstrait
Our team spent several months speaking with firms working on foundation model evals. While they can be useful, they are not sufficient for ensuring the safety of a model, and suffer from a range of theoretical, practical, and gaming issues. A critical read for AI safety policy.
Ada Lovelace Institute @AdaLovelaceInst

📢Evaluations are a useful method for identifying and mitigating the risks posed by foundation models. However, they should be used alongside other tools, such as audit, incident reporting and post-market monitoring. Read ‘Under the radar’: adalovelaceinstitute.org/report/under-t… (1/6)

Madhulika retweeted
Claire Leibowicz @CLeibowicz
Check out this *PUBLIC* webinar I'll be moderating with this stellar, wise crew representing @adobe @witness @OpenAI @BBC, on how they applied PAI's Synthetic Media Framework to real world scenarios. 🗓️June 18, 9amPST/12EST Register here! 👉 buff.ly/3x4k6me
Partnership on AI @PartnershipAI

What did it take for @Adobe @OpenAI @BBCNews & @witnessorg to put PAI's Synthetic Media Framework into practice? Join us for a webinar on June 18 to hear their insights and discuss real-world challenges. Register 👉 buff.ly/3x4k6me @andyparsons @_lamaahmad @SamGregory

Madhulika retweeted
AS @agstrait
New blog post from @halcyene @wonderlikeours and me on the future of the UK AI Safety Institute and AI safety after Seoul. Tl;dr: We need a shift in the 'what and how' that AISI works on, backed up with new statutory powers and a joined-up AI regulation strategy.
Ada Lovelace Institute @AdaLovelaceInst

What have we learned about AI safety since Bletchley? What governance role should AI safety institutes play? Our new blog post looks at the UK's AI Safety Institute as a model - and argues for more context-specific evaluations and new statutory powers. adalovelaceinstitute.org/blog/safety-fi…

Madhulika retweeted
GitHub Policy @GitHubPolicy
How can we actively pursue harm reduction strategies for open foundation models without hindering their accessibility? We co-hosted an expert workshop 👇 on this and related questions with @PartnershipAI following up on our NTIA response github.blog/2024-04-10-hel…
Partnership on AI @PartnershipAI

As open foundation models advance, we must proactively develop tailored risk mitigation strategies. Our latest blog covers: ✅ Recap from our recent workshop co-hosted w/ @GitHub on safeguarding open models ✅ Roles across the open model value chain buff.ly/3UdMKZB

Madhulika retweeted
Connor Dunlop @cp_dunlop
Super excited to share that our Brussels team is growing! 🚀 Join my team to work on EU & international governance and regulation, AI accountability and risk management in practice, rebalancing power and democratic oversight for AI... and much more! adalovelaceinstitute.org/job/researcher…
Partnership on AI @PartnershipAI
Foundation models have become a cornerstone of AI research and development. We need a dynamic framework to enable responsible innovation while ensuring transparency. Today, we’re proud to announce PAI's Guidance for Safe Foundation Model Deployment: partnershiponai.org/modeldeploymen…
Madhulika retweeted
Joelle Pineau @jpineau1
This is one of the most comprehensive, nuanced and inclusive frameworks for responsibly building and deploying AI models through an open approach. PAI's leadership has been invaluable in bringing together many different opinions and offering clear guidance for AI model builders.
Partnership on AI @PartnershipAI

Foundation models have become a cornerstone of AI research and development. We need a dynamic framework to enable responsible innovation while ensuring transparency. Today, we’re proud to announce PAI's Guidance for Safe Foundation Model Deployment: partnershiponai.org/modeldeploymen…

Madhulika retweeted
Nick Clegg @nickclegg
The @PartnershipAI brings together industry, civil society & experts as companies like ours look for the most responsible ways to develop & release AI models. Its draft guidance establishes much-needed best practices for open & restricted releases - an important step when the world is working out how to strike the right balance between rapid AI innovation & the need for sensible guardrails. partnershiponai.org/modeldeployment