Miranda Bogen

2.5K posts


@mbogen

Director of the AI Governance Lab @CenDemTech / responsible AI + policy

Washington, DC · Joined October 2008
1.3K Following · 2.4K Followers
Neil Chilson ⤴️⬆️🆙📈 🚀
I spoke at @SOTN 2026 yesterday on AI Liability: Who's Responsible When Machines Act. Great panel moderated by @mbogen, with Tod Cohen, @hlntnr, and @aaronkbr.

My main argument: the companies & individuals closest to the harm - deployers and sometimes users - are generally best situated to prevent it. Liability should primarily rest with them. Upstream parties (including model developers) should be subject to general anti-deception law and doctrines like failure to warn or known defects.

There was pretty strong consensus across the panel that existing law has a lot to contribute here - there are strong frameworks in agency law, tort law, and product liability that apply to agentic AI. We don't need a whole new framework, such as some legal conception of AI personhood.

Those who followed the Clawdbot / Open Claw developments last week know that agents can do completely unanticipated (and perhaps unanticipatable) things. But even there, we have analogues. As I pointed out, if you have a pet tiger and let him wander the neighborhood, the law will hold you liable for the damage he causes.

Great conversation and thanks again to @SOTN for having me!
Miranda Bogen@mbogen·
The choices that advanced AI companies make today about how they’ll cover the mind-boggling costs they are taking on to build AI systems will inevitably shape the systems themselves. That could have an enormous impact on our world for decades to come.
Miranda Bogen@mbogen·
For deeper analysis, CDT’s recent report Risky Business: Advanced AI Companies’ Race for Revenue explores the array of business models advanced AI companies are implementing or considering, including advertising, and how they are likely to affect users. cdt.org/insights/risky…
Miranda Bogen@mbogen·
AI companies should be extremely careful not to repeat the many mistakes that were made in — and the harms that resulted from — the adoption of personalized ads on social media and around the web.
Miranda Bogen@mbogen·
Follow the money 💰 Frontier AI companies are converging on a set of business models as they race to generate returns. Our new report looks at those models — and what they mean for the rest of us. cdt.org/insights/risky…
Miranda Bogen reposted
Amy Winecoff@aawinecoff·
Today @CenDemTech released a report on the risks of AI to people with eating disorders. As AI systems become more prevalent, their mental-health impacts can't be ignored. To build safeguards, we must develop risk assessments that reflect how AI impacts people in the real world.
Miranda Bogen reposted
Amy Winecoff@aawinecoff·
I'm so excited to highlight a new report from @CenDemTech: "Opening the Book: A Rubric to Support Effective Transparency for EdTech Products That Incorporate AI." My colleagues introduce a rubric for assessing how transparently edtech vendors communicate about their AI products.
Miranda Bogen reposted
Helen Toner@hlntnr·
AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts. Delighted to feature @mbogen on Rising Tide today, on what's being built and why we should care:
Miranda Bogen reposted
Chinmay Deshpande@chinmay_deshp·
If we want to understand and shape how advanced AI behaves, we need to know the rules it’s supposed to follow. This requires a type of transparency that’s different from what most policymakers focus on today. @aawinecoff, @mbogen, and I explain in a new report for @CenDemTech:
Miranda Bogen reposted
Miles Brundage@Miles_Brundage·
This last sentence seems false? The system card does not appear to have been updated even to incorporate the information in this thread. The whole point of the term system card is that the model isn’t the only thing that matters.
OpenAI@OpenAI

OpenAI o3-pro is available in the model picker for Pro and Team users starting today, replacing OpenAI o1-pro. Enterprise and Edu users will get access the week after. As o3-pro uses the same underlying model as o3, full safety details can be found in the o3 system card. help.openai.com/en/articles/96…

Miranda Bogen reposted
Center for Democracy & Technology
NEW REPORT: The CDT AI Governance Lab’s Assessing AI Audits report looks at the rise of complex automated systems, which demand a robust ecosystem for managing risks and ensuring accountability. cdt.org/insights/asses… cc: @MBogen
Miranda Bogen reposted
Aliya Bhatia@AliyaBhatia·
Companies often attribute their focus on English-language AI to the lack of resources in non-English languages. A new brief written by @Evani_RD and @mbogen highlights the incredible work researchers are doing on multilingual AI — if only companies wanted to work with them. cdt.org/insights/beyon…
Miranda Bogen reposted
Shakir Mohamed@shakir_za·
I’m honoured to be one of the @FAccTConference Programme Chairs this year, alongside the amazing @jennwvaughan @sinafazelpour @TaliaGillis 🤩 and we’ve been hard at work already. The CFP is coming soon, but the key dates are now set. Happy paper planning 🚀
Miranda Bogen reposted
Irene Solaiman@IreneSolaiman·
Call for Tiny Papers! Community Research! Studying Social/Broader Impact! Making Evaluations Better! Submit your 2-pager by Sept 20 on eval perspectives, challenges, and validity. And come to our #NeurIPS2024 workshop to work together on this part of safety. evaleval.github.io/call-for-paper…
Avijit Ghosh@evijit

Announcing NeurIPS Workshop: EvalEval 2024! 🚀

As generative AI rapidly transforms our world, a critical question looms: How do we measure and evaluate its broader societal impacts?

📄 Our recent collaborative paper (arxiv.org/pdf/2306.05949) reveals a lack of standardized methods to assess the full range of effects of generative AI on society, culture, and individual lives.

🔍 To bridge this gap, we're excited to announce our NeurIPS 2024 workshop: "Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI" aka EvalEval 2024!

🌐 Our workshop website is live! Visit evaleval.github.io to learn more and check out the call for tiny papers! Key focus areas include:
• Conceptualization and operationalization of AI impact evaluations
• Ethical considerations in assessment methodologies
• Novel approaches for measuring social impact across different AI modalities

🌟 We're thrilled to have secured commitments from several stellar speakers in the field. Stay tuned for the full speaker list announcement coming soon!

This workshop aims to unite experts in evaluation science, AI practitioners, policymakers, and stakeholders. By fostering collaboration, we hope to develop comprehensive evaluation frameworks and policy recommendations for responsible AI development.

Are you passionate about ensuring AI benefits society? Join us in shaping the future of AI evaluation! Follow our page for updates and reach out with any questions or ideas. See you at NeurIPS!

#EvalEval2024 #NeurIPS2024 #AIEthics #GenerativeAI #ResponsibleAI #AIPolicy
