https://bsky.app/profile/taniaduarte.bsky.social

9.2K posts

@TanDuarte

No longer here - find me on Bluesky (She/her) Founder @WeandAI | @ImagesofAI

Notting Hill, London · Joined January 2012
4.5K Following · 5.2K Followers
neil turkewitz@neilturkewitz·
@TanDuarte @GaryMarcus @ptknitwear To be fair, do any of us truly have the special skills necessary to operate this book properly? This has raised an entirely new and terrifying existential question to confront. I may never sleep again.
Gary Marcus@GaryMarcus·
First time seeing Taming Silicon Valley in a bookstore @ptknitwear ❤️
@TanDuarte
“The reason we’re confident of our assessments is that they are not based on limitations of the technology. Instead, they are based on inherent limits to prediction grounded in our sociological analysis.” The book we’ve been waiting for!
Arvind Narayanan@random_walker

📢 AI Snake Oil is out today! I have too many feelings because this book has been five years in the making.

In 2019 I gave a talk on AI Snake Oil and tweeted out the slides. I had no idea my career was about to change. Within a couple of days I had 30 or 40 invites to write a book or an article. I said no to all of them because I felt I needed to do more research to be able to write a book. So when @sayashk joined my team I knew this was the topic we were going to work on. The book is based on our research on limits to prediction, the dangers of predictive decision making, the reproducibility crisis in machine learning based science, the limits of large language models, and the risks of social media algorithms.

The book is not an anti-technology screed. If our point was that all AI is useless, we wouldn't need a book to say it. It's precisely because of AI's usefulness in many areas that overhyped claims and outright snake oil have been so successful in the market. It's hard for people to tell these apart, and we hope our book can help. And while we think the tech industry should have much more accountability, the harms we describe are usually not solely due to tech, and much more often due to AI being an amplifier of existing problems in our society. A recurring pattern we point out in the book is that "broken AI is appealing to broken institutions" (Chapter 8).

Since our book talks about what AI can't do, AI boosters often tell us it's going to be outdated soon. If you think so, I'd love to have a friendly bet! Read the book and tell us what you think will be wrong in two years or five years. (You can start with the intro chapter, which is available online; the link is in my pinned tweet.) The reason we're confident of our assessments is that they are not based on limitations of the technology. Instead, they are based on inherent limits to prediction grounded in our sociological analysis. (The book owes a huge debt to Princeton sociologist @msalganik; our collaboration with him informed and inspired the book.) We also talk a lot about what we shouldn't use AI for, even if we can.

We've had many moments of hope and trepidation for the book as we worked on it over the years. We're so grateful for the early interest the book has had, especially to the 40,000 people who subscribe to the AI Snake Oil newsletter, and the many podcasts that have had us on to talk about the book. We've had great reviews in The New Yorker, Publishers Weekly (featured in the Top 10 science books for Fall 2024), and many other outlets. And we've had such a wonderful experience working with Princeton University Press. We're also fortunate to have received incisive peer reviews from @MelMitchell1, @mollycrockett, @chris_bail, and three anonymous reviewers. The book is so much better as a result.

What's in the book:

Chapter 1: Introduction. We begin with a summary of our main arguments in the book. We discuss the definition of AI (and more importantly, why it is hard to come up with one), how AI is an umbrella term, what we mean by AI snake oil, and who the book is for. The entire introduction is available online; see the link in my pinned tweet.

Chapter 2: How predictive AI goes wrong. Predictive AI is used to make predictions about people: will a defendant fail to show up for trial? Is a patient at high risk of negative health outcomes? Will a student drop out of college? These predictions are then used to make consequential decisions. Developers claim predictive AI is groundbreaking, but in reality it suffers from a number of shortcomings that are hard to fix.

Chapter 3: Can AI predict the future? Are the shortcomings of predictive AI inherent, or can they be resolved? In this chapter, we look at why predicting the future is hard, with or without AI. While we have made consistent progress in some domains such as weather prediction, we argue that this progress cannot translate to other settings, such as individuals' life outcomes, the success of cultural products like books and movies, or pandemics.

Chapter 4: The long road to generative AI. Recent advances in generative AI can seem sudden, but they build on a series of improvements over seven decades. In this chapter, we retrace the history of computing advances that led to generative AI.

Chapter 5: Is advanced AI an existential threat? Claims about AI wiping out humanity are common. Here, we critically evaluate claims about AI's existential risk and find several shortcomings and fallacies in popular discussion of x-risk. We discuss approaches to defending against AI risks that improve societal resilience regardless of the threat of advanced AI.

Chapter 6: Why can't AI fix social media? One area where AI is heavily used is content moderation on social media platforms. We discuss the current state of AI use on social media, and highlight seven reasons why improvements in AI alone are unlikely to solve platforms' content moderation woes.

Chapter 7: Why do myths about AI persist? Companies, researchers, and journalists all contribute to AI hype. We discuss how myths about AI are created and how they persist. In the process, we hope to give you the tools to read AI news with the appropriate skepticism and identify attempts to sell you snake oil.

Chapter 8: Where do we go from here? While the previous chapter focuses on the supply of snake oil, in the last chapter we look at where the demand for AI snake oil comes from. We also look at the impact of AI on the future of work, the role and limitations of regulation, and conclude with vignettes of the many possible futures ahead of us. We have the agency to determine which path we end up on, and each of us can play a role.

We hope you find the book useful and look forward to hearing what you think.

Karen Hao@_KarenHao·
@random_walker @sayashk Wow I have been using the exact metaphor with people! Incredible. Great minds think alike 🥳
Arvind Narayanan@random_walker·
📢 The first chapter of the AI Snake Oil book by me and @sayashk is now available online: press.princeton.edu/books/hardcove… It is 30 pages long and summarizes the book's main arguments. If you start reading now, you won't have to wait long for the rest of the book; it is available to preorder and will be published in less than two weeks. amazon.com/Snake-Oil-Arti… We were fortunate to receive positive early reviews from The New Yorker, Publishers Weekly (featured in the Top 10 science books for Fall 2024), and many other outlets. Here's an outline of all the chapters: aisnakeoil.com/p/starting-rea…
Abeba Birhane@Abebab·
was waiting to have everything in order to make a proper announcement but alas, i need help so here we are. got some grant & should be starting a research lab soon focused on AI accountability & I'm looking for someone who could help design a logo & artwork for the website. hit me up
@TanDuarte retweeted
Abeba Birhane@Abebab·
correction: it's not "power shifting from humanity to alien intelligences", rather it's shifting power away from the masses towards big corporations and their shareholders
Tsarathustra@tsarnick

Yuval Noah Harari says in 10 years the world will be run by millions of AI bureaucrats who will make decisions that we can't understand about jobs, finance and government, leading to power shifting from humanity to alien intelligences

@TanDuarte retweeted
Yasmin Ibison@yasminibison·
@jrf_uk's new research with @WeandAI asks: Why and how do non-profit and grassroots orgs engage with gen AI tools? How do these orgs see their role in shaping the broader AI debate? 🌐Join me + the researchers to explore findings and key themes! bit.ly/3WcRMGS
@TanDuarte retweeted
Joseph Rowntree Foundation
59% of orgs feel they aren't engaged in the broader AI debate. Without sector engagement, initiatives aimed at developing AI for public good cannot have an adequate understanding of what that might entail. 📑 Find out more from @yasminibison and @WeandAI: jrf.org.uk/ai-for-public-…
@TanDuarte
The lineup for this event is incredible - please join to hear many different perspectives on the impacts and mitigations of deepfakes bit.ly/4ckuO7C
Scottish AI Alliance@Scottish_AI

How can we better understand and address the complex societal challenges posed by deepfakes and synthetic media? Join @WeandAI on the 8th of July for 'Framing Deepfakes' - find out more and register your tickets here: eventbrite.co.uk/e/framing-deep…

@TanDuarte retweeted
Alex Hanna (اليكس حنٌا)
We need to ban tech CEOs from using the term "democratizing". What the hell does "democratizing creativity" mean? Are y'all blocking access to pens and paints?
Mira Murati@miramurati

At OpenAI, we’re working to advance scientific understanding to help improve human well-being. The AI tools we are building, like Sora, GPT-4o, DALL·E and ChatGPT, are impressive from a technical standpoint. But what really matters is how they’re starting to change the way we interact with information and ideas. A few years ago, in my essay "Language & Coding Creativity", I wrote about how these systems represent a big shift in our relationship with language and creativity.

As we keep improving these tools, our mission stays the same: to make them helpful, safe, easy to use, and available to as many people as possible. We want to help reduce the obstacles that have traditionally kept people from expressing their unique ideas and perspectives. By carefully designing these technologies to collaborate with human creators, I think we can build wonderful tools to help artists have more control, be more innovative, and explore new frontiers of possibility. When we made DALL·E, we worked closely with artists, designers, and storytellers, trying to build a tool that fits into their creative process and helps bring their visions to life.

Moving forward, I believe AI has the potential to democratize creativity on an unprecedented scale. A person’s creative potential should not be limited by their access to resources, education, or industry connections. AI tools could lower the barriers and allow anyone with an idea to create.

At the same time, we must be honest and acknowledge that AI will automate certain tasks. Just like spreadsheets changed things for accountants and bookkeepers, AI tools can do things like writing online ads or making generic images and templates. But it’s important to recognize the difference between temporary creative tasks and the kind that add lasting meaning and value to society. With AI tools taking on more repetitive or mechanistic aspects of the creative process, like generating SEO metadata, we can free up human creators to focus on higher-level creative thinking and choices. This lets artists stay in control of their vision and focus their energy on the most important parts of their work.

To make sure these technologies are developed and used in a way that does the most good and the least harm, we work closely with red-teaming experts from the early stages of research. We also use an iterative approach, gradually releasing tools and carefully studying how they impact the real world to guide future development.

Protecting and strengthening the most valuable aspects of creativity is fundamental to our human experience. Realizing the potential of AI is not guaranteed. It takes carefully building tools and using them responsibly, in close partnership with the creators and communities they’re intended to benefit. This means putting strong safeguards in place, reducing harmful biases, and proactively dealing with potential negative effects. At OpenAI, this is at the core of how we work, and we’ve never wavered in our commitment to this as we’ve released new tools.

@TanDuarte retweeted
Eleonora Lima@limaeleonora1·
My article "AI art and public literacy: the miseducation of Ai-Da the robot" is now available in open access in the journal 'AI and Ethics'. It's part of a collection edited by @TanDuarte exploring the ethical implications of #AIhype. link.springer.com/article/10.100…