


https://bsky.app/profile/taniaduarte.bsky.social

@TanDuarte
No longer here - find me on Bluesky (She/her) Founder @WeandAI | @ImagesofAI







📢 AI Snake Oil is out today! I have too many feelings because this book has been five years in the making. In 2019 I gave a talk on AI Snake Oil and tweeted out the slides. I had no idea my career was about to change. Within a couple of days I had 30 or 40 invitations to write a book or an article. I said no to all of them because I felt I needed to do more research before I could write a book. So when @sayashk joined my team, I knew this was the topic we were going to work on.

The book is based on our research on the limits to prediction, the dangers of predictive decision-making, the reproducibility crisis in machine-learning-based science, the limits of large language models, and the risks of social media algorithms.

The book is not an anti-technology screed. If our point were that all AI is useless, we wouldn't need a book to say it. It is precisely because of AI's usefulness in many areas that overhyped claims and outright snake oil have been so successful in the market. It's hard for people to tell these apart, and we hope our book can help. And while we think the tech industry should have much more accountability, the harms we describe are usually not solely due to tech; much more often, AI acts as an amplifier of existing problems in our society. A recurring pattern we point out in the book is that "broken AI is appealing to broken institutions" (Chapter 8).

Since our book talks about what AI can't do, AI boosters often tell us it's going to be outdated soon. If you think so, I'd love to have a friendly bet! Read the book and tell us what you think will be wrong in two years or five years. (You can start with the intro chapter, which is available online; the link is in my pinned tweet.) The reason we're confident in our assessments is that they are not based on limitations of the technology. Instead, they are based on inherent limits to prediction, grounded in our sociological analysis. (The book owes a huge debt to Princeton sociologist @msalganik; our collaboration with him informed and inspired the book.) We also talk a lot about what we shouldn't use AI for, even if we can.

We've had many moments of hope and trepidation for the book as we worked on it over the years. We're so grateful for the early interest the book has received, especially from the 40,000 people who subscribe to the AI Snake Oil newsletter and the many podcasts that have had us on to talk about the book. We've had great reviews in The New Yorker, Publishers Weekly (featured in its Top 10 science books for Fall 2024), and many other outlets. And we've had a wonderful experience working with Princeton University Press. We're also fortunate to have received incisive peer reviews from @MelMitchell1, @mollycrockett, @chris_bail, and three anonymous reviewers. The book is so much better as a result.

What's in the book:

Chapter 1: Introduction. We begin with a summary of our main arguments in the book. We discuss the definition of AI (and, more importantly, why it is hard to come up with one), how AI is an umbrella term, what we mean by AI snake oil, and who the book is for. The entire introduction is available online; see the link in my pinned tweet.

Chapter 2: How predictive AI goes wrong. Predictive AI is used to make predictions about people: will a defendant fail to show up for trial? Is a patient at high risk of negative health outcomes? Will a student drop out of college? These predictions are then used to make consequential decisions. Developers claim predictive AI is groundbreaking, but in reality it suffers from a number of shortcomings that are hard to fix.

Chapter 3: Can AI predict the future? Are the shortcomings of predictive AI inherent, or can they be resolved? In this chapter, we look at why predicting the future is hard, with or without AI. While we have made consistent progress in some domains, such as weather prediction, we argue that this progress cannot translate to other settings, such as individuals' life outcomes, the success of cultural products like books and movies, or pandemics.

Chapter 4: The long road to generative AI. Recent advances in generative AI can seem sudden, but they build on a series of improvements over seven decades. In this chapter, we retrace the history of computing advances that led to generative AI.

Chapter 5: Is advanced AI an existential threat? Claims about AI wiping out humanity are common. Here, we critically evaluate claims about AI's existential risk and find several shortcomings and fallacies in popular discussions of x-risk. We discuss approaches to defending against AI risks that improve societal resilience regardless of the threat of advanced AI.

Chapter 6: Why can't AI fix social media? One area where AI is heavily used is content moderation on social media platforms. We discuss the current state of AI use on social media and highlight seven reasons why improvements in AI alone are unlikely to solve platforms' content moderation woes.

Chapter 7: Why do myths about AI persist? Companies, researchers, and journalists all contribute to AI hype. We discuss how myths about AI are created and how they persist. In the process, we hope to give you the tools to read AI news with the appropriate skepticism and to identify attempts to sell you snake oil.

Chapter 8: Where do we go from here? While the previous chapter focuses on the supply of snake oil, in the last chapter we look at where the demand for AI snake oil comes from. We also look at the impact of AI on the future of work and the role and limitations of regulation, and we conclude with vignettes of the many possible futures ahead of us. We have the agency to determine which path we end up on, and each of us can play a role.

-------

We hope you find the book useful and look forward to hearing what you think.








Yuval Noah Harari says that in 10 years the world will be run by millions of AI bureaucrats who will make decisions we can't understand about jobs, finance, and government, leading to power shifting from humanity to alien intelligences.






this title is so fucking hard lmao

First up in our Deepfakes Deepdive, starting soon: a fascinating panel exploring what influences public perception, policy, and economic dynamics related to #deepfakes. Tickets are available at bit.ly/4ckuO7C. Then on to more presentations, interviews, and an interactive game!

How can we better understand and address the complex societal challenges posed by deepfakes and synthetic media? Join @WeandAI on the 8th of July for 'Framing Deepfakes' - find out more and register for tickets here: eventbrite.co.uk/e/framing-deep…




read this short paper and thank me later link.springer.com/article/10.100… you're welcome

