Alex Engler

1.6K posts

@AlexCEngler

Penn Center for Media, Tech & Democracy | "If you can keep it" | Alum: NSC/OSTP @BrookingsGov @urbaninstitute @UChicagoCAPP @McCourtSchool

Washington, D.C. · Joined July 2011
2.2K Following · 4.3K Followers
Pinned Tweet
Alex Engler@AlexCEngler·
Join our 1st public event on October 21, 𝗧𝗵𝗲 𝗗𝗲𝗺𝗼𝗰𝗿𝗮𝘁𝗶𝗰 𝗥𝗲𝗽𝗲𝗿𝗰𝘂𝘀𝘀𝗶𝗼𝗻𝘀 𝗼𝗳 𝗠𝗲𝗱𝗶𝗮 𝗙𝗿𝗮𝗴𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻! Our events aim to share essential research on what ails, and how we can improve, the information environment. …iafragmentation-online.eventbrite.com
1 reply · 2 reposts · 1 like · 821 views
Alex Engler@AlexCEngler·
The dramatic rise in use of LLMs in computational social science over the past few years necessitates this kind of introspection. I think about how long we debated just clustering standard errors. It’s going to take a long while to work on methodological validity here.
Eddie Yang@ey_985

New paper: LLMs are increasingly used to label data in political science. But how reliable are these annotations, and what are the consequences for scientific findings? What are best practices? Some new findings from a large empirical evaluation. Paper: eddieyang.net/research/llm_a…

1 reply · 0 reposts · 4 likes · 352 views
Alex Engler reposted
Tyler Johnston@TylerJnstn·
I, too, made the mistake of *checks notes* taking OpenAI's charitable mission seriously and literally. In return, got a knock at my door in Oklahoma with a demand for every text/email/document that, in the "broadest sense permitted," relates to OpenAI's governance and investors.
Nathan Calvin@_NathanCalvin

One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened: 🧵

178 replies · 996 reposts · 5.1K likes · 4.2M views
Alex Engler reposted
Nathan Calvin@_NathanCalvin·
One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened: 🧵
310 replies · 1.2K reposts · 6.3K likes · 6.7M views
Alex Engler@AlexCEngler·
The media continues to fragment, with more outlets, influencers, podcasters, platforms, and AI systems, while mass media has become ideologically aligned. Is this an opportunity to break political echo chambers, or will fragmentation undermine what shared reality we have left?
1 reply · 0 reposts · 0 likes · 99 views
Alex Engler@AlexCEngler·
An unprecedented assault on freedom of speech and American democracy. What you choose to do tomorrow is what you will have done when autocracy came to America. You have more autonomy than you admit to yourself. Do something that will make you proud.
0 replies · 0 reposts · 1 like · 129 views
Alex Engler reposted
Chris Hayes@chrislhayes·
The countries where comedians can't mock the leader on late night TV are not really ones you want to live in.
11.5K replies · 14.9K reposts · 122.6K likes · 11.1M views
Alex Engler reposted
Republicans against Trump@RpsAgainstTrump·
Not a big fan of Jimmy Kimmel, but if you think his show should be canceled for this, you’re a hack and a total fraud who should never pretend to care about free speech
4.8K replies · 10.8K reposts · 110.5K likes · 5.4M views
Alex Engler reposted
Jon Favreau@jonfavs·
I'm just incredibly sad for his family and his young kids, genuinely shaken about where this country is headed, and praying that we can find a way to walk ourselves back from the brink. We all have agency here. We all have the ability to choose our words carefully even as we forcefully defend our beliefs. We all have the capacity for grace. We don't have to make things worse.
63 replies · 144 reposts · 2.3K likes · 169.6K views
Alex Engler@AlexCEngler·
@thomaschattwill This is obviously untrue - my feed is exclusively people condemning the violence - and it’s irresponsible to make such uninformed comments at a time like this.
0 replies · 0 reposts · 11 likes · 315 views
Alex Engler@AlexCEngler·
Violence is abhorrent in all its forms, and political violence is, further, the antithesis of democracy. We must be unified in its condemnation, or else we will all find ourselves less safe.
0 replies · 0 reposts · 2 likes · 212 views
Alex Engler reposted
Arvind Narayanan@random_walker·
I’m excited to announce I’ve started a YouTube channel. I plan to publish videos regularly explaining my views on AI and its present and future impacts. My first video asks: What happens if there’s an AI crash? youtube.com/watch?v=VDfyuB… This is my first foray into video (beyond my usual academic talks), so I would love to receive feedback on what you would like to see. Thank you!

Intro: What if there’s a bubble?

AI companies have invested over a trillion dollars into data centers. They're promising that it'll be worth it because AI will transform the economy. But so far, that's not happening. There doesn't seem to be any measurable uptick in the GDP growth rate despite all the investment. This has led to a chorus of voices saying AI is a bubble, and it's easy to see why. Even Mark Zuckerberg has been candid that Meta might be spending too much money simply because of FOMO.

Zuckerberg: "I think that there's a meaningful chance that a lot of the companies are overbuilding now, but the downside of being behind is that you're out of position for the most important technology for the next 10-15 years."

Suppose there is indeed an AI bubble and it bursts. I'm not saying that it will happen, but it could. Let's imagine what that scenario will look like. What will the consequences be?

Will an AI crash look like the dot com crash?

This wouldn't be the first cycle of AI hype to be corrected. It has happened twice in the past, in periods that have been called AI winters. During these periods, research funding dried up and the press became negative. Back then, of course, AI was just a research field. Now it's a big industry, so the consequences of a bubble will surely be bigger. Maybe it will be more like the dot com crash. Many companies went out of business. Enterprises paused their digitization plans. The whole of e-commerce was set back by many years. Entire business models like online grocery delivery were poisoned, and no one would fund them for well over a decade.

We might expect something similar to happen if there is an AI crash. Products like ChatGPT will no longer be offered because they aren't profitable and the venture money is no longer rolling in. Enterprises will pause their AI strategies and deployments. Hype will turn sour and research will be cut back. Perhaps most significantly, countless people will lose their jobs, just like in the dot com crash, which led to between half a million and a million job losses in the tech sector alone, cascading effects in many other sectors, and a 50% increase in the unemployment rate over the next few years. Maybe. Or maybe the dot com bubble is not a good model for an AI bubble. So how can we think clearly about this?

Unit economics is a helpful concept

Now, the AI industry is an outlier in terms of the scale of investment, but the fact that AI companies aren't profitable is not at all unusual. Early in the lifecycle of any new industry sector, most startups won't be profitable. So the key question is this: Is it because they're doing a lot of research and building out infrastructure, in which case the costs will come down as the companies mature? Or is it because the basic cost of providing their products or services is so high, in which case the costs won't come down?

For example, the Juicero was a juice machine that famously failed after being introduced at a ridiculous price. It was destined to flop because the machine was extremely over-engineered. The parts alone cost hundreds of dollars, so there was no way the company could have dropped it to a reasonable price point without taking a loss on every unit. In the same way, there have been subscription services that lost money on every single customer, and so were widely predicted to fail.

Now let's look at generative AI. These companies have three big kinds of costs: talent, training, and inference. First, talent. AI companies are paying obscene amounts of money to hire top engineers. Next, training. Training models can cost hundreds of millions of dollars in hardware and electricity, especially once you factor in the computational needs of all the research that has to happen to push the state of the art and develop new model capabilities. And companies are training new and bigger models all the time. And finally, inference. Inference means running the models so that people can actually use them. Now, we know that companies are spending a lot on data centers, but we don't know how that breaks down into training and inference.

Inference costs are low and falling

But we do know what it costs for a chatbot to respond to a single query or to output a given amount of text. And it's remarkably little. You can generate thousands and thousands of pages of text for just $1, and that cost has decreased a hundredfold in the last couple of years because engineers have been able to speed up these models by making them smaller and more efficient. Now, there is a meme that using ChatGPT is extremely energy intensive, but this is just not true when it comes to the regular use of chatbots, though it is somewhat true when it comes to generating images or if you use the feature where it goes off on its own and does research for a while.

So knowing all this, suppose there is a market correction or crash. AI companies will decrease or maybe even eliminate their expenditure on all that expensive research and training of new models, as well as the ridiculous amounts of money that they're paying researchers. But even in this scenario, they will be able to offer chatbots like ChatGPT, because the inference costs, that is, the cost of actually operating these models, are dirt cheap. They might have to limit access to a few features, like image generation, that are particularly expensive. OpenAI has over 5,000 employees today, but actually running a chatbot only requires a handful of engineers. In fact, there are chatbot companies with very few employees.

So if you cut out the research, operating a chatbot can actually be extremely profitable, because people are willing to pay $20 a month, which translates to $240 a year, for a subscription. That's a lot more than ad-based apps typically make per user per year. Even if companies like OpenAI were to go out of business, smaller AI companies would step in to take their place. There are many openly available AI models which might not be as good as the leading ones, but whose quality is good enough for everyday users.

It’s the same story beyond chatbots

Now, of course, chatbots are not the only kind of generative AI product, but in most other cases I've looked at, it's the same story. For example, AI agents for assisting with coding or software engineering are a lot more expensive to run than chatbots, but at the same time, the value that they bring is also much greater. If such a product makes a software developer, let's say, 20% more productive, that means it brings tens of thousands of dollars per year of value to a software company for every single developer who uses it. Of course, not every developer benefits from AI assistance. It depends on the type of project, the developer's preferences, and other factors. But for many people, including me, it's hard to even imagine going back to the way that programming was done before AI. Once you've made the adjustment of learning how to delegate the tedious, boring, and error-prone parts of software engineering to AI (admittedly, this adjustment takes a while), going back feels like returning to punch cards instead of keyboards.

Here's another AI application that I looked into. One of the most computationally intensive and expensive types of generative AI is video generation. The Wall Street Journal used AI to create a moderately high-quality YouTube video as part of learning about the process of using AI to create such videos. And this is what they reported: the total cost would've been around a thousand dollars for Google's and Runway's tools. Now, that number seems like a lot, but it's much less expensive than the cost of producing a video of comparable quality in the traditional way.

An AI crash will look nothing like the dot com crash

So based on all this, my view is that an AI crash will look nothing like the dot com crash. It's true that in both cases we see outrageous valuations of companies, and outrageous expenditures as well. But the dot com bubble was based entirely on the expectation of future profits, and those profits never materialized because customers just weren't interested. In the case of AI, it's true that there is a lot of hype, but that hype is layered on top of a technology that's already bringing lots of real value to lots of people. It's being used by hundreds of millions of people every day, and a growing number of them are paying $20 a month or even $200 a month. All of that, I think, will continue.

I'm making a subtle point here, so let me be clear. I'm not saying that there won't be a crash. I'm saying that even if there is a crash, its effect will be on the research that's going into AI and the development of new models. The use of existing models and products will keep going strong. Of course, if there is a crash, the way that the media talks about and hypes up AI might change, and I think that's probably for the better.

And now for the big question: what will a potential AI crash do to jobs? Once again, I think the AI situation is very different from the internet bubble. The dot com crash was so harmful because, in the run-up to it, many internet companies were massively overstaffed, especially considering their lack of a business model. But in AI, despite all the hype, the tech sector has actually been contracting over the last few years, which is seen as a period of correction for the hiring that happened during the pandemic. And as for the rest of the economy outside of tech, AI is seen as a reason to cut jobs rather than to hire. So if there is a crash, some of these AI engineers may no longer receive enormous paychecks, but they'll have no trouble finding other jobs in tech, and companies no longer having AI as a readily believable excuse to cut jobs will probably be a good thing for workers on balance.

The long view

Even though AI is different from the dot com bubble, there are a lot of similarities between AI adoption and internet adoption. Over a couple of decades, the internet gradually became the medium through which all knowledge work happens. Those of us who were around in the nineties used to log onto the internet, do stuff quickly because it was so expensive, and log back out. But now we're online all the time, and the hard part is to switch off our devices. In my book AI Snake Oil, written with Sayash Kapoor, we predict that the same thing will happen with AI, for better or for worse. It will just be there all the time in the background. But getting to that point will take time. In a paper called "AI as Normal Technology," we talk about the gradual, decades-long process by which workers and companies will have to adapt to AI and, in turn, adapt AI to us. The key thing is that most of this work will happen outside the AI companies, which means it'll be relatively unaffected by what happens inside the AI companies, including whether there is a crash.

So in short, I think an AI winter is unlikely, and whether we like it or not, AI is a technology that's here to stay in our lives. Regardless of what happens in the market, we need to get used to it and start figuring out what it means for each of us and for society as a whole. Let me know what you think in the comments. I'm Arvind Narayanan. Thank you for watching.
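The transcript's unit-economics argument reduces to simple arithmetic, which can be sketched as a back-of-envelope calculation. The $20/month subscription price and "thousands of pages of text for $1" are the transcript's rough figures; the per-user usage volume and exact pages-per-dollar rate are illustrative assumptions, not measured data.

```python
# Back-of-envelope sketch of the chatbot unit-economics argument.
# Figures are rough assumptions for illustration, not measured data.

SUBSCRIPTION_PER_MONTH = 20.0    # consumer plan price cited in the transcript
PAGES_PER_DOLLAR = 5_000         # assumption: "thousands of pages for $1"
PAGES_PER_USER_PER_MONTH = 200   # assumption: a heavy user's monthly output

# Revenue and inference cost per subscriber per year
annual_revenue = SUBSCRIPTION_PER_MONTH * 12
annual_inference_cost = (PAGES_PER_USER_PER_MONTH * 12) / PAGES_PER_DOLLAR

# Margin ignoring research, training, and talent costs
gross_margin = (annual_revenue - annual_inference_cost) / annual_revenue

print(f"Annual revenue per subscriber:  ${annual_revenue:.2f}")
print(f"Annual inference cost per user: ${annual_inference_cost:.2f}")
print(f"Gross margin (inference only):  {gross_margin:.1%}")
```

Even if the usage assumption is off by an order of magnitude, the margin stays very high, which is the transcript's point: serving existing models is cheap relative to a subscription, and it is the research, training, and talent spending that dominates AI companies' costs.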
18 replies · 59 reposts · 355 likes · 61.2K views
Alex Engler@AlexCEngler·
Republican support for ignoring opposing court orders has roughly doubled, with as many as 25-30% of MAGA Republicans supporting non-compliance, according to @PRL_Tweets prlpublic.s3.us-east-1.amazonaws.com/reports/Courts…
0 replies · 0 reposts · 2 likes · 191 views