Sneha

221 posts

Sneha

@SnehaRevanur

founder @EncodeAction 🇺🇸

Bay Area · Joined November 2018
437 Following · 2.2K Followers
Sneha retweeted
Dean W. Ball
Dean W. Ball@deanwball·
“Describing highly capable frontier AI models as highly capable” is not “fear-mongering.” “Taking AI seriously” is not “fear-mongering.” “Acknowledging obvious, realized or soon-to-be-realized risks” is not “fear-mongering.”

The stark reality is that those who have taken AI capabilities growth seriously have been basically right about most important things in the last three years; those who haven’t have been consistently confused and, what’s worse, frustrated at the world about their own confusion.

You don’t have to be a mega-pessimist or a “doomer” to take AI seriously. You don’t have to advocate for stark top-down controls over AI. You don’t have to support regulatory capture. It is possible to take AI seriously and advocate for a governmental response that is both effective *and* measured.

To the young researchers out there, still trying to make their intellectual fortunes: Do not let anyone tell you otherwise. Do not let anyone bully you into believing otherwise. Think for yourself.
Sneha retweeted
Thomas Woodside 🫜
Thomas Woodside 🫜@Thomas_Woodside·
Usually, if an AI company causes harm, courts will look to the quality of its safety policies to help determine whether it is liable. This proposal removes liability if it had any policy at all, unless it was intentional or reckless (a high bar). Bad idea.
Max Zeff@ZeffMax

Scoop: OpenAI is backing an Illinois state AI bill that would shield AI labs from liability for critical harms caused by their AI models—such as mass deaths or financial disasters—as long as they weren't intentional and the labs have published safety reports on their website.

Sneha retweeted
Nathan Calvin
Nathan Calvin@_NathanCalvin·
Unless I'm mistaken, no agencies responsible for cybersecurity in the US government will be receiving early access to Mythos under Project Glasswing, because Anthropic is still labeled a supply chain risk! Seems bad!
Anthropic@AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

Sneha retweeted
Helen Toner
Helen Toner@hlntnr·
Everyone knows "AGI" doesn't have a single clear definition, but usually people try to fix that by proposing new ones. I wrote about why that approach is doomed. People's conceptions of AGI (o3+tools? intelligence explosion-launcher? conscious AI?) are too different. 🧵
Sneha retweeted
Anton Leicht
Anton Leicht@anton_d_leicht·
Taken seriously, something like this is the best direction for accelerationist policy. OpenAI is asking policymakers to build a world that can handle the speed they’re planning to move at; deployment absorption instead of development friction. But there’s a good and a bad version of future advocacy on this.

Bad, and consistent with the industry’s worst moments, would be to reject all other policy proposals by reference to these ideas instead: ‘not this bill, please - but generally, we’re pro-regulation, look at our New Deal!’. Of course that’s absurd: these ideas are much more fundamental asks and much heavier political lifts than regulating the industry a bit, and they’re not just going to emerge as an organic alternative. On that read, this is comms work to provide cover for regulatory nihilism. I think that probably won’t work for long in the face of ever-rising salience.

The good version would be to redeploy some of the industry’s substantial political funding and lobbying skill toward actually making progress on something like this agenda, finding allies and sponsors and actually getting this done; including not spending big to derail candidates that could champion some of these measures. That would be thankless work - no populist in Congress and few members of the public would openly appreciate OpenAI’s work on this. But it would still help, by getting ahead of the backlash through policy in time: mediocre corporate comms, but good political strategy.

The vague nature and timing of all this doesn’t make me too optimistic that they’re going for the latter plan, but I really hope they do.
Mike Allen@mikeallen

🚨🚨@sama tells me he feels such URGENCY about the power of coming AI models that @OpenAI is unveiling a New Deal for superintelligence - ideas to wake up DC He says AI will soon be so mindbending that we need a new social contract 👇Altman's top 6 ideas axios.com/2026/04/06/beh…

Sneha retweeted
Peter Wildeford🇺🇸🚀
Peter Wildeford🇺🇸🚀@peterwildeford·
AI capabilities are doubling fast, but so is Congressional awareness of AI superintelligence and the risks. You can make a "METR graph" for AI policy and it shows an explosion... and it's bipartisan ->
Sneha retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments. Historically, it is the governments that act to make society legible (e.g. "Seeing like a state" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse.

Government accountability has not been constrained by access (the various branches of government publish an enormous amount of data), it has been constrained by intelligence - the ability to process a lot of raw data, combine it with domain expertise and derive insights. As an example, the 4000-page omnibus bill is "transparent" in principle and in a legal sense, but certainly not in a practical sense for most people. There's a lot more like it: laws, spending bills, federal budgets, freedom of information act responses, lobbying disclosures... Only a few highly trained professionals (investigative journalists) could historically process this information. This bottleneck might dissolve - not only are the professionals further empowered, but a lot more people can participate.

Some examples to be precise: Detailed accounting of spending and budgets, diff tracking of legislation, individual voting trends w.r.t. stated positions or speeches, lobbying and influence (e.g. graph of lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation), procurement and contracting, regulatory capture warning lights, judicial and legal patterns, campaign finance... Local governments might be even more interesting because the governed population is smaller so there is less national coverage: city council meetings, decisions around zoning, policing, schools, utilities...

Certainly, the same tools can easily cut the other way and it's worth being very mindful of that, but I lean optimistic overall that added participation, transparency and accountability will improve democratic, free societies.
(the quoted tweet is half-ish related, but inspired me to post some recent thoughts)
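The lobbying-influence graph sketched above (lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation) is just a directed graph, and tracing an influence chain is an ordinary path search. A toy Python sketch, with every entity name invented purely for illustration:

```python
# Toy influence graph in the shape Karpathy describes.
# All entities below are hypothetical, made up for illustration only.
from collections import defaultdict

edges = [
    ("lobbyist:J. Doe", "firm:Acme Lobbying"),
    ("firm:Acme Lobbying", "client:WidgetCorp"),
    ("client:WidgetCorp", "legislator:Sen. Smith"),
    ("legislator:Sen. Smith", "committee:Commerce"),
    ("committee:Commerce", "vote:HB-101"),
    ("vote:HB-101", "regulation:Widget Safety Rule"),
]

# Adjacency list: node -> list of nodes it points to.
graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def paths_to(graph, start, target, path=None):
    """Depth-first search returning every path from start to target."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        found.extend(paths_to(graph, nxt, target, path))
    return found

chains = paths_to(graph, "lobbyist:J. Doe", "regulation:Widget Safety Rule")
for chain in chains:
    print(" -> ".join(chain))
```

On real disclosure data the interesting work is entity resolution and scale, not the traversal; the point here is only that the structure the post gestures at is computationally simple once the data is cleaned.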
Harry Rushworth@Hrushworth

The British Government is a complicated beast. Dozens of departments, hundreds of public bodies, more corporations than one can count... Such is its complexity that there isn't an org chart for it. Well, there wasn't... Introducing ⚙️Machinery of Government⚙️

Sneha retweeted
jasmine sun
jasmine sun@jasminewsun·
Personal news: I’m joining @TheAtlantic as a contributing writer! It drives me nuts how wide the understanding gap is between SF AI world and everywhere else — especially given the immense public stakes. There's so much AI hype, anxiety, and misinformation; so doing translation and synthesis feels more important than ever. (This role is in addition to Subst*ck, where I’ll keep writing at the same cadence.)

I'm using this excuse to share some rambly media thoughts: namely that tech journalism can & must be great again. The problem with “old media” is that it often refuses to take tech bros at their word, and the problem with “new media” is that it’s often just advertising, which is boring even for the subjects. There’s a doom loop where some reporters write poorly-informed stories, so insiders won’t talk to them, so sourcing is worse; not to mention that most journalists are not based in the communities they cover. This makes people bad-faith, but it also means a lot of AI reporting is 6-12 months behind. Yes, fantastic blogs/podcasts abound — these are the bulk of my info diet — but they are largely insiders talking to insiders, too niche to recommend to policymakers or smart non-AI friends. These fractures are a disaster for shared public knowledge, and make us less prepared to navigate AI well.

Magazine writing offers the ability to rise above the hourly play-by-play (squinting at every new model release, every new jobs report) and get to the bigger questions. I actually think the most impactful AI writing has *months*, not days of longevity! Rather than over-anchoring to any particular forecast, it offers generalized frames for operating under uncertainty.

A few types of pieces I’m especially keen to write:

1) AI culture: A few people’s idiosyncratic personal beliefs regularly change the world. It thus matters tremendously how AI builders view their work, politics, philosophy, and the future. I think most individuals in the AI industry are good and want their tech to do good. Journalists can portray AI workers’ earnest beliefs while being appropriately skeptical of how that can clash with or be shaped by industry incentives, and how it might diverge from the public. "Smart people confront hard moral/intellectual problem" is one of my favorite genres.

2) AI diffusion: AI discourse disproportionately focuses on its impact on software and writing because those are the jobs the messengers do (obviously I’m guilty of this). That makes me want to do more field reporting on AI in education, manufacturing, healthcare, etc: e.g. can I ride along with a team trying to integrate AI tutors into a school? Diffusion is rarely as smooth as economic models predict, and “how AI will go” depends largely on the speed, and where it hits first. Relatedly: AI in the non-western world.

3) AI superusers: Polls show people are highly anxious about AI’s speculative effects but sanguine about their personal use. I think more people should experiment with AI to feel both the pace of progress *and* its jagged edges. While AI can produce slop/surveillance/etc, it can also extend human ability & creativity. I want to paint portraits of people already “living in the future” so we can ask: is that a life we want? The tech is here, but we can choose how to relate to it.

If you have ideas/feedback/etc my DMs are open, and my Signal is jws.27. For me 1-1 conversations are *not* on the record unless we say so. (I always thought this was a weird norm, and in general am happy to answer people's questions about “how journalism works” from my POV because it can be quite opaque.)

(also I'm replacing my blurry macbook selfie with a b&w portrait profile picture to signify reluctant induction into the label of "capital-j Journalist.” I spent most of last year pretending to be funemployed, but I suppose this is graduation. end of an era!)
Sneha retweeted
Alex Imas
Alex Imas@alexolegimas·
My worry is that economists' forecasts of AI's impact on growth are colored too much by historical precedent. Historical precedent is important, but we should be humble about the possibility of AI being more transformative than prior technologies.

For example, the speed of transformation is at least partly determined by adoption and diffusion through the economy. If one's model is based on AI being adopted within existing organizational structures, then diffusion will be quite slow---this is what we're seeing now. But consider the possibility of a 'Coasean Singularity'---a scenario where AI drives the transaction and coordination costs that traditionally dictate firm size to near zero. This could lead to the (potentially fast) emergence of smaller, more nimble AI-first firms, new types of organizations that are outside of our current models, that don't have historical precedent. These firms will not have the sort of bottlenecks of traditional firm structure, and the transformation and resulting impact on economic growth would be much closer to the technological frontier.

I know that many economists are already thinking through these transformative scenarios. My guess is that as these ideas are developed further, forecasts will change as well.
Paul Novosad@paulnovosad

Very interesting. Economists and AI experts have very similar forecasts on what AI will be able to do in 20 years. But the AI experts think it will have a much bigger effect on the economy than the economists do. Why? Because economists study this stuff 🤷‍♂️

Seve Christian
Seve Christian@sevejchristian·
In case you didn’t know that @SnehaRevanur is a fucking badass, you do now!
Sneha@SnehaRevanur


Sneha retweeted
Samuel Hammond 🦉
Samuel Hammond 🦉@hamandcheese·
I understand the concept of permissionless innovation well. I helped review the 2nd edition as your RA. My first published op-ed in 2015 before moving to America was titled "Banish over-regulation, embrace permissionless innovation." Before that I wrote on a group blog dedicated to the work of Deirdre McCloskey. The emancipatory power of technology and innovation runs through my bones, and I am as radical as they come on most questions of deregulation and permissionless innovation per se.

I am also an economist, and economists are taught to think on the margin, as there are always margins where principles fail. Just like the optimal amount of pollution is nonzero, the optimal amount of permissionless innovation is clearly short of the theoretical maximum. That's doubly true through the filter of public choice and the theory of the second best. I am still 99th percentile "permissionless innovation pilled" relative to the US policy establishment. Yet there are some margins where something like the precautionary principle is legitimately warranted. This includes:

- building a recursively self-improving machine superintelligence

End of list.

Now, my broader falling out with Adam is more one-sided. I have lots of friends across the political spectrum. But at some point Adam stopped inviting me to his poker games because I was too "statist" in his words, presumably because at the time I had transitioned to defending social insurance as conducive to economic freedom, though I cannot be sure as this was years ago. Regardless, the sense I got from knowing and working with Adam is that "permissionless innovation" is less a heuristic than a sacred principle for which admission of an edge case signifies heresy. This is not a very economic way of thinking, needless to say. More generally, I came away from our professional overlap (and my experience in the libertarian policy ecosystem more generally) rather disenchanted.
Rather than grapple with edge cases and hard policy questions, I more often than not saw arguments deployed in support of a presupposed conclusion. The education I received at GMU was often less about understanding and applying core economic concepts than supplying me with a ready bag of such arguments to whip out at a moment's notice. This was perhaps more stark for me, as I entered GMU econ with a preexisting graduate degree in economics from a normal university, so I had something to compare it against. I still valued my time at GMU greatly, but took to describing it less as an econ program than an MPA in free market apologetics.

Adam's approach to policy in some ways exemplifies this failure mode. Thanks to Evernote, he has accumulated an unusually heavy bag of arguments, quotes, case studies, and rhetorical tricks to pull from. These then substitute for actual thinking and analysis in favor of a kind of permissionless innovation, paint-by-numbers evangelism. There is nothing wrong with that per se. The world needs evangelists, "thought leaders," men of "one big idea," etc. Winning in politics and policy is to a large extent about memes and cognitive capture. The issue again remains the edge cases; cases where the stochastic parrots among us struggle to generalize outside their training distribution.

AGI / ASI is one such unprecedented, world-historic edge case. That does not mean throwing away one's values or principles. But it does require doing *actual intellectual labor* with no ready-made answers. We are on the cusp of the most radical socioeconomic transformation in human history. It ought to give Adam pause that I think we should have a modicum of caution, and maybe even a touch of proactive regulation, *in spite* of my ability to cite the permissionless innovation credo chapter and verse. But alas, Adam does not "update," as the kids say. That is simply not his role.
And that is just fine, provided policymakers know to distinguish genuine technological statecraft from a windup doll with a catchphrase.
Adam Thierer@AdamThierer

8) Is permissionless innovation a partisan right-wing thing? No, permissionless innovation is not a partisan term or movement. At least it shouldn’t be. In my book and other writings, I have pointed out how the Clinton administration’s 1997 Framework for Global Electronic Commerce is probably the most concise articulation of permissionless innovation that any government has ever promulgated. “The Internet should develop as a market driven arena not a regulated industry,” it noted, while “governments should avoid undue restrictions on electronic commerce.” It also said “parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention.” Finally, “where governmental involvement is needed,” the Framework continued, “its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.” That's the permissionless innovation vision in a nutshell.

People on opposite sides of the political spectrum are often united in the belief that permissionless innovation is crucial to prosperity and human flourishing. For example, while they both once worked at PayPal, Reid Hoffman and David Sacks went down very different paths politically after that. Hoffman became the co-founder of LinkedIn and a leading supporter of Democrats and President Biden. By contrast, Sacks became a successful venture capitalist and a strong supporter of Donald Trump, going on to serve as AI and crypto czar in the Trump administration. They disagree bitterly about many issues. Despite their differences, however, both Hoffman and Sacks agree that permissionless innovation powered the digital revolution and can propel the AI revolution next. “Permissionless innovation in AI is working more effectively than ever,” Reid Hoffman argued in a 2025 tweet. “It's what will keep the US at the forefront of AI development.
It is arguably the most important time to move quickly. Anti-tech critics who insist on hitting the brakes, full stop, are misguided,” he argued. David Sacks concurs, noting in a late 2025 podcast that, “the thing that has really made Silicon Valley special over the past several decades is permissionless innovation.” By contrast, what is being contemplated for AI “is an approval system for both software and hardware” where “you have to go to Washington to get permission before you release a new model.” This would “drastically slow down innovation and make America less competitive,” Sacks argued.

At the same time, however, the concept of permissionless innovation has also come under fire from technocrats on both left and right. Left-leaning scholars at Brookings celebrate what they see as “the end of permissionless innovation,” and progressive academics have attacked the concept relentlessly on many different grounds. Meanwhile, some conservatives such as Trump-appointed FTC commissioner Mark Meador decry permissionless innovation as “a progressive impulse, not a conservative one.” He argues it “is antithetical to the conservative’s considered preservation of custom and tradition and our commitment to the rule of law.”

Sometimes the concept of permissionless innovation has even come under fire from groups that nominally support expanding innovation opportunities, but find something distasteful about the term—at least as they (mis)conceptualize it. For example, two top officials at the Foundation for American Innovation say permissionless innovation amounts to little more than a “legitimizing facade for anarcho-capitalists, tech bros, and cynical corporate flacks” and represents a “shallow ideological slogan.” And this comes from a group with "innovation" in its title! There are two things that generally unify the varied critics of permissionless innovation.
The first is a fundamental misunderstanding of what the term represents, or an intentional effort to equate it with anarchism or a self-serving corporate agenda, even though it is nothing of the sort. Properly understood, permissionless innovation is about creating a policy environment conducive to new entry, entrepreneurialism, creative destruction, and openness to ongoing disruption of the status quo. It is all too often the case that established players and “corporate flacks” are the ones standing in the way of progress. I’ve often joked with people that, when I retire, I will be writing one final book entitled: “Why Businesses Make the Worst Capitalists.” I’ve spent decades fighting corporations, trade associations, and other special interests who only care about defending their own narrow interests, and not a broader environment of innovation freedom. They are every bit as bad as the extremist academic reactionaries who rail against permissionless innovation and, in many cases, those private interests do more damage because of how politically connected they are. True defenders of permissionless innovation understand that our greatest fight is often against those who would use the power of government to protect themselves from technological change while throwing their competitors under the bus. Yet, for whatever reason, many critics of permissionless innovation like to rail against the term when they see cronyist corporations or special interests gain advantage using political leverage. That is a plainly incorrect understanding of the term. The second factor that often unifies opponents of permissionless innovation is a technocratic impulse to control the future according to some sort of grand blueprint or elitist design. 
In The Future and Its Enemies, Postrel explained how the opponents of dynamism (another word for what permissionless innovation embodies) are unified by a strong distaste of “a future that is dynamic and inherently unstable” and that is full of “complex messiness.” The critics simply cannot tolerate that inherent messiness and, therefore, they look to soothe us “with the reassurance that some authority will make everything turn out right,” she argued. “They promise to make the world safe and predictable, if only we will trust them to design the future, if only they can impose their uniform plans,” Postrel noted. This is precisely why we see “horseshoe theory” at work in such a major way in modern tech policy debates. At some point, the reactionaries and technocrats on the edges of the political spectrum bend around and meet at a destination called CONTROL. Of course, they all have a different central blueprint, and disagree about the motivations and methods for how to get to that final destination. But, nominally, they all hate the idea of permissionless innovation because it embraces the benefits of “complex messiness,” bottom-up evolutionary processes, spontaneous order, and freedom of choice. Paternalistic elitists just cannot tolerate any of that because it runs counter to their desired control plans for society.

Sneha retweeted
Nathan Calvin
Nathan Calvin@_NathanCalvin·
The AI doc manages to convey profound urgency, anxiety, and hope in equal measures in a beautiful package, and the focus on Daniel's story brings out emotion and texture from even familiar interviewees. See it and bring your friends/family!
Sneha@SnehaRevanur


Sneha
Sneha@SnehaRevanur·
I’ve been to AMC Georgetown twice. Once in 2022 to watch Everything Everywhere All At Once, and again this weekend to watch the Everything Everywhere All At Once director’s new movie about AI. This time, I got to see myself on the screen. When I was interviewed for @theaidocfilm in fall 2024, I was cautiously optimistic. It is really hard to make an evergreen movie about the fastest changing technology ever, and to feature a bunch of people who disagree with each other (intensely) and yet make them all proud. But the filmmakers killed it. The AI Doc is informative and moving and also just a genuinely fun watch. My message - that there’s a bridge humanity must cross to reach an amazing future, and we can act urgently to safely get to the other side - was represented well. I didn’t feel pigeonholed or caricatured at all. There is no better feeling than seeing my friends and family all fired up from a movie that masterfully distills what I’ve been talking their heads off about for years. The AI Doc is truly a must watch. Go run to a theater near you - I’m excited to hear what you think :)
Sneha retweeted
Hadas Gold
Hadas Gold@Hadas_Gold·
BREAKING: Anthropic has been GRANTED a preliminary injunction re: the Pentagon 'supply chain risk' designation by Judge Rita Lin in California, but the court is allowing a stay for one week storage.courtlistener.com/recap/gov.usco…