Sneha
@SnehaRevanur

243 posts

founder @EncodeAction 🇺🇸
SF, sometimes DC · Joined November 2018
446 Following · 2.3K Followers
Sneha retweeted
Kelsey Piper @KelseyTuoc
I have a bunch of secret AI benchmarks I only reveal when they fall, and today one did. I give the AI 1,000 words written by me and never published, and ask it who the author is. The models generally give flattering wrong answers (see ChatGPT's, below).
[image attached]
63 replies · 97 reposts · 2.3K likes · 436.6K views
Sneha retweeted
Nathan Calvin @_NathanCalvin
If you had told me two years ago, when I was working with Senator Wiener on SB 1047, that someone would try to rewrite history to claim he was soft on AI regulation, I would have laughed in your face. But here we are. So let's get the facts straight.

Scott Wiener wrote the bill the AI industry lobbied Congress to preempt. Senator Ted Cruz has publicly cited SB 1047 as one of the reasons motivating his push for federal preemption to block states from regulating AI. That is not the record of someone soft on the industry.

A few more facts worth getting straight:

• Senator Wiener introduced SB 1047 in February 2024. It set off a firestorm of debate and became the first major piece of AI safety legislation passed by a state legislature anywhere in the country before being vetoed by Governor Newsom. The Wikipedia page has a lengthy list of supporters and opponents if you're curious. You are free to criticize the bill. The idea that Wiener was too soft on industry in pushing it is not a serious claim.

• After the veto, Governor Newsom convened an AI working group that included experts who had been critical of SB 1047's approach. They recommended a revised bill focused on transparency, incident reporting, and whistleblower protections, rather than mandated guardrails or expanded liability for misuse.

• Senator Wiener incorporated those recommendations into SB 53 in July 2025. What followed were intense rounds of negotiations with powerful AI industry actors, particularly Google, Amazon, and Meta. (I remember this well, because during those negotiations I was personally subpoenaed by OpenAI for all my communications on SB 53.)

• Politico reported, and I can confirm, that Senator Wiener negotiated fiercely over that summer, repeatedly threatening to walk and blow up negotiations if the bill was compromised. Yes, changes were made. SB 53 is both a landmark first-in-the-nation AI safety law and a bill that will need to be strengthened in future legislation. But the notion that Senator Wiener wanted those changes or supported the weakening is laughable. He pushed for the strongest bill that could still be signed into law. He did not want SB 53 vetoed the way SB 1047 was.

• Your video conflates two different AI super PACs. Leading the Future has been explicit about supporting federal preemption to remove state AI protections, and has aggressively attacked politicians like Alex Bores who support AI regulation. You can look at my feed to see what I think of them. Public First is a different PAC. It receives funding from Anthropic, among others. It was created specifically to counter Leading the Future, it has defended Alex Bores from LTF attacks, and it is fighting against AI preemption and supporting states in their efforts to enact AI protections. It is the PAC supporting Senator Wiener in this race.

For what it's worth, I wish we didn't have super PACs at all. But Public First, created in response to the enormous funding Leading the Future has spun up, is an important counterweight in the world as it is, supporting candidates with a track record of delivering on AI regulation and AI safety. Criticize Public First, SB 53, or anything else you want to. But this video is misleading, and it is insulting to those of us who spent years working with Senator Wiener to pass the first AI safety laws in the country against fierce industry opposition. Insofar as this was a genuine misunderstanding, I would appreciate you saying so directly.
Saikat Chakrabarti for Congress@saikatc

The AI lobby has entered our race to bankroll my opponent, Scott Wiener. That's because he worked with them to water down AI regulations in California. As his reward, he gets a huge super PAC. I'm not taking any corporate PAC or lobbyist money, and in Congress, I’m going to end this kind of legalized bribery. AI oligarchs want to control your future. I will fight to put people back in control.

0 replies · 13 reposts · 91 likes · 11.6K views
Sneha retweeted
Thomas Woodside 🫜 @Thomas_Woodside
I can say with authority that this is complete nonsense. I've been advocating for AI regulation in California for two years. In 2024, I worked with @Scott_Wiener on SB 1047, an AI regulation bill that was vetoed. And I spent my 2025 Labor Day weekend at a Marriott in Sacramento so that I could advise him during the SB 53 negotiations referred to inaccurately in this video.

Scott Wiener has fought incredibly hard on this issue. He burned huge amounts of political capital to ensure the strongest possible version of SB 53 was signed into law. It was, and we now have a good law on the books, which paved the way for others to follow. Previously, there were no laws in California regulating AI companies' management of catastrophic risk.
Saikat Chakrabarti for Congress @saikatc
[quoted post: same as above]

5 replies · 16 reposts · 158 likes · 9.5K views
Sneha @SnehaRevanur
Pretty jarring to see the side-by-side. It’s a bit insulting to voters to assume that no one will catch on to OpenAI’s revealed preferences and that the company (which otherwise has many earnest employees in its ranks) will be judged as if it’s somehow amputated from its in-house Global Affairs team and the super PAC it funds
Nathan Calvin @_NathanCalvin
[quoted post: "On initial read this plan struck me as similar to OpenAI's 'Industrial Policy for the Intelligence Age' paper..." (full text in the retweet below)]

1 reply · 5 reposts · 79 likes · 9.5K views
Sneha retweeted
Nathan Calvin @_NathanCalvin
On initial read this plan struck me as similar to OpenAI's "Industrial Policy for the Intelligence Age" paper, but reading them both again, the similarities are more striking than I expected. You would really think that if OpenAI really believed in making these policies happen, they would support Bores's candidacy, or at the least not back a super PAC spending millions to attack him. (The OpenAI-funded super PAC network even frequently puts out content about how concerns about job loss are a doomer hoax!)

Similarities:

1. Citizens get a stake in AI profits
Bores: "if AI dramatically increases productivity and concentrates wealth, the American people have a stake in those gains"
OpenAI: "Create a Public Wealth Fund that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth"

2. Change the tax code to favor labor
Bores: "If AI can substitute for labor rather than complement it, then our tax code is actively subsidizing job elimination. We encourage companies to invest in AI by making it cheaper through tax breaks, while taxing the wages of the workers being displaced."
OpenAI: "As AI reshapes work and production, the composition of economic activity may shift—expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes. This could erode the tax base that funds core programs... Policymakers could rebalance the tax base by increasing reliance on capital-based revenues... and by exploring new approaches such as taxes related to automated labor."

3. Trigger-based safety nets
Bores: "The program would be tied to clear economic triggers—such as sustained declines in labor force participation, wage compression in affected sectors, or rapid increases in AI-driven productivity without corresponding job growth—to ensure it activates based on real-world conditions, not political discretion."
OpenAI: "Define a package of temporary, expanded safety nets... that activates automatically when these metrics exceed pre-defined thresholds. When disruption rises above those levels, support would scale up; as conditions stabilize, it would phase out."
Alex Bores@AlexBores

Today, I’m proud to announce the AI Dividend, my plan to prepare for the AI economy with direct payments to Americans funded by tax reform that simultaneously incentivizes hiring humans instead of AI. Read the full plan here: alexbores.nyc/ai-dividend

3 replies · 6 reposts · 71 likes · 16.8K views
Sneha retweeted
sev field @sevdeawesome
Sharing my paper with @DavidSKrueger and @raymondadouglas! We interviewed 25 researchers from DeepMind, OpenAI, Anthropic, Meta, Berkeley, Princeton & Stanford about what happens when AI helps develop its own successor: AIs automating AI research and development. 🧵
6 replies · 31 reposts · 201 likes · 24.9K views
Sneha retweeted
Kevin Roose @kevinroose
New column: I went to visit @METR_Evals, the 30-person AI nonprofit that makes the Most Important Chart in the World. I learned a lot, but the most striking thing was how soon some of them think AI R&D could be fully automated. (This year!) nytimes.com/2026/04/17/tec…
11 replies · 56 reposts · 530 likes · 90.3K views
Sneha retweeted
Matthew Yglesias @mattyglesias
“Our product will generate mass unemployment and possible mass extinction” isn’t a bad messaging choice; it’s a widely held, sincere view among the people building AI. slowboring.com/p/its-not-bad-…
[image attached]
43 replies · 72 reposts · 513 likes · 181.6K views
Sneha @SnehaRevanur
@joel_bkr absolutely! would love to hear yours too
0 replies · 0 reposts · 0 likes · 95 views
Joel Becker @joel_bkr
i found this tweet super thoughtful. in some ways mirrors my own journey. consider: what if the AI people continue to be right?
Sneha @SnehaRevanur
[quoted post: "I thought this was a great piece, not least because it captured so much of my own journey into taking AI seriously..." (full text in Sneha's own post below)]

3 replies · 0 reposts · 20 likes · 2.3K views
Chris Painter @ChrisPainterYup
Great post. Rare to see people sincerely talk about having changed their mind in public.
Sneha @SnehaRevanur
[quoted post: "I thought this was a great piece..." (full text in Sneha's own post below)]

1 reply · 2 reposts · 26 likes · 1.8K views
Sneha retweeted
Steven Adler @sjgadler
Dwarkesh: Why would we want to sell China the materials for a serious cyberweapon? It's like selling them nukes with a casing that says 'made by Boeing' and claiming that's good for the US.

Jensen: Comparing AI to nukes is lunacy. Enriched uranium is a lousy analogy. It's an illogical analogy. What we have to recognize is that AI is a five-layered cake.
209 replies · 129 reposts · 2.9K likes · 1.2M views
Sneha retweeted
Jason Wolfe @w01fe
I like Chris, but I really disagree with the positions presented in this article. I believe our job in the AI industry isn't just to explain why AI will be good for people. I believe our job should be to earn trust by making the benefits real, being honest about risks and uncertainty, sharing what we learn, measuring real-world impacts, and supporting public oversight and resilience. And while I of course agree that the recent violence is terrible, unjustified, and may have been encouraged by a small number of bad actors, I think it’s bad for the public discourse to lump all AI critics together as “doomers” and suggest that it’s inappropriate for them to express their concerns.
The San Francisco Standard@sfstandard

OpenAI’s global policy chief, Chris Lehane, thinks the discussion around AI has gotten out of hand. "When you put some of those thoughts and ideas out there, they do have consequences.” 📝: @ceodonovan sfstandard.com/2026/04/15/ope…

29 replies · 45 reposts · 331 likes · 45.4K views
Sneha retweeted
Alec Stapp @AlecStapp
When NVIDIA sells AI chips to China, that means fewer AI chips for American companies. The three main inputs needed to produce the H200s for China overlap heavily with the inputs needed to make chips for US customers. Jensen’s response is word salad because it’s indefensible.
[image attached]
Dwarkesh Patel@dwarkesh_sp

Distilled recap of the back-and-forth with Jensen on export controls:

Dwarkesh: Wouldn’t selling Nvidia chips to China enable them to train models like Claude Mythos with cyber offensive capabilities that would be threats to American companies and national security?

Jensen: First of all, Mythos was trained on fairly mundane capacity and a fairly mundane amount of it by an extraordinary company. The amount of capacity and the type of compute it was trained on is abundantly available in China.

Dwarkesh: With that, could they eventually train a model like Mythos? Yes. But the question is, because we have more FLOPs, American labs are able to get to this level of capabilities first. Furthermore, even if they trained a model like this, the ability to deploy it at scale matters. If you had a cyber hacker, it's much more dangerous if they have a million of them versus a thousand of them.

Jensen: Your premise is just wrong. The fact of the matter is their AI development is going just fine. The best AI researchers in the world, because they are limited in compute, also come up with extremely smart algorithms. DeepSeek is not an inconsequential advance. The day that DeepSeek comes out on Huawei first, that is a horrible outcome for our nation.

Dwarkesh: Currently, you can have a model like DeepSeek that can run on any accelerator if it's open source. Why would that stop being the case in the future?

Jensen: Suppose it optimizes for Huawei. Suppose it optimizes for their architecture. It would put others at a disadvantage. As AI diffuses out into the rest of the world, their standards and their tech stack will become superior to ours because their models are open.

Dwarkesh: Tesla sold extremely good electric vehicles to China for a long time. iPhones are sold in China. They didn't cause some lock-in. China will still make their version of EVs, and they're dominating, or smartphones, they're dominating.

Jensen: We are not a car. The fact that I can buy this car brand one day and use another car brand another day is easy. Computing is not like that. There's a reason why x86 still exists. There's a reason why Arm is so sticky. These ecosystems are hard to replace.

Dwarkesh: It's just hard to imagine that there's a long-term lock-in to the Chinese ecosystem, even if they have this slightly better open-source model for a while. American labs port across accelerators constantly. Anthropic's models are run on GPUs, they're run on Trainium, they're run on TPUs. There are so many things you can do, from distilling to a model that's well fit for your chips.

Jensen: China is the largest contributor to open source software in the world. China's the largest contributor to open models in the world. Today it's built on the American tech stack, Nvidia’s. Fact. All five layers of the tech stack for AI are important. The United States ought to go win all five of them. In a few years' time, I'm making you the prediction that when we want American technology to be diffused around the world—out to India, out to the Middle East, out to Africa, out to Southeast Asia—on that day, I will tell you exactly about today's conversation, about how your policy ... caused the United States to concede the second largest market in the world for no good reason at all.

27 replies · 43 reposts · 304 likes · 39K views
dave kasten @David_Kasten
Sneha is one of the most impressive leaders in this space, in large part because she just...does what she thinks is right. She doesn't even see why saying something like this takes guts for most folks.
Sneha @SnehaRevanur
[quoted post: "I thought this was a great piece..." (full text in Sneha's own post below)]

2 replies · 0 reposts · 10 likes · 1.1K views
Sneha @SnehaRevanur
@austinc3301 Yes circa 2023 for me! It’s funny bc some people generously ask if we were making a strategic bet waiting for the vibe shift but nah I just genuinely was not seeing the bigger picture until ChatGPT
0 replies · 0 reposts · 27 likes · 1.1K views
Agus 🔸 @austinc3301
@SnehaRevanur I always wondered what your journey was, since I really struggled with tracking Encode's stances during your early days. Your journey was quite similar to mine circa 2022!
1 reply · 0 reposts · 27 likes · 1.3K views
Sneha @SnehaRevanur
I thought this was a great piece, not least because it captured so much of my own journey into taking AI seriously. In the early days of Encode, we were squarely focused on immediate AI harms, and in fact I was actively giving workshops internally about how our members should avoid being psyopped by the catastrophic risk sideshow (as I then perceived it). Friends asked me if I’d read the AI safety canon and whether it was shaping my work, and I was completely dismissive.

It wasn’t until I actually sat and reflected on why I was so averse to thinking about the most extreme AI scenarios that I realized I didn’t have very good object-level reasons - it just felt weird and uncomfortable to entertain such alien futures for more than a few seconds, and at the time there was basically no social proof for this outside Berkeley.

These days I laugh when critics of AI safety say that people should stand strong against doomerism and that if you dwell on AI risk you must be a loser devoid of imagination and hope. That literally sounds like something I would say on any other topic - I am a devout, lifelong, in-my-bones optimist, and taking bad outcomes seriously has felt like a constant battle against my natural disposition.

I’m honestly pretty embarrassed by how I initially approached thinking about AI. But I’m also proud of the work Encode has done since my mind opened up, and I know a lot of smart people are on the other side of this bridge I only relatively recently crossed. If that’s you, I really implore you to use the latest models, see for yourself, and ask: “What if AI actually is world-historic? Are there specific capabilities or arguments that could convince me this is true, or am I just not open to being convinced? How would my other assumptions about the future change under the premise that AI is world-historic?”

As Dylan also acknowledges, I am totally sympathetic to how hard this is (especially if others around you are still AI skeptics); but when I did this exercise, it completely reoriented my life and work.
dylan matthews 🔸@dylanmatt

I am sympathetic to people who think AI is all nonsense hype. This is what I thought in 2015. I was very wrong, though, and I wrote about why and what I learned from that dylanmatthews.substack.com/p/the-ai-peopl…

12 replies · 36 reposts · 353 likes · 96.7K views
Sneha retweeted
Miles Brundage @Miles_Brundage
As I and others have said a gazillion times, the issue is not that people will believe companies when they say the stakes are high -- they are -- but that they will perceive companies as not acting in a way consistent with those stakes x.com/NewYorker/stat…
The New Yorker@NewYorker

OpenAI’s Sam Altman wants to “de-escalate” the rhetoric around A.I. But if you tell people that your product will upend their way of life, take their jobs, and possibly threaten humanity, they might believe you. newyorker.com/culture/infini…

2 replies · 9 reposts · 113 likes · 9K views