Sneha
@SnehaRevanur
founder @EncodeAction 🇺🇸

The AI lobby has entered our race to bankroll my opponent, Scott Wiener. That's because he worked with them to water down AI regulations in California. As his reward, he gets a huge super PAC. I'm not taking any corporate PAC or lobbyist money, and in Congress, I’m going to end this kind of legalized bribery. AI oligarchs want to control your future. I will fight to put people back in control.

As I and others have said a gazillion times, the issue is not that people will believe companies when they say the stakes are high -- they are -- but that people will perceive companies as not acting in a way consistent with those stakes x.com/NewYorker/stat…

On initial read this plan struck me as similar to OpenAI's "Industrial Policy for the Intelligence Age" paper, but reading them both again, the similarities are even more striking than I expected. You would really think that if OpenAI really believed in making these policies happen, they would support Bores's candidacy, or at the least not back a super PAC spending millions to attack him. (The OpenAI-funded super PAC network even frequently puts out content about how concerns about job loss are a doomer hoax!)

Similarities:

1. Citizens get a stake in AI profits
Bores: "if AI dramatically increases productivity and concentrates wealth, the American people have a stake in those gains"
OpenAI: "Create a Public Wealth Fund that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth"

2. Change the tax code to favor labor
Bores: "If AI can substitute for labor rather than complement it, then our tax code is actively subsidizing job elimination. We encourage companies to invest in AI by making it cheaper through tax breaks, while taxing the wages of the workers being displaced."
OpenAI: "As AI reshapes work and production, the composition of economic activity may shift—expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes. This could erode the tax base that funds core programs... Policymakers could rebalance the tax base by increasing reliance on capital-based revenues... and by exploring new approaches such as taxes related to automated labor."

3. Trigger-based safety nets
Bores: "The program would be tied to clear economic triggers—such as sustained declines in labor force participation, wage compression in affected sectors, or rapid increases in AI-driven productivity without corresponding job growth—to ensure it activates based on real-world conditions, not political discretion."
OpenAI: "Define a package of temporary, expanded safety nets... that activates automatically when these metrics exceed pre-defined thresholds. When disruption rises above those levels, support would scale up; as conditions stabilize, it would phase out."

Today, I’m proud to announce the AI Dividend, my plan to prepare for the AI economy with direct payments to Americans funded by tax reform that simultaneously incentivizes hiring humans instead of AI. Read the full plan here: alexbores.nyc/ai-dividend

OpenAI’s global policy chief, Chris Lehane, thinks the discussion around AI has gotten out of hand. "When you put some of those thoughts and ideas out there, they do have consequences.” 📝: @ceodonovan sfstandard.com/2026/04/15/ope…

I am sympathetic to people who think AI is all nonsense hype. That is what I thought in 2015. I was very wrong, though, and I wrote about why, and what I learned from being wrong dylanmatthews.substack.com/p/the-ai-peopl…

OpenAI’s Sam Altman wants to “de-escalate” the rhetoric around A.I. But if you tell people that your product will upend their way of life, take their jobs, and possibly threaten humanity, they might believe you. newyorker.com/culture/infini…
