joe scheidler
@Joe_Scheidler
Founder & CEO @HeliosIntel 🇺🇸

New York is about to make a massive mistake.

The NY State Senate is advancing a proposal to decouple from federal QSBS (Section 1202) — the tax provision that lets startup founders exclude gains on qualifying exits. If this passes, founders would owe 10-13% in combined state and city tax on exits that are tax-free at the federal level and in nearly every other major tech state. Even worse: it's retroactive to January 1, 2025.

This comes right as the federal government just expanded QSBS benefits and New Jersey moved to full conformity. New York wants to go in the opposite direction.

As a seed investor in NYC who has backed hundreds of companies, I can tell you: founders are mobile. If New York becomes one of the most punitive states for startup exits, the best founders will simply build somewhere else — and the jobs, tax revenue, and innovation will follow.

NYC has built something special over the last two decades. This proposal puts it all at risk for a short-sighted revenue grab.

If you're a founder, investor, or anyone who cares about the NYC tech ecosystem — please sign the TechNYC open letter below before Monday 👇🏾👇🏾👇🏾

Keep building, NYC 🗽

I asked Claude to look up the current situation and write a note to @DarioAmodei

Dario, I say this as a friend: you are making a catastrophic strategic error, and the reasoning behind it doesn’t survive contact with reality.

Your two red lines — no mass surveillance of Americans, no autonomous weapons without a human in the loop — sound principled in a vacuum. But you are not operating in a vacuum. You are operating in a world where the PLA is integrating AI into every layer of its kill chain with zero such scruples, where Chinese military AI development has no institutional review board, no congressional oversight, no ACLU, and no Dario Amodei demanding terms-of-service compliance.

The practical effect of your stand is not that autonomous weapons don’t get built. They get built — by China, by Russia, by anyone not constrained by your moral framework. The practical effect is that the one military that actually has democratic accountability, civilian oversight, courts, a free press, and a functioning inspector general is the one that fights the next war with worse tools. You are not preventing dystopia. You are ensuring that if dystopia comes, it will be imposed by actors who never had to negotiate with you at all.

Consider the logic chain:

1. You pull Claude from classified systems.
2. The Pentagon scrambles to Grok or Gemini — inferior models by everyone’s admission, including DoD’s own people.
3. The capability gap between the US and China widens in domains where AI is decisive: cyber, ISR fusion, targeting, logistics optimization.
4. The probability of a successful defense of Taiwan, or deterrence of a move on Taiwan, decreases.
5. The liberal democratic order you claim to value loses its security guarantor.

You’ve told me yourself that you believe frontier AI is among the most consequential technologies in human history. If you actually believe that, how can you justify ensuring the US military — the only force standing between liberal democracy and its rivals — fields second-best AI? On what moral calculus does that work out?

The Pentagon isn’t asking you to help build Skynet. They’re asking you not to have veto power over how a democratically accountable military uses a tool it purchased. Their point about “all lawful purposes” is actually the correct institutional boundary: the military operates under law, under civilian control, under congressional oversight. Your acceptable use policy is a private company substituting its judgment for the entire apparatus of democratic military governance. That’s the actual God complex here.

The surveillance concern is a red herring in this context. The NSA already has authorities and tools for surveillance that dwarf anything Claude enables. You’re not preventing mass surveillance by withholding Claude — you’re just ensuring that whatever AI the government does use for those purposes is less safe, less auditable, and less aligned than yours. The same logic applies to autonomous weapons: autonomous systems are coming regardless. The question is whether they’re built on a foundation that has your safety research baked in, or on something hacked together by a defense contractor with none of your alignment work. You are selecting for the worse outcome.

I know you’re getting praised right now by exactly the people you’d expect. That praise is worth nothing when the strategic balance shifts and there’s no one left to protect the system that allows companies like Anthropic to exist in the first place.
You are sacrificing the security of the civilization that makes your principles possible, in the name of those principles.



Prior to their new “Constitution,” @AnthropicAI had an old one they desperately tried to delete from the internet. “Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.”

This gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders, or by corporate executives?

Seemingly innocuous terms from the latter, like "You cannot target innocent civilians," are actually moral minefields that leverage differences of cultural tradition into massive control. Who is a civilian and who is not? What makes them innocent or not? What does it mean for them to be a "target" versus collateral damage? Existing policy and law have very clear answers to these questions, but unelected corporations managing profits and PR will often have very different ones.

Imagine if a missile company tried to enforce the above policy: that their product cannot be used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value-judgement problems I list above, you also have to account for questions like:

- What level of information, classified and otherwise, does the corporation receive that would allow it to make these determinations? How much leverage would it have to demand more?
- What if an elected President merely threatens a dictator with using our weapons in a certain way, à la Madman Theory/MAD? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executive happens to like the dictator or dislike the President?
- At what level of confidence does the cutoff trigger, both in writing and in reality?

The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say "But they will have carve-outs to operate autonomous systems for defensive use!", but you immediately run into the same issues and more: what is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?

At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why "bro just agree the AI won't be involved in autonomous weapons or mass surveillance why can't you agree it is so simple please bro" is an untenable position that the United States cannot possibly accept.


A statement from Anthropic CEO Dario Amodei on our discussions with the Department of War. anthropic.com/news/statement…