joe scheidler

313 posts


@Joe_Scheidler

Founder & CEO @HeliosIntel 🇺🇸

San Francisco, CA · Joined February 2015
816 Following · 2.9K Followers
Pinned Tweet
joe scheidler@Joe_Scheidler·
bet against America I dare you
214 replies · 250 reposts · 2.7K likes · 577.4K views
joe scheidler retweeted
Steven Fulop@StevenFulop·
At some point, this stops being about “taxing the rich” and starts being about shrinking the base that supports jobs, investment, and growth. Taken together, this isn’t a strategy to address affordability - it’s a reaction to polling. If New York eliminates QSBS, startups won’t phase out - they’ll leave immediately. That’s not theoretical; it’s predictable.

Step back from the politics and ask a simple question: what happens to affordability if we systematically raise costs across the entire economy on job creators, making NY an outlier even vs neighboring states? We’re talking about:
• startups losing QSBS
• large employers facing the highest corporate tax rates in the country
• mid-sized businesses hit by PTET changes
• high earners already paying the highest income taxes
• proposals to push the minimum wage to $30, hitting small businesses hardest

The most important point is that even if New York did all of this, it wouldn’t solve the problem. Without real, structural changes to the cost side of government, any additional revenue will be gone within a few years. That’s the core issue.
nihal@nihalmehta

New York is about to make a massive mistake. The NY State Senate is advancing a proposal to decouple from federal QSBS (Section 1202) — the tax provision that lets startup founders exclude gains on qualifying exits. If this passes, founders would owe 10-13% in combined state and city tax on exits that are tax-free at the federal level and in nearly every other major tech state. Even worse: it's retroactive to January 1, 2025.

This comes right as the federal government just expanded QSBS benefits and New Jersey moved to full conformity. New York wants to go in the opposite direction.

As a seed investor in NYC who has backed hundreds of companies, I can tell you: founders are mobile. If New York becomes one of the most punitive states for startup exits, the best founders will simply build somewhere else — and the jobs, tax revenue, and innovation will follow.

NYC has built something special over the last two decades. This proposal puts it all at risk for a short-sighted revenue grab. If you're a founder, investor, or anyone who cares about the NYC tech ecosystem — please sign the TechNYC open letter before Monday below 👇🏾👇🏾👇🏾

Keep building, NYC 🗽

13 replies · 28 reposts · 206 likes · 47.4K views
joe scheidler@Joe_Scheidler·
love that the companies selling AI automation to enterprises/gov couldn't be bothered to verify their own compliance automation vendor. the pitch deck due diligence era is truly over
Ryan@ohryansbelt

Delve, a YC-backed compliance startup that raised $32 million, has been accused of systematically faking SOC 2, ISO 27001, HIPAA, and GDPR compliance reports for hundreds of clients. According to a detailed Substack investigation by DeepDelver, a leaked Google spreadsheet containing links to hundreds of confidential draft audit reports revealed that Delve generates auditor conclusions before any auditor reviews evidence, uses the same template across 99.8% of reports, and relies on Indian certification mills operating through empty US shells instead of the "US-based CPA firms" they advertise.

Here's the breakdown:
> 493 out of 494 leaked SOC 2 reports allegedly contain identical boilerplate text, including the same grammatical errors and nonsensical sentences, with only a company name, logo, org chart, and signature swapped in
> Auditor conclusions and test procedures are reportedly pre-written in draft reports before clients even provide their company description, which would violate AICPA independence rules requiring auditors to independently design tests and form conclusions
> All 259 Type II reports claim zero security incidents, zero personnel changes, zero customer terminations, and zero cyber incidents during the observation period, with identical "unable to test" conclusions across every client
> Delve's "US-based auditors" are actually Accorp and Gradient, described as Indian certification mills operating through US shell entities. 99%+ of clients reportedly went through one of these two firms over the past 6 months
> The platform allegedly publishes fully populated trust pages claiming vulnerability scanning, pentesting, and data recovery simulations before any compliance work has been done
> Delve pre-fabricates board meeting minutes, risk assessments, security incident simulations, and employee evidence that clients can adopt with a single click, according to the author
> Most "integrations" are just containers for manual screenshots with no actual API connections. The author describes the platform as a "SOC 2 template pack with a thin SaaS wrapper"
> When the leak was exposed, CEO Karun Kaushik emailed clients calling the allegations "falsified claims" from an "AI-generated email" and stated no sensitive data was accessed, while the reports themselves contained private signatures and confidential architecture diagrams
> Companies relying on these reports could face criminal liability under HIPAA and fines up to 4% of global revenue under GDPR for compliance violations they believed were resolved
> When clients threaten to leave, Delve reportedly pairs them with an external vCISO for manual off-platform work, which the author argues proves their own platform can't deliver real compliance
> Delve's sales price dropped from $15,000 to $6,000 with ISO 27001 and a penetration test thrown in when a client mentioned considering a competitor

2 replies · 1 repost · 1 like · 173 views
joe scheidler retweeted
Paul Klein IV@pk_iv·
If this is legit - it means that every SOC-2 report from their customers will need to be redone (which will take months). Very thankful to be a Vanta customer right now.
Ryan@ohryansbelt


64 replies · 41 reposts · 1.9K likes · 304.7K views
joe scheidler retweeted
Palmer Luckey@PalmerLuckey·
"The Pentagon isn’t asking you to help build Skynet. They’re asking you to not have veto power over how a democratically accountable military uses a tool it purchased. Their point about “all lawful purposes” is actually the correct institutional boundary: the military operates under law, under civilian control, under congressional oversight"
ib@Indian_Bronson

I asked Claude to look up the current situation and write a note to @DarioAmodei

Dario, I say this as a friend: you are making a catastrophic strategic error, and the reasoning behind it doesn’t survive contact with reality.

Your two red lines — no mass surveillance of Americans, no autonomous weapons without human-in-the-loop — sound principled in a vacuum. But you are not operating in a vacuum. You are operating in a world where the PLA is integrating AI into every layer of its kill chain with zero such scruples, where Chinese military AI development has no institutional review board, no congressional oversight, no ACLU, and no Dario Amodei demanding terms of service compliance.

The practical effect of your stand is not that autonomous weapons don’t get built. They get built — by China, by Russia, by anyone not constrained by your moral framework. The practical effect is that the one military that actually has democratic accountability, civilian oversight, courts, a free press, and a functioning inspector general is the one that fights the next war with worse tools. You are not preventing dystopia. You are ensuring that if dystopia comes, it will be imposed by actors who never had to negotiate with you at all.

Consider the logic chain:
1. You pull Claude from classified systems.
2. The Pentagon scrambles to Grok or Gemini — inferior models by everyone’s admission, including DoD’s own people.
3. The capability gap between the US and China widens in domains where AI is decisive: cyber, ISR fusion, targeting, logistics optimization.
4. The probability of a successful defense of Taiwan, or deterrence of a move on Taiwan, decreases.
5. The liberal democratic order you claim to value loses its security guarantor.

You’ve told me yourself that you believe frontier AI is among the most consequential technologies in human history. If you actually believe that, how can you justify ensuring the US military — the only force standing between liberal democracy and its rivals — fields second-best AI? On what moral calculus does that work out?

The Pentagon isn’t asking you to help build Skynet. They’re asking you to not have veto power over how a democratically accountable military uses a tool it purchased. Their point about “all lawful purposes” is actually the correct institutional boundary: the military operates under law, under civilian control, under congressional oversight. Your acceptable use policy is a private company substituting its judgment for the entire apparatus of democratic military governance. That’s the actual God complex here.

The surveillance concern is a red herring in this context. The NSA already has authorities and tools for surveillance that dwarf anything Claude enables. You’re not preventing mass surveillance by withholding Claude — you’re just ensuring that whatever AI the government does use for those purposes is less safe, less auditable, and less aligned than yours. Same logic applies to autonomous weapons. Autonomous systems are coming regardless. The question is whether they’re built on a foundation that has your safety research baked in, or on something hacked together by a defense contractor with none of your alignment work. You are selecting for the worse outcome.

I know you’re getting praised right now by exactly the people you’d expect. That praise is worth nothing when the strategic balance shifts and there’s no one left to protect the system that allows companies like Anthropic to exist in the first place. You are sacrificing the security of the civilization that makes your principles possible, in the name of those principles.

358 replies · 653 reposts · 8.2K likes · 836.8K views
joe scheidler retweeted
ib@Indian_Bronson·
ib tweet media
174 replies · 645 reposts · 3.8K likes · 1.5M views
joe scheidler retweeted
Palmer Luckey@PalmerLuckey·
This gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders, or corporate executives?

Seemingly innocuous terms from the latter like "You cannot target innocent civilians" are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a "target" vs collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer.

Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really - in addition to the value judgement problems I list above, you also have to account for questions like:
- What level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more?
- What if an elected President merely threatens a dictator with using our weapons in a certain way, a la Madman Theory/MAD? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executive happens to like the dictator or dislike the President?
- At what level of confidence does the cutoff trigger, both in writing and in reality?

The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say "But they will have cutouts to operate with autonomous systems for defensive use!", but you immediately get into the same issues and more - what is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?

At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why "bro just agree the AI won't be involved in autonomous weapons or mass surveillance why can't you agree it is so simple please bro" is an untenable position that the United States cannot possibly accept.
Under Secretary of War Emil Michael@USWREMichael

Prior to their new “Constitution,” @AnthropicAI had an old one they desperately tried to delete from the internet. “Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.”

1K replies · 2K reposts · 15.9K likes · 2.6M views
joe scheidler retweeted
Helios@HeliosIntel·
AI is testing who controls warfare: Silicon Valley or Washington. As the Pentagon pushes for broader military access to frontier AI, Joe Scheidler, our CEO & Co-founder, says: Private firms build AI. Governments decide how it's used. This is about sovereign control, not contracts.
Helios tweet media
0 replies · 2 reposts · 3 likes · 82 views
joe scheidler retweeted
Katherine Boyle@KTmBoyle·
“At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe.” 🇺🇸
Palmer Luckey@PalmerLuckey


24 replies · 18 reposts · 367 likes · 35.2K views
joe scheidler retweeted
Helios@HeliosIntel·
The DOE is fast-tracking 10 advanced nuclear developers under its Reactor Pilot Program, aiming for multiple test reactors to reach criticality by mid-2026. The effort highlights how federal policy is being used to compress timelines for next-generation nuclear deployment. To learn how policy signals like this shape your organization’s real-world operations, visit the #linkinbio.
Helios tweet media
0 replies · 1 repost · 4 likes · 76 views
joe scheidler@Joe_Scheidler·
@hf0 this goes harder than Ike's D-Day speech
0 replies · 0 reposts · 8 likes · 257 views
joe scheidler retweeted
HFØ@hf0·
Slop is for cowards. We back founders who write with blood. Applications close Sunday.
112 replies · 35 reposts · 360 likes · 70.1K views
joe scheidler retweeted
Helios@HeliosIntel·
Everyone is talking about their favorite Super Bowl ad...
0 replies · 26 reposts · 289 likes · 667K views
joe scheidler retweeted
Helios@HeliosIntel·
The U.S. has released a new maritime strategy aimed at reviving domestic shipbuilding and expanding the American commercial fleet. The plan links trade enforcement, workforce investment, and infrastructure funding to long-term shipping capacity. To learn how policy signals like this shape your organization’s real-world operations, visit the #linkinbio or heliosintel.ai.
Helios tweet media
0 replies · 1 repost · 4 likes · 76 views
joe scheidler retweeted
Helios@HeliosIntel·
The Pentagon is seeking to bring advanced AI tools onto classified networks with reduced usage restrictions. The effort reflects rising demand for AI in national security and a growing debate over how much control tech companies should retain once their systems enter military environments. To learn how policy signals like this shape your organization’s real-world operations, visit the #linkinbio or heliosintel.ai.
Helios tweet media
0 replies · 1 repost · 3 likes · 129 views
joe scheidler retweeted
Helios@HeliosIntel·
The Space Force is accelerating prototype-based development of protected tactical satellite communications after canceling a larger procurement program. The effort centers on modernizing anti-jam, resilient satcom capabilities designed to maintain connectivity in contested environments, a signal that speed, survivability, and modular development are shaping the next phase of military space architecture. For defense contractors and space infrastructure firms, the pivot highlights how acquisition models are shifting alongside mission priorities. To learn how policy signals like this shape your organization’s real-world operations, visit the #linkinbio or heliosintel.ai.
Helios tweet media
0 replies · 1 repost · 3 likes · 81 views
joe scheidler retweeted
Jakob Diepenbrock@jakobdiepen·
America is the greatest country in the world. But we need more founders working on real problems. If you are in the early stages of building something that matters, you have to be in El Segundo. 🇺🇸 Apply to the Spring Cohort in bio. Deadline February 20th.
144 replies · 181 reposts · 1.1K likes · 287.3K views