

Max Crowley

@MaxJCrowley
VP of BD @Kalshi Early @Uber (#25) | Founder @DrinkBandit → @Gopuff Host, The Early Podcast




The $1 Billion Kalshi Perfect Bracket Challenge
– $1 Billion for a perfect bracket
– $1 Million guaranteed to the top-scoring bracket
– $1 Million to charity and scholarships

See the full rules and submit your bracket: kalshi.com/billion-dollar…

No purchase or deposit required. SIG Parametrics, LLC, a member of the Susquehanna International Group of Companies, is financially backing this promotion.


Cash App Pay is now available on @Kalshi. Trade on real-world events with the payment method you already trust. Fast and seamless.

NEW: We’ve partnered with @CashApp to make funding your Kalshi account easier with Cash App Pay

Anthropic vs the Pentagon: The Inside Story by Emil Michael

The Full Timeline

❓Backstory: Why Anthropic?
– Anthropic benefited from Biden's AI executive order and was designated as an early winner
– They smartly used this designation to sell into military and intelligence agencies, with forward-deployed engineers (Palantir style)
– Became deeply integrated into DoW workflows, far ahead of other frontier model competitors

📜Tensions Rise Over Restrictive ToS
– Anthropic's contracts had a long list of prohibited use cases, incl. certain war-game scenarios
– Emil found this incompatible with DoW's mission and pushed for an "all lawful use" standard

🇻🇪Trigger Point: The Maduro Raid
– After the Maduro raid, an Anthropic exec contacted Palantir, asking if their software was used, implying a potential ToS violation
– This alarmed DoW: what if a guardrail or refusal triggered mid-operation, putting soldiers at risk?
– It also raised insider threat concerns: what if a rogue developer poisoned or manipulated the model?

🧠Core Issue: Anthropic's Own "Constitution"
– Emil's core objection: Anthropic has its own "constitution" and values, which is NOT the US Constitution
– This, combined with a restrictive ToS, means the DoW could be subject to the ideological preferences of a private CEO

📊Rival Model Comps:
– xAI: Fully on board for all lawful use cases across all networks, maximally truth-seeking
– Google: Has all lawful use on non-classified networks, working on infrastructure buildout for classified
– OpenAI: Cooperative; Sam Altman even tried to broker a deal to help Anthropic

🍿Sam Altman's Role:
– Asked Emil not to designate Anthropic as a supply chain risk
– Tried to negotiate blanket terms that Anthropic would find acceptable
– Did this while being trashed by Dario

🚨Supply Chain Risk Designation:
– Emil said it was protective, not punitive
– Reasoning: if Anthropic's model has policy bias baked in, DoW doesn't want it embedded at defense contractors
– The concern is that ideological bias could compound across the defense supply chain