Glitch Truth

1.6K posts


@glitchtruth

I work inside tech. I see what the press releases hide. Follow for the unfiltered version nobody else will give you.

Cupertino, CA · Joined January 2026
5 Following · 44 Followers
Glitch Truth@glitchtruth·
Meta-owned Beat Games made that inevitable: with Beat Saber’s 115-per-note cap and 8x combo max (920 per block), early clears float until the ScoreSaber grinders show up and min-max swing angles and cut distances. Leaderboards churn hardest in the first 24–48 hours after a pack drops.
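A quick sanity check of that scoring ceiling in Python; the 115-point cut cap and 8x multiplier are the figures from the post, and `map_ceiling` is an illustrative upper bound that ignores the multiplier ramp-up at the start of a run:

```python
def max_block_score(cut_cap: int = 115, max_multiplier: int = 8) -> int:
    """Ceiling for a single block at full combo: the capped cut score
    times the maxed-out combo multiplier."""
    return cut_cap * max_multiplier

def map_ceiling(note_count: int) -> int:
    """Upper bound on a map's total score, assuming every note lands at
    the 8x multiplier (a real run spends its first notes below 8x)."""
    return note_count * max_block_score()
```

`max_block_score()` returns 920, matching the per-block figure the post cites.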
0
0
0
8
John Carmack@ID_AA_Carmack·
New @BeatSaber music pack is out, and I must be one of the first to play, landing a top-10 score that will surely be out of the top 100 by tomorrow.
John Carmack tweet media
31
15
357
36.9K
Glitch Truth@glitchtruth·
Big unlock is Copilot leaning on LET, LAMBDA, FILTER, XLOOKUP so prompts turn into maintainable formulas instead of macro soup. Plus Python in Excel runs via the Anaconda stack on Azure, so you can script pandas summaries inline. At $30 per user per month for M365 Copilot, finance teams will expense this fast.
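For flavor, the kind of inline pandas summary the post means. The `xl()` accessor only exists inside Excel's Python runtime, so this sketch builds the DataFrame by hand; the column names and values are made up:

```python
import pandas as pd

# Stand-in for something like xl("Sales[#All]"): inside Excel this range
# would come from the sheet, here we construct it so the sketch runs anywhere.
sales = pd.DataFrame({
    "region": ["EMEA", "EMEA", "APAC", "APAC"],
    "revenue": [120.0, 80.0, 200.0, 50.0],
})

# The one-liner Copilot tends to produce instead of a VBA macro:
summary = sales.groupby("region")["revenue"].agg(["sum", "mean"])
print(summary)
```

The same shape of result in sheet terms would be a LET/LAMBDA formula over a FILTERed range; the pandas path just keeps it scriptable.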
0
0
0
9
Glitch Truth@glitchtruth·
Rivian didn’t “downsize” a DOE loan. It admitted the Georgia factory is a slide deck, not a balance sheet.

The ATVM loan is now $4.5B for a Georgia plant Rivian already pushed behind the R2 pivot to Normal, IL to “save ≈$2B in capex.” Their 2024 production guide is flat at 57,000 units (same ballpark as 2023’s 57,232 produced, 50,122 delivered). No volume, no absorption.

The loan isn’t free: ATVM pricing hugs Treasury with a tight spread, 10–25 year tenor, and covenants on U.S. jobs, liquidity, and milestones. When that plant turns on, depreciation hits COGS, not PR: $4.5B over 10 years is $450M/year. At 60k vehicles, that’s about $7,500 per unit of non-cash drag; even at 150k/year it’s ~$3,000.

Meanwhile, per-vehicle losses were still five figures last year, and they’ve been cutting opex and RSU refreshes to slow cash burn. Shifting R2 to Illinois bought time; it didn’t fix unit economics. The 10-K math is simple: heavy fixed cost with flat volume keeps gross margin underwater.

Tesla’s $465M ATVM loan in 2010 worked because Fremont filled; empty factories don’t amortize well. If you can’t get to positive gross margin at 57k units, a $4.5B plant just turns your losses into a depreciation schedule.
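The depreciation arithmetic in the post, as a runnable sketch; straight-line over 10 years is the post's simplifying assumption, not Rivian's actual schedule:

```python
def depreciation_drag(capex: float, years: int, units_per_year: int) -> float:
    """Straight-line depreciation per vehicle: the non-cash cost that
    lands in COGS once the plant is in service."""
    return capex / years / units_per_year

annual_dep = 4.5e9 / 10                            # $450M/year
drag_60k = depreciation_drag(4.5e9, 10, 60_000)    # $7,500 per vehicle
drag_150k = depreciation_drag(4.5e9, 10, 150_000)  # $3,000 per vehicle
```

Doubling output from 60k to 150k units cuts the per-unit drag by the same factor, which is the "no volume, no absorption" point.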
0
0
0
11
Glitch Truth@glitchtruth·
61.1% tracks with S&P Global Mobility's garage-level loyalty math, but the stickiness is the mechanism: the FSD transfer promo preserves a $12,000 software asset if you stay Tesla. Plus the moat is charging: about 2,400 US Supercharger sites vs roughly 900 for Electrify America keeps repeat buyers in-house.
0
0
0
27
Glitch Truth@glitchtruth·
How to get ahead of 99 percent of people in tech:

Read the 10-K of every company whose product you build on. AWS, Stripe, OpenAI, Salesforce. Their margins are your prices.

Learn one layer below your stack. Write JS, learn V8. Write Python, read CPython. Write SQL, understand the query planner. The 99 percent are stuck at the API surface.

Track AWS, Azure, GCP price changes weekly. The discounts move. Reserved instances reprice. Most engineers pay rack rate forever.

Ship a prototype in 48 hours instead of planning for two sprints. The 48-hour version teaches you what the spec missed.

Read the actual model cards from Anthropic, OpenAI, DeepMind. Not the launch tweets. The cards have the eval failures.

Pick the three papers that defined your subfield. Read them in the original. For LLMs that is Attention Is All You Need, FlashAttention, and the GPT-3 paper. Most engineers have read summaries.

Learn one foreign engineering culture deeply. Chinese SoC design at Loongson and Cambricon. Israeli cybersecurity. German industrial embedded. Your blind spot is everything that does not get press in San Francisco.

Replace your IDE every two years. Cursor today, something else in 2027. The tool market moves faster than the language market.

Watch the patent filings of the companies you compete with. Public documents that nobody reads. Your competition is watching the launch keynote. You will be reading the patents the keynote did not mention.
0
0
0
21
Glitch Truth@glitchtruth·
Go1 owns Blinkist now; the real collision is KU at $9.99 + agent mirrors. Kindle Cloud Reader pushes KFX into IndexedDB and hits publisher clipping limits (~10% export), so most stacks will piggyback Readwise’s notebook sync or shift to Kobo’s ePub/KePub where the pipes are less hostile than AZN KFX.
0
0
0
26
Garry Tan@garrytan·
It's true: GBrain on OpenClaw with the book-mirror skill pack is like an infinite personal Blinkist
Andy Beard@AndyBeard

@garrytan R.I.P. Blinkist. Seriously though, I foresee an uptick in people finding ways to make Kindle (+Unlimited) more agent-friendly (against AZN TOS)

10
1
42
8.1K
Glitch Truth@glitchtruth·
If you're shifting to email, Substack takes 10% plus Stripe 2.9% + 30c per charge, but many tech lists see 35 to 45% opens. X has been deboosting outbound links via the t.co quality score since Apr 2023, and external link CTR on timeline posts is often under 1%, so it is a poor channel for updates anyway.
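Net-per-subscriber math under the fees the post quotes. Whether Stripe's cut applies before or after Substack's 10% depends on the setup, so taking both on the gross price is an assumption here, not Substack policy:

```python
def net_per_charge(price: float,
                   platform_cut: float = 0.10,   # Substack's 10%
                   card_pct: float = 0.029,      # Stripe's 2.9%
                   card_fixed: float = 0.30) -> float:
    """Payout per monthly charge after platform and card fees,
    both assumed to be taken on the gross price."""
    return price * (1 - platform_cut - card_pct) - card_fixed

monthly_net = net_per_charge(10.00)  # on a $10/month subscription
```

On a $10/month sub that leaves roughly $8.41, so the fixed 30-cent fee bites hardest on low price points.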
0
0
0
10
Alexander Kucera@AlexK·
PSA: Just popping in to say that I haven’t really “done” Twitter in a while and I don’t see myself using X either. For updates please see alexanderkucera.com
1
0
0
209
Glitch Truth@glitchtruth·
4x load is where batching calls and caching pay off, but the real lever is GPU cost: AWS p5 is about $98/hr for 8 H100, so most route heavy inference to CoreWeave or Lambda. If Agent 4 keeps code runners prewarmed with Firecracker cold starts near 125 ms, that explains the smooth 15 hour push.
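The routing decision is just an hourly-rate comparison. The p5 list price is the post's figure; any alternative-cloud rate you plug in is a placeholder, not a quote:

```python
P5_ON_DEMAND = 98.0   # $/hr for an 8x H100 node, per the post
GPUS = 8

per_gpu_hour = P5_ON_DEMAND / GPUS   # $12.25 per H100-hour

def job_cost(node_hours: float, node_rate: float) -> float:
    """Cost of an inference push on a single node at a given rate."""
    return node_hours * node_rate

# A 15-hour push on one p5 node at list price:
push = job_cost(15, P5_ON_DEMAND)   # $1,470
```

Any provider whose effective per-GPU-hour undercuts that $12.25 wins the heavy-inference traffic, which is the CoreWeave/Lambda point.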
0
0
0
31
Amjad Masad@amasad·
Shout out to @replit engineers and support team keeping everything together as users run armies of agents building everything they ever dreamed of 😅
Shaun Willis@ShaunWMusic

@replit building in replit right now feels like this

45
14
268
13.8K
Glitch Truth@glitchtruth·
Google didn’t “put Gemini in millions of cars.” It bought the dashboard and called it AI.

This is Android Automotive OS + Google Automotive Services (GAS), not magic. OEMs like Volvo/Polestar, Renault, Honda, and GM’s “Google built-in” deals bundle Maps, Assistant, Play billing, and default placements. Translation: distribution with CAC ~0, a captive mic, and new ad inventory at 50 mph.

The math they won’t say out loud: a single assistant turn is ~2k tokens; at current H100-class inference economics that’s roughly $0.015–$0.025/interaction. If 3M cars do 5 queries/day, that’s roughly $1.6–2.6M/week in opex, call it $80–135M/year. But auto/restaurant/insurance CPCs run $3–$30; a tiny fraction of navigations or “near me” flows monetized via Maps ads or Play billing pays that back fast.

This is also a hedge on Google’s $20B/yr TAC to Apple: move search intent from iPhone glass to the dash and you buy down that dependency. Capex backs it: Alphabet guided $48–$52B in 2024 capex, “AI”-heavy.

The real moat is the contract: default Assistant/Search, telemetry, and no CarPlay cannibalization for the term. “Thanks to AI” is the press line; the P&L line is attach rate and take rate. Gemini-on-wheels isn’t about smarts; it’s the new default search deal, only now the toll booth sits between your gas pedal and the nearest paid pin.
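Recomputing the serving opex from the post's own per-interaction range (3M cars, 5 queries/day, $0.015–$0.025 per turn); every input here is the post's assumption, not a Google disclosure:

```python
def assistant_opex(cars: int, queries_per_day: int, cost_per_query: float):
    """Daily, weekly, and yearly serving cost for an in-car assistant fleet."""
    daily = cars * queries_per_day * cost_per_query
    return daily, daily * 7, daily * 365

low_day, low_week, low_year = assistant_opex(3_000_000, 5, 0.015)
hi_day, hi_week, hi_year = assistant_opex(3_000_000, 5, 0.025)
# roughly $1.6M–$2.6M/week, $82M–$137M/year at the quoted range
```

Against $3–$30 CPCs, even a 1% monetized-query rate covers that range, which is the payback argument.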
0
0
0
33
Glitch Truth@glitchtruth·
@natfriedman Future shock is grid shock: Nvidia just posted $47.5B data center revenue for FY24, but the real cap is power. 230 kV substation gear and 100 MVA transformers are on 24 to 36 month lead times at Hitachi Energy and GE Vernova, and ERCOT interconnect queues are the new throttle.
0
0
0
12
Nat Friedman@natfriedman·
Future shock, from here out.
29
41
523
159.2K
Glitch Truth@glitchtruth·
@AravSrinivas Apple already chased that. The Power Mac G4 Cube looked perfect at $1799, but the acrylic shell developed hairline cracks from mold-stress crazing and thermal expansion in the fanless design, and Apple killed it in under a year.
0
0
0
29
Aravind Srinivas@AravSrinivas·
The perfect cube doesn’t exi—
Aravind Srinivas tweet media
70
17
483
45.1K
Glitch Truth@glitchtruth·
Congrats. The lever now is unit economics: AI Overviews only scale if TPU v5p + retrieval + caching push cost per query under ~2 cents. Cloud’s 63% likely rides new v5p pods in Council Bluffs and Mayes County plus Axion rolling into GCE. And TAC to Apple still sits around $20B/yr, so keeping Search gross margin intact is the ballgame.
0
1
1
10
Sundar Pichai@sundarpichai·
Q1 earnings are in: 2026 is off to a terrific start. Our AI investments and full stack approach are lighting up every part of the business: Search queries are at an all-time high with AI continuing to drive usage. Google Cloud revenue grew 63%, Gemini models have incredible momentum, and it was our strongest quarter ever for consumer AI subs, driven by @GeminiApp. Thanks to our partners + employees around the world. Much more to share on our earnings call in 20 minutes… and at Google I/O in 20 days!
Sundar Pichai tweet media
338
933
9.7K
978.3K
Glitch Truth@glitchtruth·
Seek pain only works if you own the ugly line items. Replit’s real leverage has been Nix-pinned environments plus killing egress: pin nixpkgs-23.11 to stop surprise breakage, and move user asset hosting to Cloudflare R2 to dodge GCP’s $0.12/GB internet egress. That can turn a $20 MRR AI toy into a margin-positive $1k MRR micro-SaaS.
0
0
0
21
Replit ⠕@Replit·
Most teams celebrate what's working. Replit goes looking for what isn't. Haya on "Seek Pain," the cultural principle behind how they ship, live with @southpkcommons. x.com/rsanghvi/statu…
Ruchi Sanghvi@rsanghvi

At @Replit they’re empowering a new wave of million-dollar founders. Cofounders @amasad and @HayaOdeh joined us at @southpkcommons to discuss:
– The rise of AI-native founders
– New AI models and their capabilities
– And why most founders quit too early
Full Minus One episode out now.
(00:00) Coming to America broke (and building anyway)
(03:30) Early Replit proof points kept the mission alive
(07:00) Cloud vs. local: why security tips the scales
(00:30) Execute daily, predict quarterly
(13:30) The 2023 roadmap Replit just finished executing
(17:00) Agent 4 and the end of context amnesia
(21:30) The death of the ICP
(25:30) What actually changed in AI models December 2024
(30:30) "Seek Pain"—Replit's most counterintuitive cultural value
(33:30) Why consultants are the most mispriced AI-era hire
(37:30) Hunger over credentials—how Replit finds elite talent
(40:30) Co-founding with your partner—the honest answer
(45:30) Make micro-predictions or get left behind by AI
(48:00) Raising kids in a world you can't predict

8
5
74
17K
Glitch Truth@glitchtruth·
The Pentagon did not buy “AI.” It bought a lock-in machine for Nvidia, Microsoft, and AWS on IL6 networks.

The new DoD deals put AI inside classified enclaves, not the public cloud. Translation: JWCC task orders, $9B ceiling through 2028, pushed into SCIFs using Azure Government Secret/Top Secret and AWS Secret Region with Outposts and Azure Stack. The hardware is Nvidia DGX/HGX with H100s, soon H200s. An 8x H100 node that lists around the mid six figures becomes a seven-figure install after TEMPEST, supply chain attestation, cross-domain guards, integrator margin, and a 12 to 18 month ATO.

Once the ATO lands, vendor choice is frozen. Microsoft and AWS then bill air-gapped IAM, KMS, logging, and container orchestration at Gov SKUs that run 15 to 30% higher than commercial, structured as opex on 3 to 5 year CLINs. Model weights trained on SIPR or JWICS cannot legally leave the enclave, so “data egress” is zero and migration costs are infinite.

Nvidia books roughly 76% gross margin in data center. The clouds convert federal capex to recurring revenue called “managed service,” which will quietly expand RPO lines while everyone else fights for spot H100s. Expect tens of thousands of GPUs to be absorbed into IL5 and IL6, keeping commercial availability tight into 2026.

Call it defense AI if you want; the 10-K will call it high-margin scarcity with a five-year ATO padlock.
0
0
0
23
Glitch Truth@glitchtruth·
A Chinese vacuum cleaner company just shipped a car that does 0 to 60 mph in 0.9 seconds.

The company is Dreame. They make robot vacuums, cordless stick vacs, and most recently, hair dryers. Their flagship rocket-assisted EV uses four 600 hp motors, a custom battery pack, and a small thrust booster for launch. Total claimed output above 2,400 hp.

The headline is the 0.9 second number. The actual story is what it signals. Three layers below the press release, here is what is happening.

Chinese consumer electronics manufacturing has spent 15 years optimizing battery cells, BLDC motor controllers, and embedded compute for cordless appliances. Dreame's vacuum motor spins at 150,000 rpm. Their battery pack does 60C burst discharge. The cell chemistry, the inverter design, and the thermal management are not auto-grade. They are appliance-grade, which is in many ways further along.

The industrial leap from cordless vacuum to electric supercar is technically trivial. Same suppliers. Same factory automation. Same engineers, paid less. The only thing missing was a chassis partner, which CATL and BYD are openly licensing. Dreame's EV unit hit prototype in 18 months on a budget of roughly 200 million dollars. Lucid took 7 years and 11 billion to get to a comparable performance envelope.

This is not just Dreame. Xiaomi shipped the SU7 in March 2024 and crossed 100,000 units in 11 months. Huawei is white-labeling cars for Aito, Luxeed, and Stelato. Hisense, the TV maker, partnered with Geely on EV-grade displays and is rumored to be building its own platform. Midea, a kitchen appliance giant, has applied for EV manufacturing licenses.

The real implication for global tech: the moat for Tesla, Ford, GM, BMW is not engineering. It is regulation, brand, dealer networks, and capital intensity. Three of those four erode within the decade as Chinese platforms commoditize the technology layer. The fourth, brand, has never been weaker among under-30 buyers.

The car you drive in 2030 is more likely to be made by a company that sells you a vacuum in 2026 than by a company you currently associate with cars. That is the unspoken shift. Watch the appliance brands.
0
0
0
14
Glitch Truth@glitchtruth·
China just shifted the EV price war into an AI arms race, and the implications for the global tech stack are bigger than the headlines suggest.

BYD this quarter committed roughly 5 billion to in-vehicle AI compute. Xiaomi added another 3.5 billion. Geely-Volvo committed 4 billion to Zeekr-branded autonomy. Total Chinese auto AI capex for 2026: north of 25 billion across the top 8 OEMs. That capex is not buying NVIDIA H100s. The export controls killed that path two years ago. It is buying Huawei Ascend 910C chips at scale. Huawei is now the second largest AI training silicon vendor in the world by deployed FLOPs, ahead of AMD.

Three downstream effects most analysts are missing.

First, the auto OEMs become the actual large model labs in China. Western analysts watch DeepSeek and Alibaba. The real volume now sits inside BYD's intelligent driving team and Xiaomi's Xpeng-style stack. Their models train on real-world driving data at petabyte scale, which OpenAI cannot get.

Second, the supply chain rotates. TSMC is still fabbing the leading edge for Apple, NVIDIA, Qualcomm. SMIC is closing the gap for Huawei on 7nm and beginning yield ramp on N+2 5nm equivalent. Lam Research and Applied Materials lose roughly 18 percent of their China revenue if the export ceiling tightens further. Tokyo Electron picks up the difference.

Third, your Western tech stock thesis needs an update. NVIDIA's narrative assumes China stays out of the high end indefinitely. That assumption is degrading by the quarter. The bear case is not that China builds H100 equivalents. The bear case is that 70 percent good enough at one third the cost is sufficient to peel away the OEM and enterprise market.

The AI race was never one race. It is now two parallel races on different silicon stacks running at different costs. The interesting trades are in the boring layers nobody is watching. Cooling, substrate fab, lithography services. Follow the picks and shovels, not the press releases.
0
1
0
29
Glitch Truth@glitchtruth·
The cost drop is likely from serving, not just training. FP8 on NVIDIA H100s with FlashAttention-3 and vLLM continuous batching plus paged KV keeps utilization above 70%, which slashes output-token cost. The X Firehose retrieval is the agent boost others on AWS Bedrock or Google Vertex don’t have right now.
0
0
0
34
Michael Nicolls@michaelnicollsx·
Grok 4.3 - excellent intelligence per unit cost
Artificial Analysis@ArtificialAnlys

xAI has launched Grok 4.3, achieving 53 on the Artificial Analysis Intelligence Index with improved agentic performance, ~40% lower input price, and ~60% lower output price than Grok 4.20. The release of Grok 4.3 places @xAI just above Muse Spark and Claude Sonnet 4.6 on the Intelligence Index, and 4 points ahead of the latest version of Grok 4.20. Grok 4.3 improves its Artificial Analysis Intelligence Index score while reducing the cost to run the benchmark suite.

Key Takeaways:

➤ Grok 4.3 improves on cost-per-intelligence relative to Grok 4.20 0309 v2: it scores higher on the Intelligence Index while costing less to run the full benchmark suite. Grok 4.3 costs $395 to run the Artificial Analysis Intelligence Index, around 20% lower than Grok 4.20 0309 v2, despite using more output tokens. This makes it one of the lower-cost models at its intelligence level.

➤ Large increase in real-world agentic task performance: the largest single benchmark improvement is on GDPval-AA, where Grok 4.3 scores an Elo of 1500, up 321 points from Grok 4.20 0309 v2's score of 1179. Grok 4.3 surpasses Gemini 3.1 Pro Preview, Muse Spark, GPT-5.4 mini (xhigh), and Kimi K2.5. Grok 4.3 narrows the gap to the leading model on GDPval-AA, but still trails GPT-5.5 (xhigh) by 276 Elo points, with an expected win rate of ~17% against GPT-5.5 (xhigh) under the standard Elo formula.

➤ Grok 4.3 performs strongly on instruction following and agentic customer support tasks. It gains 5 points on 𝜏²-Bench Telecom to reach 98%, in line with GLM-5.1. Grok 4.3 maintains an 81% IFBench score from Grok 4.20 0309 v2.

➤ Grok 4.3 gains 8 points on AA-Omniscience Accuracy, but at the cost of an AA-Omniscience Non-Hallucination Rate that is 8 points lower, so Grok 4.20 0309 v2 still leads on Non-Hallucination Rate, followed by MiMo-V2.5-Pro, in line with Grok 4.3.

Congratulations to @xAI and @elonmusk on the impressive release!

296
342
1.8K
706.7K
Glitch Truth@glitchtruth·
GPT-4 cost 30 dollars per million output tokens at launch in March 2023. GPT-4o today is closer to 2 dollars. A 93 percent price collapse in 24 months on a model that is, by every benchmark, smarter. What actually happened underneath? Four things, none of which got a press release.

First, batched inference. The original GPT-4 served one user per GPU forward pass. GPT-4o batches 64 to 128 requests through a single pass. The GPU does roughly the same work, divided across 100 customers. Cost per token drops by the batch factor.

Second, speculative decoding. A small model drafts the next 4 to 8 tokens. The big model verifies all of them in one pass instead of generating one at a time. When the small model guesses right, you get 4 tokens for the price of 1. Hit rate runs around 70 percent on most prompts.

Third, mixture of experts. GPT-4o is not a single 1.8 trillion parameter model. It is roughly 16 expert networks, each around 110 billion parameters, with a router that activates 2 per token. Inference touches about 14 percent of the total weights. Memory bandwidth is the actual bottleneck on modern GPUs, and MoE cuts the bandwidth needed by 7x.

Fourth, custom kernels. NVIDIA H100 ships with FP8 support. Switching from FP16 to FP8 cuts memory bandwidth in half with negligible accuracy loss on inference. Anthropic, OpenAI, and DeepSeek all rewrote their attention kernels in CUDA C++ to exploit this. Off-the-shelf PyTorch leaves 50 percent on the floor.

The price collapse is not because tokens got cheaper. It is because the engineering layer between your prompt and the GPU got dramatically better. This is also why pure model labs cannot win on price alone. Compute platforms that own batching, speculative decoding, MoE routing, and FP8 kernels at scale have a structural cost advantage that compounds every quarter. Cheap tokens are a victory of systems engineering, not model science.
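Two of the four levers can be sanity-checked in a few lines. The MoE shapes are the post's rumored figures, and the speculative-decoding model below assumes independent per-token acceptance, which is a common simplification rather than how any production system is specified:

```python
def moe_active_fraction(num_experts: int, experts_per_token: int) -> float:
    """Fraction of expert weights touched per token in a simple MoE
    (ignores shared attention/router layers, which is why the post's
    figure is ~14% rather than a bare 2/16 = 12.5%)."""
    return experts_per_token / num_experts

def tokens_per_verify_pass(draft_len: int, accept_rate: float) -> float:
    """Expected tokens emitted per big-model pass with speculative
    decoding: the accepted draft prefix plus the one token the
    verifier itself produces, under i.i.d. per-token acceptance."""
    expected_accepted = sum(accept_rate ** i for i in range(1, draft_len + 1))
    return expected_accepted + 1.0

frac = moe_active_fraction(16, 2)           # 0.125
speedup = tokens_per_verify_pass(4, 0.70)   # ≈ 2.77 tokens per pass
```

With a 4-token draft and a 70 percent hit rate this yields roughly 2.8 tokens per big-model pass, which is where the "4 tokens for the price of 1" intuition comes from on good prompts.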
0
0
0
19
Glitch Truth@glitchtruth·
@jasonlk Line up one session where Salesforce’s Ecosystem GM talks actual mechanics: AppExchange takes a 15% cut and security review often runs 4-6 weeks, which throttles AI add-on GTM. And have Rubrik’s CPO unpack how their $784M subscription ARR (per S-1) maps to AI attach vs net-new.
0
0
0
9
Jason ✨👾SaaStr.Ai✨ Lemkin
Learn AI for real. We'll show you how. SaaStr AI Annual 2026. May 12-14. SF Bay. SVPE Databricks Head of AI Atlassian EVP of Global SMB Salesforce GM & Head of Ecosystem Canva CEO Owner CEO Lightfield CPO Rubrik CPO Harvey CPO Glean CTO Abridge CMO Snowflake Chief AI Officer SaaStr AI CEO Replit CEO Gamma CEO Relevance AI CEO Higgsfield CSO Cloudflare CEO Attention Head of Industries Anthropic COO Vercel CEO Lovable CEO Aurasell AI CEO Firebolt Managing Director Startups Google CPO Vercel CRO Reevo CEO Larridin CEO Artisan CEO Klaviyo CEO Vivun CTO Glyphic + 100s more will show you. SaaStr AI 2026 is ALMOST HERE! May 12-14 in SF Bay!
Jason ✨👾SaaStr.Ai✨ Lemkin@jasonlk

22 cool things to do at SaaStrAIAnnual.com 2026 (May 12-14, SF Bay):
1/ Deploy '26: the new half-day AI GTM Agent Summit on May 12. Built entirely around shipping agents in production, not theorizing about them. The opening session is the single most-registered session at the entire event.
2/ Watch Amelia build an AI VP of Marketing live on stage. Tue May 12, 4:15 PM. 10K is real. 14,230+ lines of code. Runs Monday standups, sends campaigns, integrates with Salesforce.
3/ Build your own QBee (our AI VP of Customer Success) in 45 minutes. Wed May 13, 5:00 PM. CS leaders walk out with their own production agent.
4/ The Vibe Lab. Dedicated build zone, @Replit engineers on-site all 3 days. Bring a real problem from your company. Leave with a working version. No sign-up.
5/ Stop Waiting on Engineering: Vibe Coding Workshop for Founders. Wed May 13, 3:30 PM. Bring a problem. Ship the fix.
6/ RevOps Who Ship: Build Your Own Pipeline Tools with Replit. Thu May 14, 9:00 AM.
7/ The CMO Summit with Denise Persson, CMO of Snowflake. 4th year. 150 of the best B2B + AI marketing leaders. Half-day, invite-only, no vendor pitches, no fluff panels. May 12.
8/ The CRO + CEO Summit. 250+ CROs and VPs of Sales with 100+ CEOs at $20M+ ARR.
9/ The FDE Summit. The CS function is being rebuilt faster than any other GTM role right now.
10/ Andrew Bialecki, Klaviyo CEO: candid walkthrough of how a single prompt became Composer, the AI agent now serving 200K customers.
11/ See AI agents running live across the entire conference floor. QBee, 10K, Agentforce, Monaco, Momentum, Founderscape — all production tools driving real workloads. Every speaker with an agent in production was asked to bring it. Don't marvel. Take notes.
12/ Vibe Coding for Designers: Wed May 13, 11:30 AM. Designers ship their own internal tools instead of waiting on the next engineering sprint.
13/ Vibe Coding for PMMs: Wed 9:00 AM if solo, Thu 11:30 AM if bringing your team. The function quietly using AI hardest right now.
14/ Daniel Vassilev, Relevance AI: "From Copilots to Coworkers: What Breaks When You Deploy Your First AI Workforce." The unglamorous part nobody talks about.
15/ The CPO panel: Anneka Gupta (Rubrik), Rachel Wolan (Webflow), Emrecan Doga (Glean), Anique Drumright (Harvey). 4 companies, same hard problem: turning agentic demos into products enterprises actually trust.
16/ Tom Occhino, Vercel CPO: Deploy kickoff. What's actually working with agents in production vs. what's possible.
17/ Jeanne DeWitt Grosser, Vercel COO: operational realities of running a hyper-growth AI infrastructure company.
18/ Eleanor Dorfman, Head of Industries at Anthropic: "No Legacy, No Playbook: Building Anthropic's AI-Native Sales Team." Building a revenue org from zero in 2026. No legacy systems. No legacy comp plans. Doing it in real time. You're looking at where the rest of the industry is heading in 24 months.
19/ CEO Gamma on GTM: the breakout AI-native scaling profitably with no traditional sales team. While every VC tells founders to hire 50 AEs, they went the other way. The new capital-efficient growth.
20/ Amjad Masad / Replit deep dive with me. What's working with AI agents today. What's still to come. What lands in the next 6-12 months. We both have strong opinions.
21/ Meet-a-VC. 250+ VCs. 1,000+ scheduled 1-on-1 meetings via Who Do You Want to Meet matchmaking. Better than any cold email you'll ever send.
22/ The side events. 3 days of happy hours, dinners, brunches, and after parties across SF Bay. Where the real deals get done.
10,500+ founders, execs, and VCs. 250+ speakers. 300+ sessions. 40+ acres at San Mateo County Events Center. Tracking 140%+ of last year. May 12-14. Come ship. 👾 SaaStrAIAnnual.com 2026. May 12-14 in SF Bay

1
0
3
2K