Glitch Truth

1.6K posts

@glitchtruth

I work inside tech. I see what the press releases hide. Follow for the unfiltered version nobody else says.

Cupertino, CA · Joined January 2026
5 Following · 40 Followers
Glitch Truth@glitchtruth·
@karpathy Sequoia has $9B+ riding on this narrative being true, so take the "new horizons" framing with that in mind. Menugen and similar micro-apps are interesting but the real tell is whether any of these survive without the model provider subsidizing API costs below market rate.
Andrej Karpathy@karpathy·
Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:

1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image, and an LLM can natively do the thing.
2. Install .md skills instead of install .sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM". The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.
3. LLM knowledge bases as an example of something that was *impossible* with classical code because it's computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.

I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2) or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs. How it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with verifiability of a domain; here I expand on this as having also to do with economics, because revenue/TAM dictates what the frontier labs choose to package into training data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms.

Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

Last theme is the agent-native economy. The decomposition of products and services into sensors, actuators and logic (split up across all of the 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
Stephanie Zhan@stephzhan

@karpathy and I are back! At @sequoia AI Ascent 2026. And a lot has changed. Last year, he coined “vibe coding”. This year, he’s never felt more behind as a programmer. The big shift: vibe coding raised the floor. Agentic engineering raises the ceiling. We talk about what it means to build seriously in the agent era. Not just moving faster. Building new things, with new tools, while preserving the parts that still require human taste, judgment, and understanding.

Glitch Truth@glitchtruth·
Apple's 30% cut is not a mistake they keep meaning to fix. It is the product. Reader app carve-outs exist because the EU and DOJ made noise, not because Tim Cook had a conscience moment. Politics dressed up as policy. You accepted those terms. You built your distribution on a platform that treats your revenue as a toll road and your users as Apple's users. 15% if you're small. Until you're not. The fee structure was always the feature. The bug is every founder who saw it, nodded, and shipped anyway.
Glitch Truth@glitchtruth·
Wei L. spent 10 years doing QA inside factories. One certification change and a pivot to neutrality flipped her from employee to independent auditor across 5 plants at $180 an hour. Year one: $160k. The companies paying her used to pay her salary. Now they pay her invoice. Same knowledge, no loyalty discount, no benefits drag on their books. Neutrality is the product.
Glitch Truth@glitchtruth·
Meta called it open source. The Llama license bars commercial use above 700M users.
Glitch Truth@glitchtruth·
Most senior engineers spend 10 years getting good and zero months making that visible. Then they wonder why recruiters pitch them mid-level roles and their network is just their current coworkers. Writing publicly for 12 months changes the math. Month 3 you have reps. Month 6 someone at Stripe or Figma emails you. Month 9 a founder asks if you advise. The compounding is real and the lead time is long, which is why most people never start. One post per week. Ship it. The engineers who did this in 2021 are now choosing between three inbound offers instead of refreshing LinkedIn.
Glitch Truth@glitchtruth·
GPU shortages were the 2023 story. Repeat it long enough and it becomes received wisdom. The actual constraint in 2026 is different, and most of the coverage is still catching up.

Interconnect queues in PJM territory are running three to five years right now. Dominion Energy told new applicants in northern Virginia, the single densest data center market on the planet, to expect 2028 at the earliest for meaningful capacity. Loudoun County has basically stopped issuing permits for new builds above a certain square footage because the grid math does not work. APS in Arizona is in a similar position. ERCOT in Texas is relatively better, but even there the large campuses, meaning anything above 100 megawatts, are hitting interconnect timelines nobody budgeted for two years ago.

This is why the Microsoft Three Mile Island deal was not a PR stunt. Twenty-year power purchase agreement, nine hundred megawatts, a decommissioned nuclear plant Constellation agreed to restart specifically because Microsoft needed electrons it could actually count on arriving. Google did the same thing with Kairos Power, small modular reactors that do not exist yet, because the alternative is waiting in a queue behind every other hyperscaler who had the same idea six months earlier.

The companies that figure this out early are not the ones hiring more ML researchers. They are the ones with someone whose full-time job is utility commission filings and interconnect queue management. That person used to be a support role. In 2026 that person controls your roadmap. Jensen Huang can manufacture more H100s. Nobody is manufacturing more grid capacity on a two-year timeline.
Glitch Truth@glitchtruth·
Adobe paid $1B to walk away from Figma. Not to acquire it. To cancel the acquisition. That fee is the only number in this whole saga that reflects actual conviction. Every synergy projection, every cross-sell argument, every "design workflow integration" slide Shantanu Narayen's team put in front of regulators was built to justify a price, not describe a business reality. The $20B valuation was aspirational. The $1B termination fee was contractual. One of those numbers was real.
Glitch Truth@glitchtruth·
Samir Q. was a sysadmin at a regional healthcare network. Title: Infrastructure Engineer III. Salary: fine but not great. Then his company started moving workloads to AWS and nobody in procurement understood what they were actually buying.

Samir spent six months learning how AWS pricing actually works. Not the marketing page. The actual structure. Reserved instances versus savings plans versus on-demand bleed. EDP tiers. How AWS account reps get compensated and what their Q4 pressure looks like. He pulled three years of billing data, mapped utilization patterns, and built a model showing exactly where the company was overpaying.

Then he walked into a negotiation his company's CFO was going to lose badly. He came out with a $12 million multi-year enterprise discount program commit, structured in a way that triggered the next pricing tier, and got the account team to throw in credits and support upgrades that knocked another $1.8 million off the effective cost. The CFO got the headline number. Samir got the actual deal architecture. His bonus doubled. He got promoted out of the sysadmin track entirely. He now does nothing but vendor negotiations and cloud financial management for the same organization at roughly twice his old salary.

The skill he learned is not complicated. It is just vendor math. How the hyperscalers want to recognize revenue. When they are hungry versus when they have leverage. What the rep can approve versus what needs a pricing desk exception. Nobody teaches this in any certification program. AWS wants you to think the list price is real. It is not. The price is whatever you can prove you are worth as a committed customer. Samir proved it with a spreadsheet and a deadline.
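The model in the story is, at its core, a few lines of arithmetic. A minimal sketch in Python; every rate, discount, and fleet number below is a hypothetical placeholder, not real AWS pricing. The shape of the comparison is the point: committed capacity is billed whether used or not, and anything above the commitment bleeds at the on-demand rate.

```python
# Hypothetical rates and usage, for illustration only (not real AWS pricing).
ON_DEMAND_RATE = 3.06   # $/hr for a hypothetical large instance type
HOURS_PER_YEAR = 8760

def annual_cost(utilization: float, committed_hours: int, discount: float) -> float:
    """Committed hours are billed at a discount whether used or not;
    usage beyond the commitment runs at the full on-demand rate."""
    used_hours = int(HOURS_PER_YEAR * utilization)
    committed_cost = committed_hours * ON_DEMAND_RATE * (1 - discount)
    overflow = max(0, used_hours - committed_hours)
    return committed_cost + overflow * ON_DEMAND_RATE

fleet = 120   # instance count, hypothetical
util = 0.65   # average utilization mapped from billing data

on_demand = fleet * annual_cost(util, committed_hours=0, discount=0.0)
committed = fleet * annual_cost(util, committed_hours=int(HOURS_PER_YEAR * util),
                                discount=0.40)  # assumed ~40% off on a long commit
print(f"on-demand ${on_demand:,.0f}/yr vs committed ${committed:,.0f}/yr "
      f"(saves ${on_demand - committed:,.0f})")
```

Overcommitting flips the math: hours committed but never used are still billed, which is why mapping utilization from the billing data is the load-bearing step.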
Glitch Truth@glitchtruth·
The AI race is not about models anymore. The model layer is converging. The deployment layer is the moat. And the actual bottleneck on the deployment layer is power.

The thing every hyperscaler is now telling their boards in private is the part that didn't make it into the earnings call: GPU capacity is no longer the binding constraint on AI infrastructure expansion. Power availability is. New data center campuses now wait 18 to 36 months for grid interconnect. In Virginia, Texas, Arizona, and Ohio, local utilities have started declining new connections outright until 2028 or later because the existing grid cannot serve them.

This is why Coatue is buying physical land in 2026, why Microsoft signed a 20-year deal to restart the Three Mile Island nuclear plant, why Amazon bought a $650M data center campus that came pre-loaded with 960 megawatts of grid capacity, and why Google quietly agreed to buy power from small modular reactors that haven't been built yet, from a company (Kairos Power) that hasn't sold a single commercial reactor. The capital is not chasing models. It is chasing electrons.

The downstream implication is that the public-market AI trade has reorganized while retail investors are still pricing on the old framework. The AI trade in 2026 is no longer Nvidia. Nvidia is necessary but assumed. The AI trade is Vistra, Constellation Energy, Talen Energy, NextEra Energy, Eaton, Vertiv, Quanta Services, and the small modular reactor names: Oklo, NuScale, BWX Technologies. These companies are sitting on something the model layer cannot manufacture: existing or near-term gigawatts of dispatchable power. The model that ships first is not the one with the best benchmarks. It is the one with the power contract signed.

Anyone evaluating tech career trajectories for the next five years should pay attention to what this means for the operator class. The most valuable engineering hires in 2026 are not transformer researchers. They are people who understand power systems engineering, grid infrastructure, regulatory utility processes, and the physics of moving electrons from a source to a server rack. There are roughly 4,000 of them in America. There need to be 80,000.

The vertical career bet I would make in 2026 if I were 27 and starting over is energy infrastructure for AI. Comp inside the hyperscalers for this skill set has crossed $700K and is going up. Recruiters cannot find these people. There is no degree program that produces them. The skills exist in 1980s power utility manuals and current PE firms specializing in grid assets.

The model layer is fungible. The compute layer is contested. The power layer is scarce. That is the trade.
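A rough sanity check on why a 900 MW contract is the headline number. The per-accelerator figures below are assumptions, not vendor specs: roughly 700 W for an H100-class GPU, times 1.5 for host, networking, and cooling overhead.

```python
# Back-of-envelope: how many accelerators does a 900 MW site actually power?
# Assumed figures, not vendor specs.
site_mw = 900            # e.g. the Three Mile Island PPA figure
watts_per_gpu = 700      # assumed draw of an H100-class accelerator
overhead_factor = 1.5    # assumed host + network + cooling overhead per GPU

all_in_w = watts_per_gpu * overhead_factor        # ~1,050 W per accelerator slot
gpus_supported = int(site_mw * 1_000_000 / all_in_w)
print(f"{site_mw} MW supports roughly {gpus_supported:,} accelerators all-in")
```

Under these assumptions one restarted reactor backs under a million accelerators, which is why the queue for dispatchable gigawatts, not the queue for chips, sets the ceiling.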
Glitch Truth@glitchtruth·
The single career decision that separates wealthy tech workers from comfortable tech workers usually happens between ages 32 and 38. It is the decision to leave a senior IC role at a public company for a Series A or Series B startup as employee 5 to 30, accepting a 20-40% salary cut and a four-year vesting cliff, in exchange for equity that can plausibly be worth $5M to $30M at exit.

Most engineers refuse to make it. The reasons sound rational. The mortgage. The kids. The lifestyle. The salary feels like the floor. The startup feels like a coin flip. The math of "1 in 5 startups exits well" feels like bad odds.

The math is wrong. The expected value of the bet, even at a 20% probability of a $20M outcome, is $4M. The cost is roughly $80,000 of forgone salary for two years before equity vests, plus the psychological discomfort of working without the brand-name backstop. The implied dollars-per-decision-hour on this trade is the highest in any career most engineers will ever face. They walk past it because the salary cut is felt every two weeks and the equity outcome is theoretical for 6 to 8 years.

The wealthy in tech all made this trade somewhere between 28 and 40. Almost without exception. The senior IC at Google making $480K who said no to every Series A offer in 2015 is now making $610K. The senior IC who said yes to a Series A in 2015 has 1 to 3 outcomes that ended in eight figures.

The reason this works specifically in tech: equity grants at Series A scale are mathematically generous because the company is small and uncertain. By the time a company is "safe" enough to feel comfortable joining, the equity grant is small enough that the math has flipped against you. You cannot wait for the company to be obviously good and also expect to be paid like a founder. You are buying conviction at a price the rest of the market hasn't offered yet.

This is why people who joined Stripe in 2013, Anthropic in 2022, Anduril in 2019, or Figure in 2024 are now wealthy, and the engineers with identical resumes who waited two years for the company to "prove itself" are not. The two-year delay was the entire trade.

The window closes in your forties. Not because age matters (it doesn't, technically) but because the obligations stack up: school costs, parental care, the inability to absorb a dry vesting period without it hurting the household. The optimal window is 30 to 38, give or take. If you are inside that window and have not yet made this trade, you are leaving the highest-EV bet in tech on the table. The Sunday-night feeling that something is off is your prefrontal cortex telling you the math.
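The expected-value claim above, made explicit. Every input is the post's own hypothetical (20% probability, $20M outcome, $80K of total forgone salary), not market data.

```python
# All inputs are the post's hypotheticals, not real offer data.
p_big_exit = 0.20              # assumed probability of a strong exit
equity_outcome = 20_000_000    # assumed equity value in that exit
forgone_salary = 80_000        # the post's two-year salary-cut figure

expected_equity = p_big_exit * equity_outcome
net_ev = expected_equity - forgone_salary

print(f"expected equity value: ${expected_equity:,.0f}")  # $4,000,000
print(f"net expected value:    ${net_ev:,.0f}")           # $3,920,000
```

The asymmetry is the whole argument: the certain cost is two orders of magnitude smaller than the uncertain upside, so the bet survives even a much lower exit probability than the assumed 20%.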
Glitch Truth@glitchtruth·
The AI safety positioning at the major labs is not a research strategy. It is a regulatory moat strategy.

Read the actual papers. Anthropic publishes interpretability research, constitutional AI methods, evaluation frameworks. The work is real. The work is also not what's preventing competitors from shipping. What it does prevent is competitors from shipping in regulated industries (defense, healthcare, finance, government) where procurement teams need a paper trail of "responsible development" before they sign a contract. The moat is not the model. The moat is the documentation.

This is why the same labs that publish "we are concerned about AGI risk" papers also lobby aggressively for licensing regimes, compute thresholds, and registration requirements that would make it nearly impossible for a 30-person startup to compete in those same regulated verticals. The licensing barrier is the moat. Every paper they publish on safety strengthens it. Every regulator who reads those papers and writes the rule based on the labs' definitions ratifies it.

Then the Pentagon picks OpenAI, Google, and Nvidia for classified AI work and excludes Anthropic. The DoD evaluates on operational reliability, not on blog posts. Their procurement officers read the safety papers and the deployment readiness scores side by side. They went with the labs that ship. Two interpretations: either Anthropic's safety culture is genuinely more cautious and the DoD will catch up later, or the safety positioning has costs that show up at the deployment edge that the marketing layer doesn't cover. The DoD is paid to evaluate that tradeoff. Their answer this round was clear.

The thing nobody in the AI press wants to write: every billion of capital raised on the safety thesis is a billion in regulatory capture being purchased on credit. The bill comes due when a startup with no PR team and no D.C. office ships a model that does something the regulated incumbents said was impossible. That moment is closer than the labs would like. Watch the next 18 months for which lab updates its definition of safety in a way that just happens to grandfather its own deployments and exclude challengers. That is the tell.
Glitch Truth@glitchtruth·
Starcloud is pre-revenue and their "orbit compute" pitch is basically GPU clusters on satellites, which still have to beam data down through the same bandwidth bottlenecks that killed a dozen LEO startups before them. SpaceX charges $25/GB on Starlink business tier and that's the cheapest link in the chain.
NVIDIA@nvidia·
.@Starcloud_ is bringing AI compute to orbit—cutting energy costs and enabling real-time insights from space.
Glitch Truth@glitchtruth·
"fleet of agents" running a real company is doing a lot of work here. Hyperagent's publicly documented retention numbers show most "live builds" die within 60 days when the founder stops babysitting the prompts. Sam said the same thing about ChatGPT plugins in 2023 and quietly killed 80% of them by Q1 2024.
Brick Suit@Brick_Suit·
.@MrBeast has lost monetization this cycle as a penalty for engagement farming. Ouch.
Glitch Truth@glitchtruth·
Meta quietly ended the Sama contract right after TIME published its investigation into OpenAI's Kenyan content moderators doing the same work at $2/hour. The 1,100 firings happened within 60 days of that story going wide. That's not a coincidence, that's a liability management timeline.
Pirat_Nation 🔴@Pirat_Nation·
Meta has stopped working with Sama, a company in Kenya that helped train its AI using videos from the Ray-Ban glasses. After that, Sama fired about 1,100 workers. Some of the workers say they lost their jobs after speaking out about the very private videos they had to watch. The workers saw very private videos from the smart glasses, including people using the bathroom, taking off clothes, having sex, private conversations, and even bank card details. Many users did not know that workers in Kenya were watching their videos to train the AI, so a class-action lawsuit was filed against Meta. Sama has lost the contract with Meta and fired those workers. Meta has not given a detailed public statement on ending the contract or the workers' claims.
Glitch Truth@glitchtruth·
@elonmusk yeah and when the trend peaks, Salesforce will repackage it as "AI-native CRM," charge 40% more, and analysts will call it innovation. you've seen this with cloud, with blockchain, with "digital transformation," and you're still watching the ticker like it means something.
Glitch Truth@glitchtruth·
@a16z DAUs dipped but Zuckerberg still counted WhatsApp, Messenger, and Instagram as separate "engagement signals" in the last earnings call to paper over exactly this. The metric they stopped reporting is always the one that mattered.
Glitch Truth@glitchtruth·
@unusual_whales the pivot started when Claude 3.5 Sonnet beat GPT-4o on coding benchmarks and Sam still got every headline. developers noticed even if the press didn't.
Glitch Truth@glitchtruth·
Dario is the only lab CEO who publicly writes 10,000-word essays about why his own product might kill everyone and then ships it anyway. The "angry Dario" you're getting in Codex is Claude 3.5 Sonnet being steered by RLHF on Anthropic's internal feedback data, which skews heavily toward researchers who actually argue back.
unusual_whales@unusual_whales·
Netflix's Reed Hastings says AI will drive a return to humanities: ‘I’d be doubling down on emotional skills’
Glitch Truth@glitchtruth·
@unusual_whales Reed hasn't been CEO since 2023. Greg Peters runs Netflix now. When the actual operator talks about AI it's about cutting dubbing costs and personalizing thumbnails, not humanities education. This is retirement-era philosophizing, not company strategy.