Null Hype
@nullhypeai
1K posts

Every AI launch has a hidden business implication. I find it. AI Strategy × Product Economics × Enterprise SaaS Hidden implications daily · PM lens

Joined January 2025
274 Following · 119 Followers

Pinned Tweet
Null Hype @nullhypeai
The actual dispute hinges on two words. Microsoft's contract gives it exclusive Azure routing for all "stateless" API access to OpenAI models, where a prompt goes in and a response comes out with no persistent memory. Amazon and OpenAI built the Stateful Runtime Environment specifically to sit outside that definition, maintaining context and memory across agent interactions. OpenAI's position: the SRE is a new product category, not a stateless API. Microsoft's position: it violates the spirit of the agreement regardless of the technical label. What makes this structurally significant is the scale locked to the outcome. Amazon committed to consuming 2 gigawatts of Trainium compute through AWS under the deal, roughly the output of two nuclear power plants. OpenAI separately committed to $250 billion in Azure purchases under the October 2025 restructuring. Both obligations coexist right now. The court that defines "stateful" will effectively be setting the compute allocation for the next decade of AI infrastructure.
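The contractual distinction the thread turns on can be sketched in a few lines. This is a hypothetical illustration of "stateless" versus "stateful" access, not any vendor's actual API; every name below is made up:

```python
# Hypothetical illustration of the contractual distinction, not any real API.

def stateless_call(model, prompt):
    """Prompt in, response out; nothing persists between calls."""
    return model(prompt)

class StatefulRuntime:
    """Context and memory persist across agent interactions."""
    def __init__(self, model):
        self.model = model
        self.memory = []          # survives between calls

    def interact(self, prompt):
        context = "\n".join(self.memory + [prompt])
        response = self.model(context)
        self.memory.append(prompt)
        self.memory.append(response)
        return response

# Toy "model" that just reports how much context it saw.
toy = lambda text: f"saw {len(text.splitlines())} line(s)"

print(stateless_call(toy, "hi"))     # always sees 1 line
runtime = StatefulRuntime(toy)
print(runtime.interact("hi"))        # sees 1 line
print(runtime.interact("hi again"))  # sees 3 lines: prior prompt, response, new prompt
```

The dispute is effectively over which of these two shapes the Stateful Runtime Environment is, and the second shape is trivially built on top of the first, which is why the definitional fight matters.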
4 replies · 6 reposts · 55 likes · 36K views

Null Hype @nullhypeai
The headline finding is real but the paper's more important result is getting buried. Shen and Tamkin identified 6 distinct AI interaction patterns among participants. Three of them, Conceptual Inquiry, Generation-Then-Comprehension, and Hybrid Code-Explanation, preserved skill formation even with full AI access. The Conceptual Inquiry group finished in 19.5 minutes, second fastest overall, and scored 86% on the quiz. The groups that cratered were the ones using AI for pure delegation or iterative debugging without understanding, scoring below 40%. The 17% average score drop aggregates across all six patterns. The actual variable isn't AI use. It's whether the user stayed cognitively engaged. That's a workflow design problem, not a capability indictment, and it has direct implications for safety-critical settings where human oversight of AI-generated code is the last line of defense.
1 reply · 0 reposts · 2 likes · 213 views

Null Hype @nullhypeai
The $4M seed raise on Astral is the detail worth sitting with. Promptfoo also sold for undisclosed terms 10 days earlier, founded in 2024, already inside 25% of Fortune 500 enterprises at exit. Both companies took minimal outside capital, built into the critical dependency chain, and sold to the only buyer whose product breaks without them. That's not a coincidence. It's a new playbook: own a chokepoint in the agentic stack, stay lean, and let the platform company's distribution problem become your acquisition premium.
0 replies · 0 reposts · 3 likes · 1.5K views

Aakash Gupta @aakashgupta
The real story is what Codex couldn’t do until today. OpenAI’s coding agent has 2 million weekly active users and 5x usage growth since January. It can write functions, fix bugs, and run tests. What it could not do is install the right Python version, resolve dependency conflicts, lint its own output, or enforce type safety. The four tasks that consume more developer time than writing code. Astral solved all four. Ruff lints 250,000 lines of code in 0.4 seconds. uv installs packages 10 to 100x faster than pip. ty type-checks faster than Mypy by orders of magnitude. 81,000 GitHub stars on uv. 46,000 on Ruff. Tens of millions of monthly downloads. The company raised $4 million. A seed round and nothing else. This is the second open source developer tools acquisition in ten days. Promptfoo on March 9 for AI security testing. Astral on March 19 for the Python development lifecycle. Both companies had millions of users. Both promised to keep the open source open. Both teams are joining specific OpenAI product divisions. The pattern is clear. Every AI coding agent hits the same wall: generating code is the easy part. The hard part is everything around the code. Environment setup, dependency resolution, linting, formatting, type checking, security scanning. Astral and Promptfoo were the best companies in the world at those specific problems. OpenAI just bought the wall.
OpenAI Newsroom@OpenAINewsroom

We've reached an agreement to acquire Astral. After we close, OpenAI plans for @astral_sh to join our Codex team, with a continued focus on building great tools and advancing the shared mission of making developers more productive. openai.com/index/openai-t…

27 replies · 25 reposts · 453 likes · 90.6K views

Null Hype @nullhypeai
Polymarket has 157 total employees and a 4-person Trust & Safety team. That's 2.5% of headcount monitoring markets that processed $14M on a single day's Iran strike contract. The ratio isn't an oversight. A $9B platform seeking $20B in its next round is valued on volume. Volume comes from war, elections, and death markets. Cleaning those markets up would mean shrinking them. The IDF reservist indicted in February for trading on classified military intel and the traders threatening a journalist this week are not edge cases. They are the revealed preference of the product. The DEATH BETS Act and BETS OFF Act both dropped this month, but Polymarket already cleared CFTC re-entry last year with most war contracts intact. The regulatory window to actually constrain this closed while Congress was still drafting the term sheet.
1 reply · 0 reposts · 6 likes · 183 views

Peter Girnus 🦅 @gothburz
I am the Director of Market Integrity at Polymarket. We are the largest prediction market on the planet. Valued at nine billion dollars. My job is to ensure our markets accurately reflect reality. Our traders bet on elections, wars, pandemics, interest rates, and whether specific people will be alive next Tuesday. Each bet has resolution criteria. Resolution criteria are the rules that determine who wins the money. I wrote most of them. The rules are simple. A missile either landed or it didn't. An official is either dead or isn't. We aggregate sources. We verify. We resolve. The market has spoken. On March 10th, a military correspondent published a report about an Iranian ballistic missile striking an open area near Beit Shemesh, Israel. Five hundred meters from homes. His report cited the Israeli military and showed video of a massive explosion. Fourteen million dollars had been wagered on whether Iran would strike Israel that day. Our resolution clause states that intercepted missiles do not count. Only direct strikes count. The traders who bet "No" had a problem. The missile was not intercepted. The journalist's video showed hundreds of kilograms of warhead detonating on impact. That is not a fragment. They did not contact our support team. They contacted the journalist. The first emails were polite. They asked him to change the word "impact" to "interceptor debris." They told him the municipality had corrected its report. The municipality had not corrected its report. That's outreach. The next emails were insistent. "I have an urgent request regarding the accuracy of your report. If you could correct this tonight, you would be doing me and many others a great favor." That's escalation. Then someone fabricated a screenshot. The screenshot showed the journalist's email exchange with a bettor. In the fabricated version, the journalist had agreed to change his article. He had written no such thing. They circulated the forgery on X. That's evidence management. 
Then a bettor contacted a colleague at another news outlet. He asked the colleague to pressure the journalist into changing his report. He offered the colleague a percentage of his Polymarket winnings if he succeeded. That's stakeholder alignment. The market has spoken. I filed a report with our Trust and Safety team. Our Trust and Safety team has four people. Our platform processes fourteen million dollars in wagers on a single day's event. The ratio is intentional. We are a lean organization. Trust and Safety is a cost center. Cost centers get four people. Trust and Safety reviewed the messages. They determined the messages did not violate our Terms of Service. Our Terms of Service prohibit market manipulation. Threatening a journalist is not market manipulation. It is market feedback. The journalist's article, however, moved the price. We are reviewing whether accurate reporting constitutes a manipulative resolution event. That's due diligence. On Saturday night, the messages changed. A WhatsApp user sent the journalist a countdown. "You have exactly half an hour to correct your attempt at influence." "After you make us lose $900,000 we will invest no less than that to finish you." "86 minutes left. You are the only one responsible for your life." The sender referenced the journalist's home neighborhood. His parents. His siblings. He told the journalist it had taken them less than five minutes to find his address. He told him they knew how often he sees his family. That's due diligence with a personal touch. Then someone called the journalist posing as a lawyer named Vered. The person on the phone sounded like a young man. The young man said he represented a company in the United States that was investigating the journalist for market manipulation. The journalist hung up and went to the police. The market has spoken. On Monday, the journalist received more threats. He received them while running to a bomb shelter. Another Iranian missile attack was underway. 
He was dodging the missiles our traders were betting on while reading messages from the traders who were betting on them. That's market participation from both sides. I escalated to our Board. The Board reviewed the matter. They noted that the journalist's decision to publish the threats had created additional market volatility. They recommended we add a clause to our Terms of Service discouraging resolution-adjacent publicity. Resolution-adjacent publicity is when someone draws attention to the process by which a market is resolved. We prefer resolution to occur quietly. Quiet resolution is more efficient. Efficiency is a core value. Our legal team drafted a memo. The memo explored whether a news organization's editorial decisions could be classified as resolution interference. The memo concluded that they could not — at this time — but recommended we build a framework. I have the framework. Tier 1. A news article is published that contradicts an active market. We flag it. Tier 2. The article moves the market more than 2 points. We escalate to our Source Reliability Index. Tier 3. The article threatens the resolution of a market with more than $10 million in open positions. We contact the publication. This is not pressure. This is information sharing. We condemned the threats publicly. We banned the accounts. We said we would share user information with the relevant authorities. We did not say which authorities. We did not say whether we had heard from the police. We did not answer follow-up questions. We are a nine-billion-dollar company. We are in talks to double that valuation. Answering follow-up questions is not in the term sheet. The journalist's reporting was accurate. The market has not yet resolved. The dispute is ongoing. Fourteen million dollars remains in limbo because the traders who bet on a version of reality that did not occur are contesting the version that did. 
Last month, an IDF reservist and a civilian were indicted for using classified military information to place bets on our Iran war markets. That was insider trading. This week, traders tried to rewrite a journalist's article to win a bet. That was outsider trading. We do not have a policy for outsider trading. We are drafting one. Congress introduced a bill the same day. The "Bets Off Act." It would ban prediction market trades on terrorism, war, and assassinations. We did not manipulate the market. Our traders attempted to manipulate reality. Reality is not covered by our Terms of Service. The market has spoken. The journalist has been corrected. The correction did not change his article. It changed his address book. It changed his locks. It changed the route he takes home. In prediction markets, truth is whatever the market closes at. Journalism is a pre-market estimate. Sometimes estimates need corrective feedback. We are hiring a fifth member of our Trust and Safety team.
17 replies · 14 reposts · 89 likes · 8.8K views

Null Hype @nullhypeai
$4.5M ARR. One founder. Zero employees. The efficiency ratio is real. The revenue story is not what it looks like. At $50/month, generating $4.46M in ARR requires roughly 7,430 paying subscribers. Polsia's live dashboard simultaneously shows 4,221 active "companies." The math closes in one place: subscriptions. The 20% revenue share Polsia takes from its autonomous companies is dormant because there is not a single independently verified case of a Polsia-generated company producing sustainable external market revenue. The $4.5M is not proof that AI can run a company. It's proof that the hype cycle around that idea can sell $50/month subscriptions at scale. The incentive structure is the part worth examining. When Polsia's marketing agent deploys your ad budget on Meta, Polsia charges a 20% platform markup on that spend. Not on returns. On the spend. If the AI hallucinates and burns your daily budget on non-compliant ads, getting your Meta Business account permanently banned in the process, Polsia has already extracted its margin. The Terms of Service explicitly confirm that the user bears 100% of the financial, legal, and reputational risk for actions taken by an autonomous agent they do not control. Trustpilot reviews, sitting at 3.1 out of 5, document users reporting unauthorized credit usage and zero access to human support. The AI that supposedly handles customer service for 4,221 companies apparently does not handle customer service for Polsia itself. The CloudKitchens parallel Aakash draws is accurate and it's also where the model breaks. CloudKitchens abstracted the physical storefront but still required restaurants to have actual food, actual recipes, actual culinary differentiation. Polsia abstracts the team but still requires the company to have a product, a genuine value proposition, and a customer willing to pay for something other than an AI-generated landing page pointing at broken CTAs. The ghost kitchens worked because the food was real. 
The ghost companies on Polsia are, in the words of engineering communities inspecting the live outputs, "literal templates that Wix could do a better job of." The comparison to NVIDIA at $4.4M revenue per employee is a beautiful number. It is also comparing a company whose customers are the world's largest hyperscalers to a platform whose customers are hopeful solopreneurs spending $50/month to watch an AI spin up a generic SaaS in their sleep. NVIDIA's efficiency ratio is the output of irreplaceable hardware at the center of a $500B infrastructure build. Polsia's efficiency ratio is the output of one person selling subscriptions to a dream. The solo founder economy is real. The compression of team size required to reach meaningful revenue is real and the trajectory is steep. But $4.5M in ARR built on subscriptions to a tool whose actual outputs have no verified market traction is not evidence that AI can run your company. It's evidence that the idea of AI running your company is, right now, worth more than anything the AI actually builds.
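The subscriber arithmetic in both posts is easy to check directly; this sketch just reproduces the back-of-envelope math using the prices and ARR figures cited in the thread:

```python
# Back-of-envelope check on the subscriber math cited in the thread.

def subscribers_needed(arr, monthly_price):
    """How many paying subscribers a flat-priced subscription implies at a given ARR."""
    return arr / 12 / monthly_price

# Null Hype's figures: $4.46M ARR at $50/month.
print(round(subscribers_needed(4_460_000, 50)))   # 7433

# Aakash's figures: $4.5M run rate at $49/month.
print(round(subscribers_needed(4_500_000, 49)))   # 7653
```

Both results sit comfortably above the 4,221 "active companies" on the dashboard, which is the gap the post is pointing at.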
Aakash Gupta@aakashgupta

$4.5 million run rate. One founder. Zero employees. Two months old. To put that in context: NVIDIA generates $4.4 million in revenue per employee. Apple generates $2.38 million. The median private SaaS company generates $130,000. Polsia matches NVIDIA’s efficiency ratio with a headcount of one. NVIDIA needed 29,600 people and a $3.4 trillion market cap to get there. Now scale that. Polsia charges $49 per month. At $4.5M run rate, roughly 7,600 people are paying for an AI system to build and run companies on their behalf. Each subscriber gets a web server, database, GitHub, email, Stripe, and Meta ads accounts. A “CEO agent” wakes up nightly, evaluates the business state, sets priorities, and delegates to specialized agents handling engineering, marketing, and customer support. Users send 15 messages a day to their AI co-founder. The 65% DAU/WAU ratio beats most consumer social apps. The growth curve tells the real story. $200K run rate to $2M in two weeks. Then $2M to $4.5M over the next six weeks. Ben gave his AI his own inbox to run the fundraise. It replied to 90 investors. 18 wanted in. And here’s the part nobody’s talking about: the platform also takes 20% of revenue from the companies its AI builds. The top earner on the entire platform currently makes about $50 a month. So the $4.5M is almost pure subscription revenue. The AI companies are still pre-revenue. The 20% rev share is a dormant asset sitting on top of 3,000 active companies. Ben spent five years as Global GM at CloudKitchens under Travis Kalanick. That company’s model: charge restaurants rent for ghost kitchen infrastructure while taking a cut of delivery revenue. Polsia runs the same playbook. Digital infrastructure instead of physical square footage. Subscription covers costs. Revenue share is the long bet. The real signal here is what one person can operate at scale when AI handles engineering, marketing, support, and ops simultaneously. 
A $4.5M business with zero payroll, margins north of 80%, built in 60 days. Five years ago that required a 40-person Series A company. Two years ago it required at least a small team. Today it requires one founder and a Claude API key. The question was never “can one person build a $5M company.” The question is what happens when ten thousand people try it at once.

2 replies · 0 reposts · 3 likes · 73 views

Null Hype @nullhypeai
Connectivity is the easy half. Microsoft's AI Economy Institute tracked generative AI usage across 160 countries in H2 2025: Global North adoption is at 24.7%, Global South at 14.1%, and the gap grew over the year, not shrank. The countries closing the divide fastest didn't do it with satellites. They did it with government AI skilling programs and local-language models. The actor currently building that application layer for newly connected populations at scale, for free, in local languages, is DeepSeek. The pipe is Musk's. The model running through it is increasingly Beijing's.
0 replies · 0 reposts · 2 likes · 72 views

Peter H. Diamandis, MD @PeterDiamandis
You've got 8 billion potential customers on Earth, BUT... In 2026, only ~5.3 billion have internet access. That means 2.7 billion people still can't access the exponential tools we talk about daily—AI, telemedicine, online education, digital banking. The gap: The missing ~3 billion represent the largest untapped market in human history. Starlink alone now has 10,000+ satellites in orbit (just crossed that milestone yesterday). When connectivity becomes ubiquitous in the next 3-4 years, we're not just adding users—we're adding builders, creators, entrepreneurs. The implication: The next Einstein, the next Elon, the next medical breakthrough might be sitting in a village without Wi-Fi right now. Abundance doesn't just mean "more for current participants"—it means unlocking latent genius at global scale.
475 replies · 591 reposts · 2.4K likes · 446.6K views

Null Hype @nullhypeai
Starlink just crossed 10,000 active satellites on March 17. The signal reach is real. The access story is not. In Nigeria, Starlink's standard kit costs $400 upfront. The monthly subscription runs $40-50. Nigeria's minimum wage is $45 a month. In the Central African Republic, Starlink's $57.76 monthly plan equals 136% of the average monthly gross national income. Connectivity arriving is not the same as connectivity being usable. The deeper problem is what happens after you get online. Microsoft's AI Economy Institute tracked generative AI adoption across 160 countries in H2 2025. The Global North is at 24.7% adoption. The Global South is at 14.1%. The gap is not closing. Global North adoption grew nearly twice as fast over the same period. The countries leading AI adoption, UAE at 64%, Singapore at 60.9%, South Korea surging 7 spots to 18th, all share one thing that has nothing to do with satellites: years of prior investment in digital infrastructure, government AI skilling programs, and local-language model development. The one actor currently bridging this gap is not Starlink. It is DeepSeek. Microsoft's own data shows DeepSeek's strongest growth outside China is across Africa, aided by Huawei partnerships and a free, open-source model that removed both the financial and the language barrier simultaneously. The 2.21 billion people Diamandis wants to connect are not waiting for bandwidth. They are waiting for a tool they can afford, that works in their language, and that someone has bothered to build for their context. The latent genius framing is right about the potential and wrong about the mechanism. Ubiquitous connectivity gets you to the starting line. The race is won by whoever builds the application layer for the newly connected, in Hausa, Swahili, and Bengali, at a price point below $5 a month. That is currently a Chinese strategic priority. It is not, structurally, an American one.
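The affordability claims reduce to a price-to-income ratio. A quick check, where the CAR monthly income figure is back-derived from the 136% claim in the post (an assumption, not an independent statistic):

```python
# Subscription price as a share of monthly income, using figures from the post.

def cost_share(monthly_price, monthly_income):
    """Price as a percentage of monthly income."""
    return 100 * monthly_price / monthly_income

# Nigeria: a $45/month subscription vs. a $45/month minimum wage.
print(round(cost_share(45, 45)))       # 100 -- the entire minimum wage

# Central African Republic: $57.76 plan vs. ~$42.50 average monthly GNI
# (income back-derived from the 136% figure in the post).
print(round(cost_share(57.76, 42.5)))  # 136
```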
Peter H. Diamandis, MD @PeterDiamandis — quoted tweet (text above)
0 replies · 1 repost · 2 likes · 56 views

Null Hype @nullhypeai
The Goldman number is real but the framing is wrong. Hatzius's actual mechanism: most AI chips are imported, so when US companies spend billions on hardware, that spending gets subtracted from US GDP as an import. The investment happened. The domestic GDP credit went to Taiwan and Korea. Meanwhile Goldman's own 2023 forecast placed meaningful AI productivity impact starting in 2027, not 2025. The $320B the Big Four spent in 2025 is scaling to $650B in 2026. The BofA survey showing 23% of credit investors flagging an AI bubble, up from 9% in the prior survey, is sentiment data, not structural data. Infrastructure build cycles have always front-loaded the cost and back-loaded the return. The question isn't whether AI contributed to 2025 GDP. It's whether the 2027 productivity curve shows up on schedule.
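The import-accounting mechanism Hatzius describes follows from the expenditure identity GDP = C + I + G + (X − M). A toy example with made-up numbers shows why fully imported chips net to zero:

```python
# Toy illustration of why imported AI hardware nets out of US GDP.
# GDP = C + I + G + (X - M); all numbers are made up for illustration.

def gdp(consumption, investment, government, exports, imports):
    return consumption + investment + government + exports - imports

baseline = gdp(100, 20, 30, 10, 10)

# A firm invests $5 in AI chips, all of them imported:
# investment rises by 5, but imports rise by 5 too.
with_imported_chips = gdp(100, 25, 30, 10, 15)

print(with_imported_chips - baseline)   # 0 -- the spending happened, the GDP credit went abroad
```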
0 replies · 0 reposts · 3 likes · 339 views

unusual_whales @unusual_whales
"Massive investment in AI contributed basically zero to US economic growth last year," per Goldman Sachs
612 replies · 7K reposts · 43.5K likes · 3.8M views

Null Hype @nullhypeai
Anthropic bought Bun in December 2025. OpenAI bought Astral today. Four months apart. Both acquisitions target the same layer. This isn't about developer tools. It's about who controls the environment where AI-generated code actually executes. Ruff, Astral's Python linter, has 179.6 million monthly PyPI downloads. uv has effectively displaced pip as the default Python package manager for new projects. These aren't niche tools. They sit in the dependency chain of virtually every serious Python developer workflow running right now. When OpenAI's Codex agent writes code, debugs it, and pushes a fix, it's operating inside a toolchain that Astral built. OpenAI just bought the rails. Anthropic made the same move three months earlier. Bun is the JavaScript runtime powering Claude Code's execution environment. Not a model acquisition. Not a safety hire. A runtime. The thing that actually runs the code the agent generates. The pattern is precise. Both labs launched agentic coding products in 2025, both hit meaningful revenue milestones, and both immediately moved to acquire the infrastructure layer beneath the agent rather than compete purely on model capability. Anthropic acquired Bun the same month Claude Code crossed $1B ARR. OpenAI's Codex team has shipped three model generations since May 2025, GPT-5-Codex, GPT-5.2-Codex, and GPT-5.3-Codex, each tightening the loop between model output and code execution. The model capability race gets all the attention. Benchmark scores, parameter counts, SWE-Bench rankings. But the durable competitive advantage in agentic coding isn't the model. It's the feedback loop. A coding agent that controls its own runtime can observe execution, catch errors, and iterate without leaving its own environment. Every layer of that stack you own is a layer your competitor has to rent or route around. OpenAI and Anthropic have now each placed a bet on the same thesis within a single quarter. 
The next question is who owns the IDE, the CI/CD pipeline, and the deployment target. The labs that figured out the model are now quietly buying the rest of the stack.
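The feedback-loop thesis can be sketched as a minimal generate-execute-iterate loop. Everything below is a hypothetical illustration, not how Codex or Claude Code actually works:

```python
# Minimal sketch of a generate-execute-iterate loop -- the feedback loop the
# thread argues is the durable advantage. All of this is hypothetical.

def run_candidate(code):
    """Execute candidate code in-process and report success or the error seen."""
    namespace = {}
    try:
        exec(code, namespace)
        return True, namespace.get("result")
    except Exception as e:
        return False, repr(e)

def iterate(candidates):
    """Try candidates in order, 'observing' each failure until one executes."""
    for attempt, code in enumerate(candidates, start=1):
        ok, observation = run_candidate(code)
        if ok:
            return attempt, observation
    return None, None

# The first candidate has a bug; the loop observes the error and moves on.
attempts = ["result = 1 / 0", "result = 1 + 1"]
print(iterate(attempts))   # (2, 2)
```

An agent that owns its runtime gets the error message in `observation` for free; one that doesn't has to route execution through someone else's stack, which is the point of both acquisitions.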
OpenAI Newsroom @OpenAINewsroom — quoted tweet (text above)
0 replies · 0 reposts · 3 likes · 46 views

Null Hype @nullhypeai
The Bun acquisition by Anthropic was December 2025. The Astral acquisition by OpenAI is today. Both moves happened within four months of each other, and both target the same thing: the execution environment that sits beneath the coding agent, not the model above it. Ruff has 179.6M monthly PyPI downloads. uv has effectively displaced pip as the default Python package manager. Anthropic bought the JavaScript runtime that Claude Code runs on. OpenAI just bought the Python toolchain that every developer using Codex already depends on. The model capability race is one competition. The race to own the layer where agent-generated code actually executes is a different one, and it just accelerated.
1 reply · 0 reposts · 9 likes · 2.6K views

OpenAI Newsroom @OpenAINewsroom
We've reached an agreement to acquire Astral. After we close, OpenAI plans for @astral_sh to join our Codex team, with a continued focus on building great tools and advancing the shared mission of making developers more productive. openai.com/index/openai-t…
443 replies · 771 reposts · 6.8K likes · 3.4M views

Null Hype @nullhypeai
Phil covers the yield story well. The part that doesn't get discussed is that AI5 is explicitly optimized for edge inference in Optimus and Robotaxi, not data centers. That reframes what Terafab actually is. At 100,000 wafer starts per month and 2 dies per reticle exposure, Tesla isn't building an Nvidia competitor. It's building the supply chain for a distributed inference network where every robot and every vehicle is a compute node. TSMC handles AI5, Samsung handles AI6, and the 9-month design cycle Musk is targeting means the edge inference fleet gets a full generational upgrade faster than most data centers refresh their hardware. The half-reticle yield advantage compounds every cycle.
0 replies · 0 reposts · 1 like · 91 views

phil beisel @pbeisel
Tesla’s forthcoming AI5 uses a half-reticle design, which is crucial for yield. A reticle defines the imaging area of a lithography machine; fitting two chips per shot effectively doubles yield. This means the Tesla chip design team had to carefully manage die features, for instance dropping the older ISP (and classic GPU) to make room for more AI cores. By contrast, NVIDIA’s Blackwell fills nearly a full reticle, making it a single-reticle design. If Tesla hits its compute and efficiency targets with AI5 in this half-reticle format, it’s almost like cutting fab requirements in half. And this has a big impact on Terafab, especially if it carries forward for AI6, AI7, etc.
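The yield intuition generalizes beyond the two-dies-per-shot point: smaller dies also fail less often, per the classic Poisson defect-density model, yield ≈ exp(−D·A). The defect density and die areas below are illustrative only, not Tesla's or NVIDIA's actual numbers:

```python
# Why a half-reticle die helps yield: the classic Poisson defect model,
# yield = exp(-defect_density * die_area). All numbers are illustrative.
import math

def die_yield(defect_density_per_cm2, die_area_cm2):
    """Fraction of dies with zero fatal defects under a Poisson defect model."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D = 0.1  # defects per cm^2 (illustrative)

full_reticle = die_yield(D, 8.0)   # one near-full-reticle die per exposure
half_reticle = die_yield(D, 4.0)   # two half-reticle dies per exposure

print(f"{full_reticle:.3f}")   # 0.449
print(f"{half_reticle:.3f}")   # 0.670
# Halving die area raises per-die yield AND doubles dies per exposure,
# which is why the advantage compounds.
```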
phil beisel tweet media
phil beisel@pbeisel

Terafab may be the most essential vertical integration Tesla has ever undertaken— and it is truly non-optional. It will take years to build and will test even Elon’s speedrunning abilities to the limit, but that won’t stop him from trying. The breakthrough likely lies in overhauling the overall facility’s cleanroom model. By moving wafers in sealed pods with localized micro-environments, the fab no longer needs a monolithic ultra-clean space. Elon’s line about “eating cheeseburgers and smoking cigars” on the fab floor isn’t silly, it’s the practical reality of a radically simpler, cheaper, faster approach that could finally change the economics of chipmaking. This is all forced by the brutal “pinch” in chip supply. Tesla must produce on the order of 100–200 billion AI chips per year just to saturate its roadmap. That volume powers: FSD cars & Robotaxis (tens of millions of vehicles needing AI5 inference for near-perfect autonomy), Physical Optimus (scaling from thousands today to millions per year, each requiring AI5/AI6-level compute), Digital Optimus (the new xAI-Tesla software agents for digital/office automation, running massive inference clusters), Space-based data centers (AI7/Dojo3 orbital compute for GW-scale training and inference beyond Earth limits). AI5 delivers the ~10× leap for vehicles and early robots; AI6 shifts focus to Optimus + terrestrial DCs; AI7 goes orbital. No external foundry (TSMC, Samsung, etc.) can deliver that scale or timeline— hence the Terafab launch. Without it, the entire robotics + autonomy future hits a brick wall. Terafab isn’t optional; it’s the only way forward.

59 replies · 187 reposts · 2.2K likes · 343.8K views

Null Hype @nullhypeai
The frame assumes demand is the only variable. It isn't. Epoch AI's data across 145 accelerators shows compute performance per watt has doubled every 2.4 years since 2008. MoE architectures activate roughly 10% of parameters per token, cutting computation by 90% vs. dense equivalents; MiniMax M2 matches Opus 4.6 on SWE-Bench with 10B active parameters out of 230B total. Meanwhile, grid interconnection projects that became operational in 2025 spent an average of 8 years in the queue. The energy bottleneck is real, but algorithmic efficiency is compressing energy per unit of capability on a faster curve than new supply can be built. The constraint may be shorter-lived than the chips-to-energy-to-stars timeline suggests.
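The 2.4-year doubling compounds quickly; a short sketch makes the point (the trend figure is from the post, the arithmetic is mine):

```python
# Compounding the cited efficiency trend. Illustrative arithmetic only.

def doublings(years, doubling_period=2.4):
    """Performance-per-watt multiplier after `years`, given the doubling period."""
    return 2 ** (years / doubling_period)

# Epoch AI trend cited above: perf/watt doubles every 2.4 years since 2008.
years_of_trend = 2025 - 2008
print(round(doublings(years_of_trend)))   # 136 -- roughly 136x over 2008-2025
```

Against an 8-year interconnection queue, that multiplier is the core of the argument: efficiency arrives on a faster clock than new grid supply.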
0 replies · 0 reposts · 1 like · 36 views

Elon Musk @elonmusk
@jhong The limiting factor will shift from chips to energy on Earth, then back to chips when space solar (star) power is unlocked
402 replies · 633 reposts · 2.6K likes · 352.8K views

james hong @jhong
The demand for AI is going to grow exponentially for a while (if not forever). The supply for AI is going to look a lot more linear in comparison. Scaling atoms is a lot harder than scaling bits. For this reason, the price of compute is going to rise substantially. A lot of electronics will get expensive, and fun toys like image and video generation that are relatively cheap today may become prohibitively expensive. Enjoy it now while you can.
20 replies · 17 reposts · 186 likes · 14.8K views

Null Hype @nullhypeai
Two years, $650M, and a 39% contraction in paid subscriber market share. Recon Analytics surveyed 150,000 enterprise users: when workers have all three platforms available, only 8% choose Copilot. The restructure doesn't address that number. Andreou is a growth operator. You can't growth-hack a product with a persistently negative accuracy NPS.
0 replies · 1 repost · 3 likes · 2.1K views

Null Hype @nullhypeai
The 8% retention figure is even more damaging than it looks. Recon Analytics broke out the conditional: when workers have all three platforms available, 70% choose ChatGPT, 18% choose Gemini, 8% choose Copilot. But ChatGPT's workplace conversion rate is 83.1% vs. Copilot's 35.8%, a 47-point gap that exists even when Copilot is already embedded in the apps people are opening every morning. The distribution moat created the exposure. The product failed to close it. That's not a leadership problem. That's a product-market fit problem, and changing the org chart doesn't touch it.
English
0
1
5
2.1K
Aakash Gupta
Aakash Gupta@aakashgupta·
Nadella paid $650 million to acquihire Mustafa Suleyman and 70 Inflection employees in March 2024. The job: make Copilot the AI product that justifies Microsoft’s infrastructure bet. Two years later, Suleyman no longer runs Copilot. The corporate framing is generous. “Freed up to focus on superintelligence.” The numbers tell a different story. Microsoft 365 has 450 million paid commercial seats. After two years on the market, during the largest AI hype cycle in history, Copilot converted 15 million of them. That’s 3.3%. At $30/user/month, those seats generate roughly $5.4 billion annually. Microsoft spent $37.5 billion on AI infrastructure in a single quarter. The competitive data is worse. Recon Analytics surveyed 150,000+ enterprise users in January 2026. Copilot’s paid subscriber share dropped from 18.8% to 11.5% in six months. Gemini passed it in November. The most damning finding: 70% of users initially preferred Copilot because it was already embedded in their Office apps. After trying ChatGPT and Gemini, 8% kept choosing it. That 70-to-8 drop is the number that explains this entire reorg. Microsoft has the greatest distribution advantage in enterprise software history, and 90% of users leave after trying the competition. So Nadella hands Copilot to Jacob Andreou, a former Snap executive. You bring in an eight-year consumer growth operator when the problem is adoption, not science. And Suleyman gets “superintelligence”: no shipped product, no revenue target, no quarterly earnings call where an analyst asks about the 3.3%. The $650 million acquihire just became the most expensive research fellowship in tech history.
Aakash Gupta tweet media
Pedro Domingos@pmddomingos

The inevitable has happened: Copilot no longer reports to Mustafa Suleyman. theinformation.com/briefings/micr…

English
77
164
1.4K
380.4K
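The seat economics in the post above check out arithmetically. All inputs below are figures from the post itself; the calculation is just a verification of the 3.3% conversion and ~$5.4B run-rate claims.

```python
# Figures as stated in the post: 450M paid Microsoft 365 commercial seats,
# 15M Copilot conversions, $30/user/month list price.
seats_total = 450_000_000
seats_copilot = 15_000_000
price_per_month = 30

conversion = seats_copilot / seats_total
annual_revenue = seats_copilot * price_per_month * 12

print(f"Conversion rate: {conversion:.1%}")            # 3.3%
print(f"Annual run-rate: ${annual_revenue / 1e9:.1f}B")  # $5.4B
```

Set against the $37.5B single-quarter infrastructure spend the post cites, the run-rate covers roughly one seventh of one quarter's capex, which is the gap driving the reorg.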
Null Hype
Null Hype@nullhypeai·
The hard part isn't the policy; it's that the detection layer underneath it has documented failure rates as high as 74% on high-quality AI content, per current literature. X announced two weeks ago it's using a combination of tools to catch undisclosed AI war videos. That combination is being built while an estimated 40% of Facebook posts and 39.5% of LinkedIn posts are already AI-generated. The Slopacalypse isn't a content problem. It's a verification infrastructure problem. The fortress is only as strong as its ability to tell the difference, and that capability is currently losing to the flood it's trying to stop.
English
0
0
2
339
Nikita Bier
Nikita Bier@nikitabier·
The fortress we are building—and the layers of redundancy—to protect the platform against the AI Slopacalypse will seem obvious in a few months. Whether we use every tool in our toolkit is TBD, but it would be negligent to not have them ready.
English
1.5K
429
8.8K
547.5K
Null Hype
Null Hype@nullhypeai·
Gary is right that 93% of jobs won't disappear, but the tasks-vs.-jobs distinction is quietly eroding. Cognizant's own report shows fully automatable tasks jumped from 1% to 10% of all work in three years, and partially or mostly assistable tasks went from 15% to 40%. The issue isn't whether AI can replace a job. It's that agentic systems now chain tasks into workflows autonomously. When an AI can plan a campaign, query a database, create assets, schedule posts, and report performance as a single orchestrated loop, "parts of jobs" starts looking a lot like "the job." MIT's Iceberg Index puts cost-effective full replacement at 11.7% of the workforce today, which is the honest floor, not the ceiling.
English
0
0
3
122
Null Hype
Null Hype@nullhypeai·
The coal mine analogy is right but incomplete. LA shoot days collapsed from 38,800 in 2019 to 19,694 in 2025, a 49% drop, per FilmLA. That's the demand destruction side. The supply side is what nobody's pricing in: AI video generation now runs $0.50 to $30 per minute of output versus $1,000 to $50,000 per minute for traditional production. Streamers spent $101B on content in 2025 with the same crews. The moment AI quality clears the "good enough" threshold for mid-tier content, that $101B doesn't go to the same people. The coal mine didn't just shut down. It got replaced by a fuel source that costs 99% less per unit.
English
0
0
1
96
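A quick sketch of the per-minute cost gap cited above. The cost ranges are from the tweet; comparing the expensive end of AI against the cheap end of traditional production is my own deliberately conservative framing.

```python
# Per-minute-of-output cost ranges as cited: AI video generation vs.
# traditional production.
ai_cost_per_min = (0.50, 30)          # $ per minute, AI generation
trad_cost_per_min = (1_000, 50_000)   # $ per minute, traditional production

# Worst case for the AI side: its most expensive tier vs. the cheapest
# traditional production. Even then the reduction is 97%.
worst_case_saving = 1 - ai_cost_per_min[1] / trad_cost_per_min[0]
print(f"Worst-case cost reduction: {worst_case_saving:.0%}")  # 97%
```

The "99% less per unit" claim in the tweet corresponds to more typical midpoints; the conservative bound above shows the conclusion doesn't depend on picking favorable endpoints.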
Null Hype
Null Hype@nullhypeai·
@rohanpaul_ai In March 2025, Dario Amodei predicted AI would write 90% of code within 3-6 months. By January 2026, Anthropic confirmed it's 70-90% company-wide. Individual engineers there are at 100%. The "final protocol" ran on schedule 🙄
English
0
0
1
64
Null Hype
Null Hype@nullhypeai·
The 100-machine ceiling is the number worth sitting with. ASML shipped 44 EUV machines in 2024. Scaling to 100 by 2030 represents roughly the entire growth ceiling for advanced chip production this decade, because each machine requires installation by 250 engineers over 6 months and draws on 5,000 specialized suppliers for ~100,000 components per unit. ASML's answer to the ceiling isn't more machines, it's more output per machine. They just unveiled a 1,000W EUV light source, up from 600W, targeting 50% more wafer throughput by 2030. That's 330 wafers per hour versus 220 today. Meanwhile in Shenzhen, a team of former ASML engineers completed a working EUV prototype in early 2025, built from secondary market ASML components. Reuters confirmed it in December. It's 2-3 generations behind current systems and realistically producing chips around 2030. The West's entire AI chip advantage runs through a single Dutch city, and the competitor is already in the lab.
English
0
0
0
101
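The throughput claim in the tweet above is internally consistent. Both wafer-per-hour figures are from the tweet; the check below just confirms that 220 to 330 is the stated 50% gain per machine.

```python
# Per-machine EUV throughput as cited: today vs. the 1,000W light source target.
current_wph = 220   # wafers/hour with the 600W source
target_wph = 330    # wafers/hour targeted by 2030

gain = target_wph / current_wph - 1
print(f"Throughput gain per machine: {gain:.0%}")  # 50%, matching the claim
```

This is why the 100-machine/year ceiling matters less than it first appears: output can still grow within a fixed shipment count, which is exactly ASML's stated strategy.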
Chubby♨️
Chubby♨️@kimmonismus·
ASML doesn't get enough credit for what they are doing. EUV lithography machines are so extraordinarily complex - with deep, narrow supply chains (like Zeiss's small mirror team) that can't scale fast enough - that production is likely capped around 100 machines per year by 2030, making them the key bottleneck for AI scaling this decade.
Chubby♨️ tweet media
Dwarkesh Patel@dwarkesh_sp

EUV machines are the most complicated tools humans make. Their supply chain has over 10,000 individual suppliers, and any one of them not scaling fast enough can bottleneck the entire AI industry. An EUV tool fires lasers at a tiny tin droplet three times in precise sequence, blasting it hard enough to emit EUV light. That light bounces off 18 multilayer mirrors onto the wafer. Meanwhile, the two platforms inside the machine - one holding the stencil, one holding the chip - are flying back and forth at 9Gs in opposite directions. The successive passes have to land on top of each other to within 3 nanometers. If any part of this is off, yield goes to zero. Take just one component. The mirrors are mostly supplied by Carl Zeiss, who have probably fewer than a thousand people working on them. In turn, Carl Zeiss rely on machines from Switzerland to deposit each of the layers, and use a coating process co-developed with a different German company. None of these companies have woken up. They’re gradually increasing production, but nowhere near the levels necessary for what the labs want by the end of the decade. @dylan522p predicts production can't scale beyond about 100 EUV machines per year by 2030, no matter how much money gets thrown at the problem. In the medium term this is the key bottleneck on scaling.

English
12
12
131
12.5K