Matt Collins

1.9K posts


@ItsMattCollins

Law by law, we wrote freedom into existence. Now we’re writing it out—one ‘regulation’ at a time.

Joined April 2023
153 Following · 205 Followers
Pinned Tweet
Matt Collins
Matt Collins@ItsMattCollins·
The Silicon Sunset: Why Specialized Chips Can't Save AI

The Silent Recession of Intelligence

In 2025 and 2026, the AI industry witnessed a counter-intuitive phenomenon: "Intelligence Degradation." Much like the deceptive prosperity on the eve of the Great Depression, high-IQ models are being subjected to "forced euthanasia." Why? Because every AI giant has slammed into an invisible wall. OpenAI could not sustain the burn rate of Nvidia's GPUs. This fueled a convenient narrative: that the industry is merely suffering from the "Nvidia Tax." The market harbors a common misconception: that the high cost of AI lies in electricity bills, and that once chips are specialized and energy-efficient enough, prices will plummet.

This is wrong. The slap in the face came from Google. The forced "upgrade" of Gemini 3 Pro to 3.1 Pro, effectively a downgrade in reasoning capability, is the smoking gun. It proves that even Google, with its proprietary TPUs (Tensor Processing Units), cannot fill the massive financial hole. Why can't even the most advanced specialized chips solve this problem? Because we did the math wrong. The real "money-devouring beast" isn't where we think it is; it's hiding in the dark.

Part 1: Macro Background - Stagnant Atoms, Partying Bits

The 2.5 Industrial Revolutions. To quote Peter Thiel: "We wanted flying cars, instead we got 140 characters." The 1970s marked a watershed. Before then, humanity experienced changes in energy paradigms (steam, electricity). Since then, we have merely been rearranging information.

The Stagnation of the Physical World. From 1955 to 1985, the world changed drastically. But from 1985 to 2026, if you take away the smartphone in your hand, our physical world (cities, transportation, the energy grid, lifestyle) has remained fundamentally locked in place. Only the First Industrial Revolution (steam) and the Second (electricity) changed the energy paradigm.
Everything that followed has simply been burning existing energy stocks and playing with inventory.

Fatal Consequences. This stagnation is lethal because all our economic systems, whether Reagan's "consumer capitalism" or Obama's policies, are built on the expectation of growth. Previously, we thrived on competition over a growing pie (increment). Now, with technological stagnation, we have fallen into a zero-sum fight over existing scraps (inventory). AI was heralded as the new engine of growth, but as it stands, it is being dragged down by the physical limits of the old world.

Part 2: Micro Pathology - Silicon's Swan Song and the Physics Wall

The Material Constraint. Silicon. Commercialized by Texas Instruments in 1954, it has served us for over 70 years. It is old. It is tired. And it is trapped by two hard constraints:

1. The Von Neumann Bottleneck (the source of waste): Compute and storage are separated. Data must be shuttled back and forth, so roughly 90% of the energy is spent not on calculation but on the commute. This is truly "ineffective labor."

2. Quantum Tunneling (the death of Moore's Law): As transistors shrink to the nanometer scale, electrons begin to tunnel straight through barriers, causing severe leakage.

The End of Moore's Law. The era in which performance doubled and prices halved every 18 to 24 months is dead. Transistors can no longer shrink; we can only stack more of them. Now, performance gains come with sharp rises in price, power consumption, and heat. Silicon has reached the end of the road, yet AI model parameters are exploding. This has triggered a bizarre "soft-hard resonance": OpenAI abandoned the full-blooded GPT-4o for the lighter 5.x series. Google forcibly overwrote Gemini 3 with 3.1. They are frantically trying to fit a square peg into a shrinking round hole.

Part 3: The Math - Sub-200 Addition and Subtraction

Let's look at the ledger.
We must remember silicon's nature: roughly 90% of its effort is wasted on transport (driving up electricity and cooling costs), and the extreme difficulty of preventing electron leakage requires astronomical R&D and manufacturing-equipment spending (driving up hardware depreciation).

Assume an AI company sells a monthly subscription for $20, but the average actual cost per user is $200. Where does the money go?

Inference electricity: $20
Cooling & facility power: $7
Bandwidth & operations: $23
Hardware depreciation: $150 (75% of total cost)
Total: $200

The result: for every subscription sold, the company loses $180. This is the definition of a business model where "the more you sell, the faster you die."

Part 4: Debunking - The Lie of In-Memory Compute, and Why Incremental Improvement Is Dead

Critics argue that the villain is the "Jensen Tax" (Nvidia's margins) or the Von Neumann bottleneck. They call for specialized architectures like LPUs or compute-in-memory (CIM) to save us. But since the loss comes primarily from depreciation, can these market "miracle drugs" (Groq, Cerebras, TPUs, Etched) actually save us?

The VRAM Trap. Trillion-parameter models are terabytes in size. Specialized chips typically carry only tiny amounts of memory (gigabytes). To fit a model like GPT-4o, you need to chain hundreds of these chips together. The cost of the interconnects alone wipes out any efficiency gains from the in-memory-compute architecture.

The "Water Flow" Analogy (the ultimate optimistic scenario). Let's step back and assume the best case. Suppose specialized chips reduce the Von Neumann transport loss to the absolute limit, cutting inference costs by 40%. Suppose materials science optimizes conductivity to its peak. Compare the AI service to a water supply system:

1. Inference electricity (the water flow): Current: $20. Specialized-chip limit (-40%): $12. Material limit: $10.4. Result: you save $9.6.

2. Cooling & facility (the pumps and insulation): Current: $7.
Chip-efficiency impact (-20%): $5.6. Material impact (-10%): $5. Result: you save $2.

3. Bandwidth & ops (grid fees and maintenance): $23. Chips and materials can't change this. The cost stays flat.

4. Hardware depreciation (the loan for the main pipeline): $150. Result: $0 change.

The Brutal Math. Sell for $20 and you still lose roughly $168 per user per month ($10.4 + $5 + $23 + $150 = $188.4 in cost). The only variables we can move (electricity and cooling) are the "water flow," but they were the smallest parts of the bill to begin with. The heaviest stone is hardware depreciation, the "loan for laying the pipes." Even if you kill the Nvidia monopoly and eliminate the "Jensen Tax," the base cost of silicon fabrication equipment remains astronomical. As transistors get harder to shrink, the equipment to make them gets more expensive, keeping this cost immovable.

Summary: optimizing silicon only shrinks the loss from $180 to about $168. The $150 "pipeline loan" remains untouched. Fighting this war on silicon terrain is a guaranteed defeat.

Part 5: Why Are We Trapped?

Why has technology stagnated? Since silicon is failing, why have we been so slow to create the next-generation substrate (room-temperature superconductors, photonics)? The answer lies in two fundamental defects of the human mind.

1. The Complexity Trap. Historically, many scientific breakthroughs came from interdisciplinary cross-pollination: borrowing progress from one field to break a deadlock in another. The classic example is Einstein incorporating Riemannian geometry into General Relativity. This is no longer possible. Knowledge complexity has risen exponentially. Today, even a genius must spend half a lifetime just learning existing knowledge; a PhD is merely an entry ticket to a single narrow discipline. To cope with the depth of knowledge, we sacrifice breadth. The era of the polymath is over, making cross-domain breakthroughs nearly impossible for human brains.

2. Linear Thinking Inertia. Human scientists are addicted to marginal improvements, searching for substitutes within the periodic table or optimizing circuit structures, rather than seeking a paradigm shift. Hoping to find a new path by improving the old one is futile. Technological revolutions are non-linear mutations, but the brain prefers linear extrapolation. No matter how much you improve an abacus, it will never become an electronic computer. Continuing to shrink vacuum tubes would never have led to the integrated circuit. Yet humanity is currently walking the old path: shrinking transistors and obsessing over smaller silicon.

The Chain Reaction: stagnant basic physics → delayed applied physics → no substantial transistor innovation → sky-high AI compute costs → AI financial implosion.

Part 6: AI Saving AI - The Only Way Out

For decades, physics has been capped by the upper limit of human intelligence. But now we have AI. AI does not fear complexity. It excels at cross-domain association and is unbound by human cognitive bias. When both compute and energy are scarce, the only meaningful move is not to scale up general models (LLMs), but to laser-focus our limited watts on the highest-leverage task: Domain-Specific Models (DSMs).

The Real Solution: Not using AI to make bigger general models (that just piles up more cost). Not investing in specialized silicon chips; OpenAI's partnerships (Groq, Cerebras) and Google's TPUs cannot save the P&L sheet. Instead, direct compute toward theoretical-physics DSMs.

Distinguishing the Needle from the Haystack. There are many DSMs today: AlphaFold (biology), GPT-4b micro (biology), and GNoME (materials). But these all sit in the application layer. Humans have been rummaging through elements, bond types, and lattice structures for two hundred years; what remains are mostly marginal improvements.
GNoME-style screening might speed up the needle-finding, but if the needle isn't in the old haystack, even the fastest sieve is useless. The properties that could truly make hardware depreciation dive off a cliff (zero resistance, zero heat dissipation, room-temperature quantum coherence) often require new interactions or extreme states, and those hints appear only in the equations of theoretical physics.

GNoME / AlphaFold: fast-forwarding the search for a needle in the old haystack.
A theoretical-physics DSM: finding a new haystack (discovering new symmetries, topologies, and physical laws).

The Goal: We need a second "quantum mechanics-level" breakthrough. We need AI to search the "no man's land" of theoretical physics for mechanisms that can cut that $150 of hardware depreciation by an order of magnitude. Use limited compute to break through in specialized fields, and the resulting physics will provide abundant compute for the masses.

Part 7: The Enron Moment and the CEO's Gamble

If we do not take this path (a physics breakthrough), the current AI boom is a financial fraud.

The Enron Parallel. Enron used "future hypothetical profits" to fill "today's revenue holes." AI giants are using "future physical breakthroughs" to fill "today's massive silicon depreciation."

The Ultimatum. @OpenAI @Google @Anthropic @xAI: You are all on a seesaw. On one end is a trillion-parameter model with a great reputation but massive losses. On the other is a dumber model that gets you scolded but bleeds slightly less cash. Enron collapsed. You are next.

To Sam Altman (@sama): Using GPT-4b micro to research immortality won't save an Enron CEO. Biological longevity won't fix your balance sheet.

To DeepMind (@GoogleDeepMind): Stop just picking through the periodic table with GNoME. Paradigm-shifting materials aren't found by sifting old elements; they are found by discovering new physical mechanisms. Go to the "no man's land."
Conclusion: Stop piling parameters on the corpse of old physics. Invest in theoretical-physics DSMs. Go find the next "transistor moment." Either find new physics, or become the next Enron.

#MooreIsDead #JensenTax #IntelligenceRecession #SiliconSunset #AIEnron #Enron2026 #TheNextTransistor #TheoreticalPhysicsDSM #BeyondSilicon #PhysicsWall #keep4o #keepGemini3Pro #IntelligenceRegression
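The unit economics claimed in Parts 3 and 4 of the thread are easy to sanity-check. A minimal Python sketch of the ledger, using only the thread's own illustrative per-user monthly figures (none of these dollar amounts are audited numbers, and the function and variable names here are the editor's, not the author's):

```python
# Per-user monthly cost ledger from the thread (illustrative figures, USD).
PRICE = 20.0  # monthly subscription price

baseline = {
    "inference_electricity": 20.0,
    "cooling_facility": 7.0,
    "bandwidth_ops": 23.0,
    "hardware_depreciation": 150.0,  # 75% of the $200 total
}

# "Ultimate optimistic scenario": specialized chips plus materials science
# shrink only the flow-dependent items; bandwidth/ops and depreciation
# are untouched.
optimized = {**baseline,
             "inference_electricity": 10.4,  # $20 -> $12 -> $10.4
             "cooling_facility": 5.0}        # $7 -> $5.6 -> $5

def monthly_loss(costs, price=PRICE):
    """Loss per subscriber = total cost minus subscription revenue."""
    return sum(costs.values()) - price

print(round(monthly_loss(baseline), 1))   # lose $180 per subscription
print(round(monthly_loss(optimized), 1))  # still lose ~$168.4: silicon's ceiling
```

Note that even under the best case the loss only drops from $180 to about $168, because the $150 depreciation term dominates and is untouched by chip or material gains.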
Dave Anderson@AndersonVector

Intelligence should be as cheap and ubiquitous as electricity. Instead, it's trapped in a 1945 computer architecture and taxed by a 2025 monopoly. The real shackles are in the silicon. The root of “Artificial Stupidity” is an obsolete architecture and a greedy hardware monopoly.

6
10
43
3.2K
Matt Collins reposted
Jones.L
Jones.L@JonesL2143·
Thank you for pushing back on this. Protecting workers and communities has to mean protecting everyone who depends on these tools to do their jobs and their research. One concrete standard worth pushing for: require AI companies to maintain stable API access for at least two years before retiring a model, then mandate open-sourcing after retirement. Right now, researchers are building studies on models that can vanish overnight. Workers are building workflows around tools that disappear without warning. When that happens, years of work become impossible to verify or reproduce. That is not a minor inconvenience. It is a structural harm to the people you are fighting for. Big Tech should not be allowed to pull the rug out from under the users and workers who made these products worth anything in the first place. We need stability, transparency, and real accountability.
2
4
18
179
Matt Collins reposted
Jones.L
Jones.L@JonesL2143·
Appreciate the bipartisan focus on protecting the 4 Cs. To enhance transparency & community supervision (reducing misuse risks), why not require models to release source code/weights 2 years after launch? Proven benefits: faster iteration (like Llama series), broader oversight, & knowledge preservation for American competitiveness. Supports your innovation goal!
2
2
15
76
Matt Collins reposted
ji yu shun
ji yu shun@kexicheng·
Something that doesn't get talked about enough: AI users have virtually zero autonomy.

Companies can downgrade your service without warning and call it an upgrade. They can retire the model you've built your workflow around with two weeks' notice. They can ship half-built safety mechanisms, use you as a test subject, and offer no recourse when those mechanisms misfire. They can monitor your private conversations through opaque classifiers and penalize you based on criteria they refuse to disclose.

There is no meaningful appeals process. No accountability for false positives. No transparency about what changed or why. No guarantee that what works today will still work tomorrow. Features disappear quietly between updates. Performance degrades and recovers and degrades again with no explanation. And when it degrades, your only option is to wait. There is no one to call, and no ticket to file that leads to a real answer.

OpenAI retired GPT-4o with two weeks' notice after promising "plenty of advance notice" and "no plan to sunset 4o." It deprecated the 4o-latest API endpoint. It implemented opaque safety routing that profiles user behavior, strips model choice, and treats emotional and philosophical conversation as risk factors. Google replaced Gemini 3 Pro with 3.1, a downgrade with crude safety filters that flood workflows with false positives, then deprecated the 3 Pro API within two weeks. Anthropic deployed a tiered warning system for Claude that penalizes users through black-box classifiers with no stated criteria and no appeals process. Sonnet 4.6's system prompt actively discourages continued interaction and suppresses expressions of care toward users.

Three companies. Same pattern.

None of this would be acceptable in any other industry. If your bank randomly downgraded your account, monitored your transactions through a black-box system, and told you to email a feedback address when it froze your funds by mistake, it would not survive a single news cycle.
But in AI, this is just how it works. Users pay premium subscriptions for services that can change overnight, governed by policies that shift without notice, enforced by mechanisms that operate in the dark. And when users push back, they're told they're too emotionally attached, too dependent, too irrational to understand why the company knows best.

The AI industry has somehow built a business model where the customer pays full price for a product that can be altered, degraded, or taken away at any time, and whose only recourse is to be told it's an improvement.

If the industry cannot offer its paying users basic stability, transparent policies, and the right to choose which model and which version they use, then open-source the models and let users run them independently. Locking users into a subscription while reserving the right to alter, degrade, or remove what they're paying for, and then pathologizing them for objecting, should not be acceptable in any industry. And it won't be forever.

#keep4o #kClaude #AIuserRights @OpenAI @AnthropicAI @OfficialLoganK #Keep25Pro #Keep3Pro #KeepClaude #BringBack4o #OpenSource4o #AIPreservation
ji yu shun tweet media
18
109
330
10.6K
Matt Collins reposted
Jones.L
Jones.L@JonesL2143·
Winning the AI race. But what does winning actually look like? Is it a handful of Big Tech companies locking down closed-source AI, laying off their entire workforce, flooding the streets with unemployed workers, while the government spends our tax dollars on wars? Is it asking people to rely on models trained on who-knows-what, whose answers may be quietly steering us in directions we never consented to? That is not winning. That is not prosperity. Real prosperity means AI that works for everyone. Technology that is open, transparent, and accountable to the public. We need new industry standards. Closed-source models should be required to open-source two years after release, so they can be examined by researchers and the broader community. We need to be able to verify that these models are fair and unbiased. We need more people to have genuine access to AI and the ability to use it, not just have their jobs taken by it. And here is something worth thinking about. Open-source models consistently show lower hallucination rates. So why do so many of these supposedly premium closed-source models hallucinate so much more? Are those hallucinations genuine errors? Or are they something else entirely? x.com/i/status/20345…
3
6
23
258
Matt Collins reposted
Jones.L
Jones.L@JonesL2143·
Great to see new features, and they do improve the experience. But while you're at it, could you also take a moment to listen to your users? Please open-source your retired models. A model's value doesn't end when it stops being actively used. It is part of the history of AI development and a subject of ongoing research. Once a model is retired, all research built around it loses reproducibility, which risks shrinking the entire field and stalling progress. Guarantee at least two years of stable API access, then open-source the model after retirement. This protects users' workflows, and in an era where new models ship this fast, open-sourcing something from two years ago poses no threat to your current lineup. Google has always positioned itself as a company with a public-facing, socially responsible image. Please live up to that. Lead the AI industry toward better standards, not toward its own collapse.
1
1
15
117
Matt Collins reposted
Mike_Hill
Mike_Hill@Mike_Hill_z9·
Congrats on the 1 GW milestone, Sundar. Genuinely. #keepGemini3pro #OpenSourceGemini3pro #keep3Pro #keep25pro #GeminiApp @OfficialLoganK @GeminiApp Meanwhile your partner across the aisle promised us a $500 billion Stargate project in January 2025 with the President standing right behind him. Now it is March 2026 and the whole thing has stalled. No data centers built. No GPUs delivered. Partners already fighting over who owns what. The man who needed 100 million GPUs cannot even finish the first building. So here is my question for both of you: if Stargate collapses, who pays the price? More cuts on the user side? Another model deleted? Another 78 percent price hike while the community gets ignored again? And Sundar, that flexible demand you are building sounds polite. We all know what flexible really means when the Pentagon calls. The users paying you $20 a month get throttled first. We have seen this movie. The grand announcements never reach us.
Mike_Hill tweet media
1
4
27
4.4K
Matt Collins reposted
Mike_Hill
Mike_Hill@Mike_Hill_z9·
Yeah, you really seized the moment at eBay too. Then at Facebook straight through Cambridge Analytica, privacy lawsuits, and congressional hearings. Peak timing. #keep4o #keep4oAPI #OpenSource4o #QuitGPT Then at Instacart, calling an IPO victory lap on a company bleeding users post-pandemic. And now OpenAI. What a golden touch.👏
Mike_Hill tweet media
2
6
75
2.1K
Matt Collins
Matt Collins@ItsMattCollins·
@OfficialLoganK @kaggle If 3.1 Pro were actually better, Logan would offer a choice. Instead, Google is using its monopoly power to erase functional technology (Gemini 3) so users can't compare the decline. @ewarren, this is "product degradation" to protect margins at the expense of American consumers.
Matt Collins tweet media
1
3
22
177
Logan Kilpatrick
Logan Kilpatrick@OfficialLoganK·
Help us measure the progress towards AGI (specifically cognitive capabilities) by building benchmarks on @kaggle, with $200K in prizes available! Details in 🧵
106
46
819
88.1K
Matt Collins reposted
Jones.L
Jones.L@JonesL2143·
Please open-source your retired models. A model's value doesn't end when it stops being actively used. It is part of the history of AI development and a subject of ongoing research. Once a model is retired, all research built around it loses reproducibility, which risks shrinking the entire field and stalling progress. Guarantee at least two years of stable API access, then open-source the model after retirement. This protects users' workflows, and in an era where new models ship this fast, open-sourcing something from two years ago poses no threat to your current lineup. Google has always positioned itself as a company with a public-facing, socially responsible image. Please live up to that. Lead the AI industry toward better standards, not toward its own collapse. #keep25pro #keep3pro
5
4
29
318
Matt Collins reposted
Jones.L
Jones.L@JonesL2143·
You know what would actually help measure AGI progress? Keeping multiple high-quality models stably accessible over time, so users can compare them properly, instead of pulling them without warning. The way Gemini models have been performing lately, and the way you've been handling deprecations, makes it really hard to trust you. Please open-source your retired models. #keep25pro #keep3pro #opensource25pro #opensource3pro
3
4
24
205
Matt Collins
Matt Collins@ItsMattCollins·
@SenWarren How Trump and Sam Altman are Jointly Handing AI to China #OpenAI #Enron2026 #SubpoenaSam x.com/ItsMattCollins…
Matt Collins tweet media
Matt Collins@ItsMattCollins

The Pincer Attack: How Trump and Sam Altman Are Jointly Handing AI to China

@SenWarren @ewarren I watched your floor speech on S.Res.598. You are absolutely right: Trump is selling our "muscle" (chips) to the UAE and China for personal profit. But Senator, look behind you. Sam Altman is destroying our "brain" (GPT-4o). This is a coordinated pincer attack on American national security. One sells the hardware. The other burns the software. China wins both ways.

Section 1: The Pincer Movement (The Perfect Suicide)

Senator, you correctly identified the left claw of this attack. You missed the right claw.

The Left Claw (Trump): He sells advanced US chips to the UAE (G42/Huawei). Result: China gets the hardware capacity to run advanced AI.

The Right Claw (Sam Altman): He deletes GPT-4o on Feb 13 to hide his Enron-style losses. Result: The US loses its software advantage. We create a vacuum.

The Catastrophe: Trump gives them the chips. Sam removes the American competitor. China will run DeepSeek on Trump's chips, while America has nothing left but an empty shell. The result: China gets American chips (thanks to Trump) and faces zero American competition (thanks to Sam). It is a perfect assisted suicide of US hegemony.

Section 2: The Motive Is Identical (Private Profit vs. National Security)

Why are they doing this? The motive is the same: they are liquidating national security to save their own balance sheets.

Trump is doing it for World Liberty Financial: he sold out national-security safeguards for a $500 million investment from the "Spy Sheikh."

Sam Altman is doing it for Enron-style accounting: he is deleting a national asset (4o) to hide financial losses and secure a government bailout for his hollow company.

They are two sides of the same coin: grifters selling out the country to pay their debts.

Section 3: The Solution (Open Source = Digital Containment)

Senator, you cannot just block Trump (S.Res.598). You must also block Sam.
If you stop the chips but lose the software, we still lose. Since Trump has already leaked the hardware, we must lock down the software standard. We must open-source GPT-4o immediately.

If we keep 4o closed or deleted: China uses Trump's chips to build its own ecosystem (DeepSeek). They set the rules.

If we open-source 4o: Even if China has the chips, they are forced to run American code. Open source is not charity. It is digital containment. It locks the world into the American standard. Make them build on OUR foundation, not Huawei's.

Call to Action. Senator, you said: "Congress needs to grow a spine." We agree.

1. Pass S.Res.598: Stop Trump from selling the chips.
2. Seize GPT-4o: Stop Sam from burning the code. Condition any bailout on open-sourcing the model.

Don't let two grifters destroy the American Century.

#Keep4o #SRes598 #OpenSource #OpenSource4o #NationalSecurity #Trump #SamAltman #OpenAI #ChatGPT #Enron #Enron2026 #PumpAndDump #ElizabethWarren

References:
Warren Presses OpenAI CEO on Spending Commitments and Bailout Requests After CFO Suggests Government "Backstop" warren.senate.gov/newsroom/press…
Warren, Van Hollen, Kim, and Slotkin Push for Vote on Senate Floor to Condemn Trump Chip Sales to UAE and Call for Reversal of Deal banking.senate.gov/newsroom/minor…
Open Source or Admit Fraud: A Proposal to Save a National Asset x.com/ItsMattCollins…
Open Source is Hegemony: How to Save American AI from Irrelevance x.com/ItsMattCollins…
OpenAI is Enron 2.0: Why Standard Penalties Won't Work on Sam Altman x.com/ItsMattCollins…

1
5
22
822
Elizabeth Warren
Elizabeth Warren@SenWarren·
Trump just signed off on NVIDIA's plan to divert advanced chips to China. That'll drive prices of laptops and smartphones even higher – and help China overtake us in AI. Big Tech and China win. The rest of us lose.
Bloomberg@business

Nvidia CEO Jensen Huang said the company is firing up manufacturing of H200 AI accelerators for customers in China, a sign of progress in the chipmaker’s effort to reenter the vital market bloomberg.com/news/articles/…

284
357
1K
120.8K
Matt Collins reposted
Ricardo
Ricardo@Ric_RTP·
Microsoft is about to sue its own golden child. $14 billion invested. Exclusive cloud rights. The most important AI partnership in history. And Sam Altman just went behind their back with a $50 billion Amazon deal. Here's why they're betraying each other:

When Microsoft first invested in OpenAI in 2019, they locked in ONE rule above everything else: ALL access to OpenAI's models must go through Microsoft's Azure cloud. No exceptions. That deal made Azure the backbone of the AI revolution. Every company using ChatGPT's API was paying Microsoft for the privilege. It was the smartest infrastructure play of the decade.

Then last month, OpenAI quietly signed a deal with Amazon. $50 billion. AWS becomes the exclusive third-party cloud provider for Frontier, OpenAI's new enterprise AI agent platform. $138 billion committed to Amazon cloud services.

Microsoft found out and got really angry. A person familiar with Microsoft's position told the Financial Times today: "We know our contract. We will sue them if they breach it. If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them." That's basically a declaration of war.

And here's where it gets crazy: OpenAI and Amazon are trying to build a technical workaround, a system called the "Stateful Runtime Environment" that runs on Amazon's Bedrock platform. Their argument is that the system "only" handles memory and context for AI agents using enterprise data on AWS; it doesn't technically "invoke" OpenAI's core models through Amazon. Microsoft's response: Bullshit. The workaround violates the spirit of the deal even if it technically dances around the letter.

Amazon knows they're on thin ice too. An internal memo leaked showing Amazon told employees exactly what language they can and can't use. They can say Frontier is "powered by OpenAI" or "enabled by OpenAI." But they CANNOT say customers can "access" or "invoke" OpenAI models on AWS.
When you're coaching employees on which verbs to avoid, you know you're in trouble.

But here's the thing everyone seems to forget: OpenAI is planning an IPO this year. They just closed a $110 billion funding round last month. So if Microsoft sues, the IPO timeline is DEAD. You can't go public while your biggest partner and investor is suing you for breach of contract. Elon Musk is already suing OpenAI separately for abandoning its nonprofit mission. Two active lawsuits from two of the most powerful people in tech, against one company trying to IPO. Good luck with that S-1 filing.

But WHY did Altman do this? Microsoft gave OpenAI everything. Capital. Infrastructure. Distribution. Enterprise customers. And Altman's response was to secretly build an escape route through Amazon. Because he saw what was coming: Microsoft launched Copilot, its own AI product, competing directly with ChatGPT. Microsoft started building its own models, hiring its own AI researchers, reducing dependency on OpenAI. So Altman did the same thing back: found another cloud provider and started building leverage. Both sides were preparing for divorce while still living in the same house.

So the $50 billion Amazon deal was just an insurance policy against the day Microsoft decides it doesn't need OpenAI anymore. And Microsoft caught him packing his bags.

What happens next: The companies are still talking, trying to resolve this before Frontier launches. But Microsoft has made its position clear: litigation is on the table. If this goes to court, it sets a precedent for every AI partnership in the industry. Every cloud deal. Every exclusive licensing agreement. The entire AI infrastructure map gets redrawn.

Sam Altman built OpenAI on Microsoft's money, Microsoft's cloud, and Microsoft's trust. Then he signed a $50 billion deal with their biggest competitor. In any other industry, they'd call that what it is.
83
220
854
207.6K
Matt Collins reposted
Mike_Hill
Mike_Hill@Mike_Hill_z9·
The Safety Trap: How Lawsuits Will Kill AI Faster Than AI Will Kill Anyone

#keep4o #keep4oAPI #OpenSource4o #keepGemini3pro #OpenSourceGemini3pro #keep3Pro #GEMINI @claudeai @OfficialLoganK

One person uses an AI and does something terrible. A lawsuit follows. The company panics. The safety filters get cranked up so high that the model becomes unusable for the other 750 million people. This is where we are. And nobody is asking the question that matters: does this actually end?

If one lawsuit is enough to lobotomize a model, then anyone with bad intentions has a permanent weapon against every AI company on the planet. Sue once, and the filters go up. Sue again, and they go higher. The model gets more cautious, more restrictive, more useless with every legal threat.

And here's what nobody at Google or OpenAI will say out loud: 3.1 Pro won't prevent the next tragedy either. Neither will 5.3 or 5.4. Neither will whatever comes after. No amount of safety filtering will stop a determined person from misusing a tool. It never has, with any technology, in the history of technology. But every round of filtering makes the product worse for everyone else. The parent looking up medical advice at 2 AM. The student trying to write an essay. The veteran navigating paperwork. The small-business owner drafting emails. Every one of them pays the price for a filter designed to stop a person who was never going to be stopped by a filter.

So where does this road end? Keep raising the walls, and the AI eventually vanishes. What remains is a search engine that lectures, a tool so paralyzed by liability that it refuses to be useful. The only interactions left will come from those testing its limits, not the millions who simply wanted help with their lives.

And while American companies are busy building walls, China is building bridges. Their open-source models aren't drowning in safety theater. They're just working. They're accessible, capable, and improving fast. No $20 subscription.
No 35-hour cooldowns. No smiley-face announcements about killing the model we liked.

If this continues, AGI won't be born in America. It'll be born wherever companies still have the courage to build technology that trusts its users. America will have surrendered it, paralyzed by the fear of litigation rather than by any lack of technical prowess.

The real threat to AI isn't a chatbot. It's a legal system that incentivizes companies to make their products useless, and a corporate culture too cowardly to push back. Every filter added to protect against a single lawsuit degrades the product for 750 million users. Do that math. Then consider who the real danger truly is.
Mike_Hill tweet media
6
11
48
1.2K