
Jim Cramer
@jimcramer
Host of @madmoneyoncnbc and I run the CNBC Investing Club. My new book is out now: https://t.co/autOFQ2NP0
New York City · Joined March 2008
703 Following · 2.4M Followers

Contrast these LNG stories from overseas with what Jeff Martin from Sempra said last night on @Madmoney. The world has changed when it comes to nat gas... We still can't find enough places to store all we have. The rest of the world is desperate for it.

September 2009. Jensen Huang walks onto a small stage at the Fairmont hotel in San Jose. About 1,500 people are in the room. He runs a company that makes chips for video games.
He spends the next 8 minutes doing math on a whiteboard, explaining why the future of computing won't come from making CPUs faster. He calls it "CEO math" and apologizes in advance to every computer science professor in the audience. Then he lays out an argument that almost nobody took seriously at the time: the way to make computers dramatically faster is to pair a regular CPU with hundreds of tiny parallel processors, the kind that already exist inside graphics cards. One CPU for the sequential stuff. Hundreds of GPU cores for everything else. He calls it "heterogeneous computing."
He shows the math. A workload that can be split into many pieces at once gets up to 200x faster on this combined system. A workload that has to run one step at a time loses nothing. "The most important thing in creating a new architecture," he says, "is to make sure it does no harm."
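The "do no harm" claim is just Amdahl's law applied to a CPU+GPU pair. A minimal sketch of the arithmetic, assuming the 200x figure from the talk applies only to the parallelizable portion of a workload; the 90%-parallel mix below is a hypothetical example, not a number from the talk:

```python
# Illustrative "CEO math" for a heterogeneous CPU+GPU system (Amdahl's law).
# gpu_speedup applies only to the fraction of work that can run in parallel;
# the serial remainder runs at CPU speed either way.

def heterogeneous_speedup(parallel_fraction: float, gpu_speedup: float) -> float:
    """Overall speedup when only the parallel fraction runs gpu_speedup times faster."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / gpu_speedup)

# A fully parallel workload gets the full 200x.
print(round(heterogeneous_speedup(1.0, 200), 1))  # 200.0
# A fully serial workload loses nothing: speedup is exactly 1.0 ("do no harm").
print(round(heterogeneous_speedup(0.0, 200), 1))  # 1.0
# A hypothetical 90%-parallel workload is capped by its serial 10%.
print(round(heterogeneous_speedup(0.9, 200), 1))  # 9.6
```

The third case is the real argument: even a mostly parallel workload is bottlenecked by its serial fraction, which is why the architecture keeps a conventional CPU for the sequential part rather than replacing it.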
This was the first GPU Technology Conference. NVIDIA had launched a software platform called CUDA three years earlier, in 2006, to let developers write programs that run on graphics cards instead of just regular processors. Almost nobody cared. GPUs were for rendering Call of Duty, not for scientific computing. The academic world was polite but skeptical. The enterprise world ignored it entirely.
By this point, Huang had been making this argument for years. NVIDIA was a $7 billion company that competed with AMD and Intel in the graphics-chip market. That was the whole business. Jensen kept saying the GPU wasn't just a gaming chip; it was a computing platform. He kept saying parallel processing would reshape every industry from medicine to finance to physics simulations. People kept nodding, then doing nothing.
Then deep learning happened. Around 2012, AI researchers discovered that training a neural network, which means teaching a computer to recognize patterns by running the same calculation millions of times across huge datasets, was exactly the kind of workload Jensen had been describing. GPUs can train AI models 10 to 50 times faster than CPUs. The architecture he outlined in this 2009 talk, with one CPU handling step-by-step tasks while hundreds of GPU cores crunch through massive amounts of parallel data, is now the literal blueprint for every AI data center on earth.
ChatGPT runs on NVIDIA GPUs. Claude runs on NVIDIA GPUs. Gemini, Llama, Midjourney, nearly every major AI model you've heard of was trained on NVIDIA hardware using CUDA, the software platform Jensen built for a market that didn't exist yet.
NVIDIA was worth about $7 billion when Jensen gave this talk. It is worth over $4.4 trillion today. That's a 600x increase. Jensen Huang, who founded the company at a Denny's in 1993 with two friends, now has a net worth of over $160 billion. He made Forbes' list of the 10 richest people for the first time this year.
GTC 2026 is underway: 17,000 people are packing a hockey arena to watch the same guy explain what comes next. In 2009, 1,500 people showed up at a hotel ballroom, most of them for gaming graphics.