“Complete nonsense” 🚨
Jensen Huang just shut down the "rift" with OpenAI on live TV, dismissing reports that he's disappointed in OpenAI
“That's complete nonsense... We are going to make a huge investment in OpenAI."
He didn't stop there. Jensen confirmed they are doubling down on the partnership, calling OpenAI "one of the most consequential companies of our time"
and admitting this is "probably the largest investment we've ever made."📈
@LinuxReal53208 Misleading. Microwave PCs have existed before, but no actively developed Microwave Linux machine exists. These PCs mostly use the microwave's window as the screen, not a mini screen.
GeForce NOW native Linux app is now in beta (Ubuntu 24.04+), streaming RTX cloud gaming up to 5K/120fps or 1080p/360fps. 10 new games incl. The Bard’s Tale IV + Trilogy, and Delta Force hits cloud Feb 3. #Linux #GeForceNOW #RTX #CloudGaming
@soraofficialapp is iterating fast based on real user + rightsholder feedback. #Sora #AI
Next: rightsholders get tighter character controls + opt-out. #Creators #IP
Video gen will be monetized, with revenue-share tests for rightsholders. #VideoAI #RevenueShare
AMD rolls out Ryzen AI 400/PRO 400 (up to 60 NPU TOPS), a new Ryzen AI Max+ and “AI Halo” mini-PC, plus Ryzen 7 9850X3D (Zen 5 + 3D V-Cache). OEM adoption is rising into 2026, and ROCm 7.2 lands on Windows/Linux w/ ComfyUI support. #AMD #RyzenAI #AIPC #CopilotPlusPC #ROCm #Zen5 #PC
Gaza ‘doctor’ who slammed Israel in NY Times op-eds is Hamas colonel, seen in military uniform: watchdog, IDF nypost.com/2026/01/31/wor…
Just reported our quarterly results.
We are still in the beginning phases of AI diffusion and its broad GDP impact, and already we’ve built an AI business that is larger than some of our biggest franchises that took decades to build.
Our quarterly cloud revenue crossed $50 billion for the first time. What’s striking is it was less than 10 years ago that our annual cloud revenue was $10 billion! (That is what expanding TAM + good execution looks like)
A few other highlights from across the stack:
@OpenAI and @Cerebras have signed a multi-year agreement to deploy 750 megawatts of Cerebras wafer-scale systems to serve OpenAI customers.
This has been a decade in the making.
Deployment begins in early 2026, and when fully rolled out, it will be the largest high-speed AI inference deployment in the world.
OpenAI and Cerebras were both founded in 2015 with radically ambitious goals.
OpenAI set out to build the software that would push AI toward general intelligence.
Cerebras set out to rethink computing hardware from first principles.
Our teams met as far back as 2017. We shared ideas, early work, and a common belief:
there would come a point when model scale and hardware architecture would have to converge.
That point has arrived.
ChatGPT set the direction for the entire industry. It showed the world what AI could be.
Now we’re in the next phase - not proving capability, but delivering it at global scale.
The history of technology is clear on one thing:
speed drives adoption.
The PC industry didn’t operate at kilohertz.
The internet didn’t change the world on dial-up.
AI is no different.
As models grow more capable, speed becomes the bottleneck.
Slow systems limit what users can do, how often they engage, and whether AI becomes infrastructure or remains a novelty.
Cerebras was built for this moment.
By keeping computation and memory on a single wafer-scale processor, we eliminate the data-movement penalties that dominate GPU systems. The result is up to 15× faster inference, without sacrificing model size or accuracy.
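The data-movement argument can be made concrete with a rough roofline-style estimate: autoregressive decoding tends to be memory-bandwidth bound, because every generated token must stream the model weights through the processor. A minimal sketch, where the model size and bandwidth figures are illustrative assumptions (not published Cerebras or GPU specs):

```python
# Back-of-envelope: single-stream decode speed is roughly bounded by
#   tokens/sec ~= effective memory bandwidth / bytes read per token,
# where bytes per token ~= parameter count * bytes per parameter.

def decode_tokens_per_sec(params_billion: float,
                          bytes_per_param: int,
                          bandwidth_tb_s: float) -> float:
    """Rough upper bound on single-stream decode throughput."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / bytes_per_token

# Hypothetical numbers: a 70B-parameter model with 16-bit weights, on
# ~3 TB/s off-chip HBM vs ~45 TB/s aggregate on-wafer SRAM bandwidth.
hbm = decode_tokens_per_sec(70, 2, 3.0)    # ~21 tok/s
sram = decode_tokens_per_sec(70, 2, 45.0)  # ~321 tok/s
print(f"HBM-bound:  ~{hbm:.0f} tok/s")
print(f"SRAM-bound: ~{sram:.0f} tok/s  ({sram / hbm:.0f}x)")
```

Under these assumed numbers the bandwidth ratio alone yields a 15× speedup, which is why moving weights on-wafer, rather than adding raw FLOPs, is the lever that matters for interactive inference.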
That speed changes product design, user behavior, and ultimately productivity.
For consumers, it means AI that feels instantaneous.
For the economy, it means agents that can finally drive serious productivity growth.
For Cerebras, 2026 will be a defining year.
With this collaboration with OpenAI, Cerebras’ wafer-scale technology will reach hundreds of millions - and eventually billions - of users.
We’re proud to work alongside OpenAI to bring fast, frontier AI to people around the world.
This is what a decade of long-term thinking looks like.