

Bluechip
@aibluechips
The Foundation of AI 🟦


HUGE: The Fed will inject $26.3 BILLION into the market starting next Monday. Liquidity will hit the market for 3 consecutive weeks.


New listings dropping Monday, May 18. The AI stack is about to be tradeable end-to-end. Long or short, with leverage, 24/7.

Four stock perpetual futures spanning the AI infrastructure stack:

Cerebras Systems $CBRS: the company with the most efficient AI chip purpose-built for inference. Just IPO'd, and now it's getting a perp.

Taiwan Semiconductor $TSM: the foundry that fabricates every leading-edge AI chip on earth. Nvidia and Cerebras design; TSM builds.

Nebius Group $NBIS: AI-native cloud infrastructure. GPU compute, purpose-built and rentable. The AWS of the AI era.

Bloom Energy $BE: on-site power generation so AI data centers don't wait years for the grid. Already deployed with Oracle.

These markets will open where liquidity conditions are met and trading is supported. Perpetual futures are available to retail traders and institutions in select jurisdictions.


Cerebras just had the biggest IPO of the year. Founder @andrewdfeldman says the three most important things he had to convince investors of during the roadshow were that demand for inference is going to grow 1,000,000x, that the GPU isn't the only way to do compute, and that the CUDA moat is overstated.

What he said:

"Jensen said some time ago on @altcap's podcast that the demand for inference will grow by 1,000,000x, and nobody believed him. And at the same time, you saw Sam Altman displaying real vision and going out and trying to lock up huge amounts of compute, memory, data centers, and power, because he saw it too."

"[We tried] to share what that means: what exponential demand means. And that we're still so early, and yet the demand for AI compute is overwhelming."

"The other thing is that there are lots of ways to do this. The GPU isn't the only way. You've got TPUs, Trainium, and us. There are lots of different ways to build a solution here."

"And finally, the notion that CUDA is this grand lock-in is overplayed. Gemini 3, which is an excellent model, was trained on TPUs with no CUDA. The Anthropic models were trained on Trainium with no CUDA. Some of the best models, some of the most interesting things, are being done without CUDA. And that lock-in might be overplayed."

$CBRS


🚨 BREAKING: 🇺🇸🇨🇳 The U.S. has cleared Nvidia to sell advanced chips to 10 Chinese firms, opening up a potential $50 billion market.
