David López Mateos
@SenorScience

57 posts

Founder and CTO: @ComputeDesk. Exited Pace Revenue to FLYR. Formerly VP of Research at Winton; particle physics researcher at Harvard and CERN.

London, England · Joined July 2018
63 Following · 84 Followers
David López Mateos @SenorScience
Rubin delayed. Hopper winding down to 7% of shipments. If you're planning GPU capacity for the next 18 months, Blackwell is the only game in town. The supply chain just told you there's no plan B coming to relieve pricing pressure before late 2027.
David López Mateos @SenorScience
@DevanshuXi @modal gets it. Pricing in the industry is like pricing for airlines in the 70s. It's only a matter of time before we move beyond cost-based pricing and all the learnings from algorithmic pricing hit neoclouds.
David López Mateos @SenorScience
A very valid point, @YannikHehemann. Inference providers are very good at integrating "smallish" amounts of capacity, so it's probably not that. Our composite index for Hopper and Blackwell does that weighting, but you're right that there's work to do on adoption and on index-construction transparency.
Yannik Hehemann @YannikHehemann
I am no expert, but it seems one problem is batch size. Large companies, being the major customers of AI compute, could have difficulty integrating small amounts of spare capacity, which is why I think spot prices are not that useful for pricing the AI compute market. Also, it would be very useful to design an index according to real-world allocation, meaning weighting it by how much AI compute is really priced by each specific mechanism.
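Yannik's allocation-weighting idea can be sketched in a few lines. A minimal illustration, assuming entirely hypothetical mechanism names, prices, and volume shares (none of these numbers come from a real index):

```python
# Allocation-weighted price index: weight each pricing mechanism
# by how much compute is actually transacted through it, rather
# than averaging the quoted prices equally.

def weighted_index(prices: dict, volumes: dict) -> float:
    """Volume-weighted average $/GPU-hr across pricing mechanisms."""
    total = sum(volumes.values())
    return sum(prices[m] * volumes[m] for m in prices) / total

# Hypothetical H100 quotes ($/GPU-hr) and transacted GPU-hours.
prices = {"reserved": 1.70, "on_demand": 2.35, "spot": 2.64}
volumes = {"reserved": 800_000, "on_demand": 150_000, "spot": 50_000}

print(round(weighted_index(prices, volumes), 2))
```

With these made-up volumes the index reads about $1.84/hr, while an equal-weight average of the same three quotes would read $2.23/hr: the weighting choice alone moves the benchmark by roughly $0.39/hr.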
David López Mateos @SenorScience
1/ GPU prices are rising. But nobody agrees on what that means. Four indices now track H100/Hopper compute prices. They use different methodologies. By April 2026, they disagree by nearly $1/hr — on the same chip.
[media]
David López Mateos @SenorScience
This is what happens when there's no mechanism to reprice locked capacity. If you're sitting on a $1.70 contract and the market is at $2.35, you hoard. The missing piece isn't more supply, it's contract liquidity. Let people resell or transfer committed compute and the price signal starts working again.
Shanu Mathew @ShanuMathew93
"On-Demand GPU rental capacity is sold out across all GPU types – those that have locked up on-demand instances are not willing to relinquish this capacity back into the pool despite recent price hikes. Trying to find GPU compute in early 2026 has been like trying to book airplane tickets on the last flight out, high prices, and almost no availability."
[media]
David López Mateos @SenorScience
Those price differences are exactly the problem we've been looking at. SemiAnalysis says $2.35, these marketplaces quote $1.50–1.79, Silicon Data says $2.64, our index shows $2.18. Whether centralised or decentralised, nobody agrees on what an H100 costs. That's not a mature market: it's a market still figuring out what it's pricing.
Kaff 📊 @Kaffchad
just read @SemiAnalysis_'s latest report on the GPU rental market. a few numbers worth noting:
– H100 rental prices up 40% in 5 months ($1.70 → $2.35/hr)
– on-demand capacity: sold out across all GPU types
– all supply coming online until Aug 2026 already pre-booked
– driver: agentic AI workflows consuming compute at parabolic scale

centralized clouds are at full capacity and repricing aggressively under the high demand for tokens from #AI models: Claude, ChatGPT, Gemini,…

the gap this creates is real, and I think decentralized GPU marketplaces exist precisely for this moment. here are the 3 projects building the alternative in the #GPU race:

1/ @akashnet: the open-source cloud marketplace. 34,300 new leases in Q4 2025, GPU utilization near 80%, and just crossed $5M in compute spend in the first 90 days of 2026, an all-time high. avg H100 rental: $1.53/hr.

2/ @TargonCompute: enterprise-grade GPU compute. highest-revenue subnet on $tao (~$10.4M projected ARR). 20B+ inference tokens/day across 1,500+ nodes. raised $10.5M Series A. co-authored a whitepaper with Intel on decentralized compute using Intel TDX. avg H100 rental: $1.79/hr.

3/ @lium_io: the GPU rental layer. 500+ H100s onboarded in early 2026. built for short-term burst compute that centralized clouds can't offer fast enough. rental revenues now outpacing token emissions. avg H100 rental: $1.50/hr.

for context: the centralized market is at $2.35/hr and climbing. these three are offering the same hardware 24–36% cheaper, with availability when AWS has none.
[media]
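The discount claims in the quoted thread can be checked with quick arithmetic. A short sketch using only the prices quoted above ($2.35/hr centralized vs the three per-provider averages):

```python
# Percent discount of each decentralized average H100 rate
# vs the quoted $2.35/hr centralized rate.
centralized = 2.35
providers = {"akash": 1.53, "targon": 1.79, "lium": 1.50}

for name, price in providers.items():
    discount = (1 - price / centralized) * 100
    print(f"{name}: {discount:.0f}% cheaper")
```

That works out to roughly 24–36% cheaper. Figures above 50% only appear if the ratio is flipped: the centralized rate is up to ~57% more expensive than the cheapest provider ($2.35 / $1.50 ≈ 1.57).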
David López Mateos @SenorScience
@stevehou @Silicon_Data A lot of capacity is coming online for B200s in the next 2 months. Whether that shows up in pricing depends on which index you're watching. Ours and Silicon Data's disagree by over $2/hr on the B200 right now. The market is moving fast, but price discovery hasn't caught up.
Steve Hou @stevehou
Normalized GPU rental price appreciation. AI compute prices are going up across the board, especially for the newly installed Blackwell B200 chips, followed by the H100 and A100. AI demand is surging and shows no sign of slowing down. Credit: @Silicon_Data
[media]
David López Mateos @SenorScience
@SmallCapSnipa Which index? SemiAnalysis says $2.35. Silicon Data says $2.64. Ornn says $1.77. Our Hopper US index shows $2.18. Same chip, same month. The direction is real. The magnitude depends entirely on who you ask.
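The disagreement in that reply is easy to quantify. A quick sketch using the four quotes as cited in the tweet (same chip, same month):

```python
# Spread across four H100 indices quoting the same chip, same month.
quotes = {
    "SemiAnalysis": 2.35,
    "Silicon Data": 2.64,
    "Ornn": 1.77,
    "Hopper US": 2.18,
}

lo, hi = min(quotes.values()), max(quotes.values())
spread = hi - lo
print(f"spread: ${spread:.2f}/hr, {spread / ((hi + lo) / 2):.0%} of the midpoint")
```

Nearly $0.90/hr of disagreement between the highest and lowest quote, consistent with the "nearly $1/hr" figure earlier in the thread.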
Small Cap Snipa @SmallCapSnipa
JUST IN: $NVDA H100 GPU PRICES ARE UP 40% SINCE OCTOBER
Nvidia H100 price per hour:
• October 2025 - $1.70
• March 2026 - $2.35
Nearly a 40% increase over a six-month period, with on-demand GPU rental capacity sold out across all GPU types.
Bullish cloud providers 📈
[media]
David López Mateos @SenorScience
@The_AI_Investor The question is: profitable at what price? Four indices track H100 rental rates right now and they disagree by nearly $1/hr. On-demand capacity is sold out at current rates, which means even the highest index is probably below the clearing price. It could be more profitable...
The AI Investor @The_AI_Investor
GPU rental demand is going through the roof. It turns out that renting out H100s is still very profitable, especially with DRAM prices much higher than a few years ago.
Teng Yan @tengyanAI
@SenorScience well said. we've been trying to get a sense of compute costs, but data is quite sparse. would love to chat more; could there be ways for us to collaborate?
David López Mateos @SenorScience
Thanks @0xmoonrunner. There is definitely some truth to this. However, physical markets will trade the actual good, and there will be differentials between different goods. Financial markets will probably converge to one or two proxies for the whole market.
MoonRunner @0xmoonrunner
@SenorScience maybe the issue is that we're forcing one market where there are really many markets. why not have separate benchmarks for training and for inference, with units of measurement that represent what the user actually wants to pay for?
David López Mateos @SenorScience
Thanks @gustofied. Energy prices feed into all of this too, so I'm not sure this is a case of one market being more important than another. This is a new market for sure.
gustofied @gustofied
in some ways everything is more and more chained to gpu prices, in the sense that the common denominator and mover, the "god" market of all markets, is compute. it eats an ever-larger share of the pie, with oil as the competing share, so it's not countries but these two market players that are positioning themselves structurally across the world, as oil perhaps nears its top, just as the debt market had its final turn in 2020
David López Mateos @SenorScience
@0xP4mP1t Just a little bit ;) To be fair, this is a very nascent space. The surprising thing would be if they were efficient. It took other commodities decades to become efficient.
David López Mateos @SenorScience
Thanks @Lazarus_Capital. I don't think we can read that much into these data, unfortunately. There's no deep analysis behind these prices. Lots of people are even doing cost-based pricing on data centers!
Lazarus @Lazarus_Capital
Reserved matters the most since it prices in spot/on-demand and the future curve. It also lets companies model out their revenue stream to justify the investment in the GPUs and the DC. I'm surprised to see B200 reserved pricing stay flat to down vs H100 rising. Hope they're using 1-year contracts and not the first year of a 5-year contract...
David López Mateos @SenorScience

4/ The B200 story is wilder still. Our on-demand index hit $8.13 in March. Other indices registered the same move from levels $2+ lower. Five months of history, and the benchmarks can't agree on where the market even is.

David López Mateos @SenorScience
5/ Meanwhile, on-demand capacity is sold out. Renters are subletting clusters. Prices are rising, but not fast enough to clear the market. The real clearing price is unobservable.