Shitty power trader

75 posts

@DomResidAgg

who tf let a zoomer trade virtuals? up 4m ytd on dom:dom

Ignoring risk management · Joined October 2025
42 Following · 23 Followers
hearstwhile @ee11iss
@dwarkesh_sp The real lunacy is Jensen's claim that "dialogues with China" will prevent weaponization of AI. The enriched uranium analogy is perfectly apt, and pretending otherwise (while claiming we're on the verge of cancer-curing AGI) is reckless. Great interview and debate.
5 replies · 0 reposts · 24 likes · 4.2K views

Dwarkesh Patel @dwarkesh_sp
The Jensen Huang episode.

0:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains?
0:16:25 – Will TPUs break Nvidia’s hold on AI compute?
0:41:06 – Why doesn’t Nvidia become a hyperscaler?
0:57:36 – Should we be selling AI chips to China?
1:35:06 – Why doesn’t Nvidia make multiple different chip architectures?

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!
247 replies · 620 reposts · 4.8K likes · 1.8M views

Shitty power trader @DomResidAgg
@platonicspace @sjgadler Idk being concise is a very valuable skill (which I apparently lack over text at least) but it seems to not be grouped with a lot of other high performing skills as far as I can tell, and even if it is im guessing people “turn it off” when they’re not surrounded by dumb masses
0 replies · 0 reposts · 0 likes · 10 views

Shitty power trader @DomResidAgg
@platonicspace @sjgadler Seems a massive number of the maybe 500 or so good debaters per year (and 500 might be an undercount) go to T20 schools, so they probably make up a non trivial portion of the undergrad body. There’s also a bunch of people who made it without being good at debate but were in it
1 reply · 0 reposts · 0 likes · 12 views

Steven Adler @sjgadler
Dwarkesh: Why would we want to sell China the materials for a serious cyberweapon? It's like selling them nukes with a casing that says 'made by Boeing' and claiming that's good for the US.

Jensen: Comparing AI to nukes is lunacy. Enriched uranium is a lousy analogy. It's an illogical analogy. What we have to recognize is that AI is a five-layered cake.
105 replies · 40 reposts · 1.2K likes · 270.7K views

Shitty power trader @DomResidAgg
@platonicspace @sjgadler Also like 99% of historical smart people have either 1) literally no recordings of them speaking 2) the recordings exist in contexts in which they’re speaking to lay people (a political speech in the early/mid 20th century that was recorded, for example)
1 reply · 0 reposts · 0 likes · 13 views

Shitty power trader @DomResidAgg
@platonicspace @sjgadler I think SF smart self selects for a different type of person than “historical person” smart. I admittedly did debate (in which people talk very fast) so maybe that clouds my judgment here but I also know nearly all top HS debaters go to t20 colleges so
2 replies · 0 reposts · 0 likes · 25 views

Shitty power trader @DomResidAgg
@sjgadler “Have dialogues” also seems wholly unresponsive - why can’t we sell enriched uranium to antagonistic country X then just “have a dialogue” so they won’t hate us?? Like why can’t we do this without simultaneously providing them the means to harm us?
0 replies · 0 reposts · 8 likes · 3.1K views

Shitty power trader @DomResidAgg
@sjgadler Legitimately can’t tell if Jensen is doing a “but I didn’t eat breakfast” or if he considers the analogy between chips and enriched uranium to be so weak that he doesn’t even piece it together immediately
0 replies · 0 reposts · 4 likes · 2.8K views

Shitty power trader @DomResidAgg
@platonicspace @sjgadler yeah except he’s probably not doing it consciously (to appear smart) but unconsciously. If his audience is lay people he should slow down to be comprehensible; if it’s not, he should keep the speed to show in-group membership
2 replies · 0 reposts · 0 likes · 36 views

Platonic Space @platonicspace
@sjgadler dwarkesh has the SF habit of talking too fast. it doesnt necessarily signal intelligence. horsepower is better than torque.
1 reply · 0 reposts · 2 likes · 223 views

peepeepoopoo @DeepDishEnjoyer
lion's mane mushroom with young leek and peanuts. herb salad/gazpacho on the side
[image]

9 replies · 1 repost · 110 likes · 4.9K views

Shitty power trader @DomResidAgg
@MBazilian I’m going to guess they mean additional data center demand + current demand > current supply, which may or may not be true (depends on how aggressively you forecast demand and how pessimistic you are on supply growth, or supply performance generally)
0 replies · 0 reposts · 0 likes · 100 views

Shitty power trader @DomResidAgg
@MartinShkreli @KyleHend1 No, and that’s too much nuance for retarded politicians, but ideally be correct I guess. Or maybe it’s morally permissible to make knowingly false statements (some may call this lying) if it’s directionally correct. God knows politicians think that’s fine
0 replies · 0 reposts · 0 likes · 131 views

Martin Shkreli @MartinShkreli
@DomResidAgg @KyleHend1 pltr doesnt owe federal income tax! they have NOLs and only recently have been profitable. expect them to pay taxes they don't owe?!
1 reply · 0 reposts · 5 likes · 523 views

Shitty power trader @DomResidAgg
@MartinShkreli @KyleHend1 Martin, I’m generally a fan, but “federal” generally refers to either federal income tax or at least the corporate equivalent (corporate income tax), and federal *income* tax certainly refers to that. Agree with the message (Warren is retarded) but I think it’s important to be right
1 reply · 0 reposts · 2 likes · 1.3K views

Martin Shkreli @MartinShkreli
@KyleHend1 how does ITEP know what companies paid the IRS? also payroll tax is federal, "you dumb bitch"
24 replies · 0 reposts · 210 likes · 57.5K views

Shanu Mathew @ShanuMathew93
Trying to bridge thoughts from different sources & podcasts given the focus around the AI buildout. How much data center capacity is actually coming online per year — and who is absorbing it? I've been trying to square some numbers across multiple sources on real, energized gigawatts being added annually in the US, who's consuming them, and what we actually know versus what's estimated.

The installed base

FERC confirmed in their March 2026 State of the Markets report that US data center capacity exceeded 50 GW at year-end 2025. Industry estimates put total US capacity in the 35-40 GW range at year-end 2024 (Bain was at ~35 GW, Morgan Stanley's model pegged it at 37 GW). That implies roughly 10-15 GW of net additions in 2025, a massive step-up from prior years. Total facility power, critical IT load, and hyperscale-only all produce different baselines — I haven't seen two sources use the same definition consistently.

Frontier labs

Brad Gerstner @altcap (investor in both @OpenAI and @AnthropicAI) says OAI and Anthropic have 1.5-2 GW each today, going to ~5 GW by year-end. @dylan522p at @SemiAnalysis (@dwarkesh_sp @dwarkeshpodcast Podcast, March 2026): both at roughly 2-2.5 GW today, both reaching 5-6 GW by year-end 2026 (OpenAI slightly higher), both targeting ~10 GW by end of 2027. @sarahfriar disclosed 1.9 GW for OpenAI at year-end 2025. Anthropic's operational capacity is likely in the 1.5-2 GW range.

On year-end targets, there's a wide gap between what's been contracted (Stargate US + UAE, NVIDIA 10 GW partnership, CoreWeave, Google TPU mega-deal) and what will physically be energized by December. Dylan's 5-6 GW per lab is likely the more physically grounded number, built bottom-up.

Per Dylan, Anthropic was conservative on locking up compute early while OpenAI signed aggressively with Microsoft, CoreWeave, Oracle, & even SoftBank Energy — so Anthropic now has to pay premium rental rates or go to lower-quality providers to catch up (but Gerstner's comments made it sound like the take rate wasn't that high).

Neither leading lab owns or builds data centers. Their ~6 GW of combined incremental capacity in 2026 is physically built and operated by AWS, Google, Microsoft, CoreWeave, Oracle, and others but contractually dedicated to serving OpenAI and Anthropic workloads. Assume a meaningful chunk of AWS's disclosed additions goes to Anthropic's Trainium/Rainier clusters, and a meaningful chunk of CoreWeave's build goes to OpenAI. CoreWeave also recently signed a multi-year agreement to support Anthropic's Claude models, with new capacity coming online in 2026. Frontier lab demand and hyperscaler supply overlap — they are not additive.

Hyperscaler disclosures on physical delivery

These are a mix of US and global figures, and facility power vs. IT load definitions vary across companies.

Amazon (AWS): @ajassy disclosed AWS added 3.9 GW of new power capacity in 2025 (1.2 GW in Q4 alone). Operating from a base of roughly 8 GW at year-end 2025, with a target to double total capacity by year-end 2027, implying ~16 GW total. Still describes demand as outpacing supply. AWS operates 38 regions across 27 countries, so the 3.9 GW is almost certainly global, not US-only, though the US is the clear majority.

Microsoft: @SatyaNadella's team disclosed over 2 GW added in FY2025, with roughly 1 GW brought on in the December quarter alone. 400+ data centers globally. Also targeting roughly double capacity by 2027. SemiAnalysis reported that Microsoft paused over 3.5 GW of capacity that would have been built by 2028, though Reuters/TD Cowen put the figure lower at ~2 GW of terminated leases in the US and Europe, and Bernstein says actual cancelled contracts total only "a couple hundred megawatts." The precise number is disputed. The directional point is clear: Microsoft was recalibrating its self-build vs. lease mix but now seems to be building again.

Google (Alphabet): @sundarpichai and team guided 2026 capex at $175-185B, nearly double 2025. No explicit "we added X GW" disclosure comparable to AWS. Dylan describes them as "still capacity constrained" and acting fast: buying an energy company, putting down turbine deposits for 2028-29, negotiating long-term power agreements. A large chunk of new capacity is going to TPUs for internal products (Gemini across Search, Android, Workspace) and the Anthropic deal (~1 GW in 2026, ~3.5 GW from 2027). Without a disclosed GW figure, I'd estimate 3-5 GW of 2026 additions based on a capex trajectory similar to the other giants.

Meta: @finkd guided $115-135B in 2026 capex, nearly double 2025. Building for internal AI workloads (Llama training, inference across Instagram/WhatsApp/Threads) plus Meta Superintelligence Labs. 1 GW campus in El Paso (investment scaled from $1.5B to $10B), 1 GW campus in Lebanon, Indiana, JV in Louisiana (~$27B estimated), Prometheus bringing 1 GW online in 2026. On top of the self-build, Meta committed $35.2B to CoreWeave across two deals for third-party capacity.

Independent builders and neoclouds

@elonmusk's @xai: Colossus 2 in Memphis is targeting 1-2 GW of capacity to support 550,000 next-gen Nvidia chips, scaling to 1 million GPUs. Deployed 35 natural gas turbines generating 420 MW behind the meter to work around grid constraints.

@CoreWeave's team added 490 MW across 11 data centers in 2025 (260 MW in Q4). Total active capacity hit 850 MW at year-end against 3.1 GW contracted. Planning $30-35B of 2026 capex. Also acting as lead builder on the 1.2 GW Stargate Abilene campus for OpenAI.

@nebiusai: tracking toward 800 MW - 1 GW of available capacity in 2026. 310 MW facility in Finland. Meta agreed to buy $12B of AI computing capacity from Nebius by 2027, with an option for an additional $15B over five years — up to $27B total.

Sense-checking the total

A few different ways to triangulate. Morgan Stanley forecasts ~24 GW of global additions in 2026; at a 50-60% US share that's ~12-14 GW. BloombergNEF has something like ~8-10 GW of IT load; at a 1.4 PUE that's ~11-14 GW of facility power. @climatetech_vc data shows at least 16 GW of US data center capacity slated to come online in 2026 across 140 projects, but warns 30-50% may face delays due to power constraints and equipment shortages.

Crude capex math: $600-700B in 2026 hyperscaler capex at roughly $40-50B per GW of fully-built capacity also implies mid-teens GW. That's an imprecise conversion, as capex covers equipment, data center shells, chips, and land that enter service across different years, but it provides another directional anchor.

Colliers reported that North American data center absorption hit 15.6 GW in 2025, double the 2024 level. The narrower CBRE primary-colocation-market figure of 2.5 GW only captures a subset of traditional leased space and misses hyperscaler self-build, behind-the-meter neocloud facilities, and training clusters entirely. @EpochAIResearch's frontier data center tracker confirms the step-function: most of the largest campuses (e.g., Meta Hyperion at 2.2 GW, Microsoft Fairwater above 1 GW) don't fully arrive until 2027-2028.

A reasonable base case for 2026 US net energized capacity additions: ~15 GW. Bear case: 12-13 GW (permitting delays push energizations into 2027). Bull case: 18-20 GW (everything announced delivers on schedule).

The bucket breakdown

Frontier labs (OpenAI + Anthropic): ~6 GW. Physically built by AWS, Google, Microsoft, CoreWeave, Oracle but contractually dedicated to OpenAI and Anthropic training and inference workloads. ~3 GW incremental per lab.

Hyperscaler first-party AI: ~4-5 GW. Microsoft Copilot across 900M MAUs and GitHub Copilot. Google Gemini across Search, Android, Workspace plus DeepMind. Amazon Alexa+ rebuild, internal retail/logistics AI. Meta ad retrieval, recommendations, Llama training.

Third-party AI cloud and independent builders: ~2-3 GW. xAI and Meta as external customers of CoreWeave/Nebius. Enterprise builders. Sovereign AI. Inference demand through Bedrock, Vertex, and Foundry APIs.

Non-AI cloud + overbuild/commissioning lag: ~2 GW. Traditional enterprise workloads plus power energized ahead of full rack load.

Where I'm probably wrong and why the number could be higher than 15 GW

~90% of the incremental build in this framework is AI-related; only ~1 GW goes to traditional cloud. The most likely source of upside: enterprise inference and cloud AI demand growing faster than this framework captures. Oracle's remaining performance obligations have exploded to $523B. AWS's non-Anthropic AI business is running at $15B+ ARR. Amazon's custom chip business alone is a $20B+ run-rate. The absorption data supports this: Colliers/Jefferies put North American absorption at 15.6 GW in 2025, meaning demand is tracking much closer to total additions than people assume.

If enterprise adoption of AI APIs is inflecting harder than I'm capturing, the "third-party AI cloud" and "hyperscaler first-party" buckets could each be 1-2 GW bigger, pushing total additions toward 18-20 GW. If the enterprise inference layer is scaling as fast as the hyperscalers are betting (Copilot seats, Gemini in Search, Claude Code adoption, agentic workflows), then 15 GW is conservative and $600B+ in 2026 hyperscaler capex is well supported.

Lot of figures and disclosures, so I'm sure I slipped up along the way. What did I get wrong? Anything else to include?
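The sense-checks in the thread are simple arithmetic, so they are easy to rerun with different assumptions. A quick sketch: all figures are taken from the thread above, and the bucket midpoints are my own rounding of the stated ranges, not disclosed numbers.

```python
# Rough triangulation of 2026 US data-center additions, using the
# thread's own figures. Every input here is an estimate from the post.

# 1) Crude capex math: $600-700B hyperscaler capex at $40-50B per GW.
capex_low_gw = 600 / 50    # most conservative pairing
capex_high_gw = 700 / 40   # most aggressive pairing

# 2) BloombergNEF ~8-10 GW of IT load, converted to facility power at 1.4 PUE.
pue = 1.4
bnef_low_gw = 8 * pue
bnef_high_gw = 10 * pue

# 3) Morgan Stanley: ~24 GW global additions, 50-60% landing in the US.
ms_low_gw = 24 * 0.50
ms_high_gw = 24 * 0.60

# 4) Bucket breakdown (midpoints of the ranges in the post).
buckets = {
    "frontier labs (OpenAI + Anthropic)": 6.0,
    "hyperscaler first-party AI": 4.5,            # midpoint of 4-5 GW
    "third-party AI cloud / independents": 2.5,   # midpoint of 2-3 GW
    "non-AI cloud + overbuild": 2.0,
}
base_case_gw = sum(buckets.values())

print(f"capex math:     {capex_low_gw:.1f}-{capex_high_gw:.1f} GW")
print(f"BNEF x PUE:     {bnef_low_gw:.1f}-{bnef_high_gw:.1f} GW")
print(f"Morgan Stanley: {ms_low_gw:.1f}-{ms_high_gw:.1f} GW")
print(f"bucket sum:     {base_case_gw:.1f} GW")
```

All four anchors land in roughly the 11-18 GW band, which is why ~15 GW reads as a defensible base case.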
[images]

Shanu Mathew @ShanuMathew93

.@altcap had some insightful takes and some arguably newer disclosures on OAI/Anthropic (he's an investor in both) worth tracking:

Anthropic's Revenue Ramp: Brad called this the fastest revenue explosion in technology history. $1B run rate end of 2024, $4B by mid-2025, $9B by end of 2025, then $30B by end of March 2026. He noted they hit their year-end target by Q1. To contextualize the monthly adds, he said Anthropic added the equivalent of Databricks plus Palantir combined in a single month. He wouldn't be shocked if Anthropic exits the year at $80-100B in revenue.

"TAM for Intelligence" Thesis: Brad's central argument is that intelligence has a near-infinite TAM, fundamentally different from any prior technology market. He stressed this isn't zero-sum between Anthropic and OpenAI. Millions of self-interested actors (consumers, enterprises, 1,000+ customers paying $1M+ annually) are all demanding the product simultaneously. Same Jevons paradox argument: the unit cost of intelligence is plummeting because model capability is surging, which drives more consumption.

Gross Margins and "Accidental Profitability": Brad pushed back hard on the narrative that these companies are bleeding cash. His logic: the biggest cost input is compute, and Anthropic only has ~1.5-2 GW of capacity. That compute cost is relatively fixed whether revenue is $1B or $80B, so gross margins are expanding "explosively." He suggested the companies may hit "accidental profitability" because they literally can't spend revenue fast enough on compute buildout. He also noted Anthropic has only 2,500 people versus Google crossing similar revenue thresholds with 120,000. Inference costs are down 90% year over year.

Anthropic's Strategic Focus as Competitive Advantage: Brad credited Anthropic's discipline in saying no. No multimodal, no video, no hardware, no chips, no building data centers. They concentrated entirely on coding and co-work as the path to AGI/ASI, executed with 2,500 people all pulling in the same direction. That focus, combined with the coding lead, is what let them come from being "counted out" a year ago to now dominating that market.

OpenAI Feeling Short-term Pain but Still Optimistic: Brad said he's a buyer of OpenAI shares today despite the negative vibes (employees leaving, strategy questions, secondary market trading below last valuation). He called it "peak OpenAI FUD." His case: it starts with great researchers and models. The upcoming Spud model (first Blackwell-trained model) is being previewed, and people are telling him it's on par with Mythos.

Gross vs. Net Revenue Distraction: Brad dismissed the gross vs. net revenue debate (Anthropic reportedly presents gross, OpenAI net). He said the hyperscaler distribution commissions are single-digit percentages of total revenue. Whether you haircut Anthropic by 5-10% or gross up OpenAI, the comparison is roughly apples-to-apples and a distraction from the real story.
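Gerstner's margin argument is really just operating leverage: if compute is the dominant cost and roughly fixed, gross margin converges toward 100% as revenue scales. A minimal sketch; the $6B fixed annual compute cost is my own hypothetical illustration, not a disclosed figure.

```python
# Operating leverage when compute is treated as a roughly fixed cost base.
# The $6B/year compute cost is a hypothetical illustration, not a disclosure.

def gross_margin(revenue_b: float, compute_cost_b: float = 6.0) -> float:
    """Gross margin when compute is the dominant, fixed cost of revenue."""
    return (revenue_b - compute_cost_b) / revenue_b

# Revenue milestones from the ramp described above, in $B.
for rev in [9, 30, 80]:
    print(f"${rev}B revenue -> gross margin {gross_margin(rev):.1%}")
```

Under this assumption the business is underwater at low revenue but crosses 90% gross margin near the $80B mark, which is the shape of Gerstner's "accidental profitability" claim.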

15 replies · 59 reposts · 441 likes · 116.2K views

Xiao Wang @xiaowang1984
Thanks to @theEnergyMads for telling me about the Electrify Vermont summit and for letting me know @JigarShahDC would be in town so I could ask him a question. Kudos to Jigar for being a good sport as well and taking a picture together. 🤎
[images]

10 replies · 4 reposts · 70 likes · 4.3K views

Xiao Wang @xiaowang1984
@theripsnorter $80 isn't that terrible a hedge. I would watch out for *future* contracts that are $120+ tho
3 replies · 0 reposts · 9 likes · 483 views

RIP MASSACHUSETTS @theripsnorter
Remember when officials said Offshore Wind (OSW) would be cheaper? Even recently Ed Markey said Vineyard Wind (VW), when completed, would deliver electricity to the grid at "a fraction of the cost of current wholesale prices."

In 2024, with low gas prices, the wholesale price of electricity was $39.50/MWh. In 2025, with higher gas prices, it was $68.50/MWh. Now with VW fully operational, MA ratepayers will be forced to pay ~$80/MWh for everything the turbines generate because we locked ourselves into a fixed price (with escalation) over 20 years. Unless gas prices rise substantially, OSW was a bad deal for MA ratepayers.

And of course, they don't want the lower and stable gas prices that would come with more pipelines — otherwise they would look even more foolish.

The only saving grace: VW output will represent only 6-6.5% of MA load and hopefully only add ~$2 to your monthly bill.
[image]

Cape Cod Times @capecodtimes

Vineyard Wind 1 completes turbine installation after court ruling capecodtimes.com/story/news/env…

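The ~$2/month figure above is easy to sanity-check. A sketch using the tweet's prices and 6-6.5% load share; the 600 kWh/month household usage is my own illustrative assumption, not from the tweet.

```python
# Sanity-check of the Vineyard Wind bill-impact claim above.
# Household usage (600 kWh/month) is an assumed illustrative figure.

VW_PRICE = 80.0                          # $/MWh, contracted OSW price
WHOLESALE = {2024: 39.50, 2025: 68.50}   # $/MWh, market prices by year
VW_LOAD_SHARE = 0.0625                   # midpoint of the 6-6.5% of MA load
HOUSEHOLD_MWH = 0.6                      # 600 kWh/month, assumed

for year, market in WHOLESALE.items():
    premium = VW_PRICE - market                       # $/MWh above market
    monthly = HOUSEHOLD_MWH * VW_LOAD_SHARE * premium # $/month per household
    print(f"{year} prices: premium ${premium:.2f}/MWh "
          f"-> ~${monthly:.2f}/month for this household")
```

Against 2024's low wholesale price the premium works out to roughly $1.50/month for such a household, and against 2025 prices it is under fifty cents; both are consistent with the tweet's ~$2 upper bound.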
13 replies · 29 reposts · 92 likes · 6.9K views

Dr. Jill Stein🌻 @DrJillStein
holy shit. if confirmed this changes everything.
7.1K replies · 1.8K reposts · 10.3K likes · 4M views