Curls

8.4K posts

@blcurley

https://t.co/4Hfzgsnnez $BTC $IREN #miningmafia

Joined February 2009
1.3K Following · 1.3K Followers
Curls
Curls@blcurley·
@data168 Nebius has recently proven itself to be a worthy opponent of IREN. CoreWeave will also get its share of the pie. I believe the two mentioned will never see IREN coming. It's a three-horse race, and possibly all will have a place in this market.
English
1
0
0
58
DYOptions
DYOptions@data168·
$IREN will be worth more than $NBIS. I wish that everyone, if even for one moment, could feel that vision.
English
53
10
301
40.1K
NightHawk Capital
NightHawk Capital@NighthawkTradez·
Why is $IREN going parabolic right now??
NightHawk Capital tweet media
English
32
4
173
34.9K
Curls
Curls@blcurley·
@chinoalemano A number of us who've been in this investment for years have believed, and continue to believe, that this will in time become a category leader. The chips are falling into place. AI revenue has sucked. We've seen this before. "Gradually, then suddenly." Patience required.
English
0
0
4
331
ChinoAleman
ChinoAleman@chinoalemano·
Anyone else getting deal vibes with $IREN? I know it's been said a million times. And the days keep passing with nothing. It's my second biggest position, so obviously I've been watching closely. But I don't know, this time feels different. The way it held up through the dips was impressive. NBIS keeps climbing day by day, proving you can still get big % moves even with a larger market cap. BlackRock in with $250M. Dark pools moving millions. The NVIDIA deal for 5GW, and reading between the lines, NVIDIA's the one bringing in the clients. The convertible notes that looked bearish and dragged the stock down… and then it bounced right back. Last time they pulled something like this, the Microsoft deal landed shortly after. If a deal doesn't come in May, I'm done. Din Dan Dell.
ChinoAleman tweet media
English
18
1
97
10.1K
Curls
Curls@blcurley·
This is some solid speculation with some possibility of truth. Appreciate your hard work and perspective, Butcher!
₿itcoin ₿utcher 🥩 🐑 🐷@bitcoinbutcher1

$IREN Partnering for Long-Term Growth

Most people, including myself, woke up Monday morning caught off guard by the new convertible bond offering and asked why management decided to issue so close to announcing $NVDA rather than letting the share price appreciate with the new outside interest. That, my friends, characterizes a short-term mindset which I am constantly trying to recondition myself to avoid. @Umbisam helped me reconsider my perspective and come away with greater appreciation and empathy for what Dan, Will and the team are trying to accomplish.

@danroberts0101 and @kentpdraper often speak to the 3 Cs of $IREN to communicate how the company will grow:
1) Capacity
2) Customers
3) Capital

The most underrated characteristic of these intertwined components remains their bias for a long-term time preference.
1) Capacity - High-quality data centers that can survive multiple generations of GPUs
2) Customers - Mixing hyperscalers with the ability to do multiple projects (maybe $MSFT 🇦🇺) with younger startups and enterprises on the high-potential growth trajectory that will call $IREN home (think @FireworksAI_HQ)
3) Capital - Finding investors who understand the long-term vision and will contribute funding across multiple rounds to build the 5+ GW of capacity

Management purposely issued the convertible right after earnings for the long-term benefit of $IREN. Issuing the notes at a lower price offers a 🥕 to note investors. Negotiations went something like this:

Dan: "Mates, we need $2.6B for our new DCs"
Noteholders: "Dan, what's in it for us?"
Dan: "Give me a low interest rate of 1% and I'll give you the right to convert your debt into equity at 32.5% higher than $55. You will have the right to convert at $73, which is higher than today's price, but wait until you see what we have coming to market in the next few months"
Noteholders: "Great Dan, make it $3B. We cannot wait for you to sign Sweetwater and the remainder of Childress"

This whole dance, while inconvenient for short-term price action, serves a greater purpose. Dan will need to go back to market for more money to fund his vision and needs repeat investors. I appreciate @Umbisam helping me change the lens through which I viewed the financing. The same reason I give discounts to some of my regulars on their 🥩 is the same reason Dan executes in a particular manner: repeat business. Long-term infrastructure funded by long-term capital providers to serve long-term customers will provide us equity holders plenty of returns if we remain patient.

English
1
0
3
538
Curls retweeted
Kenny Zufall
Kenny Zufall@KennyZufall·
To add to this, I did some digging into the mechanics behind this.

TL;DR: The $110.30 capped call is an aggressive "anti-dilution" stance. It shows management is so confident in a triple-digit share price that they paid $174M just to keep the float clean until then. Once $IREN hits the bottom of the range at $73.07 it's mostly pure upside, with capped calls completely shielding the float once in the range and fueling further moves up. ~35.58 million potential shares are locked behind that $73.07 strike. While mechanical arbitrage produces near-term headwinds and volatility, I personally hope they announce a major tenant within the next few weeks to months to see this arbitrage blown away. Bottom line: management is signaling they see the stock pushing past $110.

The Mechanics Summarized (via Gemini with prompts on arbitrage):

1. The Total Dilution Math
The deal officially priced with a 1.00% coupon and a 32.5% conversion premium. Based on the closing "Reference Price" of $55.15, the conversion price is officially set at $73.07.
Principal: $2,600,000,000
Conversion Price: $73.07
Total Potential Shares: $2.6B / $73.07 = ~35.58 million shares.
In a maximum-dilution scenario, this represents approximately 35.58 million new shares. However, the capped call transactions purchased alongside the notes act as an insurance policy, shielding the company from actual dilution until the stock price exceeds a massive "Cap Price" of $110.30.

2. The Delta Hedge (Immediate Sell Pressure)
Because the notes are "out-of-the-money" ($73.07 strike vs. $55.15 market price), the institutional arbitrageurs do not short 1:1 immediately. Their algorithms use an initial delta (typically between 0.35 and 0.45 for a 7.5-year note).
Total Underlying Shares: 35.58 million
Initial Hedge Ratio (0.40 Delta): 35.58M × 0.40 = ~14.2 million shares.
This means roughly 14.2 million shares of forced, mechanical short-selling were executed into the tape to "lock the box" for the arbitrageurs. The fact that the stock held the mid-$50s despite 14.2 million shares of algorithmic selling is a testament to the organic buying power currently in the name.

3. The Gamma Wall at $73.07
This is where the "wet blanket" effect comes in. As the stock price rises toward that $73.07 conversion strike, the delta of these notes will mechanically increase. If the stock hits $65, delta might jump to 0.65, and the funds will have to short another ~9 million shares to stay neutral. If the stock hits $73.07, delta hits 1.0, and the funds will have to be short the full 35.58 million shares. This creates a self-correcting mechanical resistance: every time the stock rallies toward $73.07, the arbitrageurs are forced to add more short volume to maintain their hedge. This "Gamma Wall" makes it very difficult for the stock to sustain a parabolic breakout above the strike in the short term without further catalysts or heavy buy-and-hold volume on sentiment.

4. Why the Upsize to $2.6 Billion?
Management upsized the deal by $600 million because the institutional order book was oversubscribed. From a fundamental perspective, this is an aggressive "take the money while it's there" move. Securing $2.6 billion at a microscopic 1.00% interest rate to fund a multi-gigawatt NVIDIA DSX pipeline at Sweetwater/Oklahoma is a masterclass in capital efficiency. They are trading a controlled, mathematical dilution at $73.07 for the chance to build $90B+ of enterprise infrastructure.

The Verdict on the Tape
The "mechanical" selling (the initial ~14.2 million shares) is largely finished. The stock is now entering a period of consolidation where the algorithmic short-selling is replaced by organic price discovery. Unless the broader macro environment breaks, the $55.15 reference price acts as the new psychological "anchor" for the institutions, while the $73.07 strike acts as the mechanical ceiling for the next few months. In short, we will need aggressive buying volume to overcome the new arbitrage and see sustained upside over $73.07.

Another note from me: the mechanical arbitrage also has an upside. When the stock stalls out due to lack of momentum, covering during drawdowns slows the fall, as the arbitrageurs buy back shares based on delta/gamma.
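The dilution and hedge arithmetic in the thread above can be sanity-checked in a few lines. This is a minimal sketch using the thread's own figures; the 0.40 initial delta and the 0.65 delta at $65 are the thread's assumptions (from its Gemini prompt), not published numbers.

```python
# Convertible-note dilution and delta-hedge arithmetic from the thread.
# Principal and conversion price are from the announcement as quoted;
# the delta values are the thread's assumptions, not disclosed figures.
principal = 2_600_000_000        # note principal ($)
conversion_price = 73.07         # conversion price ($ per share)
initial_delta = 0.40             # assumed hedge ratio for a ~7.5-year note

max_shares = principal / conversion_price       # maximum-dilution share count
initial_short = max_shares * initial_delta      # shares shorted to "lock the box"

print(f"potential shares: {max_shares / 1e6:.2f}M")       # ~35.58M
print(f"initial hedge:    {initial_short / 1e6:.2f}M")    # ~14.23M

# As the stock rises toward the strike, delta climbs toward 1.0 and the
# arbitrageurs must short the remainder -- the "gamma wall" effect:
delta_at_65 = 0.65               # assumed delta if the stock reaches $65
extra_short = max_shares * (delta_at_65 - initial_delta)
print(f"extra short at $65: {extra_short / 1e6:.2f}M")    # ~8.90M
```

Running this reproduces the thread's ~35.58M potential shares, the ~14.2M initial short, and the roughly 9M additional shares that would need to be shorted at $65.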
Frans Bakker@FransBakker9812

$IREN Prices Upsized $2.6 Billion 1% Convertible Senior Notes Due 2033

@IREN_Ltd announced the pricing of its upsized private offering of $2.6 billion in 1.00% convertible senior notes due 2033 (increased from the previously announced $2 billion).

Key Terms:
- Coupon: 1.00% (paid semi-annually)
- Maturity: December 1, 2033
- Initial Conversion Price: ~$73.07 per share (a 32.5% premium to the $55.15 closing price on May 11, 2026)
- Conversion Rate: 13.6848 ordinary shares per $1,000 principal
- Capped Calls: Entered with a cap price of $110.30 (a 100% premium) to reduce dilution upon conversion

Proceeds & Use:
- Expected net proceeds: $2.57 billion ($2.96 billion if the $400 million option is fully exercised)
- ~$174.5 million to fund capped call transactions
- Remainder for general corporate purposes and working capital

The notes settle on May 14, 2026. This move provides IREN with significant low-cost capital to support its AI cloud and data center growth.
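The quoted terms are internally consistent, which is easy to verify: the conversion price follows from the reference price and premium, the conversion rate implies the same price, and the cap price is exactly double the reference. A quick check using only figures from the announcement as relayed above:

```python
# Consistency check of the convertible-note terms quoted above.
reference_price = 55.15      # closing price on the pricing date ($)
premium = 0.325              # 32.5% conversion premium
conversion_rate = 13.6848    # ordinary shares per $1,000 principal

conversion_price = reference_price * (1 + premium)
print(f"conversion price:  ${conversion_price:.2f}")       # ~$73.07
print(f"implied by rate:   ${1000 / conversion_rate:.2f}") # ~$73.07

cap_price = reference_price * 2   # capped call struck at a 100% premium
print(f"cap price:         ${cap_price:.2f}")              # $110.30
```

All three numbers line up with the announced $73.07 conversion price and $110.30 cap price.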

English
4
3
20
5.1K
Curls retweeted
Frans Bakker
Frans Bakker@FransBakker9812·
What people really don't understand — outside of OnlyFrans subscribers — is that $IREN has two development sites that are very close to Sweetwater 1, and almost certainly batch-zero base load 👀 As you may recall, Sweetwater 2 is going to be linked to Sweetwater 1 with a direct fiber loop 🔃 Well, imagine that there are not one but two other development sites in the pipeline that are much, much closer to Sweetwater 1, and one of them is a potential 1GW+ site 🤯 So when you think of Sweetwater as a campus, and a flagship deployment for the DSX architecture of UP TO 5GW ‼️, you can imagine how attractive this campus is for $NVDA and future customers, but especially for $IREN to develop and monetize. Where other AI cloud providers are scrambling for power, turbines, and permits — IREN is able to keep drawing more power toward the Sweetwater flagship campus in an organic, year-by-year roll-out of its portfolio, without sacrificing latency, workforce, local support, regulatory approval, tax incentives, or time to compute. Make no mistake, Stargate is a complete joke compared to Sweetwater. $IREN IS IN THE PROCESS OF BUILDING THE LARGEST, INTERCONNECTED, AI DATA CENTER CAMPUS IN THE WORLD. 2GW is just the beginning 🤫
mon@moninvestor

$IREN being named the flagship deployment for NVIDIA's DSX architecture at the 2GW Sweetwater campus is a massive deal. It validates IREN's vertically integrated approach across power, land, data centers and GPU operations — exactly what NVIDIA needs in a partner to scale AI infrastructure globally. It gives IREN the credibility to unlock institutional financing, attract hyperscaler deals, and secure priority GPU allocation. It's in NVIDIA's own interest for IREN to succeed, because IREN's SW1 site is now expected to be its flagship showcase for the DSX architecture.

English
36
59
510
83.3K
Curls retweeted
재붕이_Jin
재붕이_Jin@GyujinAAIG·
If the $IREN investment thesis was about absorbing the hyperscalers' surging data center investment demand through 2027, it now seems time to revisit that thesis.

The original picture was relatively simple: as AI competition intensified, the things scarcer than GPUs would be power, land, cooling, and data center construction speed, so the infrastructure operators that had secured large-scale power for the long term would benefit most. And $IREN looked like a company at the center of that.

But since the recent DSX partnership with NVIDIA, Sweetwater 1's 1.4GW has started to mean more than a simple GPU-leasing data center. It is increasingly likely to become a flagship deployment serving as the key reference point in the standardization of the next-generation AI factory architecture NVIDIA is pushing.

The important shift here is less "who is the customer" and more "what are they trying to build." Hyperscalers fundamentally want to build their own AI operating systems. It's not just about securing lots of GPUs; they want to make how efficiently they tie together and operate tens of thousands of GPUs their future competitive edge. Future competition will likely be decided less by GPU count than by whole-system operating capability: training efficiency, job scheduling, latency, fault recovery, and utilization. So hyperscalers have a strong incentive to build their own independent AI infrastructure stacks rather than depend entirely on NVIDIA's standardized AI factory architecture.

NVIDIA's direction is a bit different. NVIDIA no longer seems content to remain a simple GPU vendor. Beyond selling individual GPUs, it is moving toward delivering racks, clusters, and entire data centers as one integrated AI factory. DSX is not just a server architecture; it is closer to an "AI factory operating standard" encompassing power design, cooling, GPU interconnect, network layout, workload-management software, and large-scale GPU operations.

And that changes what Sweetwater means. If Sweetwater 1 becomes the flagship deployment of NVIDIA's DSX architecture rather than a simple GPU-leasing campus, $IREN's role can also go beyond that of a mere AI data center supplier. In that case $IREN could be re-rated not simply as a "neocloud company with lots of secured power" but as a core infrastructure platform actually implementing the next-generation AI industry architecture.

If so, Sweetwater 1's potential customer base may also differ from what we assumed. The most natural picture used to be supplying large-scale bare-metal GPU infrastructure to hyperscalers like Meta, Microsoft, and Amazon. That demand will likely persist, especially while power and data center construction remain bottlenecks and hyperscalers keep leasing external infrastructure in the short term. But I think the probability that all of Sweetwater 1 is used purely as a hyperscaler GPU-leasing campus is actually falling.

The most likely core customer is now NVIDIA itself. NVIDIA is moving beyond GPU sales toward designing the entire AI factory architecture, and Sweetwater 1 could become the flagship proving ground where that architecture is validated and operated at real scale: internal research training clusters, next-generation GPU architecture validation, DGX Cloud expansion, and large-scale distributed training optimization are all plausible uses.

The next most likely customer group is the frontier AI model companies. AI competition will demand ever more GPUs and power, and AI companies that cannot quickly build their own data centers may prefer large AI factories optimized to NVIDIA's standards. Especially as compute secured per unit of time becomes critical, ready-built DSX-based facilities become more attractive.

After that, nation-scale AI projects and sovereign AI demand are entirely plausible. As governments and large institutions pursue independent AI capability, moves to quickly secure large NVIDIA-ecosystem AI factories are likely to appear.

Hyperscalers, by contrast, may well use some capacity, but seem relatively unlikely to become the core long-term customers for all of Sweetwater, since they will ultimately head toward building their own AI operating stacks.

In the end, the probability keeps growing that Sweetwater 1 develops beyond a simple GPU-leasing campus into the flagship example of the next-generation AI factory architecture NVIDIA is pursuing, and a real proving ground for the industry standard.
Korean
26
22
167
25.5K
Curls retweeted
franklee6924x
franklee6924x@franklee6924T·
There is no larger long-term strategic move than this — NVIDIA joins forces with $IREN to build the flagship AI factory deployment for the DSX architecture

The market will continue to repeatedly reinterpret the deeper intent and long-term objectives behind the partnership between NVIDIA and IREN.

On May 7, 2026, IREN's CEO reposted NVIDIA's official announcement on X regarding the partnership between the two companies: "NVIDIA and IREN Limited today announced a strategic partnership to accelerate the deployment of next-generation AI infrastructure." (nvidianews.nvidia.com/news/nvidia-an…) At the same time, IREN released another announcement on its own website: "IREN signs a US$3.4 billion AI cloud services agreement with NVIDIA." (iren.gcs-web.com/static-files/9…)

The two announcements, each emphasizing different aspects of the cooperation, carry extremely significant implications. First, after careful verification, this is the first time NVIDIA has sought external compute leasing. There are three major turning points in industry development embedded in this move.

1) A reversal of roles: NVIDIA becomes a "major external compute customer" for the first time. In the past, NVIDIA's relationship with infrastructure companies was almost always centered on "selling hardware" or "borrowing hyperscaler data centers for DGX Cloud." But in this US$3.4 billion agreement with IREN, NVIDIA is, for the first time in its history, leasing third-party compute capacity at large scale and on a long-term basis as a customer, for use by its own AI research teams. This kind of "reverse leasing" is unprecedented for NVIDIA in both scale and nature.

2) The selective external exposure of its most core secrets: this point carries the deepest implications. For a long time, NVIDIA has insisted on keeping its most critical R&D work — chip design, driver optimization, and large-model training — inside its self-built supercomputers such as Selene and Eos, creating a closed loop of "building the shovels and mining with them itself." Outsourcing a 60MW research workload to an external data center is therefore highly significant: it signals that compute-chip R&D is beginning to transition toward external collaboration.

3) The first opening of stack management: introducing Mirantis to manage NVIDIA's internal R&D clusters. Previously, NVIDIA's internal cluster management was handled entirely by its own engineering teams. Under this agreement, NVIDIA is for the first time allowing third-party management, bringing in Mirantis to participate in cluster orchestration and operations. This also signals a transformation in NVIDIA's latest compute-architecture R&D approach — strengthening external collaboration on lower-level operational work such as server cooling, restarts, and Kubernetes configuration.

As the ability of individual GPU chips to increase computing performance gradually approaches physical and engineering limits, the next phase of AI compute advancement is shifting from "single-chip performance competition" to "system-level scalability competition." This is NVIDIA's direction of transformation. The primary paths for the next stage of AI compute improvement include GPU clustering, high-speed interconnects, rack-scale computing, and data-center-level coordination. This requires GPU manufacturers (NVIDIA), data center designers/builders/operators (IREN), and supercluster operating systems (Mirantis) to jointly collaborate on development. What they are developing is precisely the NVIDIA DSX architecture referenced in the NVIDIA-IREN partnership announcement.

And IREN's hyperscale SW site in Texas is becoming the flagship deployment location for NVIDIA's DSX architecture. This is absolutely not a simple narrative of NVIDIA investing in a company and becoming a shareholder. For the world's leading company holding the core secrets of AI compute chip R&D, this is not a trivial matter.

From NVIDIA's perspective, there appear to be many potential partners — CoreWeave, Nebius, Oracle, Microsoft Azure, Amazon Web Services, Crusoe — and NVIDIA has already invested in or partnered with these firms before. So why did it choose IREN for this most important transformation? Because IREN possesses too many things that are uniquely its own:
- Multiple GW-scale single sites with secured long-term power supply
- Grid interaction capabilities
- Vertical integration
- Ultra-long-term site planning and abundant land supply
- Green energy
- Acting as its own design-and-build general contractor
- Long-term accumulation of data center operational experience
- Advanced design and technical capabilities

Compared with the companies above that NVIDIA has already partnered with, even though IREN temporarily lacked software capabilities, NVIDIA was still willing to wait until IREN acquired a software company before announcing this deep cooperation. Moreover, Mirantis has long been one of the three software companies that have collaborated with NVIDIA for many years. It is highly possible that NVIDIA itself played the connector role behind IREN's acquisition.

NVIDIA is transforming toward system-level compute scaling and building an AI factory template. In the future, the products it sells may no longer be simply GPU chips, but complete racks, clusters, or even entire AI factories. That inevitably requires standardized data centers in order to guarantee performance, compatibility, scalability, and token efficiency. What NVIDIA needs are facilities with massive long-term secured power supply, land, GW-scale campuses, HPC DNA, rapid construction capability, neutrality, automated scheduling, workload routing, GPU virtualization, fault recovery, and cluster operating systems capable of distributed training management. At present, IREN is the only company that possesses all of these elements simultaneously.

What they are trying to build is the industrial standard for the next phase of the AI industry. The greatest companies do not merely participate in industries — they define the standards. From this perspective, there is no larger strategic theme than this one. Selling compute capacity to hyperscalers, partnering with Anthropic, or developing new sovereign AI businesses are all important, but none compare with this. The deeper meaning of last week's announcement will require time for the market to fully interpret. This move by NVIDIA and IREN, once executed successfully, could once again widen the gap between the NVIDIA ecosystem and Google just as Google had begun catching up — and it carries major implications for the entire AI industry.
English
32
87
430
80.6K
Curls
Curls@blcurley·
The Nebius investors dunking on the IREN developments yesterday is laughable! That's it, that's the tweet! Continue on with the cope! IREN is just getting started! We've seen this movie before in mining. Creating AI token factories is now the focus. Scoff away, ya can't stop it
English
0
0
1
95
Curls retweeted
Frans Bakker
Frans Bakker@FransBakker9812·
@nvidia and @IREN_Ltd Announce Strategic Partnership to Accelerate Deployment of up to 5 Gigawatts of AI Infrastructure

NVIDIA and IREN Limited (IREN) today announced a strategic partnership to accelerate deployment of next-generation AI infrastructure. As part of the partnership:
- $NVDA and $IREN intend to support deployment of up to 5 gigawatts of NVIDIA DSX-aligned AI infrastructure across IREN's global data center pipeline over time.
- NVIDIA and IREN will collaborate on deployment of NVIDIA accelerated compute in DSX AI factories to expand access to AI-native, startup and enterprise customers.
- IREN issued to NVIDIA a five-year right to purchase up to 30 million ordinary shares at an exercise price of $70 per share, resulting in a right to invest up to $2.1 billion, subject to certain conditions including regulatory approval.

The partnership is intended to accelerate deployment of large-scale AI factories by combining NVIDIA's DSX AI factory architecture with IREN's expertise across power, land, data centers, GPU deployment and infrastructure operations. Future deployments are expected to focus on IREN's 2-gigawatt Sweetwater campus in Texas, which the companies expect to serve as a flagship deployment for NVIDIA's DSX architecture.

"AI factories are becoming foundational infrastructure for the global economy," said Jensen Huang, founder and CEO of NVIDIA. "Deploying these systems at scale requires deep integration across the full stack — compute, networking, software, power and operations. IREN brings the scale and infrastructure expertise to help accelerate the buildout of next-generation AI infrastructure globally. Together, we are building for the age of AI."

"This partnership combines NVIDIA's AI systems and architecture leadership with IREN's expertise across power, land, data centers, GPU deployment and infrastructure operations," said Daniel Roberts, co-founder and co-CEO of IREN. "Together, we believe we can accelerate deployment of AI infrastructure and expand access to compute for AI-native and enterprise customers globally."
English
27
67
477
37.1K
Curls retweeted
Fremm
Fremm@fremmseven·
Today I spoke for the first time on @FransBakker9812's space about $IREN. Some of my key points for the short term and long term:

Short term:

1. IREN is going through a rerate as an AI company. The initial move was early movers who could see beyond the bitcoin-miner framing and recognised that IREN's transition to AI would compound on their proven blueprint of mass-printing bitcoin mining facilities, which are not far off AI data centers. Most of the market hasn't caught up yet. The Mirantis acquisition, on top of all its software benefits, has a second-order effect of accelerating the rebrand. My thesis is this rerating has just started and will continue as new tech-native investors discover IREN through Mirantis and realise the potential.

2. It's been ~6 months since the Microsoft deal. I didn't expect a new deal at last earnings — IREN is conservative and was always going to execute on Microsoft before committing to another customer. That's partly to prove out the blueprint for the first time post-BTC-to-AI/HPC pivot, and partly to manage the risk of scaling too fast. Now that Dan has posted about delivering the first Microsoft tranches, my thesis is they'll have the confidence to scale up and run multiple build-outs in parallel. Combine that with the "advanced negotiations" language from last earnings, and I wouldn't be surprised by a deal before or at earnings — either a new customer or a Microsoft extension. With the blueprint now battle-tested through production hell, I also wouldn't be surprised by a second deal at the same time or shortly after (given the ~6 months, almost two quarters, without a deal), to scale data center production simultaneously. The constraint is no longer execution risk — it's capital, and that's what the ATM is for.

3. IREN has been plastering the US and Australia with ad spend. Knowing how conservative they are with capital, I doubt they'd let that momentum die without capitalising on it short-term. That leads me to expect either an announcement on their enterprise strategy, targeting higher-margin, easier-to-serve customers than just hyperscalers, or something specifically in Australia.

4. I find it hard to believe IREN hasn't front-run commercial negotiations ahead of Sweetwater energising and the Microsoft deal completing. Having that much uncontracted power sitting available without a deal lined up to keep monetisation going feels off for such a highly competent pair of co-founders from deep infrastructure-banking backgrounds.

5. IREN has dropped substantial announcements in the last few weeks — Horizon progress, Sweetwater energised, the Mirantis acquisition — all before the earnings call. Hard to believe they'd reveal all of that with nothing left to disclose at earnings. Contrast with last earnings, where they had nothing big and saved the Oklahoma site reveal for the call itself.

6. Mirantis was happy to be acquired in 100% shares. Employees who've built that company over 20+ years accepting stock-only consideration suggests they see the growth ahead in IREN — and potentially what's coming near-term in price accretion (a customer deal).

7. There's a narrative that Dan and Will don't care about short-term price action versus long-term shareholder returns. But with the ATM they need to tap soon — to fund more build-outs and effectively pay for Mirantis — I find it hard to believe they'd let the stock tank when they need to dilute. I think they've been very coordinated with announcements for exactly this reason. They're price-conscious, they have incentive to drive the share price short-term, and they have a final card to play at earnings.

8. Nebius has run hard recently — to a P/E of over 4,000 as per Yahoo and Google Finance. IREN trades around 40 — ~100x lower. With the software FUD now resolved by Mirantis, I think the market will start to see IREN as the more undervalued pure-AI play vs NBIS.

9. In addition to the 'IREN has no software' FUD now being solved, there was also light FUD around IREN being an Australian company and therefore less familiar and trustworthy to some American and international investors. I believe that with an announcement of an expansion in Australia, this FUD will turn into a strength, as IREN's roots prove to be a highly valuable access point to Australia and even Asia-Pacific. This is similar to NBIS's perceived strength of being an access point to Europe.

Long term:

1. IREN is a one-of-one company. Dan and Will Roberts are first-principles operators who saw the AI compute demand over 5 years ago, knew how to fund it by mining bitcoin and selling it at the spot rate without needing a customer, prioritised renewables from day one, and understood the importance of investing in local communities to avoid NIMBY pushback. Best management team in the space, in my opinion. The market has largely got it wrong assuming that printing AI data centers requires a deep tech background — in my view infrastructure banking and the ability to identify, procure and develop large-scale sites are far more relevant; everything else is an add-on.

2. AI compute is the oil of the digital world. It's required to run every industry efficiently and competitively from here on. You can't model IREN's future market cap based on point-in-time GPU pricing. Even applying growth rates is too thin — rates can move exponentially via second-order effects (supply-chain disruption, AI breakthroughs that ramp compute requirements). This isn't quantitative; it's probabilistic and exponential. If NVDA can be a future $20T-market-cap company, I wouldn't be surprised by IREN being a $1T company, emerging as the most competent company to plug their chips into.

3. IREN has the best blueprint for mass-producing data centers. Vertical integration means they control the refinement loop — every nut and bolt of the build process. If CoreWeave or NBIS lease out one piece of that process to another player, how can they control quality? How can they understand how to improve it? IREN can, because they own the whole process. That closed-loop quality feedback lets them refine the blueprint to the point where they can run it across multiple parallel processes with little incremental risk. As time compounds, that's what monetises their estimated 10GW pipeline runway (4.5GW disclosed so far) exponentially faster than their competitors.

4. I've often joked that IREN is the SpaceX for Earth. Like SpaceX, IREN sits at the intersection of three vectors: data centers are extremely high value right now, have an infinite TAM, and are very difficult to build. That difficulty is the part the market hasn't priced in yet — and it's what will eventually command a multiple. SpaceX and NVIDIA both have plenty of competitors but emerge as winners because they built proprietary mass-production blueprints with closed feedback loops and first-principles thinking. Other neocloud players are too diversified or too leveraged on outsourced construction. NBIS has interests well outside data centers. CoreWeave is software-first and leases its construction to Core Scientific — so they don't really own their construction quality in full. IREN isn't missing the forest for the trees. They know the real neocloud edge is compute, and that vertically integrating construction — though hard — leads to better service quality, performance, and blueprint refinement for the next data center. @mikealfred
English
16
37
329
28.1K
Curls retweeted
𝐀𝐠𝐫𝐢𝐩𝐩𝐚 𝐈𝐧𝐯𝐞𝐬𝐭𝐦𝐞𝐧𝐭𝐬
$IREN is making all the right moves… I'm pleasantly surprised by $IREN's acquisition of @MirantisIT. Previously I thought $IREN would eventually move up the stack through M&A, but I anticipated this to happen in 1-2 years, not today. I interpret this as things genuinely moving VERY fast at $IREN behind the scenes and all of us just underestimating the pace this company is on. Remember, two years ago everybody saw $IREN as "just another BTC miner". Six months later (late 2024) it became the most formidable competitor in the space, breaking growth records and being the only profitable entity in the industry. Then, last year, the story evolved to $IREN having genuine potential in the AI/HPC colocation space with its gigawatt scale power portfolio. Not long after, this company surprised everybody with its first hyperscaler deal consisting of leasing out cloud capacity, moving up the value chain and skipping the lower-yielding colocation segment. Today, $IREN's product portfolio is evolving once again. Previously they were widely regarded as a pure-play "bare metal" compute provider, yet with the acquisition of Mirantis, the company moved up to a full-stack AI cloud, now covering everything from the metal up through the managed AI services that enterprise and sovereign customers actually plug their workloads into. Contrary to what most analysts interpret this as, I don't see this as a pivot, but rather a hedge. A hedge against customer concentration. There are fewer than 10 companies in the world that can rent hundreds of megawatts of compute. Think hyperscalers and frontier AI labs like OpenAI and Anthropic. Even if $IREN managed to establish relationships with >50% of these tier-1 customers, that would still result in an incredibly concentrated client composition. This sort of reliance on a handful of customers just adds more risks, which in turn leads to lower valuations, higher cost of capital, and arguably most importantly, a weaker hand at the negotiating table. 
All that said, I still think bare metal will continue to be the majority of $IREN's contracted cloud capacity going forward. The big players' appetite for compute is just insatiable and $IREN is in a prime position to become THE "plug" for high-quality, low-cost compute, given its fully vertically integrated infrastructure portfolio and massive power pipeline. Yet with this acquisition $IREN now has a solid shot at also taking meaningful market share in the smaller subsets of the AI compute market, namely the enterprise & sovereign AI (governmental) sectors. The end result could be a much more diverse and thus more robust client mix.

If I had to take an educated guess at the reason why $IREN acquired Mirantis at this point, I believe it could very well be related to the multi-billion-dollar deal Co-CEO @danroberts0101 referenced in last quarter's earnings call: "One of the contracts we are negotiating at the moment is a multi-billion dollar contract where we would have to bring a software solution".

I believe the counterparty in question isn't a hyperscaler or a frontier AI lab. These are exactly the kind of customers who DON'T need the software layer, as they develop it in-house and retain full control. Likewise, I don't think there are many enterprise clients requiring cloud compute in the "multi-billion" dollar range. Thus, by process of elimination I think the most likely fit is a sovereign entity, i.e., a state or government. Mirantis just happens to be one of the few companies validated by NVIDIA as part of its sovereign AI reference architecture. As for which sovereign entity it might be, there are many possible candidates, but there is none more obvious than the Australian government itself, be it federal or state-level, given $IREN's roots in the 'land down under' (founded and headquartered in Sydney).
This would also explain the company's recent advertising push in several regions across Australia, perhaps to attract the necessary local tenant to successfully pull off a venture of this magnitude.

I'm just thinking out loud here, and much of this is nothing more than speculation at this point, but in any case, this acquisition appears highly strategic in multiple ways. I'm very much looking forward to tomorrow's earnings call, in anticipation of getting more insights into the motivation behind taking Mirantis on board. Over the coming days post-earnings I'll publish a very extensive earnings breakdown on Substack, of which the acquisition of Mirantis will be a substantial focal point. I'll lay out everything there is to know about this tech company & provide you with my unfiltered opinion in an easy-to-digest manner.

Stepping back one more time, it's incredible to see just how far $IREN has come since I started covering it. It's truly a generational unicorn company. Something you don't come across very often. The growth trajectory has been unprecedented. The company is firing on all cylinders, and I think we stand right before some major commercial victories. Another piece of data pointing in that direction is the company's recent hiring spree. $IREN now has 142 job openings across a wide range of departments and geographies. This company is not standing still!

Cheers, guys ✌️ S/O to my friend @_Sgr_A_Star for providing the job listings pics
𝐀𝐠𝐫𝐢𝐩𝐩𝐚 𝐈𝐧𝐯𝐞𝐬𝐭𝐦𝐞𝐧𝐭𝐬 tweet media
96
104
895
109.5K
Curls retweeted
Frans Bakker
Frans Bakker@FransBakker9812·
$IREN Announces Acquisition of Mirantis to Strengthen AI Cloud Delivery Capabilities

IREN Limited today announced it has signed a definitive agreement to acquire Mirantis, Inc. (“Mirantis”), a provider of cloud infrastructure, Kubernetes-based orchestration and enterprise support services. The acquisition of Mirantis builds on IREN’s existing software, engineering and customer support capabilities, enhancing how compute is deployed, managed and operated for customers.

IREN is delivering AI Cloud services at scale across a range of workloads. As deployments grow, ensuring reliable provisioning, monitoring and support becomes increasingly important. Mirantis strengthens these capabilities with deep experience in cloud infrastructure and enterprise operations. Mirantis has a track record of serving over 1,500 enterprise customers globally and is a founding Independent Software Vendor partner of the NVIDIA AI Cloud Ready Initiative. Its k0rdent AI platform is designed to help manage AI infrastructure across bare metal, virtual machines and Kubernetes environments.

The acquisition is expected to enhance IREN’s platform across four key areas:
1. Deployment capability: Supports faster deployment and operation of workloads on IREN’s existing bare metal GPU infrastructure.
2. Operational visibility: Improves monitoring, performance visibility and management of customer environments.
3. Customer support: Adds technical support, service delivery and enterprise operations expertise.
4. Market access: Expands the ability to serve a broader range of customer requirements, including existing AI native customers and emerging enterprise AI workloads.

Mirantis is expected to operate as a standalone subsidiary, serving its existing customer base while supporting IREN’s AI Cloud deployments.

Daniel Roberts, Co-Founder and Co-CEO of IREN, commented: “IREN’s core advantage is execution — from securing power to building data centers, deploying GPUs and bringing compute online at scale.
Mirantis builds on our existing capabilities and strengthens how that compute is deployed, managed and operated for customers.”

Alex Freedland, Founder and CEO of Mirantis, commented: “Mirantis has spent more than a decade helping enterprises deploy and manage cloud infrastructure. AI is creating a new set of customer requirements, and customers need platforms that are open, flexible and built for scale. IREN brings infrastructure at scale and proven delivery capability. Mirantis adds software and operational expertise that strengthens how customers deploy and use that infrastructure. Together, we will bring AI infrastructure online faster, while continuing to support existing customers and advance the k0rdent AI platform.”

The transaction consideration will be paid in IREN ordinary shares, representing an aggregate value of approximately $625 million at signing.

-> Closing remains subject to customary conditions, including required regulatory approvals <-
38
68
562
68.7K
Curls
Curls@blcurley·
@MB_Hogan Hell no! In it for the long haul. "No deal yet" is not a fundamental stumble for me.
1
0
3
749
Mark Hogan
Mark Hogan@MB_Hogan·
Serious Question to $IREN bulls: If NO NEW DEAL announced by May 5th earnings call, do you plan on selling your position?
133
0
125
75.1K
Curls retweeted
franklee6924x
franklee6924x@franklee6924T·
Do You Really Understand $IREN’s Structural Advantages?

I’ve seen many people commenting on $IREN’s data center technical capabilities. Frankly, I can only say some of you are not serious about understanding businesses that actually make money. If you praise NBIS for its software capabilities, there may be some basis for that. But to claim NBIS’s data center technical capabilities are comparable to IREN’s is simply mistaken. The gap is not a matter of being 4–5 years behind in time; it is a difference in dimension. They are not even on the same level.

First of all, in terms of physical unit scale, the gap between the two sides is irreparable. This is not merely a technical issue—it is a systemic issue. I also see some people treating NBIS’s self-designed and vertically controlled full-stack AI compute infrastructure—from servers, racks, and data centers to cloud scheduling software—as an advantage. If you understood the broader technical reality, you would see that this is marketing narrative rather than real technical strength. That kind of full-stack capability at small-site scale has little real competitiveness and will inevitably be marginalized in the future.

IREN’s decision to partner with Dell and Lenovo reflects the fact that the data centers it operates are at a higher level of technical complexity, requiring specialized division of labor to achieve reliability and rational service delivery. When scale and complexity reach a certain threshold, sharing responsibility among technology providers with deep experience is the only responsible approach.

Anyone who does not understand the difference in data center capability between the two companies needs to recognize this. IREN planned years ago for the scale of the data centers it has today. It has been a highly professional step-by-step ramp-up process, and practical operating experience started accumulating from that point onward. Scale itself is a complex systems-engineering challenge.
Because these projects are coordinated with the power grid, you cannot simply apply for a gigawatt-scale site from day one. If you cannot demonstrate a strong operating track record, your chance of approval is zero. Operating a GW-scale site requires very high technical standards; if mismanaged, it could negatively affect the entire grid. IREN has already secured sites with power supply progression of 30MW → 50MW → 80MW → 750MW → 1.4GW → 1.6GW, climbing step by step. Technical complexity and operational difficulty rise continuously along the way. A Bitcoin miner has neither the need nor the commercial incentive to do this. As an investor, only by understanding the deep significance and immense technical complexity involved can you understand why top-tier U.S. cooling experts chose IREN. More senior talent will do the same in the future, because other firms cannot offer systems of this complexity. Only then can one correctly judge the performance of different companies. The more you know, the more clearly you can see whether NBIS is engaged in business marketing or doing real work. The vision it paints for investors has a low probability of being realized, because it simply lacks the physical foundation, the talent base, and the necessary partnerships with advanced technical firms.

Now let’s examine what a GW-scale AI data center actually looks like. A GW-scale AI data center is exponentially more difficult than a 500MW-scale AI data center. Different subsystems have different complexity growth curves, and when combined, the systems-integration challenge approaches super-exponential difficulty. In physical systems, quantitative accumulation reaches a critical threshold and triggers qualitative change. Data centers are no different.

100–200MW sites: Engineers are dealing with a system that can still be comprehended as a whole by the human mind.
A single cooling loop, a single substation feed, a single campus network topology—the chief engineer can build a complete system model mentally.

500MW sites: Multiple independent subsystems begin operating in parallel. Coupling effects between systems start to emerge, but the site is still manageable by a “super-engineering team.”

GW-scale sites: System complexity exceeds the cognitive boundary of any individual or small team. Layered abstraction and automated control become mandatory. This is a fundamental phase transition, not merely a quantitative extension.

Power Systems: From Linear to Nonlinear

100–200MW: Usually 1–2 substation feeds, 110kV or 220kV, a single UPS architecture, with N+1 redundancy sufficient. Harmonic mitigation is relatively simple, and grid impact is limited.

500MW: Requires a dedicated substation, often 220kV or even 500kV feeds. Parallel bus-section operation begins to create complications. Large nonlinear loads (power modules) generate harmonics requiring dedicated mitigation. Sudden load changes (such as large training jobs starting or stopping) begin to materially affect the grid, requiring deep coordination with grid operators.

GW-scale: Entirely new engineering challenges emerge:
- Power-flow stability: A GW-scale load changes regional grid power-flow distribution. In weaker grid regions, a single GW-scale data center can create voltage-stability issues, requiring dynamic reactive compensation (SVC/STATCOM). This is rarely relevant at 200MW.
- Protection coordination complexity: Relay protection schemes must be coordinated across the entire network. GW-scale sites require joint simulation and validation with regional grid dispatch authorities, often taking months.
- Backup power physical limits: Maintaining even 15 minutes of diesel-generator backup at GW scale requires hundreds of large generator units. Synchronizing hundreds of units is itself an engineering challenge near the limits of practicality.
This is why hyperscale facilities increasingly prefer direct connection to power generation sources rather than relying on diesel backup.

Liquid Cooling Systems: From Engineering Problem to Fluid Dynamics Research

100–200MW sites: Chilled-water systems with chillers and cooling towers are mature technology. Even with direct liquid cooling (DLC), piping scale remains manageable through conventional engineering methods. Heat loads are concentrated enough for a single main loop.

500MW sites: DLC pipe length reaches tens of kilometers. Different rack heat densities (for example, 400kW/rack GPU clusters versus 50kW CPU racks) create complex flow-distribution problems. Precision pressure-balancing systems are required; otherwise, distant racks receive insufficient coolant flow. Temperature variation across locations begins affecting GPU performance stability.

GW-scale sites: At this level, the following become research-grade problems:
- Spatial heat-flux distribution: GPU zones may reach several kW/m², while traditional IT zones are only hundreds of W/m². Such massive thermal gradients make building thermodynamics a computational fluid dynamics problem, not a simple engineering estimate.
- Dynamic cooling response: GPU training jobs can shift tens of thousands of GPUs from idle to full load within seconds, creating thermal power jumps of tens of MW. The coupling between cooling-system thermal inertia and rapid load transients creates temporary overheating risk. This requires predictive control algorithms, not simple PID control.
- Water treatment scale: Recirculating coolant volume reaches tens of thousands of liters per minute. Water chemistry management (corrosion, microbial control, scaling prevention) becomes a specialized discipline and may even affect local water resources.

GPU Cluster Interconnect: The Steepest Difficulty Curve

GPU cluster performance depends heavily on global communication efficiency.
In distributed training, All-Reduce operations require every GPU in the cluster to participate. Delay at any node slows the entire cluster. This is known as the tail latency problem.

Scale vs Difficulty:
- Single machine (8 GPUs): Direct NVLink, 900GB/s bandwidth, negligible latency.
- Single node to rack (72-GPU NVL72): NVLink Switch added, still hardware-level low latency.
- Rack to cabinet (hundreds of GPUs): InfiniBand or RoCE required; congestion begins.
- Cabinet to POD (thousands of GPUs): Multi-layer Fat-Tree or Dragonfly topology required, with precise traffic engineering.
- POD to campus (tens of thousands of GPUs, 500MW scale): Fiber length reaches hundreds of kilometers. Requires dedicated high-bandwidth switches costing millions each. Link failures become statistically continuous events. Distributed fault detection and rerouting become necessary.
- Campus to campus (hundreds of thousands of GPUs, GW scale): Single logical training clusters span multiple physical buildings. Building-to-building latency accumulates into a systemic bottleneck. Cross-building clock synchronization becomes a dedicated engineering project. Any topology change requires full traffic simulation.

This is exactly why Google needs optical circuit switching (OCS): at this scale, electrical switching latency and power consumption become unacceptable.

Quantifying Difficulty: Going from 1,000 GPUs to 1,000,000 GPUs does not increase difficulty by 1,000x. Because collective communication has global dependency, debugging complexity grows roughly with the square of the logarithm of node count, while fault-diagnosis complexity approaches exponential growth. Engineering capability accumulated in this domain cannot be replicated quickly by any other organization.

PUE: Bigger Is Harder to Maintain

In theory, larger data centers should achieve better PUE through economies of scale.
But GW-scale sites face new realities:
- Load-rate fluctuations are amplified: If 20% of equipment sits idle, the static power draw of idle hardware still requiring cooling severely drags PUE.
- Uneven hot/cold zone distribution: GPU zones may have 10–20x the heat density of traditional IT zones. A unified cooling system cannot optimize both simultaneously.
- Climate dependence of free cooling: GW-scale operators often deploy across multiple climate zones. Free-cooling windows differ by site, making global PUE coordination a multi-site optimization problem.

System Integration: Beyond the Sum of Subsystems

Each subsystem alone already grows exponentially in complexity. But the real challenge at GW scale is coupling effects:
- Sudden power-load changes affect UPS charge/discharge cycles, harmonics, and chiller inverter stability.
- GPU communication congestion stalls training tasks, suddenly reducing thermal output and triggering overcooling protection.
- Liquid-cooling pressure fluctuations reduce flow to some racks, raising GPU temperatures and triggering frequency throttling, which then affects communication efficiency.

These multi-physics coupling problems are rare below 200MW, begin appearing at 500MW, and become daily operational challenges at GW scale. Solving them requires full-site Digital Twin systems capable of real-time simulation of the entire facility’s physical state—which is itself an AI engineering challenge.

The Real Barrier: From 100MW to GW scale, the core engineering challenge is not 10x harder—it is at least 100–1,000x harder, because system coupling effects and fault-space dimensionality expand exponentially. That is why only a handful of organizations globally can truly operate GW-scale AI data centers. This is not a capital barrier; it is a barrier of accumulated engineering knowledge, and that accumulation cannot be replicated quickly by spending money.

Would these difficulties be reduced by behind-the-meter power systems independent of the grid? No.
Complexity becomes even higher, and capital costs rise further. This is also why grid interconnection queues can take 5–7 years.

Therefore, the most economical use of a GW-scale site is integrated rather than fragmented. A GW is not ten 100MW sites or five 200MW sites. Construction may be phased, but final operation should be based on unified design for maximum efficiency.

The core bottleneck of AI training is interconnect networking. Once network topology is locked into fragmented design, multiple separate sites can never later be “stitched together” into one unified high-bandwidth, low-latency training cluster. The same logic applies to liquid cooling. Only holistic planning can create economies of scale, allowing cooling loops, pump stations, heat exchangers, and pipe networks to achieve optimal efficiency at GW scale, rather than repeating redundant systems at every 100MW mini-site.

Power systems are no different. Unified design allows dispatch optimization across a larger domain and shared reserve capacity, reducing redundancy to roughly 60–70% of what fragmented design requires. At GW scale, that means saving hundreds of millions of dollars in capital expenditure. Thus, while GW-scale data centers appear to require heavier upfront infrastructure investment, over the full lifecycle they are the most economical path.

More importantly, this holistic design does not merely create “more compute”—it creates qualitatively different compute. It can support truly massive unified training clusters, rather than splitting models into isolated compute islands. GW-scale AI data centers form a genuine strategic moat, not merely a capital moat. Ten 100MW sites cannot compare in any respect to one unified GW-scale compute domain. They are fundamentally different systems. If planned from day one as a unified architecture and built in phases on schedule, the end result is a form of compute capacity that competitors cannot replicate simply by spending more later.
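The shared-reserve point above can be illustrated with a toy pooling calculation. All numbers here (25 MW power blocks, a 2% chance of any block being down, the shortfall target) are hypothetical assumptions chosen only to show the direction of the effect, not IREN figures; a minimal sketch in Python:

```python
import math

def spares_needed(n_blocks, p_fail, target):
    """Smallest spare count s such that P(more than s of n_blocks are down
    at once) < target, assuming independent block failures (binomial model)."""
    s = 0
    while True:
        # P(X <= s) for X ~ Binomial(n_blocks, p_fail)
        p_ok = sum(math.comb(n_blocks, k) * p_fail**k * (1 - p_fail)**(n_blocks - k)
                   for k in range(s + 1))
        if 1 - p_ok < target:
            return s
        s += 1

BLOCK_MW = 25      # hypothetical power-block size
P_FAIL   = 0.02    # hypothetical chance a block is down at any moment
TARGET   = 1e-4    # tolerated probability of a capacity shortfall

# Fragmented: ten independent 100 MW sites (4 blocks each),
# each site must meet the shortfall target on its own.
fragmented_reserve = 10 * spares_needed(4, P_FAIL, TARGET) * BLOCK_MW

# Unified: one 1 GW domain (40 blocks) sharing a single reserve pool.
unified_reserve = spares_needed(40, P_FAIL, TARGET) * BLOCK_MW

print(f"fragmented reserve: {fragmented_reserve} MW")
print(f"unified reserve:    {unified_reserve} MW")
```

With these made-up inputs the pooled reserve comes out far smaller than ten per-site reserves; the exact ratio depends entirely on the assumed block size and failure rate, but the direction matches the post's claim that unified design needs less redundant capacity.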
This is the deeper meaning behind Jensen Huang’s repeated emphasis at GTC 2026 on the concept of the “AI Factory.” A real AI factory is not piling up data center buildings one by one. It is designing an industrial production system from the start around final output capacity. Only then can the AI factory become a unified, scalable compute machine built for future model scale. If you understand all this, you will know who is truly building the super TOKEN factory of the future. This work began seven years ago. Latecomers will only fall further behind. Of course, IREN also carries risk, because the technical difficulty is genuinely enormous. But staying focused and pushing forward with full effort is the most responsible attitude toward investors.
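The All-Reduce tail-latency effect the thread describes (every synchronous step is gated by the slowest GPU, so step time grows with cluster size even when the average GPU is unchanged) can be sketched with a toy simulation. All latency numbers are invented for illustration:

```python
import random

def allreduce_step_time(n_gpus, rng, base_ms=100.0, jitter_ms=5.0):
    """One synchronous training step: the collective completes only when
    every GPU finishes, so step time is the MAX of all per-GPU times.
    Per-GPU stragglers are modeled as exponential jitter on a fixed base."""
    return max(base_ms + rng.expovariate(1.0 / jitter_ms) for _ in range(n_gpus))

def mean_step_time(n_gpus, steps=100, seed=0):
    rng = random.Random(seed)
    return sum(allreduce_step_time(n_gpus, rng) for _ in range(steps)) / steps

for n in (8, 1_000, 32_000):
    print(f"{n:>6} GPUs: mean step ≈ {mean_step_time(n):.1f} ms")
```

Because the maximum of n exponential delays grows roughly with log(n), the simulated step time keeps creeping up as the cluster scales, even though each individual GPU's behavior never changes; this is the "delay at any node slows the entire cluster" point in miniature.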
6
10
112
29.1K
Curls retweeted
𝐀𝐠𝐫𝐢𝐩𝐩𝐚 𝐈𝐧𝐯𝐞𝐬𝐭𝐦𝐞𝐧𝐭𝐬
As it relates to $IREN, your facts and conclusions are off, Daniel. $IREN did in fact start developing "AI data centers" as early as 2021, when breaking ground in Prince George & Mackenzie (Canada). Technically, you could even go as far back as 2019, if you take into account the company's first site 'Canal Flats', partially developed by PodTech, a data center company $IREN merged with in early 2020.

You are conflating a few things... Yes, $IREN's first batch of H100 GPUs did not get installed until February 2024, but the underlying data center infrastructure already existed many years prior. While that infrastructure was initially purposed to mine $BTC, it was always over-specced for that singular purpose (much more expensive & durable than traditional mining data centers). Since day 1, management positioned itself as a disruptive data center company. Mining $BTC was merely the easiest path forward to monetize its powered-land portfolio quickly & scale its data center footprint at rapid speed. As such, the facilities were always designed to be modular and multi-purpose. That design philosophy allowed $IREN to "re-purpose" its 50 MW Prince George data centers for its AI cloud operations by simply taking out $BTC mining ASICs and replacing them with GPU racks. Sure, to run cloud operations $IREN had to add back-up generators and other redundancy-purposed infrastructure at its sites, but that's merely a question of costs, not "development expertise". The core data center footprint was developed in-house many years ago, not in "mid 2024" as you claim.

It's also misleading to measure $IREN's build speed by anchoring to the start of its cloud operations and extrapolating from there. These are 2 completely different KPIs... Once Childress was fully ramped, $IREN proved it could build 50 MW (gross) of air-cooled data center infrastructure in a timeframe of just 1 month.
Then, if you account for the extra time needed to install back-up generators (for cloud) + installing the GPU racks, you land at roughly 2-3 months. That lands you at a build speed of ~17 MW / month. Admittedly, that's the speed for air-cooled deployments, not liquid-cooled, which would take significantly longer to develop. But that's beside the point.

You used an inherently flawed method to calculate "build speed" and paint a negative picture of $IREN relative to $NBIS. I don't think you did that on purpose, but you've still got to be careful making these kinds of blunt statements. It's an easy way to lose credibility fast. What you calculated is still a valid KPI, but it's not the same as "build speed". You effectively calculated how quickly $IREN scaled its existing cloud operations over the past years, relative to $NBIS. And yes, $IREN comes up short in that comparison. But why is that? As I demonstrated, $IREN is clearly not lacking in development speed, so why did they scale rather slowly?

If you had been following the company for a while, you'd know by now that $IREN is incredibly meticulous about timing hardware cycles. Management executed the exact same playbook during its $BTC mining days... While most competitors over-commit to hardware that will soon be overshadowed by newer tech, $IREN scales slowly until it has the capacity to go 'all-in' at the start of a new hardware cycle. This strategy allowed $IREN to quickly emerge from merely a 'top 10' miner (by monthly $BTC output) to the number 1 operator in the space within a timeframe of just ~1 year.

$IREN is following that exact same playbook today, but with AI hardware instead of mining ASICs. Instead of over-committing on H100/H200s, years after that GPU generation came out, $IREN took it slow, and focused instead on building out 100s of MW of data center infrastructure (both liquid & air-cooled) at its Childress (TX) campus.
Today, the company is in a position in which it can casually purchase 50k units of new Blackwells (as it did last month), and become one of the fastest growing clouds in the sector. Likewise, we can observe the same pattern at $IREN's next flagship site: Sweetwater 1 (1.4 GW). Many investors seem to grow impatient that management still hasn't signed large-scale deals for that new site yet, but $IREN is obviously just following the exact same strategy... This time with the Rubin GPU generation, which won't be produced at scale until H1 2027.

There are several other reasons that I haven't covered here that make up $IREN's cycling strategy (such as financing, negotiating leverage, etc.), but the point remains. You can't make an apples to oranges comparison and label it as "facts". It's clear to me that your post is surface-level analysis at best... good for engagement, but not the type of content that will age very well.

That said, based on your commentary, it seems like you have genuine interest in getting deeper into the $IREN rabbit hole. I hope my feedback is of some help in that regard. 🤝
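For what it's worth, the ~17 MW/month figure in the post above is just the quoted numbers divided out (taking the ~50 MW shell and ~2-3 month total fit-out exactly as stated in the post, not as independently verified data):

```python
# Back-of-the-envelope from the figures quoted in the post (not official data):
# ~50 MW (gross) of air-cooled shell built in ~1 month,
# plus ~2 extra months for back-up generators and GPU rack install.
shell_mw = 50
total_months = 3  # 1 month shell + ~2 months fit-out

build_speed = shell_mw / total_months
print(f"effective build speed ≈ {build_speed:.1f} MW/month")
```

50 MW over 3 months works out to about 16.7 MW/month, which rounds to the ~17 MW/month the post cites.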
25
36
436
88.3K
Curls retweeted
Scott
Scott@coastfi2026·
I love my new AllBirds. $BIRD $IREN
Scott tweet media
5
4
41
9.5K