Untethered

5.5K posts

@Untethered_Coin

#Bitcoin

Canada · Joined April 2012
5.6K Following · 2.7K Followers
Untethered@Untethered_Coin·
@wealthmoose Let’s just call it what it is: pure virtue signalling to pander to their political base! It’s not actually a green policy, as it does nothing for climate change.
0 · 2 · 5 · 82
wealthmoose@wealthmoose·
🇨🇦 CANADA’S MOST IMPORTANT POLL OF 2026 🇨🇦
Mark Carney .. one year in. No sugarcoating. Just your honest answer. 👇
🗳️ What is Carney actually doing?
A) Trying but failing 😔
B) Deliberately managing decline 😳
C) Completely lost 🤷
D) Worst PM in Canadian history 💀
Drop your answer 👇
RT so every Canadian votes 🔁
Tag someone who needs to see this
Every. Single. Vote. Tells the real story Ottawa doesn’t want told. 🇨🇦
#CdnPoli #ElbowsUp #Canada
wealthmoose@wealthmoose

🚨 After everything Canadians have witnessed in 2026: 👇
📍 110,000 jobs gone
📍 4.5M living in poverty
📍 Biggest house price drop ever
📍 Construction falling 18.1%
📍 Debt taller than space 🚀
📍 Poorer than Alabama
📍 Bengali ballots at nominations 🗳️
📍 Pension eyed for Net Zero 💸
📍 Elbows up then deeper integration 🕺
📍 0.5% of HIS money in Canada

I have ONE question for every Canadian. 👇
#CdnPoli #ElbowsUp #WakeUpCanada 🇨🇦

1.4K · 379 · 776 · 43.4K
Untethered reposted
Jordi Visser@jvisserlabs·
As semis go parabolic, the AI bubble talk is again rising on X. Everyone is dusting off their copy of Kindleberger’s Manias, Panics, and Crashes. But as AI agents dominate market action, both through earnings and as the new emotionless traders in this world, I believe the better cycle lens is now Bubbles, Parabolas and Speed Crashes. This is a different market structure. This is a different world. Investor cycles and regime shifts will always be here, but with AI, the rules of the game are changing in real time. Watch my latest video on the implications for investors: youtu.be/Sopf31BOP4U
36 · 66 · 494 · 52.2K
Untethered reposted
NVIDIA Data Center@NVIDIADC·
The next chapter of space computing is here 🛰️

NVIDIA and its ecosystem are advancing AI from Earth-to-space across:
✔️ Earth Orbit and Infrared Imagery
✔️ Radio Frequency and Synthetic Aperture Radar
✔️ Autonomous Space Operations

Leading commercial space companies and mission-grade, radiation-hardened partners are scheduling deployments of NVIDIA Jetson Orin, IGX Thor, and the Vera Rubin Space-1 module for on-orbit AI inference and ground data processing.

Explore the final frontier of AI 🔗 nvda.ws/4wb6qQd
124 · 606 · 5.7K · 3.9M
Untethered reposted
Jukan@jukan05·
Why did xAI hand over a 220,000-GPU cluster to Anthropic?

The technical backdrop to xAI's decision to hand Colossus 1 over to Anthropic in its entirety is more interesting than it appears. xAI deployed more than 220,000 NVIDIA GPUs at its Colossus 1 data center in Memphis. Of these, roughly 150,000 are estimated to be H100s, 50,000 H200s, and 20,000 GB200s. In other words, three different generations of silicon are mixed together inside a single cluster: a "heterogeneous architecture."

For distributed training, however, this configuration is close to a disaster, according to engineers familiar with the setup. In synchronous distributed training, every GPU must finish a single step before the cluster can advance to the next one. Even if the GB200s finish their computation first, the rest of the cluster has to wait for the slower H100s (or for any GPU that has hit a stack-related snag) to catch up. This is known as the straggler effect.

The 11% GPU utilization rate (MFU: the share of theoretical FLOPs actually realized) at xAI recently reported by The Information can be read as the numerical fallout of this problem. It stands in stark contrast to the 40%-plus MFU figures achieved by Meta and Google.

The problem runs deeper still. As discussed earlier, NVIDIA's NCCL has traditionally been optimized for a ring topology. It works beautifully at the 1,000–10,000 GPU scale, but once you push into the 100,000-unit range, the latency of data traversing the ring once around becomes punishingly long. GPUs need to churn through computations rapidly to keep MFU high, but while they sit waiting for data to arrive over the network fabric, more than half of the silicon sits idle. Google sidestepped this bottleneck with its own custom topology (Google's OCS: Apollo/Palomar), but xAI, by my read, has not yet reached that stage.

Layer Blackwell's (GB200) "power smoothing" issue on top, and the picture comes into focus.
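The straggler effect described above can be sanity-checked with a toy model. This is an illustrative sketch, not xAI telemetry: the GPU counts come from the post, but the per-generation relative speeds are assumed round numbers, and real MFU losses also come from network topology and software-stack issues.

```python
# Toy model of the straggler effect in a synchronous, mixed-generation cluster.
# GPU counts are from the post; relative speeds are ASSUMED illustrative
# values (normalized so H100 = 1.0), not measured figures.
speeds = {"H100": 1.0, "H200": 1.4, "GB200": 2.5}
counts = {"H100": 150_000, "H200": 50_000, "GB200": 20_000}

# Work the cluster could deliver per step if every chip ran at full speed.
peak = sum(speeds[g] * counts[g] for g in counts)

# In lockstep training, every step lasts as long as the slowest generation
# takes, so delivered work per step is bounded by the slowest chip's pace.
delivered = min(speeds.values()) * sum(counts.values())

print(f"utilization ceiling from stragglers alone: {delivered / peak:.0%}")
```

Under these assumed speeds, heterogeneity alone caps utilization at roughly 81% of peak; the rest of the gap down to a reported 11% MFU would have to come from the ring-topology latency and stack problems the thread also describes.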
According to Zeeshan Patel, formerly in charge of multimodal pre-training at xAI, Blackwell GPUs draw power so aggressively that the chip itself includes a hardware feature for smoothing power delivery. xAI's existing software stack, however, was optimized for Hopper and does not understand the characteristics of the new hardware; when it imposes irregular loads on the chip, the silicon physically fails (it literally melts). That means the modeling stack must be rewritten from scratch, which in turn means scaling is far harder than most of us imagine.

Pulling all of this together points to a single conclusion. xAI judged that training frontier models on Colossus 1 simply was not efficient enough to be worthwhile. It therefore moved its own training workloads wholesale onto Colossus 2, built as a 100% Blackwell homogeneous cluster. Colossus 1, on the other hand, whose mixed architecture is far less crippling for inference (which parallelizes more forgivingly), was leased in its entirety to an Anthropic that desperately needed inference capacity.

Many observers point to what looks like a contradiction: Elon Musk poured enormous capital into building Colossus, only to hand the core asset over to a direct competitor in Anthropic. Others read it as xAI capitulating because it is a "middling frontier lab." But these are surface-level reads.

Look at the numbers and a different picture emerges. xAI today holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of the total available capacity. Colossus 2, built entirely on Blackwell, is already operational and continuing to expand. Elon kept the all-Blackwell homogeneous cluster (Colossus 2) for himself and leased out the older, mixed-generation Colossus 1. In other words, he handed the pain of the mixed stack (the MFU-11% debacle) to Anthropic, while keeping his own focus on training the next generation of models.
The real point, then, is this. Elon's objective appears to be positioning ahead of the SpaceXAI IPO at a $1.75 trillion valuation, currently floated for as early as June. The narrative SpaceXAI now needs is that xAI, long the "sore finger," is not merely a research lab burning cash, but a business with a "neo-cloud" model in the mold of AWS, capable of leasing surplus assets at high yields. From a cost-of-capital perspective, an "AGI cash incinerator" is far less attractive to investors than a "data-center landlord generating cash."

As noted above, the most important detail of the Colossus 1 lease is that it is for inference, not training. Unlike training, inference requires far less tightly synchronized inter-GPU communication. Even when the chips are heterogeneous, the workload parcels out cleanly across them in parallel. The straggler effect, the chief weakness of a mixed cluster, is essentially neutralized for inference workloads. Furthermore, with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unanticipated latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up complementing each other almost exactly.

One insight follows. As a training cluster mixing H100/H200/GB200, Colossus 1 was an asset that could only deliver an MFU of 11%. The moment it was handed over to a single inference customer, however, it transformed into a cash-flow asset rented out at roughly $2.60 per GPU-hour (a weighted average of the lease rates across GPU types). For xAI, what was a "cluster from hell" for training has become a "golden goose" minting $5–6 billion in annual revenue when redeployed for inference. Elon's genius, I would argue, lies not in the model but in this asset-rotation structure.

The weight of that $6 billion becomes clearer when set against xAI's income statement. Annualizing xAI's 1Q26 net loss yields roughly $6 billion in losses per year. The $5–6 billion in annual revenue generated by leasing Colossus 1 to Anthropic, in other words, almost perfectly hedges xAI's loss figure. This single deal effectively pulls xAI to break-even. Heading into the SpaceXAI IPO, this functions as a core line of financial defense. From a cost-of-capital standpoint, if the image shifts from "research lab burning cash" to "infrastructure tollgate stably printing $6 billion a year," the entire tone of the offering can change.

(May 8, 2026, Mirae Asset Securities)
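The lease arithmetic above is easy to check. A minimal sketch using only figures stated in the post (220,000 GPUs, a ~$2.60 blended GPU-hour rate) and assuming round-the-clock utilization, which a single-tenant lease would approximate:

```python
# Back-of-envelope check of the Colossus 1 lease revenue claimed above.
# Figures are the post's own: 220,000 GPUs at ~$2.60 per GPU-hour blended.
gpus = 220_000
rate_usd_per_gpu_hour = 2.60
hours_per_year = 24 * 365  # 8,760

annual_revenue = gpus * rate_usd_per_gpu_hour * hours_per_year
print(f"~${annual_revenue / 1e9:.1f}B per year")
```

This lands at roughly $5.0B a year, consistent with the post's "$5–6 billion" range and with the claim that the lease roughly offsets a ~$6B annualized loss.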
Jukan@jukan05

What the SpaceX–Anthropic Deal Means

Two weeks ago, we published a note laying out what GPT-5.5's release implied. The conclusion was simple: whoever secures compute first, in greater volume, and with greater reliability ultimately takes the win. With OpenAI's 30GW roadmap dwarfing Anthropic's 7–8GW, we closed by arguing that the structural advantage on compute sat with OpenAI. Less than a fortnight later, that conclusion is being tested.

On May 6, Anthropic signed a single-tenant lease for the entirety of Colossus 1 with SpaceXAI, the infrastructure subsidiary that consolidates Elon Musk's xAI and SpaceX. The asset carries more than 220,000 GPUs and 300MW of power, and crucially, is scheduled to come online within this month. It served as the capstone of Anthropic's April blitz, which added 13.8GW of cumulative capacity over the span of a single month. On headline numbers alone, OpenAI took more than a year to stack 18GW; Anthropic has put 13.8GW in the ground in thirty days.

The takeaways break down into three.

First, the compute pecking order has been redrawn again. Anthropic has now swept up the AWS expansion (5GW, with $100B+ in spend commitments over a decade), Google + Broadcom (3.5GW of TPU), Google Cloud (5GW alongside a $40B investment), and now SpaceXAI's Colossus 1 (0.3GW). Cumulative committed capacity, inclusive of pre-April allocations, sits at 14.8GW. This is still only half of OpenAI's 2030 target of 30GW, but the fact that the SpaceX lease will be live inside a month makes "deliverability" a qualitatively different proposition.

Second, Elon Musk is the plaintiff in an active lawsuit against OpenAI, and at the same time the supplier handing 220,000+ GPUs and 300MW of power, in one block, to OpenAI's most formidable competitor. The timing matters: the deal was struck in the middle of the Musk–Altman trial. We read this as a deliberate pincer with OpenAI in the middle. In the courtroom, Musk works to dismantle the moral legitimacy of OpenAI's leadership; in the market, he arms Anthropic to absorb OpenAI's revenue and user base.

Third, the structure is financial-engineering perfection, a clean win-win for both sides. xAI can recognize $6B of annual revenue from a single contract, an amount that almost precisely offsets its Q1 2026 annualized net loss of $6B. It also accelerates the cleanup of SpaceXAI's pre-IPO balance sheet, with the entity now being floated at around $1.75T. Anthropic, on the other side, converts roughly $5B of spend into what it expects to be $15B of ARR via the coming inference-revenue surge.

(Mirae Asset Securities, May 8, 2026)

201 · 513 · 4.2K · 1.2M
Elon Musk@elonmusk·
It was an honor to be shown the awesome @Intel fab in Oregon this week. Looking forward to a great partnership with @SpaceX & @Tesla!
4.1K · 9.7K · 115.1K · 46.2M
Untethered reposted
Liquidity Wars@liquidity_wars·
TESLA × SPACE-X: "The Berkshire Hathaway of the 21st Century"

A combined Tesla–SpaceX merger would create an unprecedented vertically integrated powerhouse spanning terrestrial energy, autonomous mobility, global logistics, and orbital infrastructure, effectively forming the Berkshire Hathaway of the 21st century but built on exponential-growth technologies rather than legacy cash-flow businesses. Unifying advanced manufacturing, AI, robotics, satellite networks, launch capability, and energy systems under one operating and capital-allocation umbrella would produce a diversified, synergistic ecosystem with moats no competitor could replicate. Such a merger would position the company as the core infrastructure layer of both Earth and space economies, making it a compulsory allocation for global investors seeking exposure to the next century's dominant industrial and technological frontier.

$TSLA is currently entering a high-conviction buy zone because it has officially broken out of a five-month falling channel, a technical shift that mirrors the consolidation phase preceding its massive 1,400% run in 2019. The stock is compressing within a macro ascending triangle and sitting directly on a major trendline that has acted as consistent support since 2023, offering an attractive risk-to-reward ratio for entries. With the technical "apex" suggesting an imminent volatility expansion, a confirmed monthly candle close above the $483 resistance level would validate a blue-sky breakout, setting a technical measured-move target of $1,100 per share.

Investing in Tesla isn't just buying an EV car company; it's buying a stake in a diversified "physical economy" giant that scales global energy, orbital satellite networks, and the future of autonomous robotics and AI. You are essentially getting exposure to a vertically integrated tech empire that powers everything from the home to the stars.

Tesla Products & Business Units:
Automotive: Model S, Model 3, Model X, Model Y
Cybertruck
Tesla Semi
Tesla Energy: Megapack, Powerwall, Solar Roof, Solar Panels
Tesla Bot (Optimus)
Full Self-Driving (FSD)
Dojo Supercomputer
Tesla Insurance
Supercharger Network

SpaceX Products & Business Units:
Starlink (Residential, Business, Roam, Mobility, Maritime, Aviation)
Starshield
Starship
Falcon 9 & Falcon Heavy
Dragon Spacecraft (Cargo & Crew)
Raptor & Merlin Engines
SpaceX GPU/Semiconductor Division

Major Contracts & Partners:
NASA Artemis (HLS)
U.S. Space Force (National Security Space Launch)
National Reconnaissance Office (NRO)
Sourcewell Public Agency Fleet Agreement
Terafab AI Joint Venture
xAI (Integrated Partner)
Cursor AI (Integrated Partner)

To me this is a MANDATORY investment. We are at a historic crossroads where the "Physical Economy" meets the "AI Revolution." With SpaceX reportedly filing for a confidential IPO at a $1.75 trillion valuation and securing the rights to acquire Cursor AI for $60 billion, the gravity of Elon's empire is becoming impossible for institutional capital to ignore. Whether these entities officially merge into a @elonmusk Megacorp or continue as a tightly integrated ecosystem, the technicals on $TSLA are clear: we are at the apex of a macro triangle. A breakout above $483 isn't just a price move; it's the market finally pricing in the most dominant tech conglomerate of the 21st century.

Accumulation Zone: $335.00 – $385.00
Thesis: This range represents a high-probability "Support Flip" and "Mean Reversion" zone. Following the breakout of the 5-month falling channel, the price is currently undergoing a structural retest of the descending resistance-turned-support.

Invalidation Point: $330.00 (Weekly Candle Close)
Thesis: A definitive breach of the $330.00 level signals a structural breakdown of the 4.5-year Ascending Triangle.

Risk Management: A weekly close below this threshold invalidates the bullish macro-sequence, confirming that the trendline has shifted from support to resistance. At this point, the "Mandatory Investment" thesis is technically void, necessitating a full capital exit to mitigate downside exposure and prevent a "Value Trap" scenario during a potential move to lower liquidity zones.
3 · 15 · 61 · 34.6K
Cantonese Cat 🐱🐈@cantonmeow·
I really like these Gann levels better than fibs. $TSLA
20 · 8 · 224 · 14.1K
Untethered@Untethered_Coin·
Agreed. Liberals are walking trigger warnings. When your entire identity is based on being a victim, virtue signalling, and feigning outrage at every perceived slight, it’s probably hard to hold it together. But at least they can participate in the victimhood olympics, where everyone gets a participation trophy! 🤣
1 · 0 · 1 · 27
Elon Musk@elonmusk·
It is too easy to troll smug libs … but it is so fun 😂
14.3K · 26.2K · 330.7K · 64.2M
Untethered@Untethered_Coin·
@Ricktheeyedoc @RobertW96576248 @wealthmoose My guess is he doesn’t care about the truth. Either he’s so ideologically tied to his ESG narrative that he won’t listen, or he knows it’s not true but wants to continue his green grift!
1 · 0 · 1 · 13
wealthmoose@wealthmoose·
🚨🇨🇦The OECD ranked Canada's future. DEAD LAST. 38 countries. 38th place. For projected GDP per capita growth through 2060. Not projecting a recession. Projecting a generation. Carney just borrowed $25B more.🤔 🍁 #CdnPoli #OECD #Canada
200 · 1.9K · 3.4K · 65.3K
Cryptic Trades@CrypticTrades_·
Would you guys enjoy seeing more chart updates from me?
10 · 3 · 124 · 5.9K
Untethered@Untethered_Coin·
@elonmusk @farzyness Were significant changes to the code required to operate in a woke socialist autocracy? As an embarrassed Canadian I’m curious! 🤣
0 · 0 · 0 · 9
Whole Mars Catalog@wholemars·
In January, Green Hills Software proudly announced that the BMW iX would use their tech. But read closely, and you’ll notice BMW didn’t use their tech at all. Some of BMW’s tier 1 suppliers like MobilEye did. And now BMW is dumping MobilEye, and Green Hills Software with them.
66 · 149 · 2.4K · 0
BULL OF BRITAIN@BULLOFBRITAIN·
@BonyBallf2 Actually I just thought it was good insight and wanted to QRT it, but couldn't, so I posted a follow-up to it.
1 · 0 · 6 · 3K
BULL OF BRITAIN@BULLOFBRITAIN·
WE ARE SO EARLY. Goldman Sachs just published their CPO supply chain map for $NVDA and $AVGO. It's good, but realise how many names are MISSING COMPLETELY.

$SOI - Not listed anywhere. Goldman maps optical engines, lasers, connectors, FAUs, test equipment but completely skips the substrate layer. Every silicon photonics PIC that TSMC fabs for both NVIDIA and Broadcom's CPO platforms starts on a Soitec Photonics-SOI wafer. That's a glaring omission.

$IQE - No epiwafer layer at all. Goldman jumps from lasers to optical engines without acknowledging who grows the epitaxial layers on those laser chips.

$AIXA - No MOCVD equipment layer. Every III-V laser source listed (Sumitomo, Furukawa, VPEC, LandMark, Lumentum, Coherent) uses AIXTRON or Veeco reactors to grow their material.

$TSE - Goldman lists $TSMC as CPO chip fabrication for both ecosystems but doesn't mention Tower, which is arguably the leading independent silicon photonics foundry and is tripling capacity.

$AEHR - No wafer-level test/burn-in. Goldman lists SPIL for CPO chip test on the NVIDIA side and "TBA" for Broadcom, but misses the wafer-level burn-in step entirely.

$LPK - No glass waveguide or laser processing equipment. Goldman lists Robotechnik and ASMPT for CPO coupling but misses LPKF's LIDE technology for glass-based CPO integration.

$SIVE - Not listed under laser sources despite making InP lasers specifically designed for silicon photonics integration.
20 · 130 · 1K · 117.1K