

Morgan Beller
@beller
Crypto, inner space, outer space, & a bit of everything else @NFX | Co-creator of Diem≋ (fka Libra)

NEW VIDEO and a VERY requested video! Enjoy a new tour of @stoke_space w/ CEO @AndyLapsa!!! There are so many juicy details in this video and of course, we witness another incredible Andromeda test! Enjoy! - youtu.be/7OxNZ-N_3vE

We just trained the first LLM in space using an @Nvidia H100 on Starcloud-1! 🚀 We are also the first to run a version of @Google's Gemini in space! This is a significant step on the road to moving almost all compute to space, to stop draining the energy resources of Earth and to start utilizing the near-limitless energy of our Sun! Thanks @pia_singh_ and @CNBC for highlighting our work! @starcloud_, @AdiOltean, @ezrafeilden

100 GW of Compute Above Earth: The Hardware Leap Hidden in Elon's Claim 🔥🧠⚙️

Elon says SpaceX could launch ~100 GW of energy, in the form of high-orbit compute, in ~five years. That implies very specific performance requirements for chips, radiators, and solar arrays if launch cadence is going to stay reasonable.

In space, compute is constrained by a triangular bottleneck:
1️⃣ Power generation (solar)
2️⃣ Heat rejection (radiators)
3️⃣ Compute density (processing power per kg)

All three must rise together. If one lags, the other two stop contributing, and the launch count needed to hit 100 GW explodes.

Today's generally assumed baseline:
☀️ Solar: ~0.8 kW/kg (rigid LEO-class PV)
🌡 Radiators: ~1 kW/kg
🧠 Compute: ~0.1 kW/kg (typical GPU rack)

At these levels, you'd need ≈3,000 Starship launches to deploy 100 GW of space compute!

Compute is the dominant lever: every doubling of compute W/kg roughly halves the required launch count. Musk's AI5 → AI8 roadmap points to ~50-100% annual gains, far faster than the ~25% GPU trend. Tesla's AI ASICs are built for efficiency and power density rather than flexibility: exactly the silicon needed for mass-constrained data centers.

Crucially, those gains demand new satellite architectures. To sustain cadence, future SpaceX platforms would need to move beyond Starlink-style LEO buses toward larger, optimised HEO compute vehicles with thin-film arrays >1.5 kW/kg and light, high-temperature radiators >2 kW/kg: performance levels cited only in advanced NASA STMD and ISNPS studies.

Given the 100 GW / ~5 yr claim, Elon is effectively telling us he believes compute, solar, and thermal systems will all hit near-frontier performance, and that Tesla's chips will deliver the compute-per-kilogram leap needed to make it physically and economically possible. It's ambitious but plausible: if those PV and radiator systems reach projected "advanced" specs, SpaceX could feasibly deploy ~100 GW of orbital compute per year with ~300 Starship launches.
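The launch-count arithmetic above can be sketched as a toy model. Everything in this sketch is an assumption for illustration: the ~150 t Starship payload figure, and treating total delivered mass as the simple sum of solar, radiator, and compute masses at the specific powers (kW/kg) quoted in the post. The post does not state its own payload assumption, so the absolute numbers below will differ from its ≈3,000 figure; the point is the sensitivity, not the exact count.

```python
def required_launches(power_kw: float,
                      solar_kw_per_kg: float,
                      radiator_kw_per_kg: float,
                      compute_kw_per_kg: float,
                      payload_kg: float = 150_000) -> float:
    """Launches needed to lift a given orbital compute capacity.

    Total mass is modeled as the sum of each subsystem's mass at its
    specific power (kW/kg). payload_kg is an ASSUMED per-launch
    Starship capacity, not a figure from the post.
    """
    total_mass_kg = (power_kw / solar_kw_per_kg       # solar arrays
                     + power_kw / radiator_kw_per_kg  # radiators
                     + power_kw / compute_kw_per_kg)  # compute hardware
    return total_mass_kg / payload_kg

P = 100e6  # 100 GW expressed in kW

# Baseline specs from the post: 0.8 / 1.0 / 0.1 kW/kg
baseline = required_launches(P, 0.8, 1.0, 0.1)

# Doubling only compute density roughly halves the launch count,
# because compute mass dominates the total at baseline specs.
doubled = required_launches(P, 0.8, 1.0, 0.2)

print(f"baseline: {baseline:,.0f} launches")
print(f"2x compute density: {doubled:,.0f} launches "
      f"({doubled / baseline:.0%} of baseline)")
```

Running this shows why compute density is the dominant lever: at baseline specs, compute hardware accounts for over 80% of the total mass, so gains there move the launch count far more than equivalent gains in solar or radiators.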
100 GW matters: it’s roughly the entire terrestrial data-center load expected by 2030. If SpaceX hits that number in orbit, the next AI scaling curve won’t be built on Earth. Link below for the full breakdown 🧐

Big news – we’ve raised $510M in Series D funding! Total funding is now at $990M. This capital will accelerate our fully reusable Nova rocket development and Launch Complex 14 activation at Cape Canaveral. Incredibly grateful to our investment partners and excited to keep the momentum going! Full details: stokespace.com/stoke-space-te…
