Jonathan Yuen
@jonathanykh
3K posts

data and automation systems

Joined November 2013
2.2K Following · 470 Followers
Jonathan Yuen @jonathanykh
finally, the pursuit of intelligence has given humanity a motivation to leave its home planet
Shanaka Anslem Perera ⚡@shanaka86

Everyone is covering Terafab as a chip factory. It is not a chip factory. Last night in Austin, Elon unveiled a facility that makes masks, fabricates chips, and tests them inside a single building with a nine-month recursive improvement cadence. No such loop exists anywhere else on Earth. Then he told you 80% of the output goes to space. Then he showed you a 100-kilowatt AI satellite with solar panels and radiators, scaling to megawatt range. Then he said Optimus plus photovoltaics will be the first von Neumann probe, a machine capable of replicating itself from raw materials found in space. Nobody connected the sequence.

Terafab produces 1 terawatt per year of compute. The entire United States consumes 0.5 terawatts of electricity. Musk is building a single factory whose output in AI silicon exceeds twice the power consumption of the country it sits in. And he is sending 80% of it off-planet because Earth literally cannot power what he is building.

Follow the mechanism. Terafab seeds the chips. Starship launches Optimus robots and solar arrays at 100 million tons per year. The robots mine lunar and asteroid regolith for silicon, iron, and nickel. They 3D-print more robots. They fabricate more solar panels. They assemble more AI satellites. Each satellite runs hotter-burning D3 chips designed specifically for vacuum, where free radiative cooling eliminates the thermal constraints that strangle every terrestrial data center on the planet. The nodes replicate. The replication is exponential.

This is a Dyson Swarm bootstrap hidden inside a semiconductor announcement. The math is public. The Sun outputs 3.828 × 10^26 watts. A 2022 paper in Physica Scripta calculated that 5.5 billion satellites at 290 kilograms each, robotically manufactured from Mars resources, capture enough solar energy to meet all of Earth’s power needs within 50 years. A 2025 paper in Solar Energy Materials calculated a partial swarm capturing 4% of solar output yields 15.6 yottawatts, roughly a billion times current human civilization’s total energy budget. Musk just announced the factory that builds the chips that go inside the satellites that replicate themselves forever.

92% of advanced logic chips are fabricated in Taiwan. One factory in Austin does not fix that. But one self-replicating system seeded by that factory, launched by the only company with reusable heavy-lift rockets, assembled by the only humanoid robot in mass production, and powered by the only star within reach, does not fix a supply chain. It obsoletes the concept of supply chains entirely.

The market priced this as a $20 billion capex story about semiconductor independence. The actual announcement was the engineering blueprint for Kardashev Type II. Humanity sits at 0.73 on the Kardashev scale. 18 terawatts. The distance between here and harnessing a star is not a technology gap. It is a recursion gap. And recursion is exactly what a single building in Austin that makes its own masks, builds its own chips, tests its own chips, and launches the output into orbit on its own rockets was designed to close. Every civilization that makes it past this point never looks back.

0 replies · 0 reposts · 0 likes · 8 views
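The thread’s headline figures can be sanity-checked with a few lines of arithmetic. A minimal sketch, using only the numbers quoted in the tweet (solar luminosity, the 4% capture fraction, humanity’s ~18 TW) plus Sagan’s standard Kardashev interpolation formula, K = (log10(P) - 6) / 10:

```python
import math

# Inputs quoted in the tweet above; nothing here is new data.
SOLAR_OUTPUT_W = 3.828e26   # total solar luminosity, watts
YOTTAWATT = 1e24

# Partial Dyson swarm capturing 4% of solar output (the 2025 paper's figure).
partial_swarm_w = 0.04 * SOLAR_OUTPUT_W
print(partial_swarm_w / YOTTAWATT)   # ~15.3 YW, close to the quoted 15.6

# Kardashev rating via Sagan's interpolation: K = (log10(P) - 6) / 10
P_HUMANITY_W = 18e12                 # ~18 terawatts, as quoted
k = (math.log10(P_HUMANITY_W) - 6) / 10
print(round(k, 2))                   # 0.73, matching the tweet
```

The 4% capture works out to roughly 15.3 YW rather than the quoted 15.6, so the paper presumably uses a slightly different capture fraction or luminosity figure; the Kardashev rating of 0.73 checks out exactly.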
Jonathan Yuen @jonathanykh
@taalas_inc holy shit, imagine Opus 4.6 or Gemini 3.1 on these chips; UI is going to be dynamic with this
0 replies · 0 reposts · 0 likes · 124 views
Jonathan Yuen @jonathanykh
"[✻] [✻] [✻] · 3 guest passes at /passes" what's this?
Jonathan Yuen tweet media
0 replies · 0 reposts · 4 likes · 171 views
Jonathan Yuen reposted
David Sacks @DavidSacks
There will be no federal bailout for AI. The U.S. has at least 5 major frontier model companies. If one fails, others will take its place.
927 replies · 1.6K reposts · 21.4K likes · 2.3M views
Jonathan Yuen @jonathanykh
the current state of HBM for AI chips increasingly resembles the choice to avoid carbon fiber in building Starship: it could be way more scalable to just use steel (commodity DRAM) instead
Elon Musk @elonmusk

@SawyerMerritt You can fit more total RAM on the board if you use “normal” memory than high-bandwidth memory and it is super cheap. Maybe high-bandwidth memory is still the right choice, but using HBM isn’t the slam dunk many people think it is.

0 replies · 0 reposts · 1 like · 125 views
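Musk’s point is a capacity-and-cost versus bandwidth trade. A rough sketch of its shape, using ballpark per-part figures that are illustrative assumptions for the comparison, not datasheet values (roughly HBM3-class stacks versus DDR5-class DIMMs):

```python
# Ballpark, illustrative figures -- assumptions, not vendor specs.
hbm_stack = {"capacity_gb": 24, "bandwidth_gb_s": 819}   # ~HBM3-class stack
ddr_dimm  = {"capacity_gb": 64, "bandwidth_gb_s": 38.4}  # ~DDR5-4800 DIMM

def board_totals(part: dict, count: int) -> dict:
    """Aggregate capacity and bandwidth for `count` identical parts."""
    return {
        "capacity_gb": part["capacity_gb"] * count,
        "bandwidth_gb_s": part["bandwidth_gb_s"] * count,
    }

hbm_board = board_totals(hbm_stack, 8)    # e.g. 8 stacks on an accelerator
ddr_board = board_totals(ddr_dimm, 12)    # e.g. 12 channels on a server board

# HBM dominates on aggregate bandwidth; commodity DRAM dominates on total
# capacity (and, per the tweet, on cost) -- the steel-vs-carbon-fiber shape.
print(hbm_board)   # ~192 GB at ~6.5 TB/s
print(ddr_board)   # ~768 GB at ~0.46 TB/s
```

Whether the bandwidth loss is acceptable depends entirely on the workload, which is why Musk hedges that HBM may still be the right choice.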
Jonathan Yuen @jonathanykh
Hetzner has better uptime than AWS, and it is 50% cheaper.
0 replies · 0 reposts · 0 likes · 232 views
Jonathan Yuen @jonathanykh
damn, 78s, my personal record for getting Claude to reason this long!
Jonathan Yuen tweet media
0 replies · 0 reposts · 0 likes · 100 views
Boris Cherny @bcherny
👋 Boris from the Claude Code team here. Compact behavior is the same as before -- the new ⛝ boxes in /context are just a cosmetic UI change that gives people more transparency into auto-compact. We always auto-compacted near 155k tokens so there's enough buffer. We do that for reliability, not to save costs or anything like that. Would recommend Cline do something similar, if you don't already. After dozens of iterations, we've found 155k to work well: it maximizes the usable context window while maximizing reliability and avoiding "context window exceeded" API errors. Re: rate limits -- we publish these here (support.claude.com/en/articles/11…), and generally find that most users hitting rate limits are still using the older Opus 4.1 model and have not yet upgraded to the more capable Sonnet 4.5.
14 replies · 6 reposts · 81 likes · 6.9K views
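Boris’s description reduces to a fixed-threshold trigger with a reliability buffer. A minimal sketch, assuming a 200k-token context window; the function names are illustrative, not Claude Code internals:

```python
CONTEXT_WINDOW = 200_000          # assumed total window, tokens
AUTO_COMPACT_THRESHOLD = 155_000  # the figure quoted in the tweet

def should_auto_compact(tokens_used: int) -> bool:
    """Compact proactively, well before the window is actually exhausted."""
    return tokens_used >= AUTO_COMPACT_THRESHOLD

def remaining_buffer(tokens_used: int) -> int:
    """Headroom left for the reply and tool output once compaction fires."""
    return CONTEXT_WINDOW - tokens_used

print(should_auto_compact(120_000))   # False: plenty of room left
print(should_auto_compact(156_000))   # True: compact now, buffer still intact
```

Triggering at 155k rather than at the hard limit is what avoids the "context window exceeded" errors Boris mentions: the remaining ~45k tokens absorb the reply already in flight.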
Saoud Rizwan @sdrzn
Claude Code’s last update now auto-compacts more aggressively, using less of the context window to reduce costs. Users are also reporting stricter rate limits, suddenly getting cooldown periods of 4 days. Anthropic dug themselves a grave getting everyone to sign up for their $200 Max plan: it misaligned business and product incentives, forcing them to cost-optimize and degrade quality. Claude Code is no longer the best harness for their model, and their users can feel it:
[two attached images]
113 replies · 72 reposts · 898 likes · 262.7K views
Philipp Schmid @_philschmid
@_amNoone This should. Can you send me a DM with an example of what you are trying to do?
3 replies · 0 reposts · 5 likes · 2.7K views
Chubby♨️ @kimmonismus
The real Minecraft benchmark: Famed gamer creates working 5 million parameter ChatGPT AI model in Minecraft, made with 439 million blocks - AI trained to hold conversations, working model runs inference in the game
Chubby♨️ tweet media
25 replies · 20 reposts · 237 likes · 22.1K views
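For a sense of scale, the headline’s two numbers imply roughly 88 blocks per model parameter:

```python
# Both figures come straight from the headline above.
blocks = 439_000_000   # Minecraft blocks used
params = 5_000_000     # model parameters

print(blocks / params)   # 87.8 blocks per parameter
```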
Jonathan Yuen @jonathanykh
the Sora app is dangerous, even more so than TikTok
0 replies · 0 reposts · 0 likes · 83 views
Jonathan Yuen @jonathanykh
what's most interesting to me in Sonnet 4.5 and Claude Code 2.0 is that the model will literally pause its reply right in the middle, call additional tools for more context, then continue answering. It feels like a real human to me
Jonathan Yuen tweet media
0 replies · 0 reposts · 0 likes · 131 views
Jonathan Yuen @jonathanykh
tried Codex today only to immediately uninstall it: asked GPT-5-High a simple architectural question and got a big sloppy response, while Claude answered perfectly
0 replies · 0 reposts · 0 likes · 73 views
Jonathan Yuen @jonathanykh
Imagine with Claude is so fun
Jonathan Yuen tweet media
0 replies · 0 reposts · 0 likes · 129 views
Jonathan Yuen @jonathanykh
@sidbid this just confuses users. Is Claude Code still taking “ultrathink” inside claude.md into account as a token-budget setting? cc @_catwu
0 replies · 0 reposts · 0 likes · 263 views
Sid @sidbid
We just shipped a UX update for Claude Code: you have more control over when extended thinking kicks in using '/t'
60 replies · 64 reposts · 845 likes · 142.9K views