thomasg.eth
@thomasg_eth

260 posts
Student of consciousness and coordination. Building open source VTOL aircraft @arrowair_

Joined September 2021
701 Following · 7.7K Followers
thomasg.eth @thomasg_eth
@i2cjak The midwit constructs elaborate networks of tubes while the genius knows that we can just dig more wells
1 reply · 0 reposts · 2 likes · 175 views

i2cjak @i2cjak
idea: put water into long, sealed tubes. Then pressurize this water. It can then be delivered to various places easily.
21 replies · 0 reposts · 70 likes · 4.8K views

thomasg.eth reposted
Vladimir @MrVladimirX
Your Laptop Can Run a Mind, But Never a Superintelligence

We are about to split into two civilizations: those who own their intelligence, and those who rent it. A 70B parameter model running on a 128GB Apple laptop is likely sufficient for continuously-learning human-level intelligence. A trillion-parameter superintelligence will never run on your local machine. Both of these things are true simultaneously, and the gap between them is not a temporary engineering problem waiting to be solved. It is a permanent feature of physics, and it will reshape society more profoundly than the internet did.

Here is why the 70B ceiling is higher than people think. The human brain has roughly 86 billion neurons. It does not grow new neurons when you learn something; it reweights existing connections. A static 70B model is a snapshot frozen at training time. A continuously learning 70B model is a living system doing exactly what your brain does: reshaping itself from experience, every day. The parameter count becomes a vessel that is constantly being reformed. Size stops being the variable; temporal depth of adaptation becomes the variable.

A 128GB M-series MacBook has unified memory shared across CPU, GPU, and Neural Engine at roughly 800 GB/s bandwidth. A 70B model in 4-bit quantization fits in about 38GB, leaving substantial room for context, memory buffers, and lightweight gradient updates. For the first time in history, the continuous learning loop can close locally, in real time, on a device you own.

Now for the hard ceiling at the top. A 1 trillion parameter model at aggressive 2-bit quantization requires roughly 250GB just to hold the weights, before activations, before the KV cache, before any actual compute happens. No consumer device in any foreseeable roadmap touches this. But memory size is not even the binding constraint. LLM inference is almost entirely limited by how fast you can stream weights from memory to compute units. A trillion-parameter forward pass requires moving trillions of values. Even at theoretical consumer memory bandwidth speeds, generating a single token takes seconds.

Then there is heat. A laptop sustains 20 to 40 watts. Dense superintelligence inference requires hundreds of kilowatts and active liquid cooling. This is not an engineering gap closing over time. The requirements of the largest models are diverging from consumer hardware, not converging toward it.

What emerges is a permanent three-tier structure:
- At the bottom, sub-human local models between 1B and 13B parameters run on phones and embedded devices, fast and cheap and private, handling narrow tasks brilliantly, essentially free and commoditized.
- In the middle, human-level local models between 30B and 100B parameters represent the genuinely disruptive tier: capable of sustained reasoning, creative work, and long-horizon planning, running privately and persistently on hardware you control, adapting to your thinking over time, operating without sending a single byte to a server. A high-end Apple Silicon laptop sits at the frontier of this tier right now.
- At the top, dense superintelligence above a trillion parameters will exist exclusively in hyperscaler data centers operated by a handful of companies and governments, capable of cross-domain synthesis at a scale no human or local model can approach, running thousands of parallel reasoning chains, accessed on someone else's terms, metered and monitored and expensive.

The separation is not just technical. It is political. Tier 2 democratizes human-level reasoning: anyone with capable hardware gets a private, persistent, unkillable cognitive partner that knows their history and can never be revoked. Tier 3 concentrates superhuman reasoning in whoever controls the infrastructure.

The most consequential design decisions of the next decade will not be about model architecture or benchmark scores. They will be about which capabilities live in which tier, and who gets to decide. That question is already being answered, mostly without public debate, mostly by the people who benefit most from keeping superintelligence behind a paywall and a terms-of-service agreement.
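The memory and bandwidth figures in the post are back-of-envelope arithmetic, and they check out. A quick sketch — the parameter counts and quantization levels come from the post; the 800 GB/s and 100 GB/s bandwidth figures are assumptions for illustration:

```python
def weight_bytes(params: float, bits: int) -> float:
    """Bytes needed to hold model weights at a given quantization level."""
    return params * bits / 8

def tokens_per_second(weight_b: float, bandwidth_bps: float) -> float:
    """Memory-bound decode rate: each generated token must stream
    all weights from memory to the compute units once."""
    return bandwidth_bps / weight_b

# 70B model at 4-bit: ~35 GB of raw weights
# (the post's ~38GB figure includes format overhead)
local = weight_bytes(70e9, 4)

# 1T model at 2-bit: ~250 GB of raw weights, as stated in the post
frontier = weight_bytes(1e12, 2)

# Assumed bandwidths: ~800 GB/s M-series unified memory (from the post),
# ~100 GB/s for a typical DDR5 desktop
print(f"70B/4-bit weights: {local / 1e9:.0f} GB")
print(f"1T/2-bit weights:  {frontier / 1e9:.0f} GB")
print(f"70B on 800 GB/s:   {tokens_per_second(local, 800e9):.1f} tok/s")
print(f"1T on 100 GB/s:    {1 / tokens_per_second(frontier, 100e9):.1f} s/token")
```

At these assumed numbers, the 70B model decodes at a comfortably interactive ~23 tok/s, while the trillion-parameter model would not even fit in consumer memory, and at desktop bandwidth would take seconds per token — the "hard ceiling" the post describes.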
6 replies · 5 reposts · 38 likes · 5.5K views

thomasg.eth @thomasg_eth
Yes, SaaS is dead. But if you think that ruins the whole economy you clearly have no imagination or optimism. We're destined for the stars
5 replies · 1 repost · 19 likes · 3.4K views

thomasg.eth @thomasg_eth
@beffjezos Vibe-CADing is already getting workable with build123d
0 replies · 0 reposts · 4 likes · 902 views

Beff (e/acc) @beffjezos
We are entering the era of prompt-to-matter
307 replies · 1.3K reposts · 13.5K likes · 935.7K views

thomasg.eth @thomasg_eth
@sergeykarayev @banteg It's crazy how I've gone in a few months from considering vibe coding somewhat low brow/midwit to assuming the best devs are just orchestrating agents
1 reply · 0 reposts · 42 likes · 6.1K views

Sergey Karayev @sergeykarayev
> 10x dev in 2025: guy's cracked, pushes like 5 PRs a day
> 10x dev in 2026: He sits motionless, like a spider in the centre of its web, but that web has a thousand radiations, and he knows well every quiver of each of them. He does little himself. He only plans. But his agents are numerous and splendidly organised.
24 replies · 68 reposts · 1.5K likes · 106.3K views

thomasg.eth @thomasg_eth
@Xaraphim I'm so pumped for this man. Lmk if you could use any help at all, I'll drive my trailer to Florida and help you move this thing if you need it
1 reply · 0 reposts · 3 likes · 249 views

Phoenix𝕏 @Xaraphim
I'm so unbelievably shocked that this worked. Thank you so much to everyone who donated and reposted. That being said, now I have to figure out where the hell I'm gonna get a laser scanner from
[image]

42 replies · 13 reposts · 295 likes · 11.7K views

Phoenix𝕏 @Xaraphim
oh my gosh
[image]

6 replies · 3 reposts · 88 likes · 1.5K views

thomasg.eth reposted
Sleety.eth @sleety_eth
Saturn 1 is just a few weeks away. 4 ETH minipools, megapool gas savings, and RPL token earning ETH commission - all the scaling we've been waiting for. 🚀 More details at saturn.rocketpool.net @Rocket_Pool Also, join the Rocket Pool Discord community (discord.com/invite/rocketp…) for sweet POAPs over the coming weeks, like this one: @poapxyz
[GIF]

0 replies · 4 reposts · 16 likes · 2.7K views

thomasg.eth @thomasg_eth
Node operators just need to post a 4 ETH bond, leaving the remaining 28 ETH to be matched with LST deposits
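The 4 ETH bond plus 28 ETH of matched deposits makes a standard 32 ETH validator, and the reward split reduces to simple arithmetic: the operator keeps all rewards on their own bond plus a commission on the borrowed stake. A rough sketch — the 14% commission rate here is purely a hypothetical illustration, not Rocket Pool's actual fee, which is set by protocol governance:

```python
def operator_reward_share(bond_eth: float, borrowed_eth: float,
                          commission: float) -> float:
    """Fraction of a validator's total rewards the node operator keeps:
    full rewards on their own bond, plus commission on the borrowed stake."""
    total = bond_eth + borrowed_eth
    return (bond_eth + commission * borrowed_eth) / total

# Saturn 1 minipool: 4 ETH operator bond, 28 ETH matched from LST deposits.
# The 0.14 commission is a made-up example rate.
share = operator_reward_share(4, 28, 0.14)
print(f"Operator keeps {share:.1%} of validator rewards on a 12.5% bond")
```

The point of the low bond is capital efficiency: with only 12.5% of the stake posted, the operator's effective yield on their own ETH is levered well above the base staking rate.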
2 replies · 1 repost · 9 likes · 929 views

thomasg.eth @thomasg_eth
@Nikopolos @dao_times Very interesting stats here, but I disagree with your "reckless experimentation" framing. Fusaka, and lower gas fees in general, create overwhelmingly more positive value for Ethereum than the negative risks they bring by making spam attacks cheaper.
2 replies · 0 reposts · 4 likes · 515 views

Andrey Sergeenkov @Nikopolos
Record-high Ethereum activity that everyone's celebrating is an address poisoning attack. - Over $740K already stolen, and growing - This became possible thanks to the Fusaka upgrade - This attack is ongoing right now sergeenkov.com/record-high-ac…
7 replies · 1 repost · 15 likes · 15.9K views

thomasg.eth @thomasg_eth
@drjasper_eth @Rocket_Pool Rocket Pool's decentralized staking pool scaling up like this is such a whitepill when so much Ethereum is being staked by large centralized entities
3 replies · 7 reposts · 25 likes · 926 views

jasperthefriendlyghost.eth @drjasper_eth
There is already 6k ETH sitting in the @Rocket_Pool deposit pool waiting for Saturn 1 this February. Bring your deposits as Rocket Pool will have room for hundreds of thousands of new rETH demand very soon. Plus a staked RPL fee switch turning on ;) New Era.
[images]

10 replies · 8 reposts · 60 likes · 1.9K views

Tsung Xu @tsungxu
Merry Christmas everyone. Working on getting Santa a sloppy upgrade to his sled
[image]

1 reply · 0 reposts · 12 likes · 732 views

thomasg.eth reposted
Nick Trimmer @nicktrimmer
MAKE. IT. BETTER.
14 replies · 14 reposts · 85 likes · 8.7K views

thomasg.eth @thomasg_eth
"Since I can't ask you more questions"
[image]

0 replies · 0 reposts · 1 like · 469 views

thomasg.eth @thomasg_eth
58 years later, we're repeating Apollo 8. Only this time it is far more dangerous and expensive. Luckily SpaceX and Blue Origin will have hundreds if not thousands of humans living and working in space by the end of the 2030s
NASA Artemis @NASAArtemis

The rocket is stacked. ✅ The Orion spacecraft with its launch abort system is stacked atop the Space Launch System rocket. Launch of the Artemis II mission around the Moon is targeted for early next year.

0 replies · 0 reposts · 2 likes · 721 views