

Valerio Lomanto
@verglasz
Sporadically hopeful humanist.


Introducing wterm (“dub-term”), a terminal emulator for the web
→ DOM rendering, not canvas
→ Select text, copy/paste, ⌘+F, a11y
→ Dirty-row tracking, 24-bit color, themes
→ WebSocket transport with reconnection
→ Zig core compiled to ~12 KB WASM
→ just-bash, local, SSH
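
The dirty-row tracking mentioned above is the key trick that makes DOM rendering viable: instead of repainting the whole grid each frame, only rows whose cells changed get their elements rebuilt. A minimal sketch of the idea in TypeScript follows; the class and method names are illustrative, not wterm's actual API.

```typescript
// Sketch of dirty-row tracking for a DOM-rendered terminal:
// only rows marked dirty since the last frame are re-rendered,
// so a single-character update touches one row, not the screen.
class DirtyRows {
  private dirty: Set<number> = new Set();

  // Mark a row as needing a re-render.
  mark(row: number): void {
    this.dirty.add(row);
  }

  // Scrolling shifts every row index up by one; remap the dirty
  // set accordingly and mark the newly exposed bottom row.
  scrollUp(totalRows: number): void {
    const shifted = new Set<number>();
    for (const r of this.dirty) {
      if (r > 0) shifted.add(r - 1); // row 0 scrolls off-screen
    }
    shifted.add(totalRows - 1); // fresh blank row at the bottom
    this.dirty = shifted;
  }

  // Hand back the rows to repaint, then reset for the next frame.
  flush(): number[] {
    const rows = [...this.dirty].sort((a, b) => a - b);
    this.dirty.clear();
    return rows;
  }
}
```

On each animation frame the renderer would call `flush()` and rebuild only those row elements, which is also what keeps selection, copy/paste, and ⌘+F working: untouched rows keep their DOM nodes.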






My column: Anthropic’s Restraint Is a Terrifying Warning Sign nytimes.com/2026/04/07/opi…






The reason software eats RAM is the same reason factories used to dump chemicals in rivers: the cost is externalized. Every dollar of inference compute shows up on an engineering manager's AWS bill, broken down to the cent, reviewed quarterly. Every gigabyte of RAM consumed on YOUR machine shows up nowhere in anyone's budget.

Chrome could cut memory usage by 60% tomorrow and Google's revenue wouldn't move a single basis point. Docker's 2GB idle footprint costs Docker Inc. exactly $0. Electron's 500MB todo list costs the Electron team exactly $0. The user paid for the RAM. The user pays for the electricity. The user deals with the fan noise. The company ships faster because it chose the laziest possible runtime.

The token-optimization obsession makes this even clearer. Companies optimize inference cost because inference cost hits their margins. They'll spend six months shaving 200ms off a model response. They won't spend six days reducing a desktop client's memory footprint, because that memory belongs to someone else's hardware.

This is why the 16GB vs 32GB debate is a trap. You're asking consumers to buy more expensive hardware to subsidize the software industry's refusal to optimize a resource it never has to pay for. The market will never fix this on its own: the people writing the checks and the people running out of RAM are on opposite sides of the transaction.



Half of all Germans NEVER use public transport. So do about two-thirds of residents in Italy, Portugal, and France.


