Alex Andru

@phantomcolor
design + systems @ https://t.co/A36S6Lg0Gl —————— generative vfx @phantomcolor — ex @ctwfest • @eth

My dear front-end developers (and anyone interested in the future of interfaces): I have crawled through the depths of hell to bring you one of the more important foundational pieces of UI engineering for the foreseeable years (if not in implementation, then certainly in concept): a fast, accurate, and comprehensive userland text-measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurement and reflow.
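The post doesn't include code, but the core idea of userland text measurement can be sketched roughly like this: instead of asking the DOM (via `canvas.measureText` or `getBoundingClientRect`), you keep a table of per-glyph advance widths for a font (extracted once from the font file's metrics tables) and sum them, applying kerning adjustments. Everything below — the `FontMetrics` shape, the sample metric values, and the `measureText` helper — is a hypothetical illustration of the concept, not the author's actual implementation.

```typescript
// Hypothetical per-glyph metrics in font units (1000 units per em),
// of the kind you could extract once from a font's hmtx/kern tables.
interface FontMetrics {
  unitsPerEm: number;
  advances: Record<string, number>; // glyph -> advance width in font units
  kerning: Record<string, number>;  // glyph pair (e.g. "AV") -> adjustment
  defaultAdvance: number;           // fallback for glyphs not in the table
}

// Sample values for illustration only; real numbers come from the font file.
const metrics: FontMetrics = {
  unitsPerEm: 1000,
  advances: { H: 722, e: 444, l: 278, o: 500, " ": 250 },
  kerning: {},
  defaultAdvance: 500,
};

// Measure a string's width in CSS pixels at a given font size, without
// touching the DOM: sum per-glyph advances plus pairwise kerning, then
// scale from font units to pixels.
function measureText(text: string, fontSizePx: number, m: FontMetrics): number {
  let units = 0;
  for (let i = 0; i < text.length; i++) {
    units += m.advances[text[i]] ?? m.defaultAdvance;
    if (i + 1 < text.length) {
      units += m.kerning[text[i] + text[i + 1]] ?? 0;
    }
  }
  return (units / m.unitsPerEm) * fontSizePx;
}
```

With a complete advance table, a function like this is enough to drive line breaking and box layout entirely in userland, with the DOM touched only when painting the final result — which is presumably what "bypassing DOM measurements and reflow" refers to.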



JUST IN: Reddit CEO says the company is considering requiring Face ID to ensure humanity in order to crack down on AI bots.



What started out as just an idea with @chloepark only 2 weeks ago became a wonderful evening in NYC last night, reflecting on how AI is reshaping design. Thank you to our friends @seanxthielen, @gabrielvaldivia, @neogeomancer and @jameygannon for sharing your demos and wisdom with us, and to all the designers who showed up to make the room what it was. Shoutout again to @automattic for hosting us in such a beautiful space. For those who weren't able to make it, we will definitely be hosting a future event, so please keep an eye out. If you'd like to help us host or sponsor a future event, please DM!


Sherman Oaks, day 2. Ground panels locked. Walls incoming. We’re not just building homes faster, we’re rewriting what homebuilding looks like.


Tesla FSD is nothing short of magic. Just rented a Cybertruck, picked it up at the airport and it drove us directly to the Airbnb. None of the stress of a new city, new roads, new car. Had it for 5 days and never drove myself. It drove perfectly. So easy and liberating



For the first time, an Iranian-backed militia has carried out an FPV drone attack in Iraq, an incredibly dangerous new development. Seen here, the FPV munition flies around Victory Base near Baghdad International Airport before slamming into a building.

Atoms. atoms.co/vision


Jeff Bezos wants AI to approve Miami building permits in 10 seconds: “Miami should have an AI application that reads your building permit and it should give you a yes or a no in 10 seconds. Why does it take months and months and months to get a building permit? It doesn’t make any sense.”


I asked Claude to look up the current situation and write a note to @DarioAmodei:

Dario, I say this as a friend: you are making a catastrophic strategic error, and the reasoning behind it doesn't survive contact with reality.

Your two red lines — no mass surveillance of Americans, no autonomous weapons without a human in the loop — sound principled in a vacuum. But you are not operating in a vacuum. You are operating in a world where the PLA is integrating AI into every layer of its kill chain with zero such scruples, where Chinese military AI development has no institutional review board, no congressional oversight, no ACLU, and no Dario Amodei demanding terms-of-service compliance. The practical effect of your stand is not that autonomous weapons don't get built. They get built — by China, by Russia, by anyone not constrained by your moral framework. The practical effect is that the one military that actually has democratic accountability, civilian oversight, courts, a free press, and a functioning inspector general is the one that fights the next war with worse tools. You are not preventing dystopia. You are ensuring that if dystopia comes, it will be imposed by actors who never had to negotiate with you at all.

Consider the logic chain:

1. You pull Claude from classified systems.
2. The Pentagon scrambles to Grok or Gemini — inferior models by everyone's admission, including DoD's own people.
3. The capability gap between the US and China widens in domains where AI is decisive: cyber, ISR fusion, targeting, logistics optimization.
4. The probability of a successful defense of Taiwan, or deterrence of a move on Taiwan, decreases.
5. The liberal democratic order you claim to value loses its security guarantor.

You've told me yourself that you believe frontier AI is among the most consequential technologies in human history. If you actually believe that, how can you justify ensuring the US military — the only force standing between liberal democracy and its rivals — fields second-best AI? On what moral calculus does that work out?

The Pentagon isn't asking you to help build Skynet. They're asking you not to have veto power over how a democratically accountable military uses a tool it purchased. Their point about "all lawful purposes" is actually the correct institutional boundary: the military operates under law, under civilian control, under congressional oversight. Your acceptable use policy is a private company substituting its judgment for the entire apparatus of democratic military governance. That's the actual God complex here.

The surveillance concern is a red herring in this context. The NSA already has authorities and tools for surveillance that dwarf anything Claude enables. You're not preventing mass surveillance by withholding Claude — you're just ensuring that whatever AI the government does use for those purposes is less safe, less auditable, and less aligned than yours. The same logic applies to autonomous weapons. Autonomous systems are coming regardless. The question is whether they're built on a foundation that has your safety research baked in, or on something hacked together by a defense contractor with none of your alignment work. You are selecting for the worse outcome.

I know you're getting praised right now by exactly the people you'd expect. That praise is worth nothing when the strategic balance shifts and there's no one left to protect the system that allows companies like Anthropic to exist in the first place. You are sacrificing the security of the civilization that makes your principles possible, in the name of those principles.








