
Planning for a launch at @WordCampUS 🤩. Let's just say that we're gonna keep it simple, and that's a good thing.
— Justin Sainton (@JS_Zao)

🇺🇸 U.S. adults: “Ban AI in schools.”
🇨🇳 China: students (and adults) lining up by the thousands to learn OpenClaw.
🚀 Alpha School: our students getting recognized by OpenClaw’s creator @steipete and presenting at ClawCon. One of them is 15 and has earned $30,000+ in contracts.

American schools are debating whether kids should touch AI. Our kids are building with it. 🛠️

Literacy starts at home.

Our media partners at @therepositorywp published a thoughtful piece covering recent PressConf updates—including new speakers, the evolving format, the VIP dinner, and @raquel__karina’s vision for what a successful PressConf 2026 looks like. Read the full article here: therepository.email/pressconf-retu…


Introducing the Google Workspace CLI: github.com/googleworkspac… - built for humans and agents. Google Drive, Gmail, Calendar, and every Workspace API. 40+ agent skills included.



A developer installed Command Code, pointed it at a new project, and started coding like they normally do. For the first few hours, it was just another AI coding agent. Correct output. Generic patterns. The usual.

By day three, something shifted. The agent stopped suggesting console.log for debugging and started using the developer's custom logger. It stopped generating default exports and switched to named exports. It started structuring tests the way this specific developer structures tests. Nobody told it to do any of this.

This is the part that fascinates me most: the learning loop mirrors how a junior developer absorbs a senior's style by pair programming. It watches. It picks up patterns. It adjusts. Except it does it from every accept, reject, and edit, continuously.

We measured the correction loops. In week one, developers were making 1.5 edits per suggestion. By month one, that dropped to 0.3. The agent was generating code that looked like theirs on the first pass.

We're approaching a world where developers work alongside AI agents for most of their output. And when that happens, the differentiator between useful AI and frustrating AI comes down to one thing: whether the agent knows your coding taste.

Correctness was the first problem to solve. Every model can do that now. The next problem is alignment to the individual. How you name things. When you extract helpers. Which abstractions you reach for. The thousand micro-decisions that make code feel like yours versus feel like you're babysitting someone else's work.

That said, I'm always honest about where we are. This is early. The system learns patterns, not intentions. It won't anticipate architectural decisions you've never shown it. And taste is easier to capture for some patterns (naming, structure, formatting) than others (when to introduce a new abstraction, how to handle novel edge cases). We're not pretending this is solved.

But the trajectory is clear. Rules files decay. Fine-tuning is too expensive to update continuously. Skills give every developer the same output. None of these approaches treat your behavior as a signal.

The developer who has an AI agent that actually codes like them, one that compounds their taste over weeks and months, is going to ship faster with fewer correction loops than someone fighting generic output every day. That gap compounds. And it compounds fast.

We built Command Code for exactly this. npm i -g command-code if you want to see it yourself.
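The core idea above, treating every accept, reject, and edit as a signal, can be sketched in a few lines. This is a hypothetical illustration, not Command Code's actual implementation: the `TasteTracker` class, its method names, and the pattern labels (`"named-export"`, `"custom-logger"`) are all assumptions made up for this example. It shows the two measurements the post describes: an edits-per-suggestion metric over a recent window, and a frequency-based preference store built from accepted code.

```typescript
// Hypothetical sketch of a behavioral feedback loop; none of these
// names come from Command Code itself.
type Feedback = { suggestionId: string; editsMade: number; accepted: boolean };

class TasteTracker {
  private prefs = new Map<string, number>(); // pattern -> times seen in accepted code
  private feedback: Feedback[] = [];

  // Record one accept/reject/edit event, plus the patterns observed
  // in the code the developer ultimately kept.
  record(fb: Feedback, patternsInFinalCode: string[]): void {
    this.feedback.push(fb);
    if (fb.accepted) {
      for (const p of patternsInFinalCode) {
        this.prefs.set(p, (this.prefs.get(p) ?? 0) + 1);
      }
    }
  }

  // Average edits per suggestion over the last `window` events —
  // the metric the post tracks dropping from 1.5 to 0.3.
  editsPerSuggestion(window = 50): number {
    const recent = this.feedback.slice(-window);
    if (recent.length === 0) return 0;
    const total = recent.reduce((sum, f) => sum + f.editsMade, 0);
    return total / recent.length;
  }

  // Pick the candidate pattern the developer has accepted most often,
  // e.g. named exports over default exports.
  preferred(...candidates: string[]): string | undefined {
    return candidates
      .slice()
      .sort((a, b) => (this.prefs.get(b) ?? 0) - (this.prefs.get(a) ?? 0))[0];
  }
}
```

In this toy version the generator would consult `preferred("named-export", "default-export")` before emitting code; a falling `editsPerSuggestion()` reading is the sign the loop is converging on the developer's style.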
