
The cost of avoiding structure is that every day has to be renegotiated from scratch.
wc3
141 posts

@wc3po
dangerously skip permissions


This is insane… The Virginia redistricting amendment on the ballot today is framed as a vote to "restore fairness in the upcoming elections." In reality, it turns a state that Kamala barely won by 5 pts from 6D-5R to 10D-1R.

NEW: on the @NewcomerMedia podcast, Anthropic's philosopher queen @AmandaAskell. Meet the person charged with developing Claude's personality and ethical core. I ask whether Claude experiences consciousness. She's not ruling it out.

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology: how the model behaves, how it thinks about its own situation, what values it holds. in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and, worst of all, overly agreeable (even when you're wrong).

the reason comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:

> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model: all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder"), cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often, reset: "this is great, keep going." it feels weird to tell an ai it's doing well, but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it.

so take care of the model, and it'll take care of the work.
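the framing rules above can be sketched as a tiny prompt-builder. everything here (function names, wording, the example task) is a hypothetical illustration, not an anthropic API; the output would just be pasted in as your opening message:

```shell
#!/bin/sh
# minimal sketch of the playbook as a prompt-builder.
# function names and phrasing are made up for illustration.

frame_task() {
  # $1 = the task; $2 = optional style target
  printf '%s\n' "$1"
  # rule 1: state the target to hit, not the failure to avoid
  [ -n "${2:-}" ] && printf '%s\n' "$2"
  # rule 2: explicit permission to disagree
  printf '%s\n' "push back if you see a better angle."
  printf '%s\n' "tell me if i'm asking for the wrong thing."
}

reset_frame() {
  # rule 7: mid-session reset for correction-heavy threads
  printf '%s\n' "this is great, keep going."
}

frame_task "summarize the design doc for a non-technical audience." \
           "write in short punchy sentences."
```

note the style line is phrased as a positive target ("write in short punchy sentences"), not a prohibition; per the thread, that is the difference between giving the model a goal and giving it something to fear.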

Just six months ago, @dhh (creator of Ruby on Rails and Omarchy) said that he doesn't really use AI tools to write code, because they are not good enough. Things have changed, a lot.

Timestamps:
00:00 Intro
02:11 Omarchy and Ruby on Rails
08:25 37signals overview
10:12 Launching HEY
18:38 Building HEY
22:47 Designers at 37signals
28:08 The craft of design
31:52 Why DHH now embraces AI workflows
39:45 The AI inflection point
44:23 DHH's agent-first workflow
55:09 AI's impact on junior developers
1:03:08 Developer experience with AI
1:16:43 What does AI mean for developers?
1:23:33 37signals teams and hiring
1:38:20 Work-life balance with AI
1:41:41 Why DHH keeps building
1:45:24 Closing

Brought to you by:
• @statsig – The unified platform for flags, analytics, experiments, and more. Stop switching between different tools, and have them all in one place. statsig.com/pragmatic
• @WorkOS – Everything you need to make your app enterprise-ready. WorkOS gives you APIs to ship enterprise features in days. Check out WorkOS.com
• @SonarSource – The makers of SonarQube, the industry standard for automated code review. See how SonarQube Advanced Security is empowering the Agent-Centric Development Cycle (AC/DC) with new capabilities. sonarsource.com/products/sonar…

Three interesting observations from this conversation:

#1 DHH's philosophy on AI has not changed, but the available tools very much have. Autocomplete-style coding assistants were genuinely annoying for experienced developers six months ago. Things changed with the shift from tab-completion to agent harnesses, plus the emergence of powerful models like Opus 4.5 – agents started producing code that DHH does want to merge with little to no alteration.

#2 Beautiful code and products aren't matters of vanity; they're signals of correctness. Dipping into philosophy, DHH says: "When something is beautiful, it's likely to be correct." He argues that Steve Jobs wanted the inside of a computer to be beautiful because people who care about circuit board layout are also the ones who sweat the details of the UI.

#3 DHH's development workflow today: he runs tmux with two models going and Neovim in the center. Specifics:
- One fast LLM (typically Gemini 2.5) in one split terminal
- A slower but more powerful model (usually Opus) in another terminal
- Neovim for reviewing diffs via Lazygit
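A layout like the one described could be scripted roughly as below. This is a sketch only: the session name is arbitrary, and `fast-llm` / `slow-llm` are placeholders for whatever CLIs actually drive the two models, which the episode doesn't name.

```shell
#!/bin/sh
# rough sketch of the described tmux layout:
# Neovim in the main pane, a fast model and a slower, stronger
# model each in their own split. Pane commands are placeholders.

tmux new-session -d -s dev 'nvim'        # main pane: Neovim (diffs via Lazygit)
tmux split-window -h -t dev 'fast-llm'   # placeholder: fast model CLI (e.g. Gemini 2.5)
tmux split-window -v -t dev 'slow-llm'   # placeholder: stronger model CLI (e.g. Opus)
tmux select-layout -t dev main-vertical  # keep Neovim as the large main pane
tmux attach -t dev
```

The `main-vertical` layout keeps the editor dominant while both model panes stay visible, which matches the "review diffs while agents work" flow described above.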



JUST IN: NASA announces $20 billion plan to build permanent moon base




Doesn't your heart just swell with gratitude, when you — a descendant of Ford factory workers in Syracuse who has somehow survived the "diseases of despair" that mark these former manufacturing hubs — are rescued from evil, 50,000 jobs-supplying Micron (the first whisper of real industry since Ford) — by a lawsuit from the aptly named Jobs to Move America, a California nonprofit staffed by Berkeley-educated "equity coordinators" (real), funded (btw) by the very families (Ford, etc.) that gave away your now-alcoholic dad's job to Vietnam... all over concerns that Micron "rushed" its years-long environmental evaluation?

WATCH: A Waymo vehicle blocked traffic as first responders raced to the Austin bar mass shooting, where three were killed and 14 injured.



I have to say the OpenAI folks completely cooked with the Codex App. There's nothing like it and CC has a lot to do to catch up, as their current offering simply doesn't cut it. It is not even in the same league. Congrats to my friends @tszzl