ᵉzᕈuzzle
2.5K posts

ᵉzᕈuzzle
@ezpuzzle
i collect weird records and weirder shirts check out https://t.co/iI3SyMn2eS
San Francisco · Joined April 2009
258 Following · 125 Followers

@mitchellh Vibe coders don't see the value of branching, isolated dev servers, or database migrations, nor do they know anything about tooling. Managing multiple agents operating on shared everything is doomed to fail. CDEs are expensive or slow or crash. Coder+Mux is almost right imo

@jarredsumner llm-optimized ctags for typescript. basically a dense view of type signatures per file with some flags for including or excluding things (e.g. exports, functions, interfaces, etc.), including line number and length.
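A minimal sketch of what such a dense signature index might look like — this is a hypothetical illustration, not the tool described above. It scans TypeScript source with a crude regex and emits one compact `kind name @line (+length)` entry per symbol, with an `exports_only` flag:

```python
import re

# Hypothetical "llm-optimized ctags" sketch: one compact line per symbol,
# with kind, name, starting line, and a rough declaration length in lines.
SIG_RE = re.compile(
    r"^(?P<export>export\s+)?(?P<kind>function|interface|class|type|const)\s+(?P<name>\w+)",
    re.MULTILINE,
)

def signature_index(source: str, exports_only: bool = False) -> list[str]:
    """Return compact 'kind name @line (+len)' entries for an LLM context window."""
    lines = source.splitlines()
    entries = []
    for m in SIG_RE.finditer(source):
        if exports_only and not m.group("export"):
            continue
        start = source.count("\n", 0, m.start()) + 1  # 1-indexed line number
        # Crude length heuristic: scan until a closing brace or a blank line.
        length = 1
        for offset, text in enumerate(lines[start - 1:], start=1):
            length = offset
            if text.rstrip().endswith("}") or (offset > 1 and not text.strip()):
                break
        entries.append(f"{m.group('kind')} {m.group('name')} @{start} (+{length})")
    return entries
```

A real implementation would use the TypeScript compiler API or tree-sitter rather than regexes, but the output shape — dense, line-addressed, filterable — is the point.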

We're proposing an open standard for tracing agent conversations to the code they generate. It's interoperable with any coding agent or interface.
agent-trace.dev

@emollick assuming that somewhere in the training data is a close-to-ideal (minimal) solution for a well-stated (read: easily mappable to an emergent world model in an llm) problem, i think optimizing context for completeness of the local "world" (codebase) is ideal. complexity kills.

@GergelyOrosz @steipete subtext: most people are terrible at prompting

"I don't like pull requests (PRs) any more. A large code change doesn't tell me much about the intent or why it was done.
I now prefer prompt requests. Just share the prompt you ran / want to run.
If I think it's good, I'll run it myself and merge it." - @steipete
wow

Just shipped adversarial-spec, a Claude Code plugin for writing better product specs.
The problem: You write a PRD or tech spec, maybe have Claude review it, and ship it. But one model reviewing a doc will miss things. It'll gloss over gaps, accept vague requirements, and let edge cases slide.
The fix: Make multiple LLMs argue about it.
adversarial-spec sends your document to GPT, Gemini, Grok, or any combination of models you want. They critique it in parallel. Then Claude synthesizes the feedback, adds its own critique, and revises. This loops until every model agrees the spec is solid.
What actually happens in practice: requirements that seemed clear get challenged. Missing error handling gets flagged. Security gaps surface. Scope creep gets caught. One model says "what about X?" and another says "the API contract is incomplete" and Claude adds "you haven't defined what happens when Y fails."
By the time all models agree, your spec has survived adversarial review from multiple perspectives.
Features:
- Interview mode: optional deep-dive Q&A before drafting to capture requirements upfront
- Early agreement checks: if a model agrees too fast, it gets pressed to prove it actually read the doc
- User review period: after consensus, you can request changes or run another cycle
- PRD to tech spec flow: finish a PRD, then continue straight into a technical spec based on it
- Telegram integration: get notified on your phone, inject feedback from anywhere
Works with OpenAI, Google, xAI, Mistral, Groq, DeepSeek. Leveraging more models results in stricter convergence.
If you're building something and writing specs anyway, this makes them better.
Check it out and let me know what you think!
github.com/zscole/adversa…
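The critique loop described above can be sketched roughly like this — a hypothetical simplification, with the model calls stubbed out (the real plugin wires these to actual API clients and a Claude synthesis step):

```python
from typing import Callable

# Each critic takes the current spec text and returns either a critique
# or the sentinel "AGREE". In the real plugin these wrap LLM API calls.
Critic = Callable[[str], str]

def adversarial_review(
    spec: str,
    critics: list[Critic],
    revise: Callable[[str, list[str]], str],
    max_rounds: int = 5,
) -> tuple[str, int]:
    """Gather critiques, revise, repeat until every critic agrees (or we give up)."""
    for round_no in range(1, max_rounds + 1):
        critiques = [c(spec) for c in critics]           # fan out to each model
        open_issues = [c for c in critiques if c != "AGREE"]
        if not open_issues:                              # consensus reached
            return spec, round_no
        spec = revise(spec, open_issues)                 # synthesizer folds feedback in
    return spec, max_rounds                              # hit the round cap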

@keysmashbandit leaderboards for daily tasks. toothbrush coverage. food waste. trash can on a scale that sends you a nightly notification of added mass.
ᵉzᕈuzzle reposted

@LewistheLight1 i like listening to jazz music records and ecm, live in San Francisco (on divis), and have kids. I'm as tech as it gets though unfortunately. lemme know if u change your mind.

@hardfiled that's the one. wonder book was the other place but now they overprice everything. I'm out in SF now so haven't been digging in Frederick for a while.
