
🅺evin 🅶oldsmith 🇺🇦
@KevinGoldsmith
CTO @ DistroKid, Principal @ Nimble Autonomy. Board Member & Speaker. Author of "It Depends: Writing on Technology Leadership 2012-2022." Let's talk on BlueSky.

Just shipped adversarial-spec, a Claude Code plugin for writing better product specs.

The problem: You write a PRD or tech spec, maybe have Claude review it, and ship it. But one model reviewing a doc will miss things. It'll gloss over gaps, accept vague requirements, and let edge cases slide.

The fix: Make multiple LLMs argue about it. adversarial-spec sends your document to GPT, Gemini, Grok, or any combination of models you want. They critique it in parallel. Then Claude synthesizes the feedback, adds its own critique, and revises. This loops until every model agrees the spec is solid.

What actually happens in practice: requirements that seemed clear get challenged. Missing error handling gets flagged. Security gaps surface. Scope creep gets caught. One model says "what about X?" and another says "the API contract is incomplete" and Claude adds "you haven't defined what happens when Y fails." By the time all models agree, your spec has survived adversarial review from multiple perspectives.

Features:
- Interview mode: optional deep-dive Q&A before drafting to capture requirements upfront
- Early agreement checks: if a model agrees too fast, it gets pressed to prove it actually read the doc
- User review period: after consensus, you can request changes or run another cycle
- PRD to tech spec flow: finish a PRD, then continue straight into a technical spec based on it
- Telegram integration: get notified on your phone, inject feedback from anywhere

Works with OpenAI, Google, xAI, Mistral, Groq, and Deepseek. Using more models results in stricter convergence.

If you're building something and writing specs anyway, this makes them better. Check it out and let me know what you think! github.com/zscole/adversa…
