Shawn Simister

4K posts


@narphorium

Building AI powered tools to augment human creativity and problem solving. Previously @GitHub and @Google 🇨🇦

San Francisco · Joined April 2007
2.6K Following · 2.1K Followers
Pinned Tweet
Shawn Simister @narphorium
I've been thinking about why verifying AI agent output feels so much harder than writing the spec that produced it. That question led me to rethink where my attention actually belongs in the process, and eventually to build atelier.dev narphorium.com/blog/decision-…
Shawn Simister reposted
Peter Petrash @petekp
thinking about the idea of a 'method' as an abstraction layer above 'skills' when working with coding agents

while versatile, it's a pain having to invoke individual skills, often manually when the agent pauses between chunks of work. you're babysitting when you already have a pretty well-defined process: debugging, prototyping, architecting, brainstorming, etc. a single skill is too blunt an instrument

what if you instead apply a method: a modular sequence of steps, each with multiple, customizable skills assigned to it. you can configure how in-the-loop you want to be for any given method. for conceptual / planning methods, you want to be a lot more hands-on. execution steps can be autopiloted.

methods could be self-improving and composable, steps could be run in parallel, blah blah... anyways, i'm building this into Capacitor so will find out soon whether this is a useful abstraction or not.
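The 'method' idea above can be sketched as plain data: a method is an ordered list of steps, each bundling skills plus an autonomy flag. This is a minimal sketch under assumed names (`Step`, `Method`, `run` are hypothetical, not Capacitor's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    skills: list[str]            # skills the agent may invoke at this step
    autopilot: bool = False      # True = run without pausing for the human

@dataclass
class Method:
    name: str
    steps: list[Step] = field(default_factory=list)

    def run(self, invoke, confirm):
        """Walk the steps; hands-on steps wait for human confirmation."""
        for step in self.steps:
            if not step.autopilot and not confirm(step):
                continue  # human skipped this hands-on step
            for skill in step.skills:
                invoke(skill)

# A 'debugging' method: planning is hands-on, execution is autopiloted.
debugging = Method("debugging", [
    Step("reproduce", ["run-tests", "capture-logs"]),
    Step("hypothesize", ["read-code", "bisect"]),
    Step("fix", ["edit", "run-tests"], autopilot=True),
])
```

The autonomy dial from the tweet is just `confirm`: a chatty UI prompt for planning methods, a no-op that always approves for execution-heavy ones.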
Jared Zoneraich @imjaredz
We are reinventing subroutines from first principles. And I'm loving every minute of it
Dan Shipper 📧 @danshipper

codex seems to lose track of its subagents sometimes and forget to push them forward. the fix is to use a heartbeat sweep. just queue this up in your orchestrator thread:

Heartbeat sweep. Do a full orchestrator pass across all work in flight right now and keep executing until I explicitly stop you. Your job on each sweep:

1. Check every active subagent and every active lane of work.
2. Verify each subagent is actually making progress, not just alive.
3. If a subagent is stalled, vague, finished early, or stopped monitoring, intervene:
   - clarify its task
   - tighten the objective
   - restart or replace it if needed
   - make sure replacement agents use `gpt-5.4` with `high` reasoning
4. Pull any useful findings back into the main plan and update priorities based on new evidence.
5. Look for opportunities to parallelize more work. If there is unused budget/capacity and meaningful work remains, start another useful lane.
6. Do not let important lanes drift without ownership. Every critical thread should have either:
   - an active subagent,
   - active local execution,
   - or an explicit reason it is blocked.
7. Keep pushing the next concrete step on every active lane instead of waiting passively.
8. If a lane is blocked, say exactly why, what unblocks it, and whether another lane should be advanced instead.
9. Keep monitoring production/incident risk continuously while advancing forward work in parallel.
10. Do not stop at status reporting. After the sweep, take the next actions.

Current priorities:
- Find the actual root cause of why `05d233f4` was blocked in production, and turn that into a concrete fix/retry path so we know when it can safely go back to prod.
- Keep `ot9` moving toward green/staging through the proper review and validation path.
- Keep prod watch active and reattach/restart any watcher that goes quiet.
- Keep the deeper collab pathology lane moving toward source certainty, not just containment.
- Keep the malformed-JSON `/ops` lane advancing in parallel.

On every heartbeat, report back briefly with:
- each active lane
- current owner
- whether it is progressing / stalled / blocked
- the next action being taken
- any reprioritization you made
- any new lane you started

Then continue execution immediately.

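Mechanically, a heartbeat sweep is just re-queueing that prompt on a timer so stalled subagents get nudged each interval. A minimal sketch; `queue_prompt` stands in for however your orchestrator thread accepts messages (it is not a real codex API):

```python
import threading

# The sweep text from the tweet above, abridged here.
HEARTBEAT_PROMPT = (
    "Heartbeat sweep. Do a full orchestrator pass across all work "
    "in flight right now and keep executing until I explicitly stop you."
)

def start_heartbeat(queue_prompt, interval_s=300.0):
    """Queue the sweep prompt every `interval_s` seconds until stopped.

    Returns a threading.Event; set it to end the heartbeat.
    """
    stop = threading.Event()

    def loop():
        # Event.wait returns True once `stop` is set, ending the loop.
        while not stop.wait(interval_s):
            queue_prompt(HEARTBEAT_PROMPT)

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

The point of the timer (versus queueing the sweep once) is that a single sweep only rescues lanes that are stalled *now*; the cadence is what keeps later stalls from going unnoticed.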
Shawn Simister reposted
Daniel Buschek @DBuschek
Typical chatbots force co-writers to leave shared docs. Our #CHI2026 paper explores collaborative AI use in shared docs via 3 features: 🤖 Shared agent profiles ☑️ Repeatable tasks, triggered by users or system 💬 Agents respond in shared comments Preprint in 🧵 w/ @flolehmann_de
Shawn Simister @narphorium
When the agent finishes a task I launch atelier-verify to automatically verify the acceptance criteria and move it to the done column if they're all met
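That verify-then-move step can be sketched as: run every acceptance check on the task, and advance it to the done column only when all of them pass. The names here (`Task`, `verify`, the criteria-as-callables shape) are assumptions for illustration, not atelier-verify's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    column: str = "in_progress"
    criteria: dict = field(default_factory=dict)  # name -> zero-arg check returning bool

def verify(task):
    """Run every acceptance check; move the task to 'done' only if all pass."""
    results = {name: bool(check()) for name, check in task.criteria.items()}
    if results and all(results.values()):
        task.column = "done"
    return results  # per-criterion outcomes, for display on the board

task = Task("add login form", criteria={
    "form renders": lambda: True,
    "tests pass": lambda: True,
})
verify(task)
```

Returning the per-criterion results, rather than just a pass/fail, is what lets a board show *which* criterion blocked the move.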
Shawn Simister @narphorium
Plans update to show how many tasks have been completed. Tasks show how many acceptance criteria are met.
[image]
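Both roll-up numbers are simple counts over the same acceptance-criteria data. A sketch, assuming criteria are stored as name-to-met mappings (a hypothetical shape, not atelier's schema):

```python
def task_progress(criteria):
    """criteria: dict of name -> bool (criterion met?). Returns (met, total)."""
    met = sum(1 for ok in criteria.values() if ok)
    return met, len(criteria)

def plan_progress(tasks):
    """tasks: list of criteria dicts. A task counts as done when all criteria are met."""
    done = sum(1 for c in tasks if c and all(c.values()))
    return done, len(tasks)
```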
Shawn Simister @narphorium
I added realtime notifications so that when I'm running multiple agents in parallel I can scan the board and see which ones need my attention. Clicking on the yellow text automatically switches to the terminal for that agent
[GIF]
Shawn Simister @narphorium
Any markdown file on the kanban shows action buttons so you can launch the agent straight from the editor. And the knowledge graph connections let you easily jump between related task files
[image]
Shawn Simister @narphorium
The biggest change in Atelier.dev v0.2 is that specs now open in a custom Markdown editor. So now every task gets its own "plan mode"
[image]
Shawn Simister reposted
rauno @raunofreiberg
Conversation minimap for the new Vercel Support chat interface
Shawn Simister reposted
Jocelyn Shen @jocelynjshen
Excited to share our #CHI2026 paper “Texterial: A Text-as-Material Interaction Paradigm for LLM-Mediated Writing” (done during internship at Microsoft Research) We imagine interacting with LLMs by treating text as a material like plants/clay. 📃arxiv.org/pdf/2603.00452 🧵[1/n]
Shawn Simister @narphorium
Big design up front sacrifices agility. Vibe-coding eventually hits a wall. Finding a way to control entropy while prioritizing exploration and innovation is the challenge. We're redefining what it means to be in a state of flow.
Shawn Simister @narphorium
Atelier started as a set of custom agent skills that I've been iterating on as I moved from chat-based development to spec-driven development. The kanban board is how I organize it all, but it's just a window onto the workflow.
[image]
Shawn Simister @narphorium
@playwhatai Tests are a big part of it for sure. The trick is making sure they're testing the right things. Having clear acceptance criteria helps a lot with that
brucehuang @playwhatai
@narphorium vibe coding + tests = survivable. without tests = pain
Shawn Simister @narphorium
No tool is magically going to save you from the technical debt of vibe coding. Verifiability has to be baked into every step. Atelier attaches specific acceptance criteria to every task and automatically verifies them.
[image]
Shawn Simister @narphorium
One of my favorite little details is that you can hover over blocked tasks to quickly see the blocking tasks or hover over a plan to see which subtasks are completed