plannotator

580 posts

@plannotator

Annotate agent plans and review code visually. Share with your team, iterate, and send feedback to your agent with one click • https://t.co/3Gg5SohuqE

Joined January 2026
39 Following · 1.1K Followers
plannotator @plannotator
Plannotator 0.19.17 - setup goal skill reworked to a simpler, facts-based approach - added `plannotator --version`
0 replies · 0 reposts · 9 likes · 373 views
fucory @FUCORY
Introducing npx claude-p, a drop-in replacement for claude -p
31 replies · 54 reposts · 445 likes · 228.1K views
plannotator @plannotator
@pyrons_ @FUCORY I've been thinking I can just embed the agents in the browser & integrate the app. Do you think users would like this better? Prototype made with /plannotator-setup-goal
1 reply · 0 reposts · 0 likes · 26 views
plannotator reposted
Michael Ramos @backnotprop
The more I use `/goal`, the more I want to ensure the model gets simple facts right up front. So I use a version of @mattpocockuk's grill-me focused on @Everlier's methodology around facts. Facts allow me to describe very clear requirements for a feature or system - and not always that technical. The more I do with AI, the less patient I get, so I can't read a ton of markdown, but I can afford to read and verify simple lists of facts. Once facts are established, I let the agent refine a plan for itself (every decision aligned to those facts) so it can figure out the order of operations. I have been having a lot of fun with this process. Composition by @editframe
9 replies · 8 reposts · 127 likes · 8.1K views
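To make the facts-first flow concrete, here is a minimal TypeScript sketch of facts as checkable data; the `Fact`/`PlanStep` shapes and the `planAligns` helper are hypothetical illustrations of "every decision aligned to those facts", not plannotator's or the `/goal` skill's actual code.

```typescript
// Hypothetical sketch: a feature described as a short list of
// verifiable facts, plus a check that every plan step cites one.
type Fact = { id: string; statement: string };

type PlanStep = { description: string; supportedBy: string[] };

const facts: Fact[] = [
  { id: "F1", statement: "Sessions are listed newest-first." },
  { id: "F2", statement: "Opening a session never mutates it." },
  { id: "F3", statement: "The CLI exits non-zero on invalid input." },
];

// Every decision in the plan must trace back to an established fact.
function planAligns(plan: PlanStep[], facts: Fact[]): boolean {
  const known = new Set(facts.map((f) => f.id));
  return plan.every((step) => step.supportedBy.every((id) => known.has(id)));
}

const plan: PlanStep[] = [
  { description: "Sort session index by created-at descending", supportedBy: ["F1"] },
  { description: "Open sessions read-only", supportedBy: ["F2"] },
];

console.log(planAligns(plan, facts)); // true: each step cites a fact
```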
plannotator @plannotator
@shinedairz I've been iterating a bunch on this & have refined the skill to a much simpler, yet maybe more powerful, state. A lot easier for you to review - it focuses on the facts of the thing you want. github.com/backnotprop/pl…
0 replies · 0 reposts · 0 likes · 17 views
smolco lombardi @shinedairz
@plannotator Hey, just wanted to say that I'm having a blast using this more and more. To give some feedback: I noticed that whenever I use the goal setup skill within Codex, it will sometimes be impatient and run "plannotator sessions --open 1" while I'm still reading - can this be mitigated?
2 replies · 0 reposts · 3 likes · 714 views
plannotator reposted
Rohan Mukherjee @roerohan
Built a skill that reorders any diff into a narrative walkthrough, then opens it in @plannotator so you can annotate inline. The agent reads your comments and fixes the code, then regenerates the walkthrough so you can go again. Point it at a remote PR and it'll still generate the walkthrough (just without the fix loop).
2 replies · 2 reposts · 15 likes · 668 views
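As a rough illustration of the reordering idea (not @roerohan's actual skill), a walkthrough generator might rank changed files so readers meet shared types before implementation and implementation before tests; the `narrativeRank` heuristic and the sample paths below are hypothetical.

```typescript
// Hypothetical sketch of narrative ordering for a diff walkthrough:
// rank changed files so types come first, then implementation, then tests.
type ChangedFile = { path: string; hunks: string[] };

function narrativeRank(path: string): number {
  if (/\.d\.ts$|types?/i.test(path)) return 0; // shared types and contracts first
  if (/test|spec/i.test(path)) return 2;       // tests close the story
  return 1;                                    // core implementation in between
}

function orderForWalkthrough(files: ChangedFile[]): ChangedFile[] {
  return [...files].sort((a, b) => narrativeRank(a.path) - narrativeRank(b.path));
}

const diff: ChangedFile[] = [
  { path: "src/session.test.ts", hunks: ["@@ -1,3 +1,9 @@"] },
  { path: "src/types.ts", hunks: ["@@ -4,0 +5,2 @@"] },
  { path: "src/session.ts", hunks: ["@@ -10,5 +10,12 @@"] },
];

// Prints types.ts, session.ts, session.test.ts in that order.
console.log(orderForWalkthrough(diff).map((f) => f.path));
```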
plannotator @plannotator
Now in Plannotator code review - code refs may be the spiciest diffs in all the land (diffs lib by @pierrecomputer)
3 replies · 2 reposts · 33 likes · 2.6K views
plannotator @plannotator
@pyrons_ @JinjingLiang @orca_build I understand that people are gonna hack ways to pty a session and SDK over it. But with prefill, I'm not understanding the advantage over just prompting directly. I have a lot of automated aliases that do this.
0 replies · 0 reposts · 1 like · 35 views
plannotator @plannotator
Plannotator 0.19.15 released

Plan/Annotate:
- Copyable hook path + creation guidance in Settings Hooks tab (custom plan instructions)
- (fix) Loose list continuation content now indents correctly under parent bullet

Code Review:
- jj evolution history diff mode
- Pick a specific commit as the diff base
- (fix) File comment drafts persist across close/reopen
- (fix) GitLab: concatenated JSON pagination on large MRs
- (fix) GitLab: unposted inline comments saved locally on API timeout

OpenCode:
- (fix) Commands intercepted before LLM, preventing large files from blowing up agent context

Codex:
- Hooks feature flag updated to match current CLI
- Install script now includes Codex setup guidance

Other:
- (fix) PLANNOTATOR_PORT=0 accepted without spurious warning
- (integrity) CI id-token:write scoped to AWS OIDC jobs only
1 reply · 0 reposts · 7 likes · 706 views
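For context on the `PLANNOTATOR_PORT=0` fix: in Node, listening on port 0 asks the OS for any free ephemeral port, so 0 is a legitimate value rather than a misconfiguration. A minimal sketch of that pattern, assuming a Node runtime (the env-var handling here is illustrative, not plannotator's actual code):

```typescript
import http from "node:http";

// Port 0 tells the OS to assign any free ephemeral port,
// so it should be accepted without a warning.
const port = Number(process.env.PLANNOTATOR_PORT ?? 0);

const server = http.createServer((_req, res) => res.end("ok"));

server.listen(port, () => {
  const addr = server.address();
  if (addr && typeof addr === "object") {
    console.log(`listening on ${addr.port}`); // actual port chosen by the OS
  }
});
```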
plannotator reposted
Michael Ramos @backnotprop
Planned out and ran a low-stakes overnight goal to create pseudo-LSP capabilities. Should I ship it?
6 replies · 1 repost · 15 likes · 3K views
plannotator @plannotator
Plannotator 0.19.14 released today

Plan/Annotate:
- Visual Explainer skill + HTML render-annotate mode
- Code-file line range (`file.ts:10-20`) hover previews
- Hooks: let agent know about GFM syntax and allow you to customize instructions (tell your agent how to plan)
- Rendering optimization (fewer re-renders, Sonner toast migration)

Code Review:
- Configure tab size
- Configure diff line background intensity
- Ask AI now available in all-files diff view
- jj default target resolves dynamically from `trunk()`

OpenCode:
- Planning agents with display names or special characters now resolve correctly
1 reply · 0 reposts · 24 likes · 985 views
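To show what a `file.ts:10-20` reference involves, here is a hedged TypeScript sketch of parsing such line-range refs; the `REF_PATTERN` regex and `LineRangeRef` shape are assumptions for illustration, not plannotator's implementation.

```typescript
// Hypothetical parser for refs like "src/app.ts:10-20" or "app.ts:42".
type LineRangeRef = { path: string; start: number; end: number };

const REF_PATTERN = /^(?<path>[\w./-]+\.\w+):(?<start>\d+)(?:-(?<end>\d+))?$/;

function parseRef(text: string): LineRangeRef | null {
  const m = REF_PATTERN.exec(text.trim());
  if (!m?.groups) return null;
  const start = Number(m.groups.start);
  const end = m.groups.end ? Number(m.groups.end) : start; // single line if no range
  return end >= start ? { path: m.groups.path, start, end } : null;
}

console.log(parseRef("file.ts:10-20")); // { path: "file.ts", start: 10, end: 20 }
console.log(parseRef("not a ref"));     // null
```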
Chris Tate @ctatedev
New proposal: mdxg.org
Markdown Experience Guidelines

A spec for how interfaces should present Markdown
Virtual pages, navigation, search, theming
Any .md file, zero changes

Reference implementation below
Would love your input
23 replies · 27 reposts · 500 likes · 35.4K views
Grant Rowberry @grantrowberry
@0xSero Where can one find this heralded visual-explainer skill?
1 reply · 0 reposts · 3 likes · 1.9K views
0xSero @0xSero
visual-explainer skill does so much heavy lifting here.

Andrej Karpathy @karpathy
This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains is a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage of this:
1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on graphics, layout, even interactivity) <-- early but forming a new good default
...4, 5, 6, ... n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive video generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture to things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what's worth exploring at the current stage, hot tip: try asking for HTML.
9 replies · 8 reposts · 453 likes · 63.2K views
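As a minimal sketch of the "ask for HTML" tip from the quoted tweet, the snippet below appends the instruction to a query and writes the reply to a file you can open in a browser; `callLLM` is a hypothetical stub standing in for whatever client you actually use.

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical stand-in for your LLM client of choice; replace with a real call.
async function callLLM(prompt: string): Promise<string> {
  return `<html><body><pre>${prompt}</pre></body></html>`; // echo stub
}

async function askAsHtml(query: string): Promise<void> {
  // The tip: append the HTML instruction to the end of the query.
  const prompt = `${query}\n\nStructure your response as HTML.`;
  const html = await callLLM(prompt);
  writeFileSync("response.html", html); // then open response.html in a browser
}

askAsHtml("Explain how jj evolution history differs from git reflog.");
```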