Sebastian Wenninger
@_swngr_
16 posts · Joined January 2026
108 Following · 0 Followers
Sebastian Wenninger @_swngr_
@badlogicgames Love it! I really think adding an agent in the editor itself would add value for teaching/training purposes. Pair-speccing 😬
Mario Zechner @badlogicgames
ah shit, here we go ...
[image attached]
Sebastian Wenninger @_swngr_
@badlogicgames Not an expert, but maybe WASM could also help here? Sandboxing built in and modular components kinda are its thing as far as I understood.
Mario Zechner @badlogicgames
literally been thinking about how i can make a GUI version of pi. easy if you control everything, but extensions make this tricky. thoughts so far:
- limit to web stack, native apps are dead
- web components with and without shadow root?
- how to transmit over wire in remote case
can't come up with something as easy for extension writers as the gui stuff. kinda sucks :/
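One way to square "easy for extension writers" with the remote case (a minimal sketch of the general idea, not Mario's design; all names here are made up): have extensions describe their UI as a plain data tree the host serializes, instead of handing them live DOM. The host can render the tree as web components locally or ship the same JSON over a socket.

```python
import json

# Hypothetical sketch: an extension builds a declarative UI tree instead of
# touching the DOM directly. The host renders it locally or sends the JSON
# over the wire in the remote case.

def ui(tag, props=None, *children):
    """Build one node of a plain-data UI tree."""
    return {"tag": tag, "props": props or {}, "children": list(children)}

# What an extension author would write:
panel = ui("panel", {"title": "Todo"},
    ui("text", {"value": "3 tasks remaining"}),
    # Events are referenced by name, not by callback, so the remote side
    # can just echo {"event": "todo.refresh"} back to the extension.
    ui("button", {"label": "Refresh", "onClick": "todo.refresh"}),
)

wire = json.dumps(panel)              # trivially transmittable
assert json.loads(wire) == panel      # round-trips losslessly
```

The design trade-off is the usual one: a declarative tree is easy to sandbox and transmit, but extensions lose direct access to browser APIs, which is exactly the tension the tweet describes.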
OpenCode @opencode
OpenCode Go is a low-cost ($10/month) subscription designed to bring agentic coding to programmers around the world. It provides generous limits and reliable access to the most capable open source models. Run /connect and select OpenCode Go to get started.
Pierce Boggan @pierceboggan
VS Code 🤝 GitHub Copilot CLI
For folks using both together, what should we prioritize improving in the experience?
Evan Boyle @_Evan_Boyle
@Grynn What can we improve in Copilot CLI?
Sebastian Wenninger @_swngr_
@_Evan_Boyle Is there a way in Copilot CLI to pin the standard subagents to specific models? e.g. if I always wanna use codex for reviews
Evan Boyle @_Evan_Boyle
@atthatmatt Exactly, "AgentFS" is another way we've talked about this inside the team.
Evan Boyle @_Evan_Boyle
We've been working on something internally called "infinite sessions". When you're in a long session, repeated compactions result in nonsense. People work around this in lots of ways, usually temporary markdown files in the repo that the LLM can update - the downside being that in team settings you have to juggle these artifacts, as they can't be included in the PR.

Infinite sessions solves all of this: one context window that you never have to worry about clearing, and an agent that can track the endless thread of decisions.
[image attached]
Scott Hanselman 🌮 @shanselman
Something is cooking in GitHub #copilot
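The markdown-file workaround the tweet mentions can be sketched in a few lines (all paths and names here are hypothetical, not Copilot's implementation): an append-only decision log kept in a state directory outside the tracked tree (e.g. gitignored), which the agent re-reads after every compaction.

```python
from pathlib import Path

# Hypothetical sketch of the workaround described above: an append-only
# decision log that survives context compaction. ".agent-state/" is a
# made-up directory name; keeping it out of version control avoids the
# "juggle these artifacts out of PRs" problem in team settings.

STATE = Path(".agent-state")

def record_decision(text: str) -> None:
    """Append one durable decision to the log on disk."""
    STATE.mkdir(exist_ok=True)
    with open(STATE / "decisions.md", "a", encoding="utf-8") as f:
        f.write(f"- {text}\n")

def recall_decisions() -> str:
    """Re-injected into the prompt after each compaction, so the thread
    of decisions survives even when the transcript doesn't."""
    path = STATE / "decisions.md"
    return path.read_text(encoding="utf-8") if path.exists() else ""

record_decision("use SQLite, not Postgres, for the session store")
record_decision("retry HTTP calls at most 3 times")
print(recall_decisions())
```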
Gordon Mickel @gmickel
Go for it! I've been able to port it to @opencode with 100% parity, will release that soon.

WRT the review cycle: Nothing is discarded! If the reviewing LLM returns NEEDS_WORK or MAJOR_RETHINK, Claude implements the reviewer's feedback and re-submits for review. This happens in the same chat session (preserves context). This fix→review cycle continues until SHIP or MAX_ATTEMPTS_PER_TASK (default 5). If max attempts are hit, the task gets marked blocked for human review. This was part of flow-next before Ralph mode and works really, really well.
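The fix→review cycle described above reduces to a small loop. A sketch, with `implement`, `review`, and `mark_blocked` as hypothetical stand-ins for the real agent calls (flow-next's actual wiring differs):

```python
# Sketch of the fix→review cycle: keep resubmitting until the reviewer
# says SHIP, or block the task for a human after too many attempts.

MAX_ATTEMPTS_PER_TASK = 5
SHIP, NEEDS_WORK, MAJOR_RETHINK = "SHIP", "NEEDS_WORK", "MAJOR_RETHINK"

def run_task(task, implement, review, mark_blocked):
    feedback = None
    for attempt in range(1, MAX_ATTEMPTS_PER_TASK + 1):
        work = implement(task, feedback)   # same chat session: context preserved
        verdict, feedback = review(work)   # a second model reviews the work
        if verdict == SHIP:
            return "shipped", attempt
        # NEEDS_WORK / MAJOR_RETHINK: loop, feeding the reviewer's
        # feedback back into the next implementation pass.
    mark_blocked(task)                     # max attempts hit: human takes over
    return "blocked", MAX_ATTEMPTS_PER_TASK
```

The key property is that a failed review never exits the loop silently: the only terminal states are SHIP or an explicit human-review block.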
Gordon Mickel @gmickel
I just published a new version of flow-next that lets you use @OpenAI Codex as the reviewer instead of @RepoPrompt. I still heavily recommend RepoPrompt, @pvncher did great work on the context builder, but this lets all non-mac users use Flow-Next and its Ralph mode!
Gordon Mickel @gmickel

Looked at Anthropic's ralph-wiggum plugin again. Single session, accumulating context, no re-anchoring. This really defeats the purpose.

The original vision by @GeoffreyHuntley:
- Fresh context per iteration
- File I/O as state (not transcript)
- Dumb bash loop, deterministic setup

Anthropic's version? Stop hook that blocks exit and re-feeds the prompt in the SAME session. Transcript grows. Context fills up. Failed attempts pollute future iterations.

The irony: Anthropic's own long-context guidance says "agents must re-anchor from sources of truth to prevent drift." Their plugin doesn't re-anchor. At all.

---

flow-next follows the original vision:
✅ Fresh context per iteration (external bash loop)
✅ File I/O as state (.flow/ directory)
✅ Deterministic setup (same files loaded every iteration)

Plus Anthropic's own guidance:
✅ Re-anchor EVERY task - re-read epic spec, task spec, git state
✅ Re-anchor after context compaction too (compaction shouldn't happen, but if it does, we're set)

Plus what we added:
✅ Multi-layered quality gates: tests, lints, acceptance criteria, AND cross-model review via @RepoPrompt
✅ Reviews block until SHIP - not "flag and continue"
✅ Explicit plan → work phases - plan reviewed before code starts
✅ Auto-blocks stuck tasks after N failures
✅ Structured task management - dependencies, status, evidence

Two models > one. Process failures, not model failures. Agents that actually finish what they start.

github.com/gmickel/gmicke…
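The "dumb loop, fresh context, file I/O as state" pattern from the quoted tweet can be sketched as an outer driver loop. `run_agent` is a hypothetical stand-in for spawning a brand-new agent process; the only memory between iterations is what's on disk (a `.flow/`-style state directory, per the tweet):

```python
from pathlib import Path

# Sketch of the original Ralph vision: an external loop that re-anchors the
# agent from the same files every iteration. Nothing from iteration i-1
# survives into iteration i except what the agent wrote to the state dir.

def ralph_loop(state_dir: Path, run_agent, max_iters: int = 10) -> int:
    state_dir.mkdir(exist_ok=True)
    for i in range(max_iters):
        # Deterministic setup: re-read the same source-of-truth files
        # at the top of every iteration (re-anchoring).
        anchor = {p.name: p.read_text() for p in sorted(state_dir.glob("*.md"))}
        if anchor.get("status.md", "").strip() == "DONE":
            return i                  # all tasks finished
        # Fresh context: conceptually a brand-new agent process whose only
        # input is the anchor files, not the previous transcript.
        run_agent(anchor, state_dir)
    return max_iters
```

Because failed attempts only persist if the agent deliberately records them, a bad iteration can't pollute the next one's context the way an ever-growing single-session transcript does.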
dax @thdxr
idk how this fits into git but opencode under the hood has information on which hunks in a diff were ai generated

i feel like this is important info that needs to be surfaced but i also don't know how best to layer this into git

i guess they'd have to be committed separately with a different committer?
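The "different committer" idea maps onto a standard git mechanism: committer identity can be overridden per command via the `GIT_COMMITTER_NAME`/`GIT_COMMITTER_EMAIL` environment variables. A hypothetical sketch (not opencode's implementation; `hunks` as `(origin, patch_text)` pairs is an assumed shape):

```python
import os
import subprocess

# Sketch of layering hunk provenance onto git: stage and commit human and
# AI hunks as separate batches, with the AI batch carrying a distinct
# committer identity via git's standard environment variables.

def split_by_origin(hunks):
    """Partition (origin, patch_text) pairs into human and AI batches."""
    human = [patch for origin, patch in hunks if origin == "human"]
    ai = [patch for origin, patch in hunks if origin == "ai"]
    return human, ai

def commit_batch(patches, message, ai_generated=False, repo="."):
    env = None
    if ai_generated:
        # Real git mechanism: committer identity comes from the environment.
        env = {**os.environ,
               "GIT_COMMITTER_NAME": "AI Agent",
               "GIT_COMMITTER_EMAIL": "agent@example.invalid"}
    for patch in patches:
        # "-" tells git apply to read the patch from stdin; --cached stages it.
        subprocess.run(["git", "apply", "--cached", "-"],
                       input=patch.encode(), cwd=repo, check=True)
    subprocess.run(["git", "commit", "-m", message],
                   cwd=repo, env=env, check=True)
```

Tools like `git log --format='%cn'` would then surface provenance per commit, though as the tweet notes, splitting one logical change into two commits is itself an awkward trade-off.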