
Mario Zechner (@badlogicgames)
Recommended viewing. "What if you could do 600 commits per day and none of it was slop? This is what Peter claims he is doing" I would be very surprised if @steipete actually claimed that :D youtu.be/8lF7HmQ_RgY?si…
Mario Zechner (@badlogicgames)
It's great @GergelyOrosz spent so much time on non-clawdbot things. PSPDFKit's content marketing strategy back in the day was really good. Deeply technical blogs that built trust. Not sure this still works in 2026. Peter being knee-deep in support himself for such a long time also helped shape PSPDFKit into what it became. Peter was actually a great source of wisdom back around 2014 when I dipped my toes into start-upy things. Got to enjoy the terrace of his old flat with my co-conspirator in all things 2D skeletal animation back then, to get some advice and exchange war stories. Good times.
Mario Zechner (@badlogicgames)
They also talk about people management skills being a prerequisite to being good at agent management. I think that is true only to some extent (having both deep tech and mgmt experience myself). It comes down to trust and learning abilities. I can build up trust with a human coworker by observing their work output, giving feedback, and observing how they integrate it, and I can rely on them being generally able to learn from mistakes. They can also proactively come and ask for help. All without me needing to babysit them.

In the case of clankers, that trust is simply not there yet, as they still make too many absolutely idiotic mistakes (the CVEs on clawdbot's tracker are evidence of that, no shade, shit happens). You can try to "teach" them by encoding lessons learned into resources that automatically go into the context, but there's only so much you can stuff into the context. You can further coerce them into producing better outputs with feedback loops like type checks, lints, tests, whatever. But these also have limits. Especially if you let your clanker write the tests.

Until that changes, and I think there are technical limitations that make it unlikely to change in the near future, specifically wrt context and how current long-context capabilities are basically smoke and mirrors, there is still massive value in checking in on your clanker's output more than you would with a human coworker. That does not mean you have to check and know every line of your software. But not looking at the code at all is also a recipe for disaster eventually. We have plenty of examples to demonstrate that. Famously, Claude Code is now basically 100% vibe coded. And I think it shows. The same is true for clawdbot (again, no shade, single-person project, massive scope, all good, except when it comes to security). I think you are still kidding yourself if you think you just need to prompt well and your swarm of agents does the rest.
With current SOTA models, the real art of "agentic coding" is to develop a sense for what you can trust the model with and what needs human intervention and supervision. The problem with that is that model behaviour is not deterministic: intuition for one model doesn't transfer to other models, even within the same family or reasoning level; intuitions for one coding harness don't transfer to other harnesses; and so on. Typing out boilerplate was never the problem. Clankers are good at that. You can also trust them to type out variations of established patterns and algorithms for the most part. But software is still fucking hard. It gets harder if you trick yourself into thinking the clanker can take on the hard parts.
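The feedback-loop idea mentioned above (type checks, lints, tests gating agent output, with limits on how far that gets you) can be sketched in a few lines. Everything here is a hypothetical placeholder, not any real harness's API: `generate` stands in for one agent turn, and the check functions stand in for a type checker or linter.

```python
# Minimal sketch of a deterministic feedback loop around agent output.
# All names are illustrative; no real model or tool is called here.

def feedback_loop(generate, checks, max_rounds=3):
    """Call `generate(feedback)` until every check passes or rounds run out.

    `generate` is a stand-in for one agent turn; each check maps the
    produced artifact to an error message, or None if it passes.
    """
    feedback = []
    for _ in range(max_rounds):
        artifact = generate(feedback)
        # Collect every failing check's message as the next round's feedback.
        feedback = [msg for check in checks if (msg := check(artifact)) is not None]
        if not feedback:
            return artifact, True   # all checks green
    return artifact, False          # limits reached; needs human review

# Toy usage: a "model" that only fixes issues it is explicitly told about.
def toy_generate(feedback):
    return "typed" if feedback else "untyped"

def type_check(artifact):
    return None if artifact == "typed" else "type error: annotation missing"

result, ok = feedback_loop(toy_generate, [type_check])
```

The loop illustrates both the value and the limit the post describes: the checks only catch what they encode, so an artifact that passes them can still be wrong in ways only a human reviewer would notice.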
Pete (@https_500)
@badlogicgames @steipete @GergelyOrosz I have oscillated between at times feeling like I don't review my code enough and other times feeling like I'm reviewing LLM-generated code too much. The idea that the important skill is knowing what needs review or more involved human supervision and what doesn't resonates a lot.