Koen Verstrepen
@koenver__

565 posts

I build agentic AI systems, from first principles to production. I have strong conviction on combining leadership with being hands-on.

Turnhout · Joined June 2010
51 Following · 84 Followers
Koen Verstrepen @koenver__
@badlogicgames @intelliyole Increasingly starting to think that the larger your organization gets, the more difficult it becomes to put your soul in the code that your organization produces, regardless of whether your organization consists of humans or AI agents.
Mario Zechner @badlogicgames
@koenver__ @intelliyole you can build trust with humans. humans can learn. humans are bottlenecks. agents are none of that. that's the difference.
Dmitry Jemerov @intelliyole
In my view, the oft-repeated take that we can stop caring about the code an agent generates in the same way that we don't care about the assembly our compilers generate is misguided. Unlike generating code from a spec, compiling code is deterministic and measurable.
Koen Verstrepen @koenver__
@badlogicgames I dare to say I am building trust with coding agents, although still limited. They follow instructions reasonably and learn from my feedback when I explicitly instruct them to do so.
Koen Verstrepen @koenver__
@intelliyole @badlogicgames But probably a rather small percentage of the code (depending on the org size)? I am still reviewing almost every line written by my agents, but I am searching for a way to organize them such that this is not necessary anymore, just like we do in organizations with humans.
Dmitry Jemerov @intelliyole
@koenver__ @badlogicgames The compiler analogy implies that you never need to look at code. When I was a tech leader, I definitely did look at code written in my org when I needed to.
Koen Verstrepen @koenver__
Started reviewing every line of AI code. Then speed became addictive. Here's what I learned from letting go. wllw.co/XbZPNWE2z
Koen Verstrepen @koenver__
Don't analyze your codebase to extract coding guidelines. Analyze your code reviews. That's where the signal is. wllw.co/goronHf8w
Koen Verstrepen @koenver__
How can an LLM get smarter by training on self-generated data? Asymmetry: creating an exercise is easier than solving it. wllw.co/EHkwzCdZv
Koen Verstrepen reposted
Derek Thompson @DKThomp
For me, the odds that AI is a bubble declined significantly in the last 3 weeks, and the odds that we're actually quite under-built for the necessary levels of inference/usage went significantly up in that period. Basically, I think AI is going to become the home screen of a ludicrously high percentage of white-collar workers in the next two years, and parallel agents will be deployed in the battlefield of knowledge work at downright Soviet levels.
Koen Verstrepen @koenver__
Human researchers training LLMs are like coaches who know less than their students. The LLM already read the internet. The coach teaches it how to practice. wllw.co/cFbJznXgy
Koen Verstrepen @koenver__
LLMs don't just need to read more — they need to practice more. Next-token prediction = reading. Post-training on hard problems = real learning. wllw.co/VgfbkbvWm
Koen Verstrepen @koenver__
I didn't get Claude Code + VS Code + git worktrees working properly. Will stick to multiple clones for now.
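For context, a minimal sketch of the worktree layout this tweet refers to, assuming the goal was one working directory per parallel agent session (the repo path and branch name here are hypothetical throwaways, not anything from the tweet):

```shell
set -e
# Throwaway repo just to demonstrate the layout.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One extra working tree per session, on its own branch, sharing a
# single object store with the main checkout.
git -C "$repo" worktree add -b agent1 "$repo-agent1" >/dev/null

# Lists the main checkout plus every linked worktree and its branch.
git -C "$repo" worktree list
```

The alternative the tweet falls back to, multiple full clones, trades disk space and duplicated fetches for simpler tooling, since every clone has an ordinary `.git` directory rather than a gitfile pointing back into a shared store.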
Koen Verstrepen @koenver__
PSA: Claude Code MCP servers default to project scope. Use claude mcp add [name] --scope user for global access. wllw.co/SQw0aR1l8
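A sketch of the scoping behavior the PSA describes; the server name and launch command below are hypothetical placeholders, not a real MCP server:

```shell
# Registered with user scope: available in every project for this user.
claude mcp add my-server --scope user -- npx -y @acme/my-mcp-server

# Without --scope, the server is registered for the current project only.
claude mcp add my-server -- npx -y @acme/my-mcp-server

# Inspect which servers are registered.
claude mcp list
```

These are CLI configuration commands (nothing to execute outside a Claude Code install); `claude mcp add` and `claude mcp list` are the subcommands named in the tweet, with the server package swapped in as an assumption.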
Koen Verstrepen reposted
Addy Osmani @addyosmani
Every time we've made it easier to write software, we've ended up writing exponentially more of it. When high-level languages replaced assembly, programmers didn't write less code - they wrote orders of magnitude more, tackling problems that would have been economically impossible before. When frameworks abstracted away the plumbing, we didn't reduce our output - we built more ambitious applications. When cloud platforms eliminated infrastructure management, we didn't scale back - we spun up services for use cases that never would have justified a server room.

@levie recently articulated why this pattern is about to repeat itself at a scale we haven't seen before, using Jevons Paradox as the frame. The argument resonates because it's playing out in real-time in our developer tools.

The initial question everyone asks is "will this replace developers?" but just watch what actually happens. Teams that adopt these tools don't always shrink their engineering headcount - they expand their product surface area. The three-person startup that could only maintain one product now maintains four. The enterprise team that could only experiment with two approaches now tries seven. The constraint being removed isn't competence - it's the activation energy required to start something new.

Think about that internal tool you've been putting off because "it would take someone two weeks and we can't spare anyone". Now it takes three hours. That refactoring you've been deferring because the risk/reward math didn't work? The math just changed.

This matters because software engineers are uniquely positioned to understand what's coming. We've seen this movie before, just in smaller domains. Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we'd need fewer developers. Each one instead enabled us to build more software.

Here's the part that deserves more attention imo: the barrier being lowered isn't just about writing code faster. It's about the types of problems that become economically viable to solve with software. Think about all the internal tools that don't exist at your company. Not because no one thought of them, but because the ROI calculation never cleared the bar. The custom dashboard that would make one team 10% more efficient but would take a week to build. The data pipeline that would unlock insights but requires specialized knowledge. The integration that would smooth a workflow but touches three different systems. These aren't failing the cost-benefit analysis because the benefit is low - they're failing because the cost is high. Lower that cost by "10x", and suddenly you have an explosion of viable projects.

This is exactly what's happening with AI-assisted development, and it's going to be more dramatic than previous transitions because we're making previously "impossible" work possible.

The second-order effects get really interesting when you consider that every new tool creates demand for more tools. When we made it easier to build web applications, we didn't just get more web applications - we got an entire ecosystem of monitoring tools, deployment platforms, debugging tools, and testing frameworks. Each of these spawned their own ecosystems. The compounding effect is nonlinear.

Now apply this logic to every domain where we're lowering the barrier to entry. Every new capability unlocked creates demand for supporting capabilities. Every workflow that becomes tractable creates demand for adjacent workflows. The surface area of what's economically viable expands in all directions.

For engineers specifically, this changes the calculus of what we choose to work on. Right now, we're trained to be incredibly selective about what we build because our time is the scarce resource. But when the cost of building drops dramatically, the limiting factor becomes imagination, "taste" and judgment, not implementation capacity. The skill shifts from "what can I build given my constraints?" to "what should we build given that constraints have in some ways evaporated?"

The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don't reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work. The pattern is so consistent that the burden of proof should shift. Instead of asking "will AI agents reduce the need for human knowledge workers?" we should be asking "what orders of magnitude increase in knowledge work output are we about to see?"

For software engineers it's the same transition we've navigated successfully several times already. The developers who thrived weren't the ones who resisted higher-level abstractions; they were the ones who used those abstractions to build more ambitious systems. The same logic applies now, just at a larger scale.

The real question is whether we're prepared for a world where the bottleneck shifts from "can we build this?" to "should we build this?" That's a fundamentally different problem space, and it requires fundamentally different skills.

We're about to find out what happens when the cost of knowledge work drops by an order of magnitude. History suggests we (perhaps) won't do less work - we'll discover we've been massively under-investing in knowledge work because it was too expensive to do all the things that were actually worth doing. The paradox isn't that efficiency creates abundance. The paradox is that we keep being surprised by it.
Aaron Levie@levie

x.com/i/article/2004…

Koen Verstrepen @koenver__
LLMs now learn like top researchers do. A number theorist doesn't read textbooks — they prove conjectures. Their training data = their own attempts and failures. cnto-staging.io/WKJhsNyPK
Koen Verstrepen @koenver__
We're not running out of data to train AI. The "data wall" assumes LLMs need human-generated text. They don't. DeepSeek-V3.2 used synthetic data — data it generated itself — to outperform its predecessor. wllw.co/m8cr0wr7s