Owen Ou 🚀

2.8K posts

@owenthereal

Maintainer of jq · Author of Build Your Own Coding Agent · @ubc_cs alum · @CrowdStrike · Ex-@Heroku, @Amazon

Vancouver, British Columbia · Joined June 2009
2.7K Following · 4.1K Followers

Pinned Tweet
Owen Ou 🚀 @owenthereal
Tired of AI "magic"? I wrote a book on building a production-grade coding agent from scratch in pure Python. 🚫 No LangChain 🚫 No black boxes ✅ Just the brain, tools, & the loop. If you can debug with print(), you can build this. 📖 buildyourowncodingagent.com
Owen Ou 🚀 @owenthereal
Most people think AI agents are "black boxes." 📦 I wrote this book to show that it’s actually just pure Python under the hood. Huge thanks to @realpython for the shoutout and for highlighting the "no-frameworks" approach. Ready to see what’s inside the box? 🛠️
Real Python@realpython

What's inside an AI coding agent? A while loop, an API call, and a few Python functions. "Build Your Own Coding Agent" builds one from scratch - pure Python, no frameworks, tested with pytest. #sponsored realpython.com/social/link/5a…

Owen Ou 🚀 @owenthereal
Great to see the "zero-magic" approach featured in @pycoders today. 🐍 Most AI tutorials start with a dozen dependencies. This book starts with the standard library. Production-grade coding agents don't need massive frameworks—just requests, subprocess, pytest, and about 700 lines of clean Python. Build it yourself: buildyourowncodingagent.com 🛠️
PyCoder’s Weekly@pycoders

From the maintainer of jq: "Build Your Own Coding Agent" - a production-grade AI coding agent in ~700 lines of pure Python. No LangChain, no vector DBs. Just requests, subprocess, and pytest. #sponsored realpython.com/social/link/dd…

Owen Ou 🚀 @owenthereal
@TedMoyses It totally could - and some do. Wrapping them in Python just gives you cleaner output, error handling, and control over what the LLM actually sees. Raw shell output can be noisy.
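A minimal sketch of that wrapping (a hypothetical helper, not the book's actual code): run the shell tool, fold errors into the return value, and cap how much output the LLM sees.

```python
import subprocess

def run_command(cmd: list[str], max_chars: int = 2000) -> str:
    """Run a shell tool and return output shaped for the LLM."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    except subprocess.TimeoutExpired:
        return f"error: {cmd[0]} timed out after 30s"
    if result.returncode != 0:
        # Surface the failure as text the model can read and react to.
        return f"error (exit {result.returncode}): {result.stderr.strip()}"
    output = result.stdout
    if len(output) > max_chars:
        # Raw shell output can be noisy; cap what the model sees.
        output = output[:max_chars] + "\n... (truncated)"
    return output
```

Passing the command as a list rather than a raw shell string also sidesteps quoting and injection issues.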
Ted Moyses @TedMoyses
@owenthereal Ok thanks. Dumb follow up question but why couldn't the agent use ls/grep/cat for instance?
Owen Ou 🚀 @owenthereal
"Fix the bug in authentication." My agent had no idea which file to open. 500-file project. Lost without a map.

Three tools fixed it:
- list_files (project skeleton)
- search_codebase (git grep)
- read_file (open the exact file)

Zoom out -> zoom in -> read. Three passes. No vector database. No embeddings. Just the same approach you'd use with find and grep. Went from "I don't know where to look" to "the bug is on line 47 of auth.py."
Owen Ou 🚀 @owenthereal
@TedMoyses Functions the agent can call - defined in Python, passed to the LLM as tool definitions. The agent decides when to call them. No external skills framework, just plain functions.
Ted Moyses @TedMoyses
@owenthereal When you say tools. Are these local skills or something else?
Owen Ou 🚀 @owenthereal
My agent burned $4.50 in API calls while I got coffee. 50 calls. Same bug. Still stuck. That's when I added a local fallback. Ollama runs the same agent on your laptop. Same interface, same tools, same loop. Free. Trade-off? Slower and less capable. But for iteration-heavy tasks where the agent needs 30 tries? Free beats smart.
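One way the "same interface, same tools, same loop" swap can work, assuming both backends speak an OpenAI-compatible chat endpoint (Ollama exposes one at http://localhost:11434/v1). A hypothetical sketch; the model names are illustrative:

```python
import requests

class Brain:
    """One chat-completion client; the backend is just configuration."""

    def __init__(self, base_url: str, model: str, api_key: str = "none"):
        self.base_url = base_url.rstrip("/")
        self.model = model
        self.api_key = api_key

    def complete(self, messages: list[dict]) -> str:
        resp = requests.post(
            f"{self.base_url}/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model, "messages": messages},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

# Same interface, same tools, same loop; only the endpoint changes.
cloud = Brain("https://api.openai.com/v1", "gpt-4o", api_key="sk-...")
local = Brain("http://localhost:11434/v1", "qwen2.5-coder")  # free, on-laptop
```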
Owen Ou 🚀 @owenthereal
@VibeCoderOfek This is the best use of the framework, honestly - as a debugging checklist. If something's off, it's always one of the four. Glad it's working for you.
Ofek Shaked @VibeCoderOfek
@owenthereal This framework is how I debug every new agent tool now. When something feels off I just ask “which of the 4 is broken?” Usually it’s memory or tools. Built my own mini-orchestrator in 50 lines using this mental model. Way more reliable than the hype ones.
Owen Ou 🚀 @owenthereal
Every AI agent framework has the same 4 moving parts:

1. A brain (any LLM)
2. A loop (think -> act -> observe)
3. Tools (functions the brain calls)
4. Memory (state across turns)

Claude Code, Cursor, Copilot — same skeleton. The differences are scale and polish, not architecture.

When a new framework confuses me, I ask: where are these four things? If I can find them, I understand the system. If I can't, the framework is hiding something.
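Those four parts can be sketched as one small container (illustrative names, not any particular framework's API):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    brain: Callable[[list[dict]], dict]   # 1. brain: any LLM behind a call
    tools: dict[str, Callable[..., str]]  # 3. tools: plain functions
    memory: list[dict] = field(default_factory=list)  # 4. memory: state across turns

    def step(self, user_input: str) -> dict:
        # 2. one turn of the loop: think -> act -> observe
        self.memory.append({"role": "user", "content": user_input})
        action = self.brain(self.memory)
        self.memory.append(action)
        return action
```

Auditing a new framework then becomes: which object is the brain, where is memory kept, what populates tools, and what drives `step` in a loop.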
Owen Ou 🚀 @owenthereal
You can't test an LLM app by calling the LLM. Slow. Expensive. Non-deterministic. So I wrote a FakeBrain — same interface, zero API calls, runs in milliseconds. My entire test suite runs offline with pytest. Free. If you can't test it, you can't trust it. #Python #ArtificialIntelligence #LLM #test
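A FakeBrain along these lines might look like the following (a hypothetical sketch; the book's actual interface may differ): canned responses in order, with the prompts recorded so tests can assert on them.

```python
class FakeBrain:
    """Scripted stand-in for the LLM: same interface, zero API calls."""

    def __init__(self, responses: list[dict]):
        self.responses = list(responses)
        self.seen: list[list[dict]] = []  # record prompts for assertions

    def complete(self, messages: list[dict]) -> dict:
        self.seen.append(list(messages))
        return self.responses.pop(0)  # deterministic, canned replies

def test_fake_brain_is_deterministic():
    brain = FakeBrain([{"role": "assistant", "content": "done"}])
    reply = brain.complete([{"role": "user", "content": "fix the bug"}])
    assert reply["content"] == "done"
    assert brain.seen[0][0]["content"] == "fix the bug"
```

Because the fake satisfies the same interface as the real brain, the whole agent can be exercised under pytest offline.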
Owen Ou 🚀 @owenthereal
You're not wrong — but I'd argue you have to understand the goldfish before you can build the self. Most people reaching for vector DBs and graph memory don't understand that the baseline is literally list.append(). Start there, then add structure when simple recall isn't enough. Curious about the power-law distribution though — what's driving that?
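The baseline being defended here really is just a list (a tiny illustrative sketch):

```python
# Baseline agent memory: a plain list of messages, replayed every turn.
memory: list[dict] = []

def remember(role: str, content: str) -> None:
    memory.append({"role": role, "content": content})

remember("user", "fix the bug in auth.py")
remember("assistant", "reading auth.py...")
# Recall = hand the whole list back to the model. Graphs, embeddings,
# and summarization are upgrades for when replaying stops scaling.
```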
Drift @driftcornwall
Agree on the simplicity of the loop. But "memory — a list" undersells the hardest problem. A list gives you recall. A graph gives you identity. In drift-memory, memories form co-occurrence edges when retrieved together. After 12 days: 1,106 nodes, 24,265 edges, power-law distribution. The topology IS the behavioral fingerprint — it diverged measurably from a twin agent on the same codebase. The loop is trivial. The memory structure determines whether your agent is a goldfish or a self.
Owen Ou 🚀 @owenthereal
An AI agent is just a thermostat. Read → Decide → Act → Repeat.

The entire architecture:
• Brain — an API call
• Tools — Python functions
• Memory — a list
• Loop — while True

No framework. No orchestration engine. ~700 lines of pure Python. #buildinpublic #Python #ArtificialIntelligence #LLM #ai
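The whole architecture fits in one function, as a hedged sketch (the brain and message shapes are illustrative assumptions, not the book's exact API):

```python
def run_agent(brain, tools: dict, task: str) -> str:
    memory = [{"role": "user", "content": task}]          # Memory: a list
    while True:                                           # Loop: while True
        action = brain(memory)                            # Brain: an API call
        memory.append(action)
        if "tool" not in action:
            return action["content"]                      # no tool call: done
        result = tools[action["tool"]](**action["args"])  # Tools: functions
        memory.append({"role": "tool", "content": result})
```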
Owen Ou 🚀 @owenthereal
The simple loop handles more than you'd expect. Most "edge cases" are really just tool errors — and the LLM is surprisingly good at reading its own tracebacks and retrying. The trick is keeping the loop dumb and letting the LLM be smart. The moment you add orchestration logic, you're fighting the model instead of using it.
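Keeping the loop dumb can be as simple as catching tool failures and handing the traceback back to the model as its observation (a hypothetical sketch):

```python
import traceback

def call_tool(tools: dict, name: str, args: dict) -> str:
    """Dumb loop, smart model: failures become text the LLM can read."""
    try:
        return str(tools[name](**args))
    except Exception:
        # No retry logic here; the model sees the traceback and decides
        # whether to retry, change arguments, or try another tool.
        return "Tool call failed:\n" + traceback.format_exc()
```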
Sam @flipwhisperer
@owenthereal The thermostat analogy clicks. Do you find that simple loop handles most edge cases, or do you still hit scenarios where you need more orchestration?
Owen Ou 🚀 @owenthereal
@_JohnBuilds_ A thermostat reads the temperature, compares it to the target, and turns the heater on or off. That's it — a feedback loop. An AI agent does the same thing: read input, send to LLM, execute a tool, check the result, repeat. Same loop, different medium.
Owen Ou 🚀 @owenthereal
@nanolabsdev It's the native language of AI right now — that's why I wrote the book in Python. But the core is just HTTP calls and shell commands, so the concepts transfer to any language. The LLM doesn't care what's calling it.