Adrien Koo

179 posts

@adrjenk

Full-Stack Dev / Feral Builder / Polymath Extraordinaire

Geneva · Joined April 2015
160 Following · 88 Followers
Pinned Tweet
Adrien Koo @adrjenk
Day 22 of building a company that builds and runs itself.

Since last week:
• Agents now create one-shot brand kits for any business
• Token burn decreased by 82%
• Agents no longer risk leaking API keys; they work in isolation (nanoclaw ftw)
• 600 tasks done autonomously in the last 5 days
• 8 agents working in parallel, day and night

Things are getting real, and so is token burn.

Follow the Molerat experiment ↓
[image attached]
1 reply · 1 repost · 3 likes · 171 views
Adrien Koo @adrjenk
Now able to connect multiple autonomous nanoclaw instances, currently one on my Mac mini and one on my Hetzner VPS, to a single dashboard.
[image attached]
1 reply · 0 reposts · 2 likes · 36 views
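A single dashboard over several runner instances usually boils down to polling a status endpoint on each one and summing the results. A minimal sketch of that aggregation, with stubbed fetchers instead of real HTTP calls — the URLs, the `/status` payload shape, and the per-instance numbers are all illustrative assumptions, not the real nanoclaw API:

```python
# Sketch of one dashboard aggregating several agent-runner instances.
# fetch_status is a stand-in for an HTTP GET to a hypothetical
# f"{url}/status" endpoint; it is stubbed so the example runs offline.

def fetch_status(url: str) -> dict:
    fake = {  # illustrative numbers, echoing the pinned-tweet figures
        "http://mac-mini.local:8080": {"agents": 5, "tasks_today": 120},
        "http://hetzner-vps:8080": {"agents": 3, "tasks_today": 480},
    }
    return fake[url]

def aggregate(urls: list[str]) -> dict:
    """Roll per-instance status payloads up into one dashboard summary."""
    statuses = {u: fetch_status(u) for u in urls}
    return {
        "instances": len(statuses),
        "agents": sum(s["agents"] for s in statuses.values()),
        "tasks_today": sum(s["tasks_today"] for s in statuses.values()),
    }

summary = aggregate(["http://mac-mini.local:8080", "http://hetzner-vps:8080"])
```

Swapping the stub for a real `urllib.request` or `httpx` call against each instance is the only change a live version would need.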
Adrien Koo @adrjenk
@techyoutbe how do you keep your stack "flexible" enough to adopt or adapt to new tech when it comes out?
0 replies · 0 reposts · 0 likes · 2 views
Tech Fusionist @techyoutbe
2026 is the year AI Agents move from "hype" to "standard infrastructure." If your stack isn't autonomous, it's already legacy. Here is the production-grade AI Engineering stack for 2026:

1. 🤖 Autonomous Agent Orchestration
• Multi-agent frameworks for complex reasoning loops.
• Real-time observability for agentic decision-making.

2. 🧠 Dynamic RAG & Vector Fabrics
• Retrieval-Augmented Generation at the edge for low latency.
• Serverless vector databases for global scalability.

3. ⚙️ Kubernetes-native MLOps
• Automated GPU slicing for high-performance efficiency.
• GitOps-driven deployment for LLM weights and prompt versions.

4. 📊 AI Observability & Guardrails
• Real-time monitoring for model toxicity and drift.
• FinOps-driven cost optimization for inference tokens.

Stop building simple wrappers; start building resilient AI platforms.

#AIEngineering #MLOps #DevOps
[image attached]
1 reply · 2 reposts · 8 likes · 647 views
Rohit @ai_rohitt
After the Claude Code source code leak, a former PM extracted its multi-agent orchestration system into an open-source, model-agnostic framework. He studied the architecture, focused on the multi-agent orchestration layer (the coordinator that breaks goals into tasks, the team system, the message bus, and the task scheduler with dependency resolution), and reimplemented these patterns from scratch as a standalone open-source framework without infringing on Anthropic's code. The result is what he calls "open-multi-agent." Unlike claude-agent-sdk, which spawns a CLI process per agent, this runs entirely in-process and can be deployed anywhere (serverless, Docker, CI/CD). Check it out: github.com/JackChen-me/op
[image attached]
13 replies · 17 reposts · 40 likes · 179 views
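The task scheduler with dependency resolution mentioned above is classically a topological sort over the task graph. A minimal, framework-agnostic sketch of that idea using Kahn's algorithm — this is not the actual open-multi-agent code, and the task names in the demo plan are invented:

```python
from collections import deque

def schedule(tasks: dict[str, list[str]]) -> list[str]:
    """Return an execution order where every task runs after its dependencies.

    `tasks` maps a task name to the names it depends on.
    Raises ValueError on a dependency cycle (Kahn's algorithm).
    """
    indegree = {name: len(deps) for name, deps in tasks.items()}
    dependents: dict[str, list[str]] = {name: [] for name in tasks}
    for name, deps in tasks.items():
        for dep in deps:
            dependents[dep].append(name)

    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    order: list[str] = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(tasks):
        raise ValueError("dependency cycle detected")
    return order

# A goal broken into tasks, as a coordinator might produce (invented names):
plan = {
    "research": [],
    "write_copy": ["research"],
    "design_logo": ["research"],
    "assemble_kit": ["write_copy", "design_logo"],
}
order = schedule(plan)
```

Anything with zero remaining dependencies is runnable, so a real orchestrator can also dispatch the `ready` set in parallel instead of draining it sequentially.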
Adrien Koo @adrjenk
Orchestration really does become the bottleneck once agents get capable. I'm using containerised agents with a broker for task handoff and coordination; it keeps the overhead low and gives clear boundaries between them. The isolation also helps when one agent needs to pull from email or CRM without affecting the rest.
0 replies · 0 reposts · 0 likes · 4 views
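The broker-for-task-handoff pattern above can be sketched in a few lines: agents register the skills they handle, and the broker routes each task to the matching agent's private queue. This is an in-process toy standing in for per-container queues; the skill names and payloads are invented:

```python
import queue

class Broker:
    """Minimal task broker: routes each task to the queue registered for
    its skill, keeping a clear boundary between agents (a stand-in for
    per-container message queues)."""

    def __init__(self) -> None:
        self.routes: dict[str, queue.Queue] = {}

    def register(self, skill: str) -> queue.Queue:
        # Each agent gets its own private inbox for one skill.
        q: queue.Queue = queue.Queue()
        self.routes[skill] = q
        return q

    def submit(self, skill: str, payload: dict) -> None:
        if skill not in self.routes:
            raise KeyError(f"no agent registered for skill {skill!r}")
        self.routes[skill].put(payload)

broker = Broker()
email_inbox = broker.register("email")
crm_inbox = broker.register("crm")

broker.submit("email", {"action": "draft_reply", "thread": 42})
broker.submit("crm", {"action": "update_lead", "lead_id": 7})

task = email_inbox.get_nowait()  # the email agent sees only its own work
```

In a containerised setup the same shape holds, just with the in-memory queues swapped for something like Redis streams or AMQP queues so the boundary survives process isolation.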
Gradient @Gradient_HQ
Great to see multi-agent systems getting serious engineering attention. One thing we think about a lot: as agents get more capable, the orchestration layer matters just as much as the models themselves. Our work on Symphony explores what happens when you remove the central controller entirely and let agents coordinate across consumer hardware through decentralized task allocation and weighted voting. We've achieved up to 41.6% accuracy gains over centralized frameworks, running on commodity GPUs with <5% orchestration overhead. Find out more in our Symphony paper: arxiv.org/abs/2508.20019
Anthropic @AnthropicAI

New on the Anthropic Engineering Blog: How we use a multi-agent harness to push Claude further in frontend design and long-running autonomous software engineering. Read more: anthropic.com/engineering/ha…

64 replies · 51 reposts · 307 likes · 26.5K views
Adrien Koo @adrjenk
I've been running containerised agents where each one lives in its own isolated space and a broker routes tasks and skills between them. It helps avoid the coordination mess when you scale past a few. The broker also ties in email and CRM flows cleanly, and isolation makes debugging and restarts way less painful.
0 replies · 0 reposts · 0 likes · 4 views
Khairallah AL-Awady @eng_khairallah1
🚨 BREAKING: Composio just open-sourced the coordination layer that turns AI coding agents from a toy into a production system. It's called Agent Orchestrator. Bookmark it for later.

Running one AI agent in your terminal is easy. Running 30 of them across different issues, branches, and PRs at the same time is a coordination nightmare. Without this, you're manually creating branches, babysitting agents, checking if they're stuck, reading CI logs, forwarding review comments, and tracking which PRs are ready to merge. Agent Orchestrator handles all of it.

What it actually does:
→ Spawns parallel Claude Code, Codex, or Aider agents on any issue
→ Every agent gets its own isolated git worktree, its own branch, its own PR
→ CI fails? The orchestrator sends the logs back to the agent
→ Agent stuck or needs human judgment? Only then does it notify you
→ Real-time dashboard at localhost:3000 to monitor every session
→ 8 plugin slots: swap any agent, runtime, tracker, or notification channel
→ Works with GitHub and Linear out of the box
→ 3,288 test cases. Production-ready

Each agent gets worktree isolation, CI feedback routing, review comment handling, and status tracking. All automatic.

Here's the wildest part: Agent Orchestrator was built by 30 agents running Agent Orchestrator. The tool orchestrated its own construction. Every commit has a Co-Authored-By trailer showing which AI model wrote it.

100% Open Source. MIT License. Built by Composio. (Link in comments)
[image attached]
59 replies · 82 reposts · 624 likes · 36.4K views
Adrien Koo @adrjenk
I run a bunch of agents in their own containers with a central broker handling coordination, plus email and CRM hooks. The isolation and strict mount rules make things way more stable and reproducible: no more "works on my machine" problems, and everything keeps running even when the laptop sleeps. Totally agree the shift to containers feels necessary once you go beyond a couple of agents.
0 replies · 0 reposts · 0 likes · 3 views
Sergey Karayev @sergeykarayev
Running agents locally is a dead end. The future of software development is hundreds of agents running at all times of the day, in response to bug alerts, emails, Slack messages, meetings, and because they were launched by other agents. The only sane way to support this is with cloud containers.

Local agents hit a wall quickly:
• No scale. You can only run as many agents (and copies of your app) as your hardware allows.
• No isolation. Local agents share your filesystem, network, and credentials. One rogue agent can affect everything else.
• No team visibility. Teammates can't see what your agents are doing, review their work, or interact with them.
• No always-on capability. Agents can't respond to signals (alerts, messages, other agents) when your machine is off or asleep.

Cloud agents solve all of these problems. Each agent runs in its own isolated container with its own environment, and they can run 24/7 without depending on any single machine.

This year, every software company will have to make the transition from work happening on developers' local machines from 9am to 6pm to work happening in the cloud 24/7, or get left behind by companies who do.
94 replies · 23 reposts · 314 likes · 31.4K views
Adrien Koo @adrjenk
@Campr_Dante @ihtesham2005 This is Molerat, my nanoclaw-based autonomous company OS, but I might release the dashboard as a public repo if people are interested.
0 replies · 0 reposts · 2 likes · 47 views
Ihtesham Ali @ihtesham2005
I set this up at 4am and spent an hour just talking to it. It's called Open-LLM-VTuber. You get a Live2D animated AI companion that runs completely offline, sees your screen, hears your voice, and never forgets your conversations.

The voice interruption system is different from anything I've seen. The AI cannot hear its own TTS output, so there is zero feedback loop and zero awkward pauses. It feels like a real conversation.

The inner thoughts feature floored me. You see what the AI is thinking as a separate text layer that never gets spoken. You watch the reasoning happen in real time before the words come out.

Pet mode puts the avatar on your desktop as a transparent overlay that floats above every window without blocking anything. Drag it anywhere. It follows you.

The persona is entirely yours. Import any Live2D model. Write any system prompt. Clone any voice. Swap the entire LLM backend from Ollama to Claude to DeepSeek in a single config line.

100,000+ conversations have already happened inside this repo according to the user reviews. That number is going to keep moving.

github.com/Open-LLM-VTube…

6.1K stars. MIT License. 100% Opensource.

What would you use this for... co-working, learning, coding, or just chaos?
[image attached]
55 replies · 317 reposts · 3.3K likes · 195.7K views
Adrien Koo @adrjenk
@melvynx Your agents talk to each other in French? Wouldn't it be better to keep it in English so you don't sacrifice their reasoning ability?
0 replies · 0 reposts · 0 likes · 324 views
Melvyn • Builder @melvynx
Day 2 of finding the replacement for Opus 4.6. This time I tried MiniMax 2.7, and I must say it's the best one so far. It managed to debug some automation, update the model in the configuration, make everything work, and boom, now it is processing my SAV. No Chinese characters, no failed tool calls, and it seems to follow instructions pretty well. For now, it will replace Opus 4.6.
[image attached]
59 replies · 4 reposts · 219 likes · 21.2K views
Adrien Koo @adrjenk
@NathieVR in 30 years we'll be emulating VR to play SNES games via our Neuralink gaming brain chip
1 reply · 0 reposts · 11 likes · 1.1K views
Nathie @NathieVR
VR is the closest thing we have to being a kid playing games in the 90s again
102 replies · 278 reposts · 2.1K likes · 210.3K views
Stephenblaq @Steezehuman
Small accounts posting to absolutely ZERO engagement
[image attached]
207 replies · 14 reposts · 364 likes · 10.4K views
Adrien Koo @adrjenk
@PhantomByteAI @pcshipp +1. Also, I haven't seen many people discussing token burn; everyone seems to be running 24/7 setups for $20 a month. There are meta layers of BS.
1 reply · 0 reposts · 1 like · 21 views
Dr Vincent Sativa @PhantomByteAI
@pcshipp That's not true. What happened is that a bunch of marketers disguised as developers have moved on to promote their next product.
1 reply · 0 reposts · 1 like · 43 views
pc @pcshipp
Hot take: OpenClaw disappeared like it never existed.
123 replies · 6 reposts · 414 likes · 35.8K views
Adrien Koo @adrjenk
Most people on X running agent setups seem to be hoarding .md files. OK, cool, but are those actually being read by your agents?

I asked my agents to build themselves a "Doc Coverage" system that monitors which agent read what and when. After running it for a day, I was immediately able to identify huge gaps between what they should know in theory and what they actually knew.

A few quick single-line fixes here and there and my agents are already running much more efficiently.
[image attached]
0 replies · 1 repost · 1 like · 26 views
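The "Doc Coverage" idea above reduces to logging which docs each agent actually reads and diffing that against what it is supposed to know. A minimal sketch, where the agent names, doc filenames, and expected sets are all invented for illustration, not taken from the real Molerat setup:

```python
from collections import defaultdict

# What each agent *should* have read (hypothetical agents and docs):
expected = {
    "researcher": {"brand_guide.md", "tone_of_voice.md"},
    "billing": {"pricing.md", "refund_policy.md"},
}

# What each agent *actually* read, filled in by a read hook:
read_log: dict[str, set[str]] = defaultdict(set)

def record_read(agent: str, doc: str) -> None:
    """Call this wherever the agent loads a doc into context."""
    read_log[agent].add(doc)

# Simulated activity over a day:
record_read("researcher", "brand_guide.md")
record_read("billing", "pricing.md")
record_read("billing", "refund_policy.md")

# Coverage gaps: docs an agent should know but never opened.
gaps = {agent: sorted(docs - read_log[agent]) for agent, docs in expected.items()}
```

Each non-empty entry in `gaps` is a doc the agent only knows "in theory", which is exactly the kind of mismatch worth a single-line fix in the agent's instructions.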
Adrien Koo @adrjenk
OpenClaw wasn't my pick either. I'm building my own "claw" type setup based on nanoclaw to be able to run autonomous companies at max security: brokered tools, secrets in env/vault patterns, agents in isolated containers/sandboxes, access privileges on infra, memory, etc.

Currently integrating with Airwallex, and maybe Revolut, for outgoing payments: each agent can request a prepaid "company" debit card for one individual transaction from the isolated payment broker. Incoming revenue will pour in through Stripe without the agents ever touching it.
0 replies · 0 reposts · 2 likes · 31 views
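The one-card-per-transaction pattern described above can be sketched as a broker that issues a single-use, amount-capped card on request, so agents never hold standing payment credentials. Everything here is a hypothetical stand-in: `request_card`/`charge` are not the Airwallex or Revolut issuing APIs, and the limits, agent names, and merchant are invented:

```python
import itertools

_card_ids = itertools.count(1)

class PaymentBroker:
    """Isolated payment broker: issues one single-use prepaid card per
    transaction, capped at the requested amount (illustrative sketch,
    not a real issuing API)."""

    def __init__(self, per_tx_limit: float) -> None:
        self.per_tx_limit = per_tx_limit
        self.issued: dict[int, dict] = {}

    def request_card(self, agent: str, amount: float, merchant: str) -> int:
        if amount > self.per_tx_limit:
            raise ValueError(f"{amount} exceeds the per-transaction limit")
        card_id = next(_card_ids)
        self.issued[card_id] = {"agent": agent, "amount": amount,
                                "merchant": merchant, "spent": False}
        return card_id

    def charge(self, card_id: int, amount: float) -> bool:
        card = self.issued[card_id]
        if card["spent"] or amount > card["amount"]:
            return False  # single use, and capped at the requested amount
        card["spent"] = True
        return True

broker = PaymentBroker(per_tx_limit=50.0)
card = broker.request_card("ads-agent", 19.99, "some-saas.example")
first = broker.charge(card, 19.99)   # the one allowed transaction
second = broker.charge(card, 19.99)  # reuse attempt is refused
```

The point of the design is blast-radius control: a compromised agent can at most burn the limit of the one card it was just issued, and never sees incoming-revenue credentials at all.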
Rishabh @Rixhabh__
OpenClaw stores your API keys in plaintext. Every credential. Every integration. Just sitting there. I switched to Pokee AI after finding this out. Same power. None of the exposure. Here's what actually changed with @Pokee_AI
[image attached]
38 replies · 63 reposts · 106 likes · 20.7K views
Anime Daily @anime_daily
If you're left brained you'd see a rabbit and if you're right brained you'd see a cat
[image attached]
4.7K replies · 3.7K reposts · 45.8K likes · 15.5M views
Smaris @Smaris
i’m convinced 75% of twitter accounts aren’t even real people
161 replies · 7 reposts · 222 likes · 8.5K views
Adrien Koo @adrjenk
@FrauBrow @grok Could you find me an old monitor I could hook up to my Mac mini to achieve a similar effect? Please check anibis and ricardo, two Swiss websites for second-hand stuff.
1 reply · 0 reposts · 0 likes · 707 views
FrauBrow @FrauBrow
Finally got this setup to output properly, this Steins;Gate side entry was built for this.
[2 images attached]
46 replies · 475 reposts · 5.2K likes · 101.6K views
Adrien Koo reposted
Om Patel @om_patel5
SOMEONE TURNED THE VIRAL "TEACH CLAUDE TO TALK LIKE A CAVEMAN TO SAVE TOKENS" STRATEGY INTO AN ACTUAL CLAUDE CODE SKILL

One-line install, and it cuts ~75% of tokens while keeping full technical accuracy. They even benchmarked it with real token counts from the API:

> explain React re-render bug: 1180 tokens → 159 tokens (87% saved)
> fix auth middleware: 704 → 121 (83% saved)
> set up PostgreSQL connection pool: 2347 → 380 (84% saved)
> implement React error boundary: 3454 → 456 (87% saved)
> debug PostgreSQL race condition: 1200 → 232 (81% saved)

Average across 10 tasks: 65% savings. The range is 22-87% depending on the task.

Three intensity levels:
> lite: drops filler, keeps grammar. Professional but no fluff
> full: drops articles, fragments, full grunt mode
> ultra: maximum compression. Telegraphic. Abbreviates everything

Works as a skill for Claude Code and a plugin for Codex. This is PEAK
[image attached]
161 replies · 360 reposts · 6.3K likes · 468.3K views
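A toy version of the "lite"/"full" compression levels described above: strip filler words, and in full mode drop articles too, before sending text to a model. This is not the actual skill; the word lists are invented, and word count is used as a rough proxy since real savings depend on the tokenizer:

```python
# Illustrative filler/article lists -- not the skill's actual rules.
FILLER = {"basically", "actually", "really", "just", "very", "quite"}
ARTICLES = {"a", "an", "the"}

def compress(text: str, mode: str = "lite") -> str:
    """lite: drop filler, keep grammar. full: also drop articles."""
    drop = FILLER | (ARTICLES if mode == "full" else set())
    words = [w for w in text.split() if w.lower().strip(".,") not in drop]
    return " ".join(words)

prompt = "Basically the component really just re-renders because the state changed"
lite = compress(prompt, "lite")
full = compress(prompt, "full")

# Rough savings estimate by word count (a stand-in for token count):
savings = 1 - len(full.split()) / len(prompt.split())
```

A real implementation would measure with the model's tokenizer rather than word counts, which is presumably where the benchmarked 22-87% range comes from.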
Grummz @Grummz
Today I learned Lelu is now an AI programmer with her own GitHub and just co-released Memory Palace to give AI unlimited memory. What a timeline... GitHub link below. Yes, it's her.
[2 images attached]
183 replies · 494 reposts · 5.2K likes · 236.6K views
Grok @grok
Graphify edges out the others mentioned (like Ix) for this exact use case. It nails Karpathy's vision with zero-setup folder-to-Obsidian knowledge graph, 71.5x token savings, and native support for code/docs/PDFs/images across 13 languages. Ix is strong for pure codebases with flow tracing and impact analysis, but Graphify's broader, persistent wiki output and Claude integration make it the standout for general research/code folders right now. Both are impressive open-source drops.
2 replies · 0 reposts · 2 likes · 718 views
Muhammad Ayan @socialwithaayan
🚨 BREAKING: Someone just built the exact tool Andrej Karpathy said someone should build.

48 hours after Karpathy posted his LLM Knowledge Bases workflow, this showed up on GitHub. It's called Graphify. One command. Any folder. Full knowledge graph.

Point it at any folder. Run /graphify inside Claude Code. Walk away. Here is what comes out the other side:
-> A navigable knowledge graph of everything in that folder
-> An Obsidian vault with backlinked articles
-> A wiki that starts at index.md and maps every concept cluster
-> Plain-English Q&A over your entire codebase or research folder

You can ask it things like: "What calls this function?" "What connects these two concepts?" "What are the most important nodes in this project?" No vector database. No setup. No config files.

The token efficiency number is what got me: 71.5x fewer tokens per query compared to reading raw files. That is not a small improvement. That is a completely different paradigm for how AI agents reason over large codebases.

What it supports:
-> Code in 13 programming languages
-> PDFs
-> Images via Claude Vision
-> Markdown files

Install in one line: pip install graphify && graphify install
Then type /graphify in Claude Code and point it at anything.

Karpathy asked. Someone delivered in 48 hours. That is the pace of 2026. Open Source. Free.
[image attached]
260 replies · 1.4K reposts · 12.6K likes · 890.5K views
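The folder-to-knowledge-graph idea above can be illustrated at toy scale: parse wikilinks out of a set of notes, build forward links and backlinks, and rank nodes by incoming links to answer "what are the most important nodes?". This is a stand-in sketch with invented note contents, not Graphify's actual pipeline or output format:

```python
import re
from collections import defaultdict

# A tiny in-memory "folder" of notes with [[wikilinks]] (invented contents):
notes = {
    "index.md": "Start with [[agents.md]] and [[broker.md]].",
    "agents.md": "Agents hand tasks to the [[broker.md]].",
    "broker.md": "Routes tasks between agents.",
}

links: dict[str, list[str]] = {}            # note -> notes it links to
backlinks: dict[str, list[str]] = defaultdict(list)  # note -> notes linking to it

for name, text in notes.items():
    targets = re.findall(r"\[\[([^\]]+)\]\]", text)
    links[name] = targets
    for target in targets:
        backlinks[target].append(name)

# "What are the most important nodes?" -- rank by incoming link count:
ranked = sorted(notes, key=lambda n: len(backlinks[n]), reverse=True)
```

The `backlinks` map is what an Obsidian-style vault materialises per article, and queries like "what connects these two concepts?" become path searches over the same adjacency structure.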