Adel Zaalouk

1K posts

Adel Zaalouk

@ZaNetworker

AI Tinkerer 🤖 | Opinions my own

Germany · Joined December 2009

2.6K Following · 631 Followers

Pinned Tweet
Adel Zaalouk @ZaNetworker
I'm launching "The Technomist," a newsletter exploring how technology and business intersect, covering topics from product/idea discovery to AI strategies. Subscribe if you'd like to follow along: thetechnomist.com/about First post coming soon! #tech #business
1 reply · 1 repost · 2 likes · 716 views
Adel Zaalouk @ZaNetworker
Karpathy and Tobi Lutke built the same loop independently. Point an AI agent at code, give it a score to chase, let it run experiments overnight. One got a better model. The other got a 53% speedup on a 20-year-old codebase. I generalized the pattern into a tool that works for any domain. Pointed it at a RAG search engine, got 14 experiments and a 9.3% improvement while I did other things. The real work isn't the loop. It's writing good evals. adelzaalouk.me/2026/mar/15/au…
0 replies · 0 reposts · 1 like · 61 views
Adel Zaalouk @ZaNetworker
Your AI coding agent makes the same mistake every session. You correct it, it adapts, the session ends, and tomorrow it's forgotten everything. I built a system that captures corrections, figures out which skill caused the failure, and checks whether I already fixed it. The key insight: an agent that remembers everything learns nothing. An agent that remembers only what you choose to teach it gets better every week. adelzaalouk.me/2026/mar/22/te…
0 replies · 0 reposts · 0 likes · 36 views
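The "remember only what you choose to teach" idea above can be sketched as a small store of corrections keyed by skill. The class and method names here are illustrative assumptions, not the system from the post:

```python
from dataclasses import dataclass, field

@dataclass
class SkillMemory:
    """Keep only explicitly taught corrections, keyed by the skill
    that caused the failure, and detect re-teaching instead of
    accumulating duplicates."""
    lessons: dict = field(default_factory=dict)  # skill -> set of corrections

    def teach(self, skill: str, correction: str) -> bool:
        """Record a correction; return False if it was already learned."""
        learned = self.lessons.setdefault(skill, set())
        if correction in learned:
            return False  # already fixed: nothing new to store
        learned.add(correction)
        return True

    def brief(self, skill: str) -> list[str]:
        """Corrections to inject into the next session for this skill."""
        return sorted(self.lessons.get(skill, set()))
```

The deliberate filter in `teach` is the point: an agent that stores every transcript drowns in noise, while one that stores only curated corrections compounds.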
Adel Zaalouk reposted
Red Hat AI @RedHat_AI
Your agent works on your laptop. But does it have identity? Isolation? Audit trails? Observability with agent tracing? Can you prove to compliance what tools it called and why? Most teams can't. That's the gap Red Hat AI closes with Bring Your Own Agent: security, governance, observability, and tool-level authorization around any agentic runtime, framework, or application without touching code. Here's how to operationalize "Bring Your Own Agent" on Red Hat AI, the OpenClaw edition: redhat.com/en/blog/operat…
1 reply · 6 reposts · 11 likes · 1.6K views
Adel Zaalouk @ZaNetworker
Most CLAUDE.md files I've seen are way too long. The model can only reliably follow ~150 instructions, and Claude Code's system prompt already uses ~50 of those. If Claude keeps ignoring your rules, your instruction budget is probably overdrawn. Wrote about how to fix it. adelzaalouk.me/2026/mar/7/you…
0 replies · 0 reposts · 0 likes · 55 views
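The instruction-budget arithmetic above is easy to automate roughly. A minimal sketch, assuming bullet and numbered lines approximate individual instructions (a crude proxy; the ~150 and ~50 figures are the tweet's estimates, not API limits):

```python
def instruction_budget(claude_md: str, model_limit: int = 150, system_overhead: int = 50) -> dict:
    """Rough audit of a CLAUDE.md against an instruction budget.

    Counts bullet ('-', '*') and numbered ('1.', '2.', ...) lines as
    individual instructions and reports how much budget remains after
    the assumed system-prompt overhead.
    """
    lines = [l.strip() for l in claude_md.splitlines()]
    rules = [l for l in lines if l.startswith(("-", "*")) or l[:2].rstrip(".").isdigit()]
    remaining = model_limit - system_overhead - len(rules)
    return {"rules": len(rules), "remaining": remaining, "overdrawn": remaining < 0}
```

If `overdrawn` comes back True, the fix from the post applies: cut rules, don't add emphasis.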
Andrej Karpathy @karpathy
Expectation: the age of the IDE is over.
Reality: we're going to need a bigger IDE (imo). It just looks very different because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It's still programming.
Quoting Andrej Karpathy @karpathy:
@nummanali tmux grids are awesome, but i feel a need to have a proper "agent command center" IDE for teams of them, which I could maximize per monitor. E.g. I want to see/hide toggle them, see if any are idle, pop open related tools (e.g. terminal), stats (usage), etc.
829 replies · 839 reposts · 10.6K likes · 2.5M views
Adel Zaalouk @ZaNetworker
Spent the weekend building a tool for solving the coding agents' sprawl problem. Introducing ✨ aimux ✨

If you're running multiple coding agents like Claude, Codex, Gemini, etc., you know the pain: which session is stuck? What did it do? How do I debug it? How much did it cost?

aimux is a single-binary TUI that gives you one view across all your AI coding agents. Discovery, traces, cost tracking, annotations + labels (for evals), and OTEL export! No daemons, no hooks, no modifications to your tools. Integrates with MLflow and is easily extensible as well.

Multiplex your AI agents. Trace, launch (built-in for different coding agents), export. Never leave the terminal.

Install: brew install zanetworker/aimux/aimux
Repo: github.com/zanetworker/ai…
Site: zanetworker.github.io/aimux/
0 replies · 0 reposts · 0 likes · 277 views
Adel Zaalouk @ZaNetworker
AI scales execution to near-zero cost, but verifying that output stays biologically bounded. The bottleneck is no longer intelligence (that's becoming abundant); it is, and will be, human verification bandwidth. Full post here: adelzaalouk.me/2026/feb/25/hu…
1 reply · 0 reposts · 1 like · 32 views
Andrej Karpathy @karpathy
CLIs are super exciting precisely because they are a "legacy" technology, which means AI agents can natively and easily use them, combine them, and interact with them via the entire terminal toolkit. E.g. ask your Claude/Codex agent to install this new Polymarket CLI and ask for any arbitrary dashboards or interfaces or logic. The agents will build it for you. Install the GitHub CLI too and you can ask them to navigate the repo, see issues, PRs, discussions, even the code itself.

Example: Claude built this terminal dashboard in ~3 minutes, of the highest-volume polymarkets and the 24hr change. Or you can make it a web app or whatever you want. Even more powerful when you use it as a module of bigger pipelines.

If you have any kind of product or service, think: can agents access and use it?
- are your legacy docs (for humans) at least exportable in markdown?
- have you written Skills for your product?
- is your product/service usable via CLI? Or MCP?
- ...

It's 2026. Build. For. Agents.
Quoting Suhail Kakar @SuhailKakar:
introducing polymarket cli - the fastest way for ai agents to access prediction markets. built with rust. your agent can query markets, place trades, and pull data - all from the terminal. fast, lightweight, no overhead
665 replies · 1.1K reposts · 11.8K likes · 2.1M views
Mark Lynch @marklynchdev
@karpathy @lauriewired Grandma is going to send a Western Union wire because her granddaughter called in her own voice and said she's in trouble in a foreign land.
5 replies · 0 reposts · 251 likes · 9.1K views
Andrej Karpathy @karpathy
Very interested in what the coming era of highly bespoke software might look like. Example from this morning - I've become a bit loosey-goosey with my cardio recently, so I decided to do a more srs, regimented experiment to try to lower my Resting Heart Rate from 50 -> 45 over an experiment duration of 8 weeks. The primary way to do this is to aspire to a certain sum total of minute goals in Zone 2 cardio and 1 HIIT/week.

1 hour later I vibe coded this super custom dashboard for this very specific experiment that shows me how I'm tracking. Claude had to reverse engineer the Woodway treadmill cloud API to pull raw data, then process, filter, and debug it, and create a web UI frontend to track the experiment. It wasn't a fully smooth experience and I had to notice and ask to fix bugs, e.g. it screwed up metric vs. imperial system units and it screwed up on the calendar matching up days to dates, etc. But I still feel like the overall direction is clear:

1) There will never be (and shouldn't be) a specific app on the app store for this kind of thing. I shouldn't have to look for, download, and use some kind of "Cardio experiment tracker" when this thing is ~300 lines of code that an LLM agent will give you in seconds. The idea of an "app store" of a long tail of discrete apps you choose from feels somehow wrong and outdated when LLM agents can improvise the app on the spot and just for you.

2) Second, the industry has to reconfigure into a set of services of sensors and actuators with agent-native ergonomics. My Woodway treadmill is a sensor - it turns physical state into digital knowledge. It shouldn't maintain some human-readable frontend and my LLM agent shouldn't have to reverse engineer it; it should be an API/CLI easily usable by my agent. I'm a little disappointed (and my timelines are correspondingly slower) with how slowly this progression is happening in the industry overall. 99% of products/services still don't have an AI-native CLI yet. 99% of products/services maintain .html/.css docs like I won't immediately look for how to copy paste the whole thing to my agent to get something done. They give you a list of instructions on a webpage to open this or that url and click here or there to do a thing. In 2026. What am I, a computer? You do it. Or have my agent do it.

So anyway, today I am impressed that this random thing took 1 hour (it would have been ~10 hours 2 years ago). But what excites me more is thinking through how this really should have been 1 minute tops. What has to be in place so that it would be 1 minute? So that I could simply say "Hi, can you help me track my cardio over the next 8 weeks," and after a very brief Q&A the app would be up. The AI would already have a lot of personal context, it would gather the extra needed data, it would reference and search related skill libraries, and maintain all my little apps/automations.

TLDR: the "app store" of a set of discrete apps that you choose from is an increasingly outdated concept all by itself. The future is services of AI-native sensors & actuators orchestrated via LLM glue into highly custom, ephemeral apps. It's just not here yet.
913 replies · 1K reposts · 12.1K likes · 1.9M views
Adel Zaalouk @ZaNetworker
@clairevo @GergelyOrosz They had little time to invest in learning their way to an outcome; now that's way faster and less costly?
0 replies · 0 reposts · 0 likes · 48 views
claire vo 🖤 @clairevo
@GergelyOrosz I see this a lot too. I think it's 3 things:
- they can get a lot out of LLMs because they're skilled
- they're setting an example on purpose
- they've been starved too long of the fun of writing code, and the joy is back
3 replies · 0 reposts · 82 likes · 11.3K views
Gergely Orosz @GergelyOrosz
One interesting observation: inside a Big Tech, the internal token leaderboard is dominated by… very very experienced engineers. Distinguished-level folks who you rarely saw code day to day before LLMs. Also, some VPs (!!)
87 replies · 36 reposts · 1.5K likes · 269.2K views
Adel Zaalouk reposted
Red Hat AI @RedHat_AI
There are many great agent frameworks today. What's missing? A common contract that lets them run anywhere. That's where Llama Stack comes in. Here's how, in a 🧵:
1 reply · 2 reposts · 16 likes · 680 views
Adel Zaalouk @ZaNetworker
@pashmerepat Do check out Llama Stack (github.com/llamastack) as well; it exposes OpenAI compatibility across a range of APIs, including file_search, responses, vectorstores, etc.
0 replies · 0 reposts · 0 likes · 4 views
pash @pashmerepat
everyone's constantly posting the meme about having a bunch of different agent rule files while the real nightmare continues to be totally ignored:
- openai just dropped responses api that breaks every single existing agent architecture
- anthropic format was the universal translator (superset of openai completions), now it's obsolete
- every provider has different message shapes, tool calling patterns, reasoning hydration
- cline has anthropic baked into disk storage, 30+ providers, core interfaces
- migration would total architectural hell

who gives a fuck about .cursorrules vs agents md when your reasoning traces disappear between api calls and your entire codebase assumes one message format that's no longer the superset? can we please standardize on a future proof llm API standard?
60 replies · 58 reposts · 703 likes · 205.4K views
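The format-divergence complaint above is usually solved with an adapter layer. A minimal sketch, with deliberately simplified field names (not the real OpenAI or Anthropic payload schemas), showing why normalizing at the boundary beats baking one vendor's message shape into core interfaces and disk storage:

```python
def to_internal(message: dict, provider: str) -> dict:
    """Normalize one assistant message into a single internal shape.

    Illustrative only: each provider gets one small adapter here, so
    the rest of the codebase depends on the internal shape, not on
    whichever vendor format happened to be the superset this year.
    """
    if provider == "openai_completions":
        # completions-style content is a single string
        return {"role": message["role"],
                "parts": [{"type": "text", "text": message["content"]}]}
    if provider == "anthropic":
        # anthropic-style content is already a list of typed blocks
        return {"role": message["role"], "parts": list(message["content"])}
    raise ValueError(f"no adapter for provider: {provider}")
```

When a provider ships a breaking change, only its adapter moves; storage and core logic keep the internal shape.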
Sundar Pichai @sundarpichai
To MCP or not to MCP, that's the question. Lmk in comments
1K replies · 405 reposts · 7.2K likes · 2.2M views
Adel Zaalouk @ZaNetworker
Long-context models are impressive, but they haven't killed RAG. In fact, RAG, especially the retrieval part of it, is *more* important than ever for building truly intelligent AI systems/applications. Here is a post that explains why, covering:

👉 Beyond "Naive" RAG: The original vision (as defined in the original RAG paper), and why fine-tuning (especially embeddings) is key.
👉 Long Context ≠ No Retrieval: Why longer context isn't always better.
👉 Agentic RAG: Used to handle query complexity and dynamic, multi-step knowledge discovery and acquisition.
👉 Real-World Applications: From customer support to healthcare, see examples of RAG in action.

The "R" in RAG is essential for controlling:
👉 Precision/freshness: Combats knowledge cutoffs and provides targeted retrieval (hint: fine-tuning helps here).
👉 Efficiency: Smart chunking beats brute-force large context to produce accurate results in some use cases.
👉 Adaptability: Agentic RAG handles complex, multi-step information needs with retrieval as a tool.
👉 Focus: Good retrieval provides focus. Without it, you have a distracted generator (it's coffee for LLMs and the right hand for agents)!

Read the full post for more details 👇 thetechnomist.com/p/rag-reigns-s…
0 replies · 0 reposts · 2 likes · 71 views
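The "Focus" point above reduces to a simple pipeline shape. A toy sketch (word overlap standing in for real embedding retrieval plus reranking; function names are illustrative): hand the generator a small, focused context instead of stuffing everything into a long window.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swapping `retrieve` for an embedding search, a fine-tuned retriever, or an agentic multi-step search changes quality, not the shape of the pipeline.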
Adel Zaalouk reposted
8VC @8vc
Our friends @Meta are taking part in the 8VC Llama Stack Innovation Challenge! First challenge: 3/7-4/19. Use Llama Stack to build desktop, field, and edge AI applications that run locally for privacy and performance. Join now: llamastackchallenge.com
15 replies · 62 reposts · 167 likes · 70K views
Adel Zaalouk @ZaNetworker
@petergyang @superwhisper When you are walking, do you want to brain-dump, or write by conversing, or something else? If it's the former, you need the two steps anyway, no? Step 2 can probably be faster with an _assistant_, though.
0 replies · 0 reposts · 2 likes · 228 views
Peter Yang @petergyang
Ok this is how I do "vibe writing" today:
1. I go on a walk and talk to @superwhisper
2. I paste it into Claude with my prompt that has past writing examples to edit

But ideally, this happens in one step with no copy and paste.
22 replies · 6 reposts · 185 likes · 17.7K views
Peter Yang @petergyang
Who's building the Cursor for writing? I want to practice "vibe writing": just speaking stream-of-consciousness thoughts while AI cleans it all up for me.
377 replies · 63 reposts · 2.1K likes · 401.1K views