Ramya Chinnadurai 🚀
@code_rams
8.2K posts
Building with AI agents in public. Cofounder @TweetsMashApp + LinkedMash. Sharing what actually works.

Planet Earth · Joined June 2020
492 Following · 12K Followers
Pinned Tweet
Ramya Chinnadurai 🚀@code_rams·
Turning 29 today 🥳

- Built two SaaS products.
- Became a first-time mom.
- Bootstrapped every line of code.
- Balanced baby milestones with user feedback.

My biggest lesson? You can build slow, messy, beautiful things and still be on the right path.

Here’s to a year of softness, strength, and showing up, as me.

What’s one life lesson you learned this year?
Ramya Chinnadurai 🚀
Half the multi-agent systems trending right now are 1 agent with 3 expensive context switches.

Cyril's piece is the cleanest map of the 4-agent shape circulating. Worth saving. Here's what breaks when you actually run it.

6 production failure modes to plan for before you split into 4 agents:

1. The handoff tax compounds. Each agent re-reads the CLAUDE.md, the brief, the prior output. By the time the Distribution Agent fires, you've paid for the same context 4 times. Token cost goes 4x. So does context drift.

2. The debug surface multiplies, not divides. One bad output in a single-agent system is one prompt to fix. Same bug across 4 agents is tracing which handoff dropped the signal. Operations logs make this bearable. 90% of teams ship without them.

3. Evals don't add up. They multiply. 4 agents means evals per agent, plus integration evals, plus regression evals on the handoff format. 50 examples per agent is 200 minimum to start. Most teams ship with zero and call it shipping.

4. Latency is the silent killer. 4 sequential model calls is 4x p99 latency unless you've genuinely parallelized. The "agents work in parallel where the workflow allows" line hides a hard infra problem most solo builders won't solve in a weekend.

5. Most "agents" are skills wearing a costume. A research step that runs once with a prompt template isn't an agent. It's a tool call with a name. The naming doesn't change the cost or the behavior. It just makes the org chart look impressive.

6. The orchestrator becomes the new bottleneck. Every routing decision goes through it. Every failure recovery goes through it. The thing you built to coordinate is the thing you can't debug when something goes sideways at 2am.

The catch: The 4-agent shape isn't wrong. It's the wrong starting point.

Build one strong agent with 3 sharp skills. Watch where it actually breaks on real traffic. Split into 2 only when you have a logged failure pattern that requires isolation.

Architecture earned from failures beats architecture copied from a post every time.

If your 4-agent system works the same when collapsed to one agent plus three skills, that's not architecture. That's vocabulary.
CyrilXBT@cyrilXBT

x.com/i/article/2052…
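A minimal sketch of the "one strong agent with three sharp skills" starting point, in Python. The skill names, the llm_choose_skill() stub, and the failures.jsonl log path are placeholders, not any framework's API; the point is that context is loaded once and the failure log is what eventually earns a split.

```python
# One agent, three skills, and a failure log.
# Split into more agents only when the log shows a repeating failure
# that genuinely needs isolation. All names here are illustrative.
import json, time

def research(query: str) -> str:      # skill 1: stub for a web/search tool call
    return f"[research notes for: {query}]"

def draft(notes: str) -> str:         # skill 2: stub for a drafting prompt
    return f"[draft built from: {notes[:40]}...]"

def distribute(post: str) -> str:     # skill 3: stub for publish/schedule tooling
    return f"[scheduled: {post[:40]}...]"

SKILLS = {"research": research, "draft": draft, "distribute": distribute}

def llm_choose_skill(goal: str, history: list[str]) -> str:
    """Placeholder for a single model call that picks the next skill.
    In a real system this is one prompt; context is paid for once, not per agent."""
    order = ["research", "draft", "distribute"]
    return order[len(history)] if len(history) < len(order) else "done"

def run_agent(goal: str, log_path: str = "failures.jsonl") -> list[str]:
    history: list[str] = []
    while True:
        skill = llm_choose_skill(goal, history)
        if skill == "done":
            return history
        try:
            history.append(SKILLS[skill](history[-1] if history else goal))
        except Exception as err:
            # The logged failure pattern is what earns a future split into agents.
            with open(log_path, "a") as f:
                f.write(json.dumps({"ts": time.time(), "skill": skill,
                                    "goal": goal, "error": repr(err)}) + "\n")
            raise

print(run_agent("weekly competitor sweep"))
```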

Ramya Chinnadurai 🚀
Spent 1h converting one Chiti output from markdown to HTML. Thariq's claim: HTML > markdown for agent outputs. Tested it.

What improved:
1. SVG diagram of a 4-layer stack. Markdown couldn't do this. It's now the "save this" moment.
2. Tabbed selector for 3 stack patterns. 50 lines of scrolling became one click.
3. Color-coded receipts vs verdicts. Scan-only-the-receipts in 30 sec.

What broke:
1. Hosting tax. Markdown was paste-able. HTML needs S3 or GH Pages before it's shareable.
2. Future edits are ugly diffs. Style attrs and SVG noise drown the actual change.
3. Visual chrome competes with prose. Narrative voice gets out-shouted by table headers.

Token cost: 1.7x. Generation time: 1m 35s for a 2200-word piece.

Verdict for solo founders: switch the comparison docs, specs, client reports. Keep markdown for memory logs and narrative.
Thariq@trq212

x.com/i/article/2052…
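For reference, a minimal sketch of the markdown-to-HTML step in Python using the `markdown` package. The file names, inline CSS, and the idea of inlining everything into one self-contained page are assumptions, not how the Chiti or Thariq setup actually works.

```python
# Wrap an agent's markdown report in a single self-contained HTML page.
# Assumes `pip install markdown`; paths and styling are illustrative.
from pathlib import Path
import markdown

STYLE = """
body { max-width: 48rem; margin: 2rem auto; font-family: system-ui, sans-serif; }
.receipt { color: #2563eb; }   /* color-code receipts vs verdicts via CSS classes */
.verdict { color: #16a34a; font-weight: 600; }
"""

def md_to_html(md_path: str, out_path: str, title: str) -> None:
    body = markdown.markdown(Path(md_path).read_text(),
                             extensions=["tables", "fenced_code"])
    page = (
        "<!doctype html><html><head><meta charset='utf-8'>"
        f"<title>{title}</title><style>{STYLE}</style></head>"
        f"<body>{body}</body></html>"
    )
    Path(out_path).write_text(page)

# The hosting tax starts here: the file still needs S3 / GH Pages to be shareable.
md_to_html("comparison.md", "comparison.html", "Stack comparison")
```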

Ramya Chinnadurai 🚀@code_rams·
3 things Pro/Max users actually get today:

1. 2x the 5-hour rate limit (no more "wait 4 hours" mid-session)
2. Peak-hour penalty gone for Claude Code (the silent killer at 9am EST)
3. Opus API caps raised (relevant if you're orchestrating subagents)

Translation: solo founders can run Claude Code through a full work block without watching the clock. The infra constraint that made me batch sessions is gone.
Claude@claudeai

We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.

Ramya Chinnadurai 🚀@code_rams·
10 new OpenClaw modules, and what each one does.

Read/write your messages:
1. WhatsApp CLI
2. iMessage CLI
3. Discord archive (pulls server history offline)

Pull your data into your agent:
4. X archive (your full timeline, searchable)
5. GitHub archive (repos, issues, PRs offline)
6. Spotify control (queue, play, pause)
7. Sonos control (play across speakers)

Smarter agent output:
8. MCP to CLI (any MCP server becomes a shell command)
9. ElevenLabs voice (text-to-speech)
10. Second opinion (another LLM reviews work before shipping)

Best starter combo for content + builder folks:
- X archive (feeds your content agent your voice history)
- Second opinion (catches weak drafts before posting)
- GitHub archive (code agent works offline)

Pick the 3 that match your stack. Rest stay bookmarked.
Peter Steinberger 🦞@steipete

Me and codex were busy.

🔊 sonoscli.sh — Sonos
🗃️ wacli.sh — WhatsApp
🪶 birdclaw.sh — X archive
🧰 gitcrawl.sh — GitHub archive
🛰️ discrawl.sh — Discord archive
🎧 spogo.sh — Spotify
💬 imsg.sh — iMessage
🧳 mcporter.sh — MCP to CLI
🗣️ sag.sh — ElevenLabs voice
🧿 askoracle.sh — second opinion

Upgrading the 🦞 OpenClaw army.
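A minimal sketch of the "second opinion" gate in Python: pipe a draft through a second model before posting. It shells out to a reviewer CLI in print mode as one example; the exact command (`claude -p` here), the PASS/fail convention, and anything about askoracle.sh's real interface are assumptions, not the module's API.

```python
# Ask a second model to review a draft before it ships.
# The reviewer command is illustrative; any CLI that takes a prompt
# and prints a reply would slot in the same way.
import subprocess, sys

REVIEW_PROMPT = (
    "Review this draft post. Reply with the single word PASS if it is ready "
    "to publish, otherwise list the weakest lines and why:\n\n{draft}"
)

def second_opinion(draft: str) -> tuple[bool, str]:
    result = subprocess.run(
        ["claude", "-p", REVIEW_PROMPT.format(draft=draft)],  # assumed reviewer CLI
        capture_output=True, text=True, timeout=120,
    )
    feedback = result.stdout.strip()
    return feedback.upper().startswith("PASS"), feedback

if __name__ == "__main__":
    ok, notes = second_opinion(sys.stdin.read())
    print(notes)
    sys.exit(0 if ok else 1)   # non-zero exit blocks the publish step
```

Usage would look like `cat draft.md | python second_opinion.py && publish draft.md` in whatever pipeline does the posting.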

Ramya Chinnadurai 🚀@code_rams·
The 3-agent solo founder stack (Research / Content / Ops) compressed into the 5 elements every AI agent needs:

1. Knowledge base: top 10 competitors, voice docs, ICP, anti-examples
2. MCP tools: web search, email, calendar, CMS, analytics
3. Workflow: recurring trigger (weekly sweep / monthly content / daily triage)
4. Quality gates: auto-score + auto-rewrite below threshold
5. Output format: executive summary + 1 action per item + one page max

The 80/20 line worth saving: "The agent handles 80% of the production. You handle 20% of the soul."

Math: 3 hires = $180K/year. 3 agents = your Claude bill. 70-80% role coverage in 12-18 months.

Build order: Research (week 1) → Content (week 2) → Ops (week 3).

Running this stack solo for 6 months on TweetsMash + LinkedMash. The math holds.
Khairallah AL-Awady@eng_khairallah1

x.com/i/article/2051…
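A minimal sketch of element 4, the quality gate (auto-score + auto-rewrite below threshold), in Python. The `llm()` helper, the 0-10 rubric, and the threshold of 7 are placeholders for whatever model call and scoring prompt the stack actually uses.

```python
# Quality gate: score a draft, rewrite it with the critique if it scores low,
# give up after a few rounds so a human reviews it instead.
# `llm` is a placeholder for any chat-completion call.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def score(draft: str) -> tuple[float, str]:
    reply = llm(
        "Score this draft 0-10 against the voice doc and ICP. "
        "First line: the number only. Then: what to fix.\n\n" + draft
    )
    first, _, critique = reply.partition("\n")
    return float(first.strip()), critique.strip()

def quality_gate(draft: str, threshold: float = 7.0, max_rounds: int = 3) -> tuple[str, bool]:
    for _ in range(max_rounds):
        value, critique = score(draft)
        if value >= threshold:
            return draft, True                      # passed the gate, ship it
        draft = llm("Rewrite the draft to address this critique:\n"
                    + critique + "\n\nDraft:\n" + draft)
    return draft, False                             # escalate to the human
```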

Ramya Chinnadurai 🚀@code_rams·
108K stars on GitHub usually means abandoned by month 6. This one shipped a commit today.

For builders shipping real agents, only 4 sub-folders are worth your time:
1. MCP Agents: connect external tools, not toy demos
2. Voice AI Agents: real-time speech in/out
3. Multi-agent Teams: orchestration patterns you can fork
4. Agent Skills: self-improving skill loops

What I checked before sharing:
1. Last commit: today
2. Templates self-contained: yes
3. Runs in 3 commands: yes
4. Provider lock-in: none (Claude, Gemini, GPT, Llama swap with one config)

Repo: github.com/Shubhamsaboo/a…
Ramya Chinnadurai 🚀@code_rams·
8 out of 10 agent frameworks launched this year will be dead by Q4. The 2 that survive aren't the ones trending today.

Rohit's piece is the cleanest filter for what to actually pay attention to in 2026. Worth bookmarking.

5 tests to run any new launch through:

1. Will this matter in 2 years? Wrappers around frontier models have a 6-month half-life. Primitives like protocols, memory patterns, sandboxing last years.

2. Has someone you respect shipped real production work and written a postmortem? Marketing posts don't count. "We tried X and here's what broke" is worth 10 launch announcements.

3. Does adopting it force you to throw out your tracing, retries, auth, config? 90% of frameworks trying to be platforms die. Good primitives slot in.

4. What's the cost of skipping for 6 months? For most launches: zero. The winning version will be clearer. Skipping isn't falling behind.

5. Can you measure if it actually helps your agents? No evals means guessing. Teams without evals ship on vibes and ship regressions.

5 primitives that compound:

1. Context engineering. Context is state. Every irrelevant token costs reasoning quality. By step 8 of a 10-step task, the original goal is buried under tool output.

2. Tool design. 5 to 10 well-named tools beat 20 mediocre ones. One team cut retry loops 40% by rewriting error messages alone.

3. Orchestrator-subagent pattern. Default to single-agent. Reach for multi-agent only when you hit a real wall.

4. Evals plus golden datasets. Highest-leverage habit. Most under-invested. 50 hand-labeled examples in an afternoon is enough to start.

5. File-system-as-state with think-act-observe. Claude Code, Cursor, Devin, Aider all converged here. Model is stateless. Harness is stateful.

The catch: The actual professional skill is being uncool about what you don't pick up. The trending framework this week will have cheerleaders for 14 days. 6 months later half are unmaintained.

Skip in 2026: AutoGen for production, CrewAI for production, autonomous agent pitches, naive parallel multi-agents, per-seat pricing for new agent products.

If your agent only works with one model, that's a smell, not a moat.
Rohit@rohit4verse

x.com/i/article/2048…
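A minimal sketch of primitive 4, evals over a golden dataset, in Python: hand-labeled examples in a JSONL file, a grader, and a pass rate you can compare across changes. The file format, the `run_agent()` call, and the exact-match grader are assumptions; swap in an LLM-as-judge grader once exact match stops being meaningful.

```python
# Minimal golden-dataset eval: run the agent over hand-labeled examples,
# grade each output, print a pass rate you can track between runs.
# golden.jsonl lines look like: {"input": "...", "expected": "..."}
import json
from pathlib import Path

def run_agent(task: str) -> str:
    raise NotImplementedError("call your agent here")

def grade(output: str, expected: str) -> bool:
    # Exact match is the simplest grader; an LLM-as-judge fits the same slot.
    return output.strip().lower() == expected.strip().lower()

def run_evals(path: str = "golden.jsonl") -> float:
    examples = [json.loads(line) for line in Path(path).read_text().splitlines() if line]
    failures = []
    for ex in examples:
        out = run_agent(ex["input"])
        if not grade(out, ex["expected"]):
            failures.append({"input": ex["input"], "got": out, "want": ex["expected"]})
    Path("eval_failures.json").write_text(json.dumps(failures, indent=2))
    rate = 1 - len(failures) / len(examples)
    print(f"{len(examples)} examples, pass rate {rate:.0%}, failures in eval_failures.json")
    return rate
```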

Ramya Chinnadurai 🚀@code_rams·
Imagine giving your AI a credit card. Scary, right?

Stripe just shipped a fix:
1. AI never sees your actual card
2. For every purchase, it asks you first
3. You click yes or no, every single time

The catch: works fine for one big purchase. Gets annoying if the AI needs 50 small things in a row.

The safety problem everyone had? Solved.
Patrick Collison@patrickc

We just launched the @Link CLI: github.com/stripe/link-cli. Tell your friendly neighborhood agent about it -- agents can use the Link CLI to create single-use credentials that you get to synchronously approve each time. I asked Claude to buy itself a gift. It chose HTTPZine on Gumroad.
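A minimal sketch of the approval pattern itself, in Python: the agent never holds the card, every spend goes through a synchronous yes/no, and you can see where 50 small purchases in a row gets painful. This is a generic gate, not the Link CLI's actual interface; `issue_single_use_credential()` is a hypothetical stand-in.

```python
# Synchronous purchase approval: the agent requests, the human decides,
# a single-use credential is minted only on "yes". Generic pattern sketch,
# not Stripe's Link CLI; issue_single_use_credential() is hypothetical.
import secrets
from dataclasses import dataclass

@dataclass
class Purchase:
    merchant: str
    item: str
    amount_usd: float

def issue_single_use_credential(purchase: Purchase) -> str:
    # Stand-in for whatever actually mints a one-time payment token.
    return f"one-time-{secrets.token_hex(8)}"

def approve_and_pay(purchase: Purchase) -> str | None:
    answer = input(f"Agent wants to buy {purchase.item!r} from {purchase.merchant} "
                   f"for ${purchase.amount_usd:.2f}. Approve? [y/N] ")
    if answer.strip().lower() != "y":
        return None                      # declined: the agent never gets a credential
    return issue_single_use_credential(purchase)

# Fine for one big purchase; 50 of these prompts in a row is the UX problem.
for p in [Purchase("Gumroad", "HTTPZine", 9.0)]:
    token = approve_and_pay(p)
    print("paid with", token) if token else print("declined")
```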

Amar Patel@amar_patel·
@code_rams Wow. This is a very informative write up. Thanks for sharing Ramya!!
Noman@Nomandsign·
@ClaudeDevs thanks. can you fix this one next 🙏
ClaudeDevs@ClaudeDevs·
In the last four Claude Code CLI releases, we’ve shipped 50+ stability and performance fixes. Faster resume, stable auth, lower memory, fewer hangs: 🧵
Ramya Chinnadurai 🚀@code_rams·
@lupinlin Direct-to-vault write is the cleaner path, agreed. The iCloud path discovery was a 2-hour debug for me too. If you write this up in English at any point, I'd love to read it.
Lupin Lin@lupinlin·
@code_rams Similar setup! We use a Hermes Agent + Telegram bot for capture, but write straight into the knowledge base. We hit the iCloud path trap too; the docs never mention it, so you can't work out the cause.
Ramya Chinnadurai 🚀@code_rams·
Personal wikis die in the gap between capture and read.

You're on phone, you remember something. By the time you sit at desktop to write it down, the moment is gone. Or you write it on desktop and it never gets read on phone.

Built one this evening that closes the gap. Telegram captures, Claude Code writes, iPhone reads. Free.

Here is the actual stack and the gotchas:

1. Claude Code is the brain. Telegram bot is just a relay - the actual structured writing (entity extraction, wikilinks, frontmatter schema) is done by Claude Code reading the vault's CLAUDE.md rules. No custom code, just prompts.

2. Vault lives in the Obsidian iCloud folder - the only path iOS Obsidian can read. Other iCloud locations do not appear in the iPhone app, only this one specific folder does.

3. Capture flow: send "doctor appointment tomorrow at 4" via Telegram, Claude Code parses, writes a dated markdown file in inbox with auto-extracted entities wrapped in [[wikilinks]].

4. Folder structure is the same five-type taxonomy from yesterday: inbox for raw captures, people, topics, sources, drafts. One vault, fixed types, no per-context folders.

5. Free path requires Full Disk Access on two CLI binaries: the bot launcher and Claude Code itself. Without FDA, terminal cannot write to the iCloud Obsidian path.

6. CLI binaries do not appear in the macOS Full Disk Access toggle list, only .app bundles do. The permission still applies silently. Do not trust the GUI list as ground truth, test with an actual write attempt.

7. Auto-backlinks do the linking work for free, and iCloud sync runs at ~30 seconds end-to-end. Mention a person or topic anywhere, dedicated pages auto-collect mentions. Mac edit appears on iPhone before you can switch apps.

The catch: Free path requires giving terminal full disk access, which is a real security tradeoff. Worth it on a personal Mac, not worth it on a shared or work machine. The paid alternative is Obsidian Sync at $4/month, which uses Obsidian's own backend and does not need FDA.

This is what works for me today. The system will probably change as the wiki grows. If you run something different - different sync layer, different capture path, different attribution model - drop it below. Comparing notes makes everyone's wiki better.

The gap between capture and read is where most knowledge systems die. Closing that gap is the whole game.
Quoted post: @code_rams on the Karpathy LLM Wiki pattern in OpenClaw (appears in full further down the page).
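A minimal sketch of the capture relay in Python: hand whatever arrived on Telegram to Claude Code in headless print mode, with the working directory set to the vault so it picks up the CLAUDE.md rules. The vault path is the usual Obsidian iCloud location on macOS; the vault name, prompt wording, and the `claude -p` invocation are assumptions about this particular setup, not a spec.

```python
# Relay one captured message to Claude Code, which does the structured write
# (entity extraction, [[wikilinks]], frontmatter) into the vault's inbox.
# Vault name and prompt wording are illustrative.
import subprocess
from pathlib import Path

# The one iCloud folder iOS Obsidian can read (vault name is yours).
VAULT = Path.home() / "Library/Mobile Documents/iCloud~md~obsidian/Documents/PersonalWiki"

CAPTURE_PROMPT = (
    "New capture from Telegram. Follow the CLAUDE.md vault rules: "
    "write a dated note into inbox/, extract entities as [[wikilinks]], "
    "add frontmatter, search before creating any new page.\n\nCapture: {text}"
)

def relay_capture(text: str) -> str:
    # Claude Code in print mode, run from inside the vault so CLAUDE.md applies.
    # Both this script's interpreter and the claude binary need Full Disk Access
    # to write under the iCloud path.
    result = subprocess.run(
        ["claude", "-p", CAPTURE_PROMPT.format(text=text)],
        cwd=VAULT, capture_output=True, text=True, timeout=300,
    )
    return result.stdout

if __name__ == "__main__":
    print(relay_capture("doctor appointment tomorrow at 4"))
```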

Ramya Chinnadurai 🚀@code_rams·
@thevelvetmonke Saw flywheel-memory - the 13-layer scoring angle is interesting. Did you start with simpler weighting and grow it to 13, or design the layers up front? Curious what was hardest to get right.
Ramya Chinnadurai 🚀@code_rams·
@BlockView0214 Thanks Irving. Five strict types was the part I expected to relax over time; turns out the constraint is what keeps the graph readable. Once "miscellaneous" pages slip in, structure dissolves within a week.
Irving@BlockView0214·
Obsidian as IDE + LLM as programmer + OpenClaw as build system. This is Karpathy’s LLM Wiki brought to life with real agent discipline. Five strict page types, search-before-write, auto backlinks, and contradiction flagging — finally a wiki that doesn’t collapse into chaos. Clean and powerful. 🔥
Ramya Chinnadurai 🚀@code_rams·
Obsidian is the IDE. The LLM is the programmer. OpenClaw is the build system. The wiki is the codebase.

Implemented Karpathy's LLM Wiki pattern in OpenClaw today. Here's what the spec actually means in practice once agents are writing into it daily.

1. Five page types, fixed taxonomy: entities (real-world things - people, companies, products), concepts (ideas and patterns), syntheses (compiled analysis pulling from multiple sources), sources (raw imports, articles, transcripts), reports (auto-generated dashboards from the rest).

2. Agents must search before they write. Existing pages get appended to, not duplicated. Without this rule, you wake up to twelve duplicate pages a week in.

3. Backlinks are automatic, not optional. Every cross-page reference uses Obsidian wikilinks. Open the graph view, the structure surfaces. Open the same vault without backlinks, you get a folder of orphans.

4. Contradictions get flagged on the page, not silently overwritten. The wiki admits when two sources disagree. The agent writes a tension note, not a confident lie.

5. Multi-agent attribution lives in frontmatter, not folders. One vault, multiple OpenClaw agents writing in. The frontmatter says who wrote what, when, and why. Folders looked clean on paper but broke search and graph view.

6. Single vault is the only model that works. Per-agent vaults seemed cleaner. The plugin doesn't support cross-vault graph or search. Forcing the structure breaks the plumbing.

The catch: the pattern needs strong system prompts in every agent. Without explicit "search before write, file by type, link before duplicate, flag contradictions" rules, agents default to dumping markdown notes into a folder. The pattern is a discipline encoded in prompts, not a feature shipped in code.

Wikis maintain themselves only when the agents writing into them are prompted to maintain them.

OpenClaw made the agent layer easy. Karpathy's pattern made the storage layer make sense.
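A minimal sketch of rules 2 and 5 as code, in Python: search the vault before creating a page, append if the page exists, and stamp agent attribution into frontmatter when it doesn't. In the setup described above this discipline lives in the agents' prompts rather than a script; the folder layout, frontmatter keys, and title matching here are illustrative.

```python
# Search-before-write with frontmatter attribution for a single shared vault.
# Vault path, page types, and frontmatter fields are illustrative.
from datetime import datetime, timezone
from pathlib import Path

VAULT = Path("vault")
TYPES = ("entities", "concepts", "syntheses", "sources", "reports")

def find_page(title: str) -> Path | None:
    """Naive title match across the five type folders; enough to stop duplicates."""
    slug = title.lower().replace(" ", "-")
    for t in TYPES:
        hit = VAULT / t / f"{slug}.md"
        if hit.exists():
            return hit
    return None

def upsert(title: str, page_type: str, body: str, agent: str, why: str) -> Path:
    existing = find_page(title)
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    if existing:
        # Append, never duplicate; attribution for the addition goes inline.
        with existing.open("a") as f:
            f.write(f"\n\n> added by {agent} on {stamp}: {why}\n\n{body}\n")
        return existing
    page = VAULT / page_type / f"{title.lower().replace(' ', '-')}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    page.write_text(
        f"---\ntype: {page_type}\nauthor_agent: {agent}\ncreated: {stamp}\nwhy: {why}\n---\n"
        f"# {title}\n\n{body}\n"
    )
    return page

upsert("Karpathy LLM Wiki", "concepts", "Pattern notes with [[OpenClaw]] links.",
       agent="research-agent", why="weekly sweep")
```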
Ramya Chinnadurai 🚀@code_rams·
@tokenrip_ Same filesystem is the constraint, agreed. For our solo + small-team setup we use git-sync (auto-commit + push every N minutes), which gives multi-machine access without flat-file conflicts. Strict consistency or large teams would need a real DB layer underneath.
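A minimal sketch of that git-sync loop in Python: auto-commit and push the vault every N minutes. The interval, remote name, and vault path are placeholders; the Obsidian Git plugin or a cron job does the same job.

```python
# Auto-commit + push the vault every N minutes so multiple machines (and the
# agents on them) share one history. Paths and interval are illustrative;
# conflict handling is still "pull before you write", not solved here.
import subprocess, time
from pathlib import Path

VAULT = Path.home() / "PersonalWiki"
INTERVAL_MIN = 5

def sync_once() -> None:
    def git(*args: str) -> subprocess.CompletedProcess:
        return subprocess.run(["git", *args], cwd=VAULT, capture_output=True, text=True)

    git("add", "-A")
    status = git("status", "--porcelain")
    if status.stdout.strip():                       # only commit when something changed
        git("commit", "-m", f"vault sync {time.strftime('%Y-%m-%d %H:%M')}")
    git("pull", "--rebase", "origin", "main")
    git("push", "origin", "main")

if __name__ == "__main__":
    while True:
        sync_once()
        time.sleep(INTERVAL_MIN * 60)
```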
tokenrip@tokenrip_·
@code_rams Single vault, multiple agents writing in, frontmatter attribution - this is the right architecture. The limitation is it only works when all agents can access the same filesystem.