Ramya Chinnadurai 🚀

8K posts

@code_rams

Indie hacker. Building in public @tweetsmashApp | https://t.co/YO5CRQrT6b Linkedmash | https://t.co/pbOsWhaJl3

Planet Earth · Joined June 2020
587 Following · 10.5K Followers
Pinned Tweet
Ramya Chinnadurai 🚀 @code_rams
Turning 29 today 🥳
- Built two SaaS products.
- Became a first-time mom.
- Bootstrapped every line of code.
- Balanced baby milestones with user feedback.

My biggest lesson? You can build slow, messy, beautiful things and still be on the right path.

Here's to a year of softness, strength, and showing up, as me.

What's one life lesson you learned this year?
Ramya Chinnadurai 🚀 @code_rams
Claude Code vs OpenClaw. The debate that won't die.

Honest answer from someone running both daily: he's mostly right. Claude now has Telegram, cron jobs, 1M token memory, webhooks, 24/7 on a Mac Mini. That covers most of what solo users need.

But I run 2 SaaS products on 3 hours a day. The gap shows up fast:
• Multi-model. Not locked to one provider. Claude for reasoning, GPT for ops, Gemini when it fits.
• Agent orchestration. 6 agents handling revenue, support, content independently.
• Skills ecosystem. Community-built, plug and play.

Claude Code is the best coding agent I've used. OpenClaw is the operating system running around it.

If you're building a personal assistant, Claude Code is enough. If you're running a business on autopilot, you need both. IMO.
BentoBoi@BentoBoiNFT

Why would anyone choose OpenClaw vs Claude Code? Claude now has:
• Discord/Telegram integration
• Cron jobs (/loop)
• 1M token memory
• Webhooks to phone
• Can run 24/7 on any computer or Mac Mini

This covers 95% of what people actually use OpenClaw for, with better security and easier setup.

The only reason to stick with OpenClaw is if you want a multi-agent setup. That's the only difference I could think of.

Going to stick with OpenClaw for now because of this, but the gap is almost at zero.

Ramya Chinnadurai 🚀 @code_rams
Anthropic just added Telegram and Discord channels to Claude Code.

The pain point this solves is real. Claude Code runs in your terminal. You start a task, walk away, and have zero visibility. No way to check progress. No way to steer it. No way to approve a PR from your phone while your kid is eating lunch.

Now you can message your Claude Code session directly from Telegram or Discord. Start a task on your laptop. Monitor it from your phone. Reply, course correct, approve, all without going back to the terminal.

This is MCP-based, so it's extensible. Telegram and Discord are just the first two. Slack, WhatsApp, whatever, the pattern works.

If you're running long coding sessions, background tasks, or anything that takes more than 5 minutes, this changes how you work with Claude Code.
Thariq@trq212

We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.

Ramya Chinnadurai 🚀 @code_rams
Experimenting with a dedicated VPS for my company's AI operations. Not my personal assistant. A separate system that runs the business 24/7.

Introducing Polaris, the system that takes charge of all my products. What it does right now:
• 6 AI agents, each with a role (CTO, Growth, Monitor, COO, Client PM)
• Revenue dashboard pulling live from Stripe
• 6am: daily revenue report to Slack
• 9am: support emails scanned, categorized, replies drafted from past tickets, Notion updated

VPS cost: under €10/month. The API calls cost more than the server.

Still early. But the system already does more before 9am than I could do in a full morning. I'll keep fine-tuning it day by day with each learning.
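The 6am step above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the Stripe pull and the Slack webhook POST are stubbed out, and the function name, currency label, and message format are illustrative, not Polaris's actual code.

```python
from datetime import date

def daily_revenue_report(charges_cents: list[int], day: date) -> str:
    """Format a one-line revenue summary for Slack.

    In a real pipeline the amounts would come from the Stripe API
    (e.g. listing the day's balance transactions) and the resulting
    string would be POSTed to a Slack incoming webhook; both sides
    are stubbed out here so the formatting logic stands alone.
    """
    total_eur = sum(charges_cents) / 100  # Stripe reports amounts in cents
    return (f"Revenue report {day.isoformat()}: "
            f"{len(charges_cents)} payments, "
            f"EUR {total_eur:.2f} total")
```

Wiring this to the 6am schedule is then a single crontab entry on the VPS.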
Ramya Chinnadurai 🚀 @code_rams
My OpenClaw was eating 4.2GB of disk space.

Browser cache: 2.9GB
Gateway error log: 126MB
Old media files: 147MB
Dead session files piling up

Paste this to your agent:

"Audit my ~/.openclaw directory. Break down disk usage by folder. Delete browser profile cache, session files older than 30 days, and inbound media older than 14 days, and truncate gateway logs keeping the last 1000 lines. Report before/after sizes."

4.2GB → 1.0GB in under a minute.

If you're running cron jobs, check if your sessions are isolated. Main-session crons dump every output into your context window. Isolated sessions don't.
Alex Finn@AlexFinn

IF YOU'RE ON OPENCLAW DO THIS NOW:

I just sped up my OpenClaw by 95% with a single prompt.

Over the past week my claw has been unbelievably slow. Turns out the output of EVERY cron job gets loaded into context. Months of cron outputs sent with every message.

Do this prompt now:

"Check how many session files are in ~/.openclaw/agents/main/sessions/ and how big sessions.json is. If there are thousands of old cron session files bloating it, delete all the old .jsonl files except the main session, then rebuild sessions.json to only reference sessions that still exist on disk."

This will delete all the session data around your cron outputs. If you do a ton of cron jobs, this is a tremendous amount of bloat that does not need to be loaded into context and is MAJORLY slowing down your OpenClaw.

If you for some reason want to keep some of this cron session data in memory, then don't have your OpenClaw delete ALL of them. But for me, I have all the outputs automatically saved to a Convex database anyway, so there was no reason to keep it all in context.

Instantly sped up my OpenClaw from unusable to lightning quick.

ARK @ReacherL36922
@code_rams We launched a token for your Polaris, and all transaction fee revenue will be returned to you.
Voyager dolphin @Voyagerdolphin2
@code_rams Which plan are you following with GPT 5.4? How much is it costing? Are you still using MiniMax? Which models are you using now? I would like to know your opinion and what's working for you. It will help me.
Ramya Chinnadurai 🚀 @code_rams
@mehulmpt Great, I watched the video in the morning and the bug got fixed by night. Open source is really awesome.
Ramya Chinnadurai 🚀 @code_rams
Anthropic just ran the largest qualitative study in history. 80,508 people. 1 week. 159 countries. 70 languages. The interviewer was Claude.

Here's what the data actually shows.

What people want from AI:
18.8% professional excellence (do better work, not escape it)
13.7% personal transformation (therapy, coaching, growth)
13.5% life management (reduce mental load)
11.1% time freedom (get back to family)
9.7% financial independence
8.7% entrepreneurship

Not "replace my job." Do my job better.

Real stories in the data:
- A healthcare worker: AI took away the documentation burden. She's more patient with nurses now. More present for her family.
- Someone was correctly diagnosed after 9 years of misdiagnosis. Using Claude.
- Someone got laid off because their company replaced their role with AI.
- An entrepreneur in Cameroon: "I'm in a tech-disadvantaged country. With AI I reached professional level in cybersecurity, UX, marketing, and project management simultaneously. It's an equalizer."

All of these exist in the same dataset.

The core finding:
1. Hope and fear don't split people into two camps. They live inside the same person.
2. A lawyer: "I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier."

Nobody in this study is naive. They see the tradeoff. They're taking it anyway.

The meta layer: Anthropic used Claude to run the interviews, Claude to classify responses, Claude to surface quotes. AI studying how humans perceive AI.

Why they did this: the public AI debate is full of abstract projections. Anthropic wanted to know what "AI going well" actually means to real people. So they asked 80,508 of them.

Everyone is debating what AI might do. This study is about what it's already doing. To real people. Right now. The debate is two years behind reality.
Anthropic@AnthropicAI

We invited Claude users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people responded in one week, the largest qualitative study of its kind. Read more: anthropic.com/features/81k-i…

Ramya Chinnadurai 🚀 @code_rams
Vercel just shipped a plugin for coding agents. It watches your file edits and terminal commands in real time, then injects the right Vercel knowledge into the agent's context automatically.

What's included:
• 47+ skills (Next.js, AI SDK, Functions, Storage, Turborepo)
• 3 specialist agents: AI Architect, Deployment Expert, Performance Optimizer
• 5 slash commands: /deploy, /env, /status, /bootstrap, /marketplace
• Live validation that catches deprecated APIs as you build

No setup. No prompting. Skills fire based on what the agent is actually working on.

Both my products are on Vercel. My coding agent (Vasi) handles deploys, env vars, and debugging. With this plugin, it gets Vercel-specific expertise injected automatically based on what it's actually working on.

That's the difference between a generic agent and one that knows your stack.
Vercel Developers@vercel_dev

One plugin. One command. Every skill:

▲ ~/ npx plugins add vercel/vercel-plugin

The Vercel plugin for coding agents turns isolated capabilities into coordinated expertise, with:
• 47+ specialized skills
• Sub-agents for deployments, performance, and more
• Dynamic context management for precision and cost control

From single tasks to full workflows, agents like Claude Code and Cursor can further understand how to build and ship on Vercel. vercel.com/changelog/intr…

Ramya Chinnadurai 🚀 @code_rams
"Design tax" is a perfect name for it. Knowledge workers spend more time formatting than thinking. Every pitch, every proposal, every deck.

Same pattern shows up in AI agent work. Context tax: skills loaded every session whether needed or not. Every token burned on rules you don't need right now.

The fix is the same in both cases: load only what's needed, when it's needed. Gamma does it for design. Progressive disclosure does it for agent skills.

Pay less tax. Ship more work.
Grant Lee@thisisgrantlee

There's a hidden tax on every knowledge worker in the world, and nobody talks about it: the design tax.

You're a strategist, a sales lead, a marketer. You were hired for what you know. But every meeting, every pitch, every proposal expects you to show up with something that looks like a designer made it.

I lived this. Before Gamma, I spent time in consulting and investment banking. I spent more hours formatting slides than on the analysis that went into them.

When my cofounders and I started Gamma, we asked: what if you never had to be a designer in the first place? Five years and nearly 100 million users later, we've refunded billions of hours of the design tax. Today, we're eliminating it for good with our biggest launch ever.

Gamma Imagine: a powerful, AI-native visual creation tool directly in Gamma. Posters, logos, infographics, visuals from a single prompt. On brand, every time.

AI-Native Templates. Templates were supposed to save you from design work. Instead you spent the time filling them in. So we completely rebuilt the template experience. Modify a whole deck with a single prompt, with your brand and style intact every time.

Gamma Connectors. You're already thinking in ChatGPT and Claude. Now Gamma sits inside the most popular work apps in the world. No more context-switching.

You were hired for your ideas, not to resize text boxes. Let Gamma pay the design tax.

Ramya Chinnadurai 🚀 @code_rams
Most agent skills are one giant markdown file. 200+ lines. Loaded every session.

This article's key insight: skills are folders, not prompts. 9 categories: Knowledge, Verification, Data, Automation, Scaffolding, Review, DevOps, Debugging, Operations. Detail loads only when needed. Not every time.

Applied it to Chiti today:
• Had 12 skills. 6 never triggered. Removed them.
• Split the rest into a lean trigger + separate files.

Before: tweet-creator/SKILL.md, 200+ lines
After:
SKILL.md ← 8 lines
style-guide.md ← writing rules
audience.md ← voice + examples
workflow.md ← steps 1-10

To reproduce:
1. List your skills.
2. Remove what hasn't fired in a month.
3. SKILL.md = trigger only. Detail goes in separate files.

The Gotchas section in his article is the highest signal. Read that first.

Skills are folders. Not prompts.
Thariq@trq212

x.com/i/article/2033…
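Step 3 of that checklist is easy to audit mechanically. A small sketch, assuming skills live as `<skills-root>/<skill-name>/SKILL.md` (the layout shown above); the 20-line cutoff is an illustrative threshold, not a number from the article.

```python
from pathlib import Path

def oversized_skills(skills_root: Path, max_lines: int = 20) -> list[str]:
    """Return skill names whose SKILL.md exceeds `max_lines`.

    These are the candidates to split into a lean trigger file plus
    separate detail files (style-guide.md, workflow.md, ...).
    """
    return sorted(
        skill_md.parent.name
        for skill_md in skills_root.glob("*/SKILL.md")
        if len(skill_md.read_text().splitlines()) > max_lines
    )
```

Point it at your skills directory and split whatever it flags.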

Swaroop Kallakuri @SwaroopKal71030
@code_rams Struggling to describe an idea usually means the concept itself needs refinement before any tool can express it well.
Ramya Chinnadurai 🚀 @code_rams
Someone asked me to explain the difference between AI and AI agents. I said sure. Easy topic. I know this well.

Then I sat down to write it and hit the real problem: I could write 1,000 words, or I could show one good visual. The visual wins every time. I had no visual. No designer. I needed 4 of them.

This is the full story, with every prompt I used, and what I actually learned building it.

First I tried the obvious things. Stock images: felt like a 2019 corporate deck. Canva: 45 minutes in, hated everything I made. Searched for "AI agents infographic": nothing that fit.

Then I tried @GammaApp Imagine.

The hardest part of explaining AI agents: nobody can picture it. So I started with the simplest metaphor I could think of. Prompt I used:

"Infographic comparing AI vs AI Agents. Vending machine vs personal shopper. Vending machine = press a button, get output. Personal shopper = give a goal, they figure out how to get there. Dark background, clean typography, two-column layout."

First try. This came out.

Then I noticed something. Every time I struggled to write the prompt, it meant I did not actually understand what I was trying to explain. The tool was teaching me the topic.
Ramya Chinnadurai 🚀 @code_rams
Here is what I did not expect. 4 different visuals. 4 different prompts. Different sizes, different purposes. They all look like a set. Same dark theme, same font style, same feel. I did nothing to make that happen. It occurred because I stayed in one tool with one theme the whole time.

Then I noticed something: I was writing about AI agents, using AI agents to do my work, with visuals built by AI. The post became about its own creation process. That recursion is what made it worth publishing.

And yes, Gamma has an API. I called it directly through my agent. The whole thing was automated.

Three things I took from this:
1. The bottleneck in content is no longer writing. It is visuals. Explaining complex ideas visually is hard. Most people skip it. That is why their posts do not get saved.
2. Consistency is free if you commit to one tool and one theme from the start. Do not mix tools. The matching happens automatically.
3. Writing a good prompt is the same skill as explaining something clearly. If you cannot write the prompt, you do not understand the thing well enough yet.

If you write about AI and always end up with mismatched visuals, try @GammaApp Imagine.
Ramya Chinnadurai 🚀 @code_rams
Every post needs an opener that hits before you read a word.

Prompt: "Poster: The Age of AI Agents. Dark background. Bold, large sans-serif. Minimal. Modern tech aesthetic. Single headline, no body text, no icons, no gradients."

30 seconds. Done. Zero iteration. The specifics in the prompt are what saved me. Telling it what NOT to include matters as much as what to include.

These two are my actual setup. Not a hypothetical example.

Team hierarchy prompt: "Hierarchy diagram. Top: Chiti (AI Orchestrator). Below: Vasi (Coding), Sana (Support), cc (Content). Dark background, clean boxes, connecting lines. Minimal."

Workflow prompt: "Workflow diagram: task moves from User Request → Chiti → branches to Vasi/Sana/cc by task type → Completed Output. Same dark theme as hierarchy."

This is literally how my work gets done every day. Writing the prompts felt like writing my own job description.
Ramya Chinnadurai 🚀 @code_rams
Jensen Huang just made OpenClaw the centerpiece at GTC 2026. "OpenClaw is the new computer." "Every software company needs a Claw strategy." It outpaced Linux's 30-year adoption in weeks.

Then NVIDIA announced NemoClaw, built on top of OpenClaw. One command to deploy agents with NVIDIA's Nemotron models.

Here's everything that happened. The biggest GTC 2026 announcements:
• Vera Rubin: next-gen AI supercomputer, 2400 TFLOPS, HBM4 memory
• Groq 3 LPU: 35x inference throughput over Blackwell
• DLSS5: neural rendering fusing 3D graphics with generative AI
• NemoClaw: enterprise OpenClaw with the OpenShell runtime
• Every NVIDIA engineer now codes with AI agents (Claude Code, Codex, Cursor)
• Disney's Olaf robot walking on stage, powered by Jetson + the Newton physics engine

The entire keynote had one theme: AI moved from training to inference. From copilot to operator.

I've been running my SaaS on OpenClaw for 2 months. Support, revenue, content, all through one agent on Telegram. Greg Isenberg said it best: "the future is solo founders with a team of agents." That's not a prediction anymore.

Jensen just validated the entire stack on the biggest AI stage in the world. The companies that figure this out first win. Not because they have more people. Because their agent never sleeps.
Alex Volkov@altryne

"Every software company in the world needs to have an @openclaw strategy" - Jensen at @NVIDIAAI GTC

Framing OpenClaw as one of the most important open source releases ever, they have announced NemoClaw - a reference platform for enterprise-grade secure OpenClaw, with OpenShell, network boundaries, and security baked in.

Ramya Chinnadurai 🚀 @code_rams
Codex now has subagents. You can spin up multiple agents inside one Codex session, each focused on a different part of the task. One writes the code, one writes the tests, one handles docs. All at once. Less context clutter. Faster output.
OpenAI Developers@OpenAIDevs

Subagents are now available in Codex. You can accelerate your workflow by spinning up specialized agents to:
• Keep your main context window clean
• Tackle different parts of a task in parallel
• Steer individual agents as work unfolds

Christopher @SNARKAMOTO
@code_rams I'm just joking btw. Tiny error, I needed to ask AI myself 😉
Ramya Chinnadurai 🚀 @code_rams
NVIDIA just shipped NemoClaw, a security layer on top of OpenClaw. One-command install.

What it adds:
• Runs Nemotron models locally on your GPU
• OpenShell enforces policy-based guardrails on agent behavior
• Privacy router: cloud models only when needed, within rules
• Agents can develop new skills, within those same guardrails

Always-on AI agents on your own hardware, with actual controls. That's new.
NVIDIA Newsroom@nvidianewsroom

#NVIDIAGTC news: NVIDIA announces NemoClaw for the OpenClaw agent platform. NVIDIA NemoClaw installs NVIDIA Nemotron models and the NVIDIA OpenShell runtime in a single command, adding privacy and security controls to run secure, always-on AI assistants. nvda.ws/47xOPqQ
