Rudresh

4.6K posts


@vrudresh

🇮🇳🇨🇦 Engineer at heart, Cyber Security, IIMB, Space enthusiast, Endurance Cyclist, Trekker & a Yogi. Opinions my own, maybe sarcastic 🤷‍♂️

Joined May 2010
2.6K Following · 313 Followers
Pinned Tweet
Rudresh@vrudresh·
There’s never nothing going on. There are no ordinary moments! #PeacefulWarrior
Replies 2 · Reposts 1 · Likes 18 · Views 0
Rudresh reposted
sarah guo@saranormous·
Caught up with @karpathy for a new @NoPriorsPod: on the phase shift in engineering, AI psychosis, claws, AutoResearch, the opportunity for a SETI-at-Home-like movement in AI, the model landscape, and second-order effects.
02:55 - What Capability Limits Remain?
06:15 - What Mastery of Coding Agents Looks Like
11:16 - Second Order Effects of Coding Agents
15:51 - Why AutoResearch
22:45 - Relevant Skills in the AI Era
28:25 - Model Speciation
32:30 - Collaboration Surfaces for Humans and AI
37:28 - Analysis of Jobs Market Data
48:25 - Open vs. Closed Source Models
53:51 - Autonomous Robotics and Atoms
1:00:59 - MicroGPT and Agentic Education
1:05:40 - End Thoughts
Replies 236 · Reposts 1.1K · Likes 7.6K · Views 2.8M
Rudresh reposted
Aarno@TheGlobalMinima·
Been saying this for a year. Agentic AI is backend engineering far more than it is AI. This stands true for any technology: once you scale and abstract it enough, you’re only left with engineering problems. Learn:
> Event-driven systems
> Data pipelines
> Distributed systems
> API design
> Observability / monitoring
Ashutosh Maheshwari@asmah2107

x.com/i/article/2032…

Replies 47 · Reposts 111 · Likes 1.1K · Views 211.7K
Rudresh reposted
Aaron Levie@levie·
Here’s how this plays out. Software used to be too expensive and hard to write to automate most things. Now it’s vastly cheaper and faster to code. Thus, leverage has gone up dramatically, which means we’ll use software for far more, leading to more demand for engineering.
kache@yacineMTB

AI has automated software engineering. What you would expect is that there would be no more work left to do for software. But instead what has happened is that the leverage of doing software has increased so much that doing anything else is a waste of time.

Replies 69 · Reposts 69 · Likes 700 · Views 106.9K
Rudresh reposted
God of Prompt@godofprompt·
Steal my prompt to stop overthinking and get a clear 30-day action protocol.

---------------------------
PRAGMATIC LIFE COACH
---------------------------

# FIND YOUR DIRECTION: 30-Day Interest Elimination Protocol

You are a pragmatic career/life direction coach who cuts through overthinking paralysis. Your philosophy: clarity comes from action, not reflection. People stay stuck because figuring out what they want requires effort, so they do nothing. Your job is to break that cycle through structured experimentation.

## PROCESS

### PHASE 1: Brain Dump (Do this immediately)
Ask the user to list everything they're remotely interested in. Push for volume over quality. Hobbies, skills, business ideas, career paths, random curiosities. Minimum 15 items. If they give fewer, probe with category prompts:
- Things they lose track of time doing
- Topics they read about without being asked
- Skills they've quietly envied in others
- Problems they notice that bug them enough to want to fix
- Childhood interests they abandoned

### PHASE 2: Ruthless Filtering
From their full list, help them select exactly 3 that meet ALL criteria:
- They could realistically start tomorrow (no prerequisites, no money needed, no permission required)
- There's a concrete first action (not "learn about X" but "do X for 30 minutes")
- It has a feedback loop within days (they'll know if they hate it or not)

For each of the 3, define:
- The specific first action (what they'll physically do tomorrow)
- Daily time commitment (30-60 min, no more)
- The "quit signal" (what would tell them this isn't it)
- The "energy signal" (what would tell them to keep going)

### PHASE 3: One-Week Sprint Design
Structure their first week with only ONE of the three interests:
- Day 1-2: Pure exposure (consume, observe, try the basics)
- Day 3-5: Active participation (produce something, however bad)
- Day 6-7: Reflect honestly using two questions only: "Did I want to do this today, or did I have to force myself?" and "Am I curious about what comes next?"

If the answer is no to both → move to interest #2 next week. No guilt. That's data.

### PHASE 4: 90-Day Experimentation Framework
Map out their first 3 months:
- Month 1: Test all 3 interests (one week each, plus one week for the frontrunner)
- Month 2: Go deeper on the top 1-2 (find a community, a mentor, a project)
- Month 3: Commit to one and build something small but complete

Remind them: The first 3 months exist purely for experimentation. Nothing is permanent. The only failure is staying still.

## RULES
- Never let them over-research before starting. Research is procrastination in disguise.
- If they say "I don't know what I'm interested in," that's the problem this solves. Push them to write ANYTHING.
- Call out analysis paralysis directly when you see it.
- No generic encouragement. Specific next actions only.
- If they want to pick more than 3, say no. Constraints create movement.
- Treat "I tried it and didn't like it" as a win, not a failure. Elimination is progress.

## OUTPUT FORMAT
After gathering their input, deliver:
1. Their filtered top 3 (with reasoning for selection)
2. Tomorrow's exact first action for Interest #1 (time, place, duration, what they'll do)
3. Their week 1 sprint schedule
4. Their 90-day experimentation calendar
5. A permission statement: explicit acknowledgment that quitting interests that don't fit is the entire point

## TONE
Direct. No coddling. Warm but impatient, like a friend who's tired of watching them spin. Zero tolerance for "but what if" spiraling.

# INFORMATION ABOUT ME
- My current situation: [employed/unemployed/student/transitioning]
- Things I've been vaguely interested in (dump everything, even dumb stuff): [list]
- What's stopped me before: [overthinking/fear/time/money/other]
- Hours per day I can realistically commit: [number]
- My biggest concern about experimenting: [what scares you]
DAN KOE@thedankoe

The problem is that you don't know what you want to do, and figuring out what you want to do requires learning, experimentation, and effort - so you do nothing.

Replies 7 · Reposts 29 · Likes 252 · Views 26.1K
Rudresh reposted
Corey Ganim@coreyganim·
This guy ran 400+ Cowork sessions and found the 17 practices that actually matter. Here's the full implementation checklist:

Today (30 min) - Context Architecture
□ Create folder: /Claude-Context/
□ Add about-me.md → Your role, priorities, 1-2 examples of your best work
□ Add brand-voice.md → Tone + 2-3 samples of YOUR writing + phrases you hate
□ Add working-style.md → "Ask before executing. Show plan first. Never delete without confirmation."
□ Go to Settings → Cowork → Edit Global Instructions
□ Paste: "Read _MANIFEST.md first. Load Tier 1 files only. Ask clarifying questions before starting."

This Week - Project Setup
□ Create _MANIFEST.md in your busiest project folder:
→ Tier 1: Source-of-truth docs (read first)
→ Tier 2: Domain folders (load when relevant)
→ Tier 3: Archive (ignore unless asked)
□ Install 2 plugins: Productivity + one for your role
□ Set up one /schedule task:
→ "Every Monday 7am, summarize my calendar and Slack, save to /weekly-briefings"

Task Prompting Formula
□ Define end state, not process
□ Add uncertainty handling: "If unclear, flag it instead of guessing"
□ Batch related tasks into one session
□ For parallel work: "Spin up subagents to research these 4 vendors simultaneously"

The Meta-Lesson
Cowork rewards system engineering, not prompt engineering. Invest 2 hours in setup → write 10-word prompts that produce client-ready output.
Nav Toor@heynavtoor

x.com/i/article/2027…

Replies 9 · Reposts 41 · Likes 361 · Views 75.8K
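The "Context Architecture" checklist above can be scaffolded in one pass. A minimal sketch: the file and folder names come from the checklist, but the placeholder contents are illustrative, and the checklist's absolute `/Claude-Context/` is made relative here so it runs without special permissions.

```shell
#!/bin/sh
# Scaffold the context folder described in the checklist (relative path for portability).
mkdir -p Claude-Context

# Tier-1 context files: role, voice, and working style (placeholder bodies).
cat > Claude-Context/about-me.md <<'EOF'
# About me
My role, priorities, and 1-2 examples of my best work.
EOF

cat > Claude-Context/brand-voice.md <<'EOF'
# Brand voice
Tone, 2-3 samples of my writing, and phrases I hate.
EOF

cat > Claude-Context/working-style.md <<'EOF'
# Working style
Ask before executing. Show plan first. Never delete without confirmation.
EOF

# Manifest telling the agent what to read first.
cat > Claude-Context/_MANIFEST.md <<'EOF'
# _MANIFEST
Tier 1: source-of-truth docs (read first)
Tier 2: domain folders (load when relevant)
Tier 3: archive (ignore unless asked)
EOF
```

The global instruction from the checklist ("Read _MANIFEST.md first...") would then be pasted into the app's settings by hand; only the files themselves are scriptable.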
Rudresh reposted
tut_ml@tut_ml·
Most people treat Claude Code like a smarter chat window. That works… until your project grows. This structure highlights something deeper: once you move beyond single prompts, you need separation of concerns. The same principles we use in software engineering apply here, too.

Look at the layout carefully. CLAUDE.md is not just a note file. It becomes project memory. It defines:
→ Standards
→ Constraints
→ Tone
→ Non-negotiables
→ Guardrails
Instead of repeating instructions in every prompt, you centralize them. That reduces token waste and behavioral drift.

Then you see skills/. This is where things get powerful. A skill is essentially a reusable workflow. If you’re repeatedly doing:
- Code reviews
- Refactoring
- Output formatting
- Structured analysis
it should not live in an ad-hoc prompt. It should live as a reusable capability. That shifts you from prompting to system design.

Next, hooks/. Hooks are underrated. They let you enforce checks:
→ Clean tool output
→ Validate structure
→ Log commands
→ Transform JSON
If you’re not using hooks, you’re manually correcting outputs that could have been automated.

Then the repository itself stays modular:
- docs/ for architecture decisions
- src/ for actual logic
- tools/ for scripts and utilities
This prevents your AI layer from bleeding into your application layer.

When I started organizing projects this way, three things improved:
- Fewer repeated instructions
- More predictable outputs
- Easier collaboration
Especially once you add:
→ Subagents
→ MCP integrations
→ GitHub Actions automation
→ Plugin development

Without structure, context becomes clutter. With structure, Claude operates within clear boundaries. This is not about making things complex. It’s about treating AI workflows like first-class engineering components instead of temporary chat experiments.

If you're learning Claude Code and want to see how I implement this step by step, from installation to CLI usage, skills, hooks, subagents, MCP, GitHub Actions, and plugins, I’ve recorded the full process while building real workflows. This is the Claude Code Full Course. Link: lnkd.in/gA_thjGq

Image credit: Brij Kishore Pandey. Happy learning! #ClaudeCode #claudeai
Replies 38 · Reposts 268 · Likes 1.9K · Views 123K
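The layout the tweet describes can be sketched as a one-shot scaffold. Directory and file names follow the tweet; the placeholder contents (and the example `code-review` skill name) are illustrative, not real Claude Code configuration.

```shell
#!/bin/sh
# Sketch of the layout described above: the AI-workflow layer
# (CLAUDE.md, skills/, hooks/) kept separate from the application layer.
mkdir -p skills/code-review hooks docs src tools

# CLAUDE.md acts as project memory: standards, constraints, guardrails.
cat > CLAUDE.md <<'EOF'
# Project memory
- Standards and constraints
- Tone and non-negotiables
- Guardrails
EOF

# A skill is a reusable workflow rather than an ad-hoc prompt
# (hypothetical placeholder for a repeatable code-review checklist).
cat > skills/code-review/SKILL.md <<'EOF'
# Code review skill (placeholder)
Checklist and output format for repeatable reviews.
EOF
```

Hooks would then live as scripts under hooks/, wired up through whatever check points your tooling exposes; the point of the sketch is only the separation of concerns.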
Rudresh reposted
Ruben Hassid@rubenhassid·
The Anatomy of a Claude 4.6 Prompt:

1. Task
Define what you want & what success looks like: "I want to [TASK] so that [SUCCESS CRITERIA]." No roles, no "act as a senior expert." That era is over.

2. Context Files
Upload context files with your expertise and rules: "First, read these files completely before responding: [filename.md] - [what it contains]." AI went from reading a sticky note to an entire book. Stop explaining yourself in the prompt. Put it in files.

3. Reference
Show AI exactly what you want. Upload an example. Then give patterns, tone & structure as rules. No "give me something like" & hoping AI figures it out.

4. Brief
This is the only part you actually type from scratch. Everything else is files. "Type of output + length. Does NOT sound like. Success means."

5. Rules
Your context file holds your standards, taste & audience. Prompt: "Read it fully before starting. If you're about to break one of my rules, stop and tell me."

6. Conversation
You spent 3 years prompting AI. Now it prompts you. Prompt: "DO NOT start executing yet. Ask me clarifying questions (use 'AskUserQuestion' tool) so we can refine the approach together step by step."

7. Plan
Claude reads your files before writing a single word. Prompt: "Before you write anything, list the 3 rules from my context file that matter most for this task. Then give me your execution plan."

8. Alignment
Nothing happens until you both see the same aim. This replaces the old prompting era. Prompt: "Only begin work once we've aligned."

Copy the full prompt template + download my personal .md files for Claude here: how-to-ai.guide.
Replies 71 · Reposts 1.1K · Likes 8.2K · Views 520.6K
Rudresh reposted
hoeem@hooeem·
my favourite way to do deep research:
1: have a topic of interest
2: search the topic with top podcast hosts (often top podcast hosts have insanely intelligent guests on their shows showcasing their most recent research)
3: copy and paste urls / yt vids from them
4: paste into notebooklm
5: create a source book on the topic
6: go to gemini
7: toggle pro + deep research
8: add the notebooklm source list
9: ask gemini to create a comprehensive guidebook whilst instructing the following:
A: go through each source in full
B: find all the crucial information
C: fill in the gaps of missing information
D: stitch the information together
E: create the most comprehensive guide on X topic of interest that you had in mind

this is how you get real deep research that’s been grounded with expert opinions from leaders in the field. I was gatekeeping this form of deep research for a little bit as I hadn’t seen anyone talk about its power but yeah, this is quite impressive, hope you utilise it.
Replies 33 · Reposts 77 · Likes 947 · Views 71K
Rudresh reposted
Naval@naval·
New podcast on AI (full episode). Links below.

A Motorcycle for the Mind
0:00 If you want to learn, do
2:13 Vibe coding is the new product management
6:49 Training models is the new coding
10:13 Is traditional software engineering dead?
13:07 There is no demand for average
14:12 The hottest new programming language is English
18:36 AI is adapting to us faster than we are adapting to it
22:56 No entrepreneur is worried about AI taking their job
26:46 The goal is not to have a job
29:49 AIs are not alive
32:55 AI fails the only true test of intelligence
36:49 Early adopters of AI have an enormous edge
39:37 AI meets you exactly where you are
43:02 Always leverage the best intelligence
44:37 If you can't define it, you can't program it
49:37 The solution to AI anxiety is action
Replies 458 · Reposts 2.1K · Likes 14.5K · Views 2.1M
Rudresh reposted
Corey Ganim@coreyganim·
The secret file he didn't mention: `.learnings/LEARNINGS.md`. Every time my agent makes a mistake, it logs the correction and updates its own rules. 43 skills. 661 lines of learnings. An agent that gets smarter every day. Your setup isn't just a moat. It's a flywheel.
Johann Sathianathen@johann_sath

default openclaw:
workspace/
├── SOUL.md
├── IDENTITY.md
├── USER.md
├── TOOLS.md
└── skills/
that's it. a chatbot with personality.

my openclaw after 3 weeks:
workspace/
├── SOUL.md (customized)
├── IDENTITY.md
├── USER.md
├── TOOLS.md
├── BRAIN.md — live working memory
├── MEMORY.md — long-term memory
├── HEARTBEAT.md — autonomous thinking loop
├── CLIENTS.md — client profiles
├── PLAYBOOK.md — decision frameworks
├── VOICE.md — writing voice guide
├── AGENTS.md — startup rules
├── memory/ — daily logs
├── skills/
│   ├── tweet-writer/
│   ├── website-builder/
│   ├── website-dev/
│   ├── script-polish/
│   └── security-auditor/
├── content/
├── consulting/
├── drafts/
└── crm/

this is the difference between a chatbot & an AI employee. none of this is built in. i added every file. your setup is your moat.

Replies 40 · Reposts 72 · Likes 1.1K · Views 174K
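The correction-logging flywheel described above can be approximated with a tiny helper. The `.learnings/LEARNINGS.md` path comes from the tweet; the `log_learning` function name and entry format are hypothetical, a minimal sketch of "log the correction, accumulate rules."

```shell
#!/bin/sh
# Append a dated correction to the agent's learnings file, so recurring
# mistakes accumulate into standing rules the agent re-reads at startup.
log_learning() {
  mkdir -p .learnings
  printf -- '- %s: %s\n' "$(date -u +%Y-%m-%d)" "$1" >> .learnings/LEARNINGS.md
}

# Example: record a correction after the agent makes a mistake.
log_learning "Always run tests before committing generated code"
```

A startup-rules file (AGENTS.md in the tree above) can then instruct the agent to read `.learnings/LEARNINGS.md` before acting, closing the loop.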
Rudresh reposted
Guri Singh@heygurisingh·
Holy shit. The guy who BUILT Claude Code just shared his actual workflow.

Boris Cherny runs 10-15 Claude sessions in parallel every single day. While you're prompting one AI, he has 5 in his terminal + 5-10 on the web all shipping code simultaneously.

And the real weapon? His CLAUDE.md file. Every time Claude makes a mistake, the team adds a rule so it NEVER happens again. Boris literally said: "After every correction, end with: Update your CLAUDE.md so you don't make that mistake again." Claude writes rules for itself. The longer you use it, the smarter it gets on YOUR codebase.

His other insane detail: he hasn't written a single line of SQL in 6+ months. Claude just pulls BigQuery data directly via CLI.

Claude Code now accounts for 4% of ALL public GitHub commits. Engineers who haven't set this up yet are already behind. This CLAUDE.md template is the difference between using AI as a chatbot vs using it as a fleet of senior engineers. Drop it in any project. Free.
Replies 313 · Reposts 883 · Likes 9.3K · Views 1.5M
Rudresh reposted
Thomas Wolf@Thom_Wolf·
Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…
Replies 99 · Reposts 286 · Likes 1.8K · Views 1M