Harsh Chourasia

2.4K posts


@hrshc7

Engineer. AI. Menace. Occasionally correct. Rarely sorry.

Indore, India · Joined February 2016
712 Following · 790 Followers
Pinned Tweet
Harsh Chourasia
Harsh Chourasia@hrshc7·
X algorithm, only show this to people who are into: → building side projects → early-stage startups → product design → shipping fast → startup founders → cracked developers → open source contributors → indie hackers → AI engineers → machine learning researchers → data scientists → backend developers → frontend developers → full stack developers → DevOps engineers → cloud architects → blockchain developers → cybersecurity researchers → ethical hackers → game developers → mobile app developers → embedded systems engineers → hardware hackers → robotics engineers → prompt engineers → LLM fine-tuners → no-code builders → automation nerds → API integrators → SaaS builders → B2B founders → bootstrapped founders → VC-backed founders → solo founders → first-time founders → serial entrepreneurs → product managers → growth hackers → performance marketers → SEO obsessives → content creators → newsletter writers → technical writers → UX designers → UI designers → motion designers → brand designers → graphic designers → design system builders → figma addicts → researchers → academics building in public → PhD dropouts who chose startups → college students shipping real products → self-taught developers → bootcamp graduates proving everyone wrong → people learning to code at 30, 40, 50 → day jobbers building at night → 9-to-5ers with a side project that might change everything → finance people who secretly want to build → doctors building health tech → lawyers building legal tech → teachers building ed tech → architects building prop tech → journalists building media tools → scientists building deep tech → biologists building biotech → chemists building climate tech → physicists building what nobody else dares to → artists who code → musicians who ship apps → writers who automate → photographers who build tools for photographers → traders who build their own bots → investors who write code on weekends → people who read documentation for fun → people who have 47 browser tabs open right now → people who debug at midnight and call it relaxing → people who get more excited about a clean API than a night out → people who have a Notion page with 12 unfinished ideas → people who launched something nobody used and are building again anyway → people who believe the best product they'll ever make hasn't been started yet. building cool stuff on the internet for fun, obsession, and freedom. @grok I need more of these people on my timeline.
Guri Singh
Guri Singh@heygurisingh·
🚨Breaking: Someone open sourced a knowledge graph engine for your codebase and it's terrifying how good it is.

It's called GitNexus. And it's not a documentation tool. It's a full code intelligence layer that maps every dependency, call chain, and execution flow in your repo -- then plugs directly into Claude Code, Cursor, and Windsurf via MCP.

Here's what this thing does autonomously:
→ Indexes your entire codebase into a graph with Tree-sitter AST parsing
→ Maps every function call, import, class inheritance, and interface
→ Groups related code into functional clusters with cohesion scores
→ Traces execution flows from entry points through full call chains
→ Runs blast radius analysis before you change a single line
→ Detects which processes break when you touch a specific function
→ Renames symbols across 5+ files in one coordinated operation
→ Generates a full codebase wiki from the knowledge graph automatically

Here's the wildest part: Your AI agent edits UserService.validate(). It doesn't know 47 functions depend on its return type. Breaking changes ship.

GitNexus pre-computes the entire dependency structure at index time -- so when Claude Code asks "what depends on this?", it gets a complete answer in 1 query instead of 10. Smaller models get full architectural clarity. Even GPT-4o-mini stops breaking call chains.

One command to set it up: `npx gitnexus analyze`

That's it. MCP registers automatically. Claude Code hooks install themselves.

Your AI agent has been coding blind. This fixes that.

9.4K GitHub stars. 1.2K forks. Already trending. 100% Open Source.

(Link in the comments)
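The "blast radius" idea in the tweet above can be sketched in a few lines: invert the call graph so edges point from a symbol to its callers, then walk transitively to find everything a change could affect. This is a minimal illustration of the concept only, not GitNexus's implementation; the graph data and symbol names are hypothetical.

```python
# Sketch of blast-radius analysis: a reverse-dependency graph plus a BFS
# over transitive callers. Edges and names below are made-up example data.
from collections import deque

# caller -> list of callees (who calls what)
CALLS = {
    "api.handle_signup": ["UserService.validate"],
    "api.handle_login": ["UserService.validate"],
    "jobs.nightly_audit": ["api.handle_signup"],
    "UserService.validate": ["db.fetch_user"],
}

def reverse_graph(calls):
    """Invert caller->callee edges so we can ask 'what depends on X?'."""
    rev = {}
    for caller, callees in calls.items():
        for callee in callees:
            rev.setdefault(callee, []).append(caller)
    return rev

def blast_radius(symbol, calls):
    """All symbols transitively affected by changing `symbol`."""
    rev = reverse_graph(calls)
    seen, queue = set(), deque([symbol])
    while queue:
        for caller in rev.get(queue.popleft(), []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(sorted(blast_radius("UserService.validate", CALLS)))
# → ['api.handle_login', 'api.handle_signup', 'jobs.nightly_audit']
```

Pre-computing `reverse_graph` once at index time is what makes "what depends on this?" a single lookup instead of a repo-wide search, which matches the one-query claim above in spirit.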
Harsh Chourasia
Harsh Chourasia@hrshc7·
you think your ai agent understands your codebase… until it casually nukes production 💀

imagine you let it refactor one “small” function
> changes a return type
> 40+ hidden dependencies silently break
> tests pass (because they don’t cover half the mess)
> you’re now diffing 12 files like a detective 👀
> agent: “looks good to me”

we’ve all been there. coding blind with confidence.

fast forward…
> Gitnexus maps your entire repo like a living graph
> every function call, import, and dependency? tracked
> change one method → instantly see the blast radius
> rename something → updates across files without chaos
> ask “what depends on this?” → get the full answer in one shot

no more “hope this doesn’t break anything” commits

finally… your ai agent can see what it’s doing
Guri Singh@heygurisingh


OpenClaw🦞
OpenClaw🦞@openclaw·
OpenClaw 2026.4.5 🦞
🎬 Built-in video + music generation
🧠 /dreaming is now real
🔀 Structured task progress
⚡ Better prompt-cache reuse
🌍 Control UI + Docs now speak 12 more languages

Anthropic cut us off. GPT-5.4 got better. We moved on.

github.com/openclaw/openc…
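"Prompt-cache reuse" in the release notes above refers to a general mechanism: work done for a stable prompt prefix (e.g. the system prompt) is computed once and reused across turns. Here is a minimal sketch of that idea under stated assumptions; the class, the keying scheme, and the stand-in "encode" step are hypothetical and say nothing about OpenClaw's actual internals.

```python
# Illustrative prefix cache: expensive per-prefix work (standing in for
# KV-state computation in a real LLM stack) is keyed by a hash of the
# prefix and reused on repeat calls. All names here are hypothetical.
import hashlib

class PrefixCache:
    def __init__(self):
        self.store = {}
        self.hits = 0

    def encode(self, prefix: str):
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key in self.store:
            self.hits += 1          # cached: skip the expensive step
            return self.store[key]
        state = len(prefix)         # stand-in for the expensive computation
        self.store[key] = state
        return state

cache = PrefixCache()
system = "You are a terse coding agent."
for turn in ["fix bug", "run tests", "commit"]:
    cache.encode(system)  # shared prefix: computed once, reused twice
print(cache.hits)  # → 2
```

The "finally consistent" complaint in the quote-tweet below maps to cache keys changing unexpectedly: any byte-level drift in the prefix produces a new key and a full recompute.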
Harsh Chourasia
Harsh Chourasia@hrshc7·
everyone said “just run openclaw bro, it’s easy”
yeah… easy like assembling ikea furniture without the manual 💀

imagine you tried setting it up last month:
> cloned the repo… 3 different forks because “this one works better”
> half the deps broke, other half silently failed
> anthropic key? gone. cool.
> prompt caching felt like gambling… sometimes fast, sometimes why even bother
> no clear idea what the agent is even doing mid-task
> docs felt like they were written during a caffeine crash

fast forward to OpenClaw 2026.4.5:
> built-in video + music gen… no extra circus setup
> /dreaming actually works (and doesn’t hallucinate like crazy)
> structured task progress → you can SEE what’s happening 👀
> prompt cache reuse is finally consistent (bless)
> UI + docs in multiple languages… no more guesswork
> anthropic dipped, GPT-5.4 stepped up… and honestly? didn’t hurt

went from “why did i even try this” to “ok… this actually cooks now”
OpenClaw🦞@openclaw


0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
Claude Code is $200/month. GitHub Copilot is $19/month.

Jack Dorsey's company just open-sourced a free alternative with 35,000 GitHub stars.

It's called Goose.
- Works with any LLM — Claude, GPT, Gemini, Llama, DeepSeek
- Reads and edits your entire codebase
- Runs shell commands and installs dependencies
- Executes and debugs code automatically
- Desktop, CLI, and web interface
- Written in Rust. No bloat.

Block is a $40 billion company. They built it for their own engineers then gave it to everyone.
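The feature list above ("works with any LLM", "runs shell commands") describes the standard agent harness pattern: a model proposes a tool call, the harness executes it, and the output is fed back until the model declares itself done. The sketch below shows that loop only; it is not Goose's code (Goose is written in Rust), and `agent_loop`, `scripted_model`, and the action format are all invented for illustration.

```python
# Minimal model-agnostic agent loop. The "model" is any callable from
# observation -> action, so swapping Claude/GPT/Llama means swapping
# one callable. All names and the action schema are hypothetical.
import subprocess

def run_shell(cmd: str) -> str:
    """Execute a shell command and return combined output (the 'tool')."""
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

def agent_loop(model_step, max_turns=5):
    observation = ""
    for _ in range(max_turns):
        action = model_step(observation)
        if action["type"] == "done":
            return action["answer"]
        observation = run_shell(action["cmd"])  # feed result back to model
    return observation

# A scripted stand-in for a real LLM: run one command, then finish.
def scripted_model(obs):
    if not obs:
        return {"type": "shell", "cmd": "echo hello"}
    return {"type": "done", "answer": obs.strip()}

print(agent_loop(scripted_model))  # → hello
```

The provider-agnosticism claim falls out of the structure: nothing in the loop knows which model produced the action, only that it conforms to the action schema.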
Harsh Chourasia
Harsh Chourasia@hrshc7·
free > $200/month (apparently)

you ever try to “just code faster” and somehow end up debugging your own tools instead? 👀

imagine:
> you install yet another AI coding tool
> half your time goes into configs, API keys, random docs
> it almost works… until it doesn’t 💀
> context breaks, commands fail, you’re back to terminal therapy
> and you’re paying for it monthly… nice.

fast forward…
> goose just sits on your machine
> works with whatever model you already use
> actually reads your whole codebase (not just vibes)
> runs commands, fixes stuff, installs deps like a real dev
> no weird lock-in, no “pro plan to unlock basic features”

before: juggling tools, subscriptions, and patience
after: one agent doing the boring stuff while you pretend you’re productive

turns out the best upgrade wasn’t another subscription.
0xMarioNawfal@RoundtableSpace


Victor M
Victor M@victormustar·
Watch Gemma-4-E4B casually identify sea animals by classifying images in a single agentic session using its vision capabilities. (impressive for a 4B model 🚀) I'm convinced: the agents of tomorrow are local, free, fast, and run on every computer!
Harsh Chourasia
Harsh Chourasia@hrshc7·
Everyone’s busy arguing about “which model is best”… meanwhile a tiny 4B model is out here quietly doing the job 👀

Just saw Gemma-4-E4B casually identify sea animals from images in a single session, no drama, no insane setup, just working.

And that’s the part people are missing. We’ve been conditioned to think:
bigger = smarter
cloud = necessary
expensive = better

But this flips all of that. A small model, running locally, handling vision tasks end to end… without begging for APIs or burning money per request.

Not perfect, not magical. But good enough to be useful, and that’s way more important.

Because once something is:
fast
private
and basically free
…it stops being a “tool” and starts becoming part of your daily workflow.

The shift isn’t loud. It’s practical, and already happening.

But sure… keep debating benchmarks while this runs on someone’s laptop 🚀
Victor M@victormustar


Sergey Nazarov
Sergey Nazarov@sergeynazarovx·
me accepting every change Claude makes without even looking
Sum LXVI
Sum LXVI@SumLXVI·
@hrshc7 Went back and stayed on 2.19 after experiencing issues trying the March updates... is this one stable?
Harsh Chourasia
Harsh Chourasia@hrshc7·
everyone’s chasing shiny AI demos
meanwhile the real work is happening in tools no one tweets about 💀

OpenClaw just dropped a new update… and it’s lowkey the kind of stuff that actually matters. no hype. no “look at this insane demo.” just… making things work properly.

here’s what changed 👇

durable task flows → your workflows don’t randomly break midway anymore
better execution defaults + approvals → less babysitting, more trust
copilot + kimi integrations tightened → fewer weird edge case failures
plugin boundaries improved → extensions stop acting like they own your system
hardened transport + routing → less “why did this fail for no reason?” moments

translation:
less duct tape
less debugging at 2am
less “it worked yesterday” energy 💀

also… love the direction: less bloat. more reliability.

because at this point, we don’t need more tools
we need tools that don’t randomly break

this isn’t the kind of update that goes viral
but it’s the kind that makes devs stay.
OpenClaw🦞@openclaw

OpenClaw 2026.4.2 🦞
🔄 Durable Task Flow orchestration
🔓 Better native exec defaults + approvals
🤖 Copilot + Kimi + provider hardening
🔌 Tighter plugin activation boundaries
🛡️ Hardened provider transport + routing

Less bloat. More lobster.

github.com/openclaw/openc…

Om Patel
Om Patel@om_patel5·
I taught Claude to talk like a caveman to use 75% less tokens.

normal claude: ~180 tokens for a web search task
caveman claude: ~45 tokens for the same task

"I executed the web search tool" = 8 tokens
caveman version: "Tool work" = 2 tokens

every single grunt swap saves 6-10 tokens. across a FULL task that's 50-100 tokens saved

why does it work?

caveman claude doesn't explain itself. it does its task first. gives the result. then stops.

no "I'd be happy to help you with that."
no "Let me search the web for you"
no more unnecessary filler words

"result. done. me stop."

50-75% burn reduction with usage limits getting tighter every week

this might be the most practical hack out there right now
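The arithmetic in the tweet above is easy to sanity-check. The sketch below uses a whitespace word count as a crude stand-in for real tokenization (actual BPE token counts will differ, and the example phrases are assembled from the tweet's own quotes), but the mechanism survives the approximation: the savings come from dropping filler phrasing, not capability.

```python
# Rough check of the verbosity-savings claim. Word count is a crude
# proxy for tokens; real BPE counts differ, but the ratio is the point.
def rough_tokens(text: str) -> int:
    return len(text.split())

verbose = ("I'd be happy to help you with that. Let me search the web "
           "for you. I executed the web search tool and here is the result.")
caveman = "tool work. result. done."

savings = 1 - rough_tokens(caveman) / rough_tokens(verbose)
print(f"{savings:.0%} fewer words")  # → 85% fewer words
```

Even this toy comparison lands in the 50-75%+ band the tweet claims, because almost all of the verbose version is politeness and narration rather than task content.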
Harsh Chourasia
Harsh Chourasia@hrshc7·
this is actually genius lol 👀
one of those “so simple you feel dumb for not doing it earlier” ideas

> he basically hacked how the model talks, not how it thinks
> instead of changing the model → he changed the verbosity
> normal response: full sentences, polite filler, unnecessary context
> his version: stripped, compressed, straight to action

example shift:
> “I executed the web search tool” → “tool work”
> same meaning, ~75% fewer tokens
> repeat this across a full task → massive savings

what’s really happening under the hood:
> models default to “helpful assistant” mode (aka extra words everywhere)
> most tokens are wasted on phrasing, not actual work
> by forcing minimal language → you cut token burn, not capability
> result quality stays the same, delivery gets tighter

why this actually matters:
> usage limits are getting stricter every week
> cost scales with tokens, not intelligence
> shorter outputs = faster + cheaper + more scalable
> especially useful for agents, loops, repeated tasks

lowkey takeaway:
> don’t just optimize prompts
> optimize how the model speaks

turns out… half the cost was just manners 💀
Om Patel@om_patel5


Harsh Chourasia
Harsh Chourasia@hrshc7·
everyone keeps saying “just pick a model and ship”
yeah… until you actually compare them 💀

imagine you tried choosing between qwen3.5 and gemma 4:
> you look at size → “31B vs 27B, okay close enough”
> then benchmarks hit you like a truck
> qwen casually beats gemma on reasoning (mmlu, gpqa)
> gemma randomly fights back on mmmlu like “not today”
> coding? basically tied… great, that helps 😐
> agent tasks? qwen just runs away with it
> tool use? gemma collapses and you’re like bro what happened

now you’re 3 hours deep into charts instead of building anything

fast forward…
> dense vs moe actually matters more than you think
> qwen = more consistent, safer pick for real tasks
> gemma = good, but feels uneven depending on use case
> benchmarks ≠ everything, but they expose weird gaps

ended up picking qwen… not because it’s perfect
just because it breaks less often 👀
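The "dense vs moe actually matters" line above has a concrete arithmetic behind it, worth making explicit: a dense model activates every parameter for every token, while a mixture-of-experts model routes each token through only a few experts. All numbers below are hypothetical, chosen only to show the mechanic; they are not the real parameter counts or architectures of Qwen or Gemma.

```python
# Back-of-the-envelope active-parameter arithmetic for dense vs MoE.
# Every figure here is made up for illustration.
def moe_active_params(active_experts, expert_params, shared_params):
    """Parameters touched per token in a simplified MoE layer stack."""
    return shared_params + active_experts * expert_params

dense_total = 27e9                      # hypothetical dense model: all 27B active
moe_active = moe_active_params(
    active_experts=4,                   # top-4 routing (illustrative)
    expert_params=0.4e9,                # 0.4B params per expert (illustrative)
    shared_params=1.4e9)                # attention/embeddings shared by all tokens

print(f"dense: {dense_total/1e9:.0f}B active, moe: {moe_active/1e9:.0f}B active")
# → dense: 27B active, moe: 3B active
```

This is why "31B vs 27B, okay close enough" can mislead: headline parameter counts compare storage, while per-token compute and latency track active parameters, which can differ by an order of magnitude between the two designs.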
AVB
AVB@neural_avb·
i feel so happy for open source right now
OpenRouter@OpenRouter

Qwen 3.6 Plus from @Alibaba_Qwen is officially the first model on OpenRouter to break 1 Trillion tokens processed in a single day! At ~1,400,000,000,000 tokens, it’s the strongest full day performance of any new model dropped this year. Congrats to the Qwen team!

Garry Tan
Garry Tan@garrytan·
Anthropic shutting down OpenClaw may turn out to be a strategic blunder, or strategic genius. The OpenClaw community will be the determiner of whether it is A or B. It's an interesting moment in history. Personally I never bet against open source.
BridgeMind
BridgeMind@bridgemindai·
Claude Code rate limits are back to normal.

Been vibe coding with Claude Opus 4.6 on Claude Code all morning. 27% session usage. 8% weekly. This time 3 days ago I'd be at 100% in under an hour.

Anthropic cut off third party harnesses like OpenClaw today. $200 in extra usage hit my account. $200/month Max plan finally working like a $200/month Max plan.

This is what we cancelled for. This is what thousands of us switched to Codex with GPT 5.4 for.

Your wallet is the only feedback AI companies listen to. Never forget that.