Bharat Suneja

6.5K posts

@bsuneja

Sr. Cloud Architect | AI/ML builder | AI is coming for your legacy infra | Formerly: @Microsoft, Exchange MVP | Claude Code | https://t.co/gpqdY7kyJd

San Francisco, CA · Joined June 2008
162 Following · 1.2K Followers
Pinned Tweet
Bharat Suneja @bsuneja
Built a Windows process monitoring agent this morning. First real-world run surfaced a textbook degradation pattern on a mission-critical app:

🔴 Sustained memory leak — private bytes growing unbounded across sessions
🔴 CPU thrashing correlated with memory pressure (burst spikes to 371%)
🔴 Zombie process — paged out, idle, but holding 3.6GB of committed private memory the OS can never reclaim

The interesting part: the failure signature is consistent and predictable well before the process becomes unresponsive. Working set oscillates while private bytes climb monotonically — classic unmanaged heap leak, GC fighting a losing battle against its own allocations.

Building this into a production-grade Windows monitoring agent — event log correlation, ETW telemetry, process health profiling, and an LLM layer that turns raw signals into plain-English diagnosis and actionable remediation steps. Targeted at power users on critical systems who can't afford unplanned downtime.

Early access interest? Questions? DM @bsuneja. Will share when it takes shape.
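The signature described in the tweet (private bytes climbing monotonically while the working set oscillates) can be checked mechanically over sampled counters. A minimal sketch; `leak_signature` and its thresholds are illustrative assumptions, and on Windows the samples would come from Performance Counters or a library such as psutil rather than the synthetic trace below:

```python
from statistics import mean

def leak_signature(private_bytes, working_set, min_growth_ratio=0.05):
    """Heuristic for the pattern above: private bytes trend steadily
    upward while the working set oscillates around its mean.
    Both arguments are equally spaced samples, in bytes."""
    # Steady climb: each private-bytes sample >= the last, and total
    # growth exceeds min_growth_ratio of the starting value.
    climbing = all(b >= a for a, b in zip(private_bytes, private_bytes[1:]))
    growth = (private_bytes[-1] - private_bytes[0]) / private_bytes[0]
    # Oscillation: the working set crosses its own mean repeatedly
    # instead of tracking the private-bytes climb.
    m = mean(working_set)
    crossings = sum(
        1 for a, b in zip(working_set, working_set[1:])
        if (a - m) * (b - m) < 0
    )
    return climbing and growth >= min_growth_ratio and crossings >= 2

# Synthetic trace: private bytes up ~40% across sessions, working set wobbling.
priv = [3.0e9, 3.1e9, 3.25e9, 3.4e9, 3.6e9, 3.9e9, 4.2e9]
ws   = [1.1e9, 0.9e9, 1.2e9, 0.8e9, 1.15e9, 0.85e9, 1.1e9]
print(leak_signature(priv, ws))  # True for this trace
```

The value of a rule like this is that it fires while the process is still responsive, well before the unbounded commit becomes an outage.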
Bharat Suneja @bsuneja
Opus 4.7 moves at the speed of light while Windows users are stuck translating PowerShell into something it'll understand. No reliable filesystem MCP makes it worse. WSL2 isn't a workaround anymore. It's the escape hatch. @AnthropicAI, Windows deserves better.
Bharat Suneja @bsuneja
There's a version of AI that writes mediocre essays. Then there's sitting with @AnthropicAI's Claude for a few hours and ending up with infra that would have taken you days or weeks alone, and that anticipates failure modes you hadn't considered. The gap between those two experiences is enormous, and I don't think it gets talked about enough.
Bharat Suneja @bsuneja
Spoke three sentences into my mic this morning. Eight seconds later it showed up in a live dashboard with context I never typed. Months of building. One pipeline. Finally end to end. Can't help but mark the moment. 🎯 Back to building.
Bharat Suneja @bsuneja
The market got louder. Mac Mini M4 Pro 64GB: now completely unavailable. No delivery. No pickup at any Bay Area Apple Store. Every AI builder wants the same thing: 64GB unified memory for local LLM inference at a price that doesn't require a CFO's approval. Waiting for M5 Pro. @Apple — soon? 👀 #LocalLLM #AgenticAI
Bharat Suneja@bsuneja

Did every AI founder, PM, and developer figure this out at the same time? Mac Mini M4 Pro 64GB delivery: now August. 😅

Once your AI agent runs 24/7, two things hit fast:
→ Cloud LLM costs don't sleep either.
→ Ask your agent to clean your inbox. Guess where all your email's going?

Local LLM isn't nice to have. It's the architecture. The backorder is just the market saying so out loud. (Or M5 Minis coming @Apple? 👀) #LocalLLM #AgenticAI

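The "costs don't sleep" point is easy to put numbers on. The token volume and blended price below are illustrative assumptions, not current list prices for any provider:

```python
def monthly_cloud_cost(tokens_per_min: float, usd_per_1m_tokens: float) -> float:
    """Rough monthly spend for an agent that runs around the clock."""
    tokens_per_month = tokens_per_min * 60 * 24 * 30  # 30-day month
    return tokens_per_month * usd_per_1m_tokens / 1_000_000

# Assumed: a busy agent averaging 2K tokens/min at a blended $5 per 1M tokens.
print(f"${monthly_cloud_cost(2_000, 5.0):,.0f}/month")  # $432/month
```

At that run rate the amortized cost of a 64GB machine for local inference starts looking like a line item, not a luxury.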
Bharat Suneja @bsuneja
@paulrobichaux @AnthropicAI @claudeai Thanks Paul. The macOS bugs are real too — but at least the tooling was designed with macOS as the target. Windows feels like a port that nobody tested. The WSL2 workaround helps but it shouldn't be the answer. More on navigating this practically next week.
paulrobichaux @paulrobichaux
@bsuneja @AnthropicAI @claudeai This sounds really painful. I can tell you that I see complaints about the native macOS Claude desktop application every single day, and it has a share of bugs (dealing with file locking, to name one of many). This sounds especially hellish though.
Bharat Suneja @bsuneja
@AnthropicAI @ClaudeAI is the most capable AI coding partner I've used. But there's something that needs to be said out loud: Anthropic is failing Windows developers. And with 1.4 billion Windows users on the planet, that's not a niche problem. Thread.
Bharat Suneja @bsuneja
To @AnthropicAI: the model is world-class. The Windows experience is not. This isn't UX papercuts. It's compounding daily friction slowing real product development. 1.4 billion Windows users deserve first-class tooling.
Bharat Suneja @bsuneja
Graphify claimed 71.5x token reduction per query. I ran it on a real Python codebase. Got 7.3x. Still worth installing. Here's the honest field report — benchmarks, Windows friction, and when NOT to use it. #BuildInPublic
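For anyone reproducing this kind of benchmark: the reduction factor is just raw-context tokens over graph-context tokens for the same query. A minimal sketch, using whitespace token counts as a stand-in for a real tokenizer; the 71.5x claim and the 7.3x measurement above came from actual runs, and the counts below are synthetic:

```python
def token_reduction(raw_context: str, graph_context: str) -> float:
    """Tokens needed to answer from raw files vs. from a graph summary.
    Whitespace splitting approximates token counts; swap in a real
    tokenizer (e.g. tiktoken) before publishing numbers."""
    return len(raw_context.split()) / len(graph_context.split())

raw = " ".join(["tok"] * 14_600)   # e.g. dumping whole source files into context
graph = " ".join(["tok"] * 2_000)  # e.g. a targeted subgraph answer
print(f"{token_reduction(raw, graph):.1f}x")  # 7.3x
```

The gap between a vendor's best-case corpus and your own codebase is exactly why field reports like this matter.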
Bharat Suneja @bsuneja
@karpathy @chrisparkX Exactly. The pricing for reading tweets is truly out of sync with the value it delivers. Dropping the idea of X integration was an easy decision.
Andrej Karpathy @karpathy
I think it's a good direction (for Read endpoints, not for Write), I tried to use it for a project ~2 weeks ago but about 30 minutes of hacking around cost me $200, the pricing is imo really excessive. The docs were hard to ingest into agents because it's a lot of individual short pages, I think a big intro markdown doc, or a few of them behind simple curl locations. Also, the current version of docs seems to have no mention of XMCP? Or at least the Search / Grok Assistant seems to say there are 0 mentions of such a thing anywhere in the docs.
Chris Park @chrisparkX
We’ve made major upgrades to X API:
• Pay-Per-Use now GA worldwide
• XMCP Server + xurl for agents
• Official Python & TypeScript XDKs
• API Playground - free realistic simulations
New releases coming will be a game changer. Start building → docs.x.com 🚢
Elon Musk @elonmusk

Try using the X API

Bharat Suneja @bsuneja
Karpathy posts a vision. Someone ships it in 48 hours. Open source. Free. One pip install. The 71.5x token reduction is the headline number — but the real story is what this means for AI agents reasoning over large codebases. No vector DB. No config. Just a knowledge graph that maps everything. Haven't tested it yet. If the 71.5x holds up, this is significant. This is the leverage that makes a solo builder dangerous in 2026. #DeveloperVelocity
Muhammad Ayan@socialwithaayan

🚨 BREAKING: Someone just built the exact tool Andrej Karpathy said someone should build. 48 hours after Karpathy posted his LLM Knowledge Bases workflow, this showed up on GitHub.

It's called Graphify. One command. Any folder. Full knowledge graph. Point it at any folder. Run /graphify inside Claude Code. Walk away. Here is what comes out the other side:
-> A navigable knowledge graph of everything in that folder
-> An Obsidian vault with backlinked articles
-> A wiki that starts at index.md and maps every concept cluster
-> Plain English Q&A over your entire codebase or research folder

You can ask it things like: "What calls this function?" "What connects these two concepts?" "What are the most important nodes in this project?" No vector database. No setup. No config files.

The token efficiency number is what got me: 71.5x fewer tokens per query compared to reading raw files. That is not a small improvement. That is a completely different paradigm for how AI agents reason over large codebases.

What it supports:
-> Code in 13 programming languages
-> PDFs
-> Images via Claude Vision
-> Markdown files

Install in one line:
pip install graphify && graphify install

Then type /graphify in Claude Code and point it at anything. Karpathy asked. Someone delivered in 48 hours. That is the pace of 2026. Open Source. Free.

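For a sense of what a query like "What calls this function?" reduces to, here is a toy caller index over a single Python module using the stdlib ast module. This illustrates the kind of edge a code knowledge graph stores; it is not Graphify's implementation, and a real tool would resolve imports, methods, and cross-file references:

```python
import ast

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each called function name to the set of functions that call it,
    within one module. Only direct name calls are tracked."""
    tree = ast.parse(source)
    edges: dict[str, set[str]] = {}
    for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        for node in ast.walk(fn):
            # Record edge callee -> caller for plain calls like load()
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                edges.setdefault(node.func.id, set()).add(fn.name)
    return edges

src = """
def load(): ...
def parse(x): return load()
def main(): parse(load())
"""
print(sorted(call_graph(src)["load"]))  # ['main', 'parse']
```

Answering the same question from raw files means re-reading every module per query; answering it from a precomputed edge set like this is where the token savings come from.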