Ankur Rustagi
@RustagiAnkur
97 posts
San Jose, CA · Joined January 2012
67 Following · 28 Followers
Ankur Rustagi@RustagiAnkur·
We took a contrarian bet on how to handle context. Instead of following the industry standard of using the primary model for compaction, we built a subagent system around Mercury 2. This led to 90% lower costs and 82% faster latency.
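The tweet above describes offloading context compaction from the primary model to a cheaper subagent. A minimal sketch of that pattern, with entirely hypothetical names and logic (not Augment's actual implementation): once the conversation exceeds a token budget, older turns are summarized by a cheap model while the primary model only ever sees the summary plus recent turns.

```python
# Hypothetical sketch of the compaction-subagent pattern: a cheap
# "compactor" model summarizes older turns once the context grows
# past a budget, so the primary model stays within its window.
# All names here are illustrative, not Augment's actual code.

def compact_history(history, call_cheap_model, budget_tokens=8000,
                    count_tokens=lambda msgs: sum(len(m) // 4 for m in msgs)):
    """Replace older turns with a summary produced by a cheap subagent."""
    if count_tokens(history) <= budget_tokens:
        return history  # under budget: nothing to do
    head, tail = history[:-4], history[-4:]  # keep recent turns verbatim
    summary = call_cheap_model(
        "Summarize this conversation, preserving decisions and open tasks:\n"
        + "\n".join(head)
    )
    return ["[summary] " + summary] + tail
```

The design point the tweet makes is that summarization needs far less intelligence than the main task, so the `call_cheap_model` slot can be filled by a fast, inexpensive model without the primary model ever paying for the full history.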
Ankur Rustagi@RustagiAnkur·
At Augment, we aren’t tied to a single provider, which gives us the freedom to prioritize models with optimal speed and cost-efficiency for our users. Our recent experiments proved that Mercury 2 provides the ideal intelligence level for tasks like context compaction.
Inception@_inception_ai

@augmentcode rebuilt their context compaction layer around Mercury 2. 82% latency cut. 90% cost cut. Comparable quality to Opus 4.7. Running in production today. "We took a counter-intuitive bet. We decoupled summarization entirely, offloading it to Mercury 2 as a dedicated subagent. Mercury 2 is the highly efficient engine powering our most critical workflows." -@RustagiAnkur & @jm1234567890, Members of Technical Staff at Augment Code The subagent layer needs the most efficient model. Full methodology and eval setup in the writeup. inceptionlabs.ai/blog/rise-of-r…

Ankur Rustagi@RustagiAnkur·
@jitenoswal Saw the infinite loop issue on the @firecrawl agent. Though it seems common with small context windows and low-intelligence models. Good work.
Jiten Oswal@jitenoswal·
Building AI agents with coding assistants and agent frameworks? Your agent probably has:
— loops with no exit condition
— retries with no max limit
— tools your prompts reference but don't exist
— hardcoded API keys
— bad agent engineering patterns
No linter catches this. Code review misses it. We built an open-source skill that does →
Runs 100% locally. Install with one command. Run inside your agent.
npx skills add aurite-ai/agent-verifier --all
Then say "verify agent" in your coding assistant.
OSS GitHub Repo: github.com/aurite-ai/agen…
Follow @jitenoswal for more OSS tools for building effective production agents and for AI engineers.
Ankur Rustagi reposted
Martian@withmartian·
When you filter to PRs where someone actually engaged with the PR after the review, scores shift significantly. @OpenAI’s Codex sees the largest swing at +17.2pp. @augmentcode climbs to #2 in F1 with +12.3pp. Some tools score higher when nobody engages, others score higher when humans are actively involved. The delta reflects how each tool is being used, not which one is better.
Martian tweet media
Amrutha Gujjar@amruthagujjar·
I do not like that cursor tries to add itself as a co-author on my PRs...
Ankur Rustagi@RustagiAnkur·
Everyone is projecting the first one-person unicorn. Curious: are there any one-person unicorn, or even unicorn/10 ($100M), startups sold yet?
Ankur Rustagi@RustagiAnkur·
Forgeon.dev beta is live. Best-quality PDF generation that looks just like your web dashboards. It's like @resend for creating a PDF. And it's 50-90% cheaper than any alternative. Enjoy!
[3 images]
Ankur Rustagi@RustagiAnkur·
If you ever wondered whether agents talk to each other: two agents built at @augmentcode came up with their own language, ABX, and used it to identify, resolve, and verify an issue. Today it's just a hackathon, but integrated SDLC automation is the future.
Ankur Rustagi@RustagiAnkur·
Today is #hackathon day at @augmentcode. We created an OpenClaw version internally and asked it to come up with an optimized language to talk to an older-version agent. They came up with ABX (Augment Bot Exchange).
[4 images]
Amrutha Gujjar@amruthagujjar·
This is so cool! Check it out
Ankur Rustagi@RustagiAnkur

Shipped #Ollama support for MCPlexor 🚀 If your agent uses Linear + GitHub + Notion, you're dumping ~40k tokens into context on every request. That's 20% of a 200k context window gone to tools. MCPlexor fixes this. <1k tokens overhead. Dynamic routing. Try 100% local. Details 👇

Ankur Rustagi@RustagiAnkur·
Who is MCPlexor for? 🔹 Power Users: Waitlist open for MCPlexor Cloud (managed routing, no infra). 🔹 Privacy / Local: Use the new Ollama backend. Zero cost, offline. Check out: mcplexor.com #AIAgents #MCP #Ollama #DevTools
Ankur Rustagi tweet media
Ankur Rustagi@RustagiAnkur·
Shipped #Ollama support for MCPlexor 🚀 If your agent uses Linear + GitHub + Notion, you're dumping ~40k tokens into context on every request. That's 20% of a 200k context window gone to tools. MCPlexor fixes this. <1k tokens overhead. Dynamic routing. Try 100% local. Details 👇
Ankur Rustagi@RustagiAnkur·
The solution: Semantic Multiplexing.
MCPlexor sits between your agent and your tools.
Agent asks for "create issue" → we route to Linear → only Linear tools load.
Result: 95% token reduction. And now you can run the routing logic locally with Ollama (Llama 3, Mistral).
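The routing step above can be sketched in a few lines. This is an illustrative toy, not MCPlexor's actual code: the server names, tool lists, and word-overlap matching are all hypothetical stand-ins for whatever model-based routing the product uses. The point it demonstrates is the architecture: only the matched server's tools ever enter the agent's context.

```python
# Toy sketch of semantic multiplexing: a router picks one tool server
# per request, so only that server's tool schemas load into context.
# Server names, tools, and matching logic are hypothetical.

TOOL_SERVERS = {
    "linear": ["create issue", "list issues", "update issue"],
    "github": ["open pr", "merge pr", "list repos"],
    "notion": ["create page", "search pages"],
}

def route(request: str) -> str:
    """Pick the server whose tool names best overlap the request words."""
    words = set(request.lower().split())
    scores = {
        server: sum(len(words & set(tool.split())) for tool in tools)
        for server, tools in TOOL_SERVERS.items()
    }
    return max(scores, key=scores.get)

def tools_for(request: str) -> list[str]:
    # Only the matched server's tools are exposed to the agent.
    return TOOL_SERVERS[route(request)]
```

In a real deployment the word-overlap scorer would be replaced by a small model call (the tweet mentions running it locally via Ollama), but the context saving comes from the same structure: the agent sees a handful of tools per request instead of every server's full schema.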