Metriqual
@metriqual
35 posts

The Rust-based AI Gateway. Less than 4ms overhead. Zero GC pauses.

India · Joined November 2025
9 Following · 144 Followers
Metriqual reposted
Dikshika Sharma @intermurphi
Tell me the best MLOps tools you guys have used.
1 reply · 1 repost · 4 likes · 134 views
Metriqual @metriqual
I was running 3 LLM providers in production. 3 SDKs, 3 billing dashboards, 3 sets of docs. Every time I wanted to try a new model, I was rewriting integration code.

OpenAI went down one night, and my app just sat there. Dead. No fallback, no routing, nothing. Just a blank screen and angry users. Then I checked my bill at the end of the month. That was a fun morning.

So WE (@theunblunt, @intermurphi, @Palakonweb) built Metriqual. Same OpenAI SDK. Change 2 lines: base URL and API key. Now GPT-4, Claude, Gemini, and Llama all run through one endpoint. I swap models by changing a string. No refactoring.

Provider goes down? It hits my fallback automatically. Rate-limited? Routes around it. My users never know something broke.

I set a daily spend cap. I see every request, every cost, every failure in real time. I A/B test models from the dashboard. I update prompts without deploying. PII gets stripped before it ever touches the LLM.

All the shit I was building in-house for months is gone in 60 seconds.

Free tier. No credit card. 15+ providers. One key. metriqual.com
0 replies · 4 reposts · 9 likes · 553 views
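The automatic fallback behavior described in the tweet above can be sketched in a few lines. This is an illustrative toy, not Metriqual's actual API: the provider names, the `ProviderDown` exception, and the call interface are all assumptions made up for the example.

```python
# Sketch of gateway-style fallback: try providers in order, routing around
# any that are down or rate-limited. Names and interfaces are illustrative.

class ProviderDown(Exception):
    pass

def complete_with_fallback(prompt, providers):
    """Return (provider_name, response) from the first provider that succeeds."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderDown as exc:
            errors[name] = str(exc)  # record the outage and try the next one
    raise RuntimeError(f"all providers failed: {errors}")

# Simulate an OpenAI outage: the request transparently lands on the fallback.
def openai_call(prompt):
    raise ProviderDown("503 service unavailable")

def claude_call(prompt):
    return f"claude says: {prompt}"

name, reply = complete_with_fallback("hello", [("openai", openai_call), ("claude", claude_call)])
print(name, reply)  # the caller never sees the outage
```

A real gateway would add retries, health checks, and per-provider rate-limit tracking on top of this loop, but the failover shape is the same.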
Metriqual @metriqual
Most companies use one LLM provider for everything. That's like running your entire backend on a single server with no monitoring. Metriqual gives you multi-provider routing, observability, and cost tracking out of the box. Built in Rust because your AI gateway shouldn't be the bottleneck.
0 replies · 0 reposts · 1 like · 86 views
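The cost tracking mentioned above boils down to pricing each request by token counts and keeping a per-model ledger. A minimal sketch follows; the prices in `PRICE_PER_1K` are made-up illustration values, not real provider rates.

```python
# Toy per-request cost tracking: price each request by its token counts
# and accumulate a running per-model ledger.

PRICE_PER_1K = {  # (input, output) dollars per 1K tokens -- illustrative only
    "gpt-4": (0.03, 0.06),
    "claude": (0.015, 0.075),
}

def request_cost(model, tokens_in, tokens_out):
    p_in, p_out = PRICE_PER_1K[model]
    return tokens_in / 1000 * p_in + tokens_out / 1000 * p_out

ledger = {}

def record(model, tokens_in, tokens_out):
    ledger[model] = ledger.get(model, 0.0) + request_cost(model, tokens_in, tokens_out)

record("gpt-4", 1000, 500)
record("claude", 2000, 1000)
print(ledger)
```

A gateway can do this at the edge because every request already passes through it, which is what makes daily spend caps enforceable before the provider bills you.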
Metriqual reposted
Gopal @theunblunt
Models Are Commoditizing. GPT-4o, Claude Opus, Gemini 2.5, they are all converging to the same output. The model layer is becoming a utility. If you're still building a moat around 'which model you use,' you're already dead. metriqual.com
0 replies · 1 repost · 3 likes · 115 views
Metriqual reposted
Gopal @theunblunt
Introducing PromptHub: version control for your LLM prompts.

The problem: prompts hardcoded in your codebase.
→ Every tweak needs a deployment
→ No A/B testing
→ Product team blocked by engineering
→ No rollback when prompts break

We fixed it. Instead of:
prompt = "You are a helpful assistant..."
You do:
prompt_id = "customer_support_v2"

Now your product team can:
- Iterate prompts without code changes
- A/B test in production
- Roll back bad versions instantly
- Track which prompts actually perform

Prompt engineering becomes a product workflow, not an engineering bottleneck. Ship AI features faster. No deployment friction.

Part of Metriqual's LLM infrastructure stack. Demo: metriqual.com
1 reply · 6 reposts · 24 likes · 3.3K views
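The prompt-versioning idea in the thread above — a stable `prompt_id` resolving to whatever version is currently live, with instant rollback and no redeploy — can be sketched as a small registry. The class and method names here are illustrative, not PromptHub's actual API.

```python
# Minimal sketch of prompt versioning: a stable prompt_id resolves to the
# live version; publishing and rolling back change which version is live.

class PromptRegistry:
    def __init__(self):
        self._versions = {}   # prompt_id -> list of version strings
        self._live = {}       # prompt_id -> index of the live version

    def publish(self, prompt_id, text):
        self._versions.setdefault(prompt_id, []).append(text)
        self._live[prompt_id] = len(self._versions[prompt_id]) - 1

    def rollback(self, prompt_id):
        if self._live[prompt_id] > 0:
            self._live[prompt_id] -= 1  # previous version goes live instantly

    def get(self, prompt_id):
        return self._versions[prompt_id][self._live[prompt_id]]

reg = PromptRegistry()
reg.publish("customer_support_v2", "You are a helpful assistant...")
reg.publish("customer_support_v2", "You are a terse assistant...")  # bad tweak
reg.rollback("customer_support_v2")                                  # undo it
print(reg.get("customer_support_v2"))
```

Because application code only ever references `prompt_id`, every publish and rollback is invisible to deployments, which is the whole point of moving prompts out of the codebase.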
Dikshika Sharma @intermurphi
Hello there, people. We recently cut a client's LLM costs from $40K to $16K/month. One SDK change. Zero refactoring.

The problem: your app talks directly to OpenAI/Anthropic.
→ No intelligent routing
→ No fallbacks when providers go down
→ No PII protection
→ And definitely no cost control

So we built the middleware layer that should exist. You just have to:
import metriqual
client = metriqual.OpenAI(api_key="your-key")

The same interface, but now:
1. Intelligent routing (cost/latency optimized)
2. Automatic failover between providers
3. PII detection & redaction
4. Request caching
5. Prompt versioning without code changes
6. Sub-millisecond Rust infrastructure

If you're spending $5K+/month on LLMs, we'll show you where 40-60% is leaking. Now is this not great? Demo: metriqual.com
40 replies · 46 reposts · 476 likes · 61.4K views
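Request caching (item 4 above) is one of the simpler levers for cutting LLM spend: an identical request is served from a local cache instead of hitting the provider again. A minimal sketch, assuming a hash of (model, prompt) as the cache key; a real gateway would also handle TTLs, streaming, and parameters like temperature.

```python
# Sketch of request caching: identical (model, prompt) pairs are answered
# from the cache, so the provider is only billed once.

import hashlib
import json

_cache = {}
calls = {"count": 0}  # how many times the (fake) provider was actually hit

def cached_complete(model, prompt, provider_call):
    key = hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = provider_call(model, prompt)
    return _cache[key]

def fake_provider(model, prompt):
    calls["count"] += 1
    return f"{model}: {prompt.upper()}"

a = cached_complete("gpt-4", "hi", fake_provider)
b = cached_complete("gpt-4", "hi", fake_provider)  # cache hit, no provider call
print(a == b, calls["count"])  # True 1
```

Every cache hit is a request the provider never bills, which is where a chunk of the claimed 40-60% savings would come from on repetitive workloads.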
Deepesh.rs🦀 @0xdeepeshW3
@intermurphi @metriqual Well, I'd prefer the hero section's heading font a bit bigger, which should draw more attention. It's a matter of preference anyway.
1 reply · 0 reposts · 2 likes · 25 views
Dikshika Sharma @intermurphi
Working on improving the UI of our product @metriqual. All suggestions are wholeheartedly welcome. Hashtag dev life lol.
1 reply · 0 reposts · 6 likes · 661 views
Metriqual @metriqual
i luv anakin. ❤️
0 replies · 3 reposts · 12 likes · 3.2K views
Metriqual reposted
Dikshika Sharma @intermurphi
Core team members of @metriqual chillin' for a bit.
1 reply · 1 repost · 13 likes · 3.9K views
Metriqual reposted
Gopal @theunblunt
Latest video breaks down why your AI backend is fragile: hardcoded prompts that require redeploys, custom PII filtering services adding latency, and usage limits written as if statements in your app code.

Metriqual moves all of this to the gateway layer. System prompts become infrastructure you update in a dashboard. Compliance is a checkbox. Cost controls reject requests at the edge before they hit your wallet.

Built in Rust because this has to run at network speed. Sub-4ms overhead, zero-copy streaming, memory safety by design.

You can't clone this with a library. You'd have to replicate the entire infrastructure layer. Your app becomes language-agnostic. Switch from Python to Go to Rust, and your guardrails stay identical. That's the architecture shift.

Built by @purusa0x6c @intermurphi @Palakonweb and some coffee ;)
1 reply · 4 reposts · 22 likes · 6K views
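The edge-side PII stripping described in the video thread can be sketched with a couple of regexes: redact obvious identifiers before the prompt ever leaves your infrastructure. Real PII detection is far more involved (names, addresses, IDs, context); the patterns below are illustrative only.

```python
# Rough sketch of PII redaction at the gateway: replace obvious identifiers
# with labeled placeholders before the text reaches any LLM provider.

import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +1 (555) 010-9999 for access."))
# Contact [EMAIL] or [PHONE] for access.
```

Doing this in the gateway rather than in each app is what makes the guarantee language-agnostic: a Python, Go, or Rust client all pass through the same filter.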
Metriqual reposted
Dikshika Sharma @intermurphi
(Same video breakdown as Gopal's post above.) @metriqual @theunblunt @purusa0x6c @Palakonweb
3 replies · 3 reposts · 16 likes · 2.3K views
Metriqual reposted
purusha - n/eti @purusa0x6c
(Same video breakdown as Gopal's post above.) Video featuring: @theunblunt and @intermurphi
1 reply · 13 reposts · 63 likes · 5K views
Metriqual reposted
Palak @Palakonweb
Building AI infrastructure shouldn't feel complicated. Create. Check. Test. That's the workflow. Metriqual sits between your app and AI models, giving you visibility, usage control, and real-time observability across every request. No friction. No guesswork. Just predictable AI in production.
9 replies · 10 reposts · 54 likes · 5K views