OpenRouter

4.7K posts

@OpenRouter

Discover and use the latest LLMs. 500+ models (incl. 50+ free), explorable data, private chat, & a unified API. https://t.co/qJG5mKrigL

Joined July 2023
373 Following · 94.4K Followers
Pinned Tweet
OpenRouter@OpenRouter·
Introducing Response Caching: save tons of money and time on tests and agent retries. Blog post: openrouter.ai/announcements/… Available for free. Learn more 👇
OpenRouter tweet media
36 replies · 62 reposts · 1K likes · 98.1K views
Shubhankit@shubhcodes·
@OpenRouter for devs this might not be useful, because they can do this directly in their request handlers with Redis; for vibe coders it makes sense....
1 reply · 0 reposts · 0 likes · 414 views
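The handler-level approach Shubhankit describes can be sketched roughly like this; a plain dict stands in for a real Redis client, and `call_llm` is a hypothetical function that performs the actual API request:

```python
import hashlib
import json

# In-memory stand-in for a Redis client; a real handler would use
# redis.Redis() with the same get/set pattern (plus an expiry).
_cache: dict = {}

def cache_key(model: str, messages: list) -> str:
    # Hash model + messages so identical requests map to one entry.
    payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_completion(model, messages, call_llm):
    key = cache_key(model, messages)
    if key in _cache:
        return _cache[key]  # hit: skip the upstream API entirely
    result = call_llm(model, messages)  # miss: pay for one real call
    _cache[key] = result
    return result
```

The trade-off versus OpenRouter's built-in caching is that this only deduplicates requests passing through your own handler, and you maintain the store yourself.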
OpenRouter@OpenRouter·
@YounesAka Yes! Works for all models. (But changing your model, of course, will cause a cache miss.)
0 replies · 0 reposts · 0 likes · 150 views
OpenRouter@OpenRouter·
(This is fixed now)
1 reply · 0 reposts · 7 likes · 3.5K views
OpenRouter@OpenRouter·
Notice: there’s a bug in this Azure data that we’re working on correcting, due to a change in their error shapes upstream. But gpt-5.5 is seeing a lot of growth!
Matías@matidotlol

@theo 1. What

9 replies · 4 reposts · 265 likes · 46.5K views
OpenRouter@OpenRouter·
@mylifcc of course! just use the actual resolved model slug that last worked for you
0 replies · 0 reposts · 10 likes · 482 views
lifcc@mylifcc·
@OpenRouter semver works when 'minor' has a contract. but LLM 'minor' bumps freely change output. seen claude-3-5 to 3-7 quietly break a CLAUDE.md skill template after one update. alias resolved at request time, or pinned per-key? any way to roll back if a release breaks prompts mid-flight?
2 replies · 0 reposts · 2 likes · 540 views
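OpenRouter's rollback advice ("use the actual resolved model slug that last worked for you") can be sketched like this. An OpenAI-compatible completion response carries a `model` field with the concrete model that served the request; the slug shown below is illustrative, not a real release:

```python
# Sketch: after a request through a "-latest" alias, record the concrete
# slug the router resolved to, so you can hard-pin it (or roll back to it)
# if a later release quietly breaks your prompts.
def pin_resolved_model(response: dict, alias: str) -> str:
    # Fall back to the alias if the response lacks a "model" field.
    return response.get("model", alias)

# Illustrative response fragment; the resolved slug is hypothetical.
resp = {"id": "gen-123", "model": "anthropic/claude-opus-4.1"}
pinned = pin_resolved_model(resp, "~anthropic/claude-opus-latest")
```

Storing `pinned` per deployment gives you the "pinned per-key" behavior lifcc asks about: keep routing new traffic through the alias, but fall back to the last-known-good slug when a release breaks mid-flight.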
OpenRouter@OpenRouter·
NEW: "-latest" model aliases 🔀 Route requests to "~anthropic/claude-opus-latest", "~openai/gpt-latest", etc to get the latest version of each major model. (Inspired by semver.) openrouter.ai/models?q=latest
OpenRouter tweet media
Wes Winder@weswinder

@levelsio openrouter has a cool “nitro” flag in the model names to use the fastest provider so like “gpt-5.5:nitro” would be cool if the labs just let you use “latest” or something

18 replies · 12 reposts · 392 likes · 26.2K views
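A minimal sketch of routing through one of the new aliases, using only the standard library. The endpoint is OpenRouter's OpenAI-compatible chat-completions URL, and the alias slug comes from the tweet; the request is built but never sent here:

```python
import json
import urllib.request

def latest_opus_request(api_key: str, prompt: str) -> urllib.request.Request:
    # "~anthropic/claude-opus-latest" resolves server-side to the newest
    # Opus; note a version bump will also be a cache miss (see above).
    body = json.dumps({
        "model": "~anthropic/claude-opus-latest",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = latest_opus_request("sk-or-...", "hello")  # build only; nothing is sent
```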
OpenRouter@OpenRouter·
Cache is scoped per API key, so different keys under the same account stay isolated. Cache hits don't count against provider rate limits since the request never reaches them. In beta now. Docs: openrouter.ai/docs/guides/fe…
2 replies · 0 reposts · 18 likes · 4.2K views
OpenRouter@OpenRouter·
Controls:
1. `X-OpenRouter-Cache-TTL` to set your lifetime (1 second to 24 hours, defaults to 5 min)
2. `X-OpenRouter-Cache-Clear` to bust a specific entry
Response headers (HIT/MISS, Age, TTL) so you can see what's happening.
Or set cache_enabled: true on a preset and forget the headers.
2 replies · 0 reposts · 15 likes · 4.5K views
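The two request headers can be assembled like this. The header names and the 1-second-to-24-hour TTL window are from the tweet; the `"true"` value for the clear header and the helper itself are illustrative assumptions:

```python
def cache_control_headers(api_key: str, ttl_seconds: int = 300,
                          clear: bool = False) -> dict:
    # Enforce the documented 1 second .. 24 hour TTL window client-side.
    if not 1 <= ttl_seconds <= 24 * 60 * 60:
        raise ValueError("TTL must be between 1 second and 24 hours")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "X-OpenRouter-Cache-TTL": str(ttl_seconds),
    }
    if clear:
        # Bust the cached entry for this exact request.
        # (The "true" value is an assumption; check the docs linked above.)
        headers["X-OpenRouter-Cache-Clear"] = "true"
    return headers
```

After a response comes back, inspecting the HIT/MISS, Age, and TTL response headers tells you whether the entry was served from cache and how long it has left.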
OpenRouter retweeted
mark@r_marked·
S/o @OpenRouter for building the best AI routing solution on the market. If you are working with LLM APIs, you should be using them, full stop.
5 replies · 9 reposts · 68 likes · 7.2K views
@levelsio@levelsio·
If you work for OpenAI, Anthropic or xAI Please add a 'model'=>'latest' value so I can stop having to change model every 6 months!
63 replies · 26 reposts · 1.4K likes · 325.3K views
OpenRouter@OpenRouter·
And read more about it from @ArtificialAnlys:
Artificial Analysis@ArtificialAnlys

xAI has launched Grok 4.3, achieving 53 on the Artificial Analysis Intelligence Index with improved agentic performance, ~40% lower input price, and ~60% lower output price than Grok 4.20. The release of Grok 4.3 places @xAI just above Muse Spark and Claude Sonnet 4.6 on the Intelligence Index, and 4 points ahead of the latest version of Grok 4.20. Grok 4.3 improves its Artificial Analysis Intelligence Index score while reducing the cost to run the benchmark suite.

Key Takeaways:
➤ Grok 4.3 improves on cost-per-intelligence relative to Grok 4.20 0309 v2: it scores higher on the Intelligence Index while costing less to run the full benchmark suite. Grok 4.3 costs $395 to run the Artificial Analysis Intelligence Index, around 20% lower than Grok 4.20 0309 v2, despite using more output tokens. This makes it one of the lower-cost models at its intelligence level.
➤ Large increase in real-world agentic task performance: the largest single benchmark improvement is on GDPval-AA, where Grok 4.3 scores an Elo of 1500, up 321 points from Grok 4.20 0309 v2's score of 1179, surpassing Gemini 3.1 Pro Preview, Muse Spark, GPT-5.4 mini (xhigh), and Kimi K2.5. Grok 4.3 narrows the gap to the leading model on GDPval-AA but still trails GPT-5.5 (xhigh) by 276 Elo points, with an expected win rate of ~17% against GPT-5.5 (xhigh) under the standard Elo formula.
➤ Grok 4.3 performs strongly on instruction following and agentic customer-support tasks. It gains 5 points on 𝜏²-Bench Telecom to reach 98%, in line with GLM-5.1, and maintains an 81% IFBench score from Grok 4.20 0309 v2.
➤ Grok 4.3 gains 8 points on AA-Omniscience Accuracy, but at the cost of an 8-point drop in AA-Omniscience Non-Hallucination Rate, so Grok 4.20 0309 v2 still leads on Non-Hallucination Rate, followed by MiMo-V2.5-Pro, in line with Grok 4.3.

Congratulations to @xAI and @elonmusk on the impressive release!

2 replies · 17 reposts · 237 likes · 46.8K views
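The ~17% figure above checks out against the standard Elo expected-score formula, E = 1 / (1 + 10^(d/400)) for a rating deficit of d points:

```python
def elo_expected_win_rate(deficit: float) -> float:
    # Probability the lower-rated side wins, given it trails by `deficit`
    # Elo points, under the standard logistic Elo expected-score formula.
    return 1 / (1 + 10 ** (deficit / 400))

p = elo_expected_win_rate(276)  # Grok 4.3 trails GPT-5.5 (xhigh) by 276 Elo
# p is roughly 0.17, matching the ~17% win rate quoted above
```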
OpenRouter@OpenRouter·
The new Grok-4.3 from @xai is live on OpenRouter! Grok-4.3 releases at a lower price than Grok-4.2, while seeing a large jump in agentic performance: a 321-point increase to 1500 Elo on @ArtificialAnlys GDPval-AA, surpassing other top models despite the lower price.
OpenRouter tweet media
142 replies · 239 reposts · 1.9K likes · 20.8M views