OpenRouter

4.7K posts

OpenRouter @OpenRouter

Discover and use the latest LLMs. 500+ models (incl. 50+ free), explorable data, private chat, & a unified API. https://t.co/qJG5mKrigL

Joined July 2023
373 Following · 94.5K Followers
Pinned Tweet
OpenRouter @OpenRouter ·
Introducing Response Caching: save tons of money and time on tests and agent retries. Blog post: openrouter.ai/announcements/… Available for free. Learn more 👇
38 replies · 69 reposts · 1.1K likes · 153.7K views
OpenRouter @OpenRouter ·
We analyzed GPT 5.5 vs GPT 5.4 and found that costs increased by 49-92%. The 2x price hike of GPT 5.5 is mitigated by the model generating 19-34% fewer completion tokens for longer prompts. More analysis here: openrouter.ai/announcements/…
12 replies · 16 reposts · 245 likes · 17.9K views
Itay Adler @itayad ·
@OpenRouter we love using OpenRouter with Frontman — it's been super useful for us since we first implemented it. We also use it with req_llm, another great lib. This is such a nice feature together with session tracking and all the other tracking features 🙏 github.com/frontman-ai/fr…
1 reply · 0 reposts · 2 likes · 121 views
OpenRouter reposted
Luckey Faraday @luckeyfaraday ·
I did it: I generated 100M tokens on @OpenRouter for free, all thanks to Owl-Alpha. For reference, Opus 4.7 costs $5/M input tokens and $25/M output tokens. Assuming 100% of the generated tokens were output tokens, the total cost for 100M tokens would've been $2,500. In 24hrs.
11 replies · 8 reposts · 54 likes · 8.9K views
Shubhankit @shubhcodes ·
@OpenRouter for devs this might not be useful, because they can do this directly in their request handlers with Redis; for vibe coders it makes sense...
1 reply · 0 reposts · 0 likes · 717 views
OpenRouter @OpenRouter ·
@YounesAka Yes! Works for all models. (But changing your model, of course, will cause a cache miss.)
0 replies · 0 reposts · 0 likes · 465 views
OpenRouter @OpenRouter ·
(This is fixed now)
1 reply · 0 reposts · 7 likes · 3.7K views
OpenRouter @OpenRouter ·
Notice: there’s a bug in this Azure data that we’re working on correcting, due to a change in their error shapes upstream. But gpt5.5 is seeing a lot of growth!
Quoting Matías @matidotlol:
@theo 1. What
9 replies · 4 reposts · 266 likes · 46.9K views
OpenRouter @OpenRouter ·
@mylifcc of course! just use the actual resolved model slug that last worked for you
0 replies · 0 reposts · 10 likes · 490 views
lifcc @mylifcc ·
@OpenRouter semver works when 'minor' has a contract, but LLM 'minor' bumps freely change output. Seen claude-3-5 to 3-7 quietly break a CLAUDE.md skill template after one update. Is the alias resolved at request time, or pinned per-key? Any way to roll back if a release breaks prompts mid-flight?
2 replies · 0 reposts · 2 likes · 546 views
OpenRouter @OpenRouter ·
NEW: "-latest" model aliases 🔀 Route requests to "~anthropic/claude-opus-latest", "~openai/gpt-latest", etc to get the latest version of each major model. (Inspired by semver.) openrouter.ai/models?q=latest
Quoting Wes Winder @weswinder:
@levelsio openrouter has a cool “nitro” flag in the model names to use the fastest provider, so like “gpt-5.5:nitro”. Would be cool if the labs just let you use “latest” or something.
18 replies · 12 reposts · 395 likes · 26.9K views
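The alias scheme above, combined with the pin-back advice in the earlier reply ("just use the actual resolved model slug that last worked for you"), can be sketched as follows. This is a minimal sketch: `latest_alias` and `pin_resolved_model` are illustrative helpers, not SDK calls, and the resolved slug in the fake response is a made-up example. Only the `~vendor/family-latest` alias format comes from the announcement; reading the concrete slug from the response's `model` field follows the standard OpenAI-compatible schema that OpenRouter exposes.

```python
# Sketch of the "-latest" alias pattern and the pin-back workflow.
# Helper names are illustrative; the resolved slug below is invented.

def latest_alias(vendor: str, family: str) -> str:
    """Compose a '-latest' alias slug, e.g. '~anthropic/claude-opus-latest'."""
    return f"~{vendor}/{family}-latest"

def pin_resolved_model(response: dict) -> str:
    """Read the concrete model slug from an OpenAI-compatible chat response,
    so you can pin it (and roll back to it if a newer release breaks prompts)."""
    return response["model"]

print(latest_alias("anthropic", "claude-opus"))  # ~anthropic/claude-opus-latest
print(latest_alias("openai", "gpt"))             # ~openai/gpt-latest

# Pretend response body; real calls return the slug the alias resolved to.
fake_response = {"model": "anthropic/claude-opus-4.7", "choices": []}
print(pin_resolved_model(fake_response))         # anthropic/claude-opus-4.7
```

Pinning the resolved slug after each successful run addresses the rollback concern raised above: you always know the last concrete version that worked.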
OpenRouter @OpenRouter ·
Cache is scoped per API key, so different keys under the same account stay isolated. Cache hits don't count against provider rate limits since the request never reaches them. In beta now. Docs: openrouter.ai/docs/guides/fe…
2 replies · 0 reposts · 24 likes · 4.7K views
OpenRouter @OpenRouter ·
Controls:
1. `X-OpenRouter-Cache-TTL` to set your lifetime (1 second to 24 hours, defaults to 5 min)
2. `X-OpenRouter-Cache-Clear` to bust a specific entry
Response headers (HIT/MISS, Age, TTL) so you can see what's happening.
Or set `cache_enabled: true` on a preset and forget the headers.
2 replies · 0 reposts · 20 likes · 5.1K views
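The controls above can be sketched as a request builder. This is a minimal sketch under assumptions: the header names come from the thread, but treating the TTL value as an integer number of seconds is a guess (check the docs link in the tweet for the exact format), and the model slug and API key are placeholders.

```python
import json

def build_cached_request(api_key: str, prompt: str, ttl_seconds: int = 300) -> dict:
    """Assemble a cacheable OpenRouter chat completion request.

    Header names are from the announcement above; expressing the TTL
    in seconds is an assumption (allowed range: 1 second to 24 hours).
    """
    return {
        "url": "https://openrouter.ai/api/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            # Cache lifetime; defaults to 5 minutes when the header is omitted
            "X-OpenRouter-Cache-TTL": str(ttl_seconds),
        },
        "payload": {
            "model": "openai/gpt-4o",  # placeholder model slug
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_cached_request("<OPENROUTER_API_KEY>", "Hello", ttl_seconds=600)
print(json.dumps(req["headers"], indent=2))
# To bust a stale entry, resend the same request with the
# "X-OpenRouter-Cache-Clear" header set, then inspect the HIT/MISS,
# Age, and TTL response headers to confirm what happened.
```

Because cache hits are served before the request reaches a provider (per the tweet two posts up), a hit costs nothing against provider rate limits — which is why a long TTL on deterministic test prompts saves both money and quota.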
OpenRouter reposted
mark @r_marked ·
S/o @OpenRouter for building the best AI routing solution on the market. If you are working with LLM APIs, you should be using them, full stop.
5 replies · 9 reposts · 70 likes · 7.7K views