Rohan Taneja

477 posts

@rtaneja_

ai gateway @vercel ▲ // alum @ucberkeleymet

SF · Joined May 2019
1.8K Following · 1.8K Followers
Pinned Tweet
Rohan Taneja@rtaneja_·
✨ I made a tiny macOS app to help prevent digital eye strain by following the 20-20-20 rule! Every 20 minutes, it covers your screen for 20 seconds, forcing you to give your eyes a break and glance away from your laptop. Let me know what you think 🙂 apps.apple.com/us/app/glance-…
8 · 6 · 135 · 139.5K
adam@theCTO·
i need a new office chair, recommendations? i bet @theo has at least one
22 · 0 · 91 · 34K
Rohan Taneja retweeted
Vercel Developers@vercel_dev·
You can now tell AI Gateway to fail over before a provider's default timeout kicks in. Set a custom timeout per provider with providerTimeouts, for more granular control. In beta for BYOK, with non-BYOK support coming soon. vercel.com/changelog/prov…
2 · 2 · 35 · 3K
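The changelog entry above can be pictured as a small per-provider config map; a minimal sketch, assuming providerTimeouts is a map from provider slug to milliseconds (only the option name comes from the changelog; the placement, slugs, and units here are illustrative guesses, not the documented gateway API):

```typescript
// Hypothetical shape for AI Gateway's per-provider failover timeouts.
// Only the name providerTimeouts comes from the changelog; the exact
// placement and units are assumptions for illustration.
interface GatewayTimeoutOptions {
  providerTimeouts?: Record<string, number>; // provider slug -> timeout (ms)
}

const gatewayOptions: GatewayTimeoutOptions = {
  providerTimeouts: {
    bedrock: 10_000, // fail over if Bedrock hasn't answered within 10s
    azure: 5_000,    // give Azure a tighter budget before falling through
  },
};

// The gateway would start failover as soon as a provider's budget elapses,
// instead of waiting out that provider's own (longer) default timeout.
const budgetFor = (slug: string) =>
  gatewayOptions.providerTimeouts?.[slug] ?? Infinity;
```

A provider with no entry falls back to Infinity here, standing in for that provider's default timeout.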
Rohan Taneja retweeted
Prodia@prodialabs·
The world's fastest Flux Schnell provider is now on Vercel's AI Gateway. From Prodia.
1 · 5 · 12 · 763
Rohan Taneja retweeted
Vercel Developers@vercel_dev·
Grok Imagine Video is now on AI Gateway. Text-to-video. Image-to-video. Video editing. Audio. await generateVideo({ model: 'xai/grok-imagine-video', prompt: 'birds across sunset' });
2 · 7 · 55 · 43.2K
ram@ram_01_ram·
@techwraith @ZYPX4 @vercel_dev @vercel_support @aisdk Need a fix on the Responses API. In the request payload, "input": [ { "type": "message", "role": "user", "content": "Hello" } ], "type": "message" seems to be a mandatory field, but OpenRouter has it as optional, and OpenAI doesn't require it either.
1 · 0 · 0 · 114
ram@ram_01_ram·
@techwraith @ZYPX4 @vercel_dev @vercel_support @aisdk Can we make "type": "message" optional in the Responses API? Many apps follow OpenSpec, where it isn't required. Because of this, I can't use Vercel AI Gateway directly in an app whose existing payloads don't include "type". The same app works fine with OpenRouter.
3 · 0 · 0 · 136
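Until the gateway relaxes that check, a thin client-side shim can inject the field; a minimal sketch (normalizeInput is hypothetical glue code, not part of the AI SDK or the gateway) that defaults "type" to "message" on Responses-API input items that omit it:

```typescript
// Responses-API input item: "type" is optional in payloads written against
// OpenRouter/OpenAI, but the gateway currently rejects items without it.
interface InputItem {
  type?: string;
  role: string;
  content: string;
}

// Fill in type: "message" wherever the caller left it out, leaving
// explicitly-typed items untouched.
function normalizeInput(input: InputItem[]): InputItem[] {
  return input.map((item) => ({ ...item, type: item.type ?? "message" }));
}

const payload = {
  input: normalizeInput([{ role: "user", content: "Hello" }]),
};
```

Running the existing payloads through this before sending would let the same app talk to the gateway without touching the call sites that omit "type".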
Rohan Taneja retweeted
Brennan McEachran 👨‍🚀@i_am_brennan·
@rtaneja_ @hp_arora I have a lot of random failures, but I'm not doing a good job of tracking them. TBH I fully expected these to be visible within the Vercel dashboard. Grok returns forbidden randomly, maybe for a few hours every day. These are evals, so literally the same inputs.
1 · 0 · 0 · 68
Kinder • Grinder@kinder_grinder·
Thanks for listening. I am not sure how the tech implementation should be done, but indeed, video generation takes time. In my app, I am using the AI SDK to dynamically generate steps for agents and tools. I have an image-gen agent using Nano Banana, and I want to expand to video generation and different models while still staying with AI Gateway.
1 · 0 · 1 · 47
Rohan Taneja@rtaneja_·
@i_am_brennan @hp_arora could you give more info about what the warning/error is? what does the request you're sending look like? are you by any chance setting both temperature and topP?
2 · 0 · 0 · 54
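The question above hints that some upstream providers reject requests carrying both sampling knobs at once; a minimal sketch of the kind of client-side guard that avoids the combination (the helper and its keep-temperature rule are illustrative, not gateway or SDK behavior):

```typescript
interface SamplingParams {
  temperature?: number;
  topP?: number;
}

// If both temperature and topP are set, drop topP: some providers
// hard-error on the combination where others merely warn.
function pickOneSampler(params: SamplingParams): SamplingParams {
  if (params.temperature !== undefined && params.topP !== undefined) {
    const { topP: _dropped, ...rest } = params;
    return rest;
  }
  return params;
}
```

Requests that set only one of the two pass through unchanged, so the guard is a no-op for well-formed callers.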
Brennan McEachran 👨‍🚀@i_am_brennan·
@rtaneja_ @hp_arora Oh, wait what?! There's another issue with Anthropic models on Bedrock (I ended up pinning to avoid it). Bedrock hard-errors on a validation where Anthropic only warns. Leads to random failures… but is that a gateway problem or Bedrock?
1 · 0 · 0 · 67
Brennan McEachran 👨‍🚀@i_am_brennan·
@rtaneja_ @hp_arora I get errors when it randomly routes to Azure, saying something like: this model requires the use of the Responses API and you've used Chat Completions. The Codex model was the one erroring most frequently. This was today.
1 · 0 · 0 · 69
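The pinning workaround mentioned upthread, restricting which backends a request may route to, might look like this; a minimal sketch assuming an allow-list style routing option (the only and order names are guesses modeled on the thread, not the documented gateway API):

```typescript
// Hypothetical gateway routing options: pin a model to specific providers
// so requests never land on a backend with stricter validation (e.g. Azure
// rejecting Chat Completions calls for Responses-only models).
interface RoutingOptions {
  only?: string[];  // allow-list of provider slugs
  order?: string[]; // preferred failover order within the allow-list
}

const routing: RoutingOptions = {
  only: ["anthropic"],
  order: ["anthropic"],
};

// With an allow-list set, anything outside it is never tried.
const mayRouteTo = (slug: string) => routing.only?.includes(slug) ?? true;
```

Pinning trades away failover breadth for predictability: the problematic backend is never tried, at the cost of losing it as a fallback.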
Ryan Carson@ryancarson·
I think the Gemini team is going all-in on the Instructions API. If the model labs diverge on their primary generation API interface, it’s going to be bad times for all of us building on top of LLMs. I can’t get locked into a one-lab solution. It’s never going to be good for my customers. Curious how @aisdk is going to handle this.
16 · 4 · 66 · 7.1K