Daniel Kim
3.9K posts

Daniel Kim
@learnwdaniel
Developers! Developers! Developers! | Head of Growth @cerebras
San Francisco Joined March 2015
1.6K Following 6.9K Followers
Pinned Tweet
Daniel Kim reposted

We're teaming up with @cerebras to build the fastest possible inference.
Coming soon to Amazon Bedrock, we’re delivering inference performance an order of magnitude faster than what’s available today by connecting AWS Trainium3 for compute-intensive prefill with Cerebras CS-3 to power decode.
Learn more about the partnership. go.aws/3Pzcota
Daniel Kim reposted

new job just dropped.
after a fun run at @clinebot, i'm joining @CodeRabbitAI to work on Developer Experience with an incredible team.
so next time CodeRabbit roasts your PR and you have no idea why, you'll have someone to blame.
let's go. 🐇

Bring your voice agents to life
LiveKit@livekit
Introducing Agents UI, an open-source @shadcn component library for building polished React frontends for your voice agents. Audio visualizers. Media controls. Session management tools. Chat transcripts. All wired to LiveKit Agents. Install via the shadcn CLI and own the code.

Who wants in to Cafe Compute by Cerebras? It is going to be a huge week at GTC and would be fun for you to join me! luma.com/cccerebrasgtc2…

Pics came back from Upfront Summit and they're fire. Thanks again @pzakin @kobiefuller ♥️
Feat @ivanburazin 🚙





@philipkiely codex probably could one shot it lol /s jkjk congratulations can i get a paper copy?


When I tell you we found the most INCREDIBLE venues for RM and AIE...
AI Engineer: Miami@AIEMiami
After 2 days of world class talks at AI Engineer Miami, unwind under the palm trees! See you in the heart of Wynwood for the official 2026 Afterparty sponsored by @cerebras

Update on my Go-agent-sdk (v0.3.0)!!!
It now supports multiple LLM providers out of the box
One interface, swap one line, your agent works with OpenAI, Anthropic, Gemini, or any OpenAI-compatible endpoint (Groq, DeepSeek, Cerebras, Mistral, and more)
What's new:
- ChatProvider interface - every provider implements CreateChat + ModelName, agent doesn't care which one you pick
- Native Anthropic provider - full translation layer for Claude's Messages API including tool calling
- Native Gemini provider - full translation layer for Google's generateContent API including the tool call detection quirk (Gemini returns STOP even for tool calls)
- 12 OpenAI-compatible base URLs built in - Groq, Cerebras, DeepSeek, Together, Fireworks, Mistral, Moonshot, DashScope (Qwen), ZAI (GLM), Anyscale
- Still zero dependencies. Pure standard library Go.
Each provider is a self-contained translator. Adding a new one means writing one file.
Nothing else changes. Remains fast, concurrent, and clean.
Providers available now - @GeminiApp, @AnthropicAI, @OpenAI, @Kimi_Moonshot, @deepseek_ai, @Alibaba_Qwen, @cerebras, @Zai_org and so much more!
You can take a look at the repo here -
github.com/parthshr370/Go…
Demo below showing how easy it is to swap providers!
x.com/parthshr370/st…
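The post says every provider implements CreateChat + ModelName behind a single ChatProvider interface, so swapping providers is a one-line change. Here is a minimal, self-contained Go sketch of that pattern; the method signatures, struct fields, and base URL are illustrative assumptions, not the SDK's actual API, and the network call is stubbed out.

```go
package main

import "fmt"

// Message is a minimal chat message (illustrative, not the SDK's type).
type Message struct {
	Role    string
	Content string
}

// ChatProvider is the one interface the agent depends on: every
// provider implements CreateChat + ModelName, and the agent doesn't
// care which concrete provider it gets.
type ChatProvider interface {
	CreateChat(messages []Message) (string, error)
	ModelName() string
}

// openAICompatible stands in for any endpoint that speaks the OpenAI
// chat format (Groq, Cerebras, DeepSeek, ...), differing only by
// base URL and model name.
type openAICompatible struct {
	baseURL string
	model   string
}

// CreateChat would POST the messages to p.baseURL in a real
// implementation; here it just returns a stubbed reply.
func (p openAICompatible) CreateChat(msgs []Message) (string, error) {
	last := msgs[len(msgs)-1]
	return fmt.Sprintf("[%s via %s] reply to %q", p.model, p.baseURL, last.Content), nil
}

func (p openAICompatible) ModelName() string { return p.model }

func main() {
	// Swapping providers is a one-line change: only this value differs.
	var provider ChatProvider = openAICompatible{
		baseURL: "https://api.cerebras.ai/v1", // illustrative base URL
		model:   "example-model",              // hypothetical model name
	}
	reply, err := provider.CreateChat([]Message{{Role: "user", Content: "hi"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(provider.ModelName(), reply)
}
```

Because the agent only sees the interface, adding a native provider (e.g. a translation layer for Anthropic's Messages API) means writing one more type that satisfies ChatProvider; nothing else changes.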

in pursuit of taste
@allenpark @zhennydez and I are starting an intimate, fine-dining, series in sf
let us know if you'd like to join the next one





I just tried GLM 4.7 on @cerebras... That was the fastest I've ever spent $10.
The Chick-fil-A drive-thru has nothing on this API's $/min.

@dhh @bdougieYO @opencode Pay per token for most inference infra providers is subsidized by large enterprise contracts

@bdougieYO @opencode There's no subsidization. The open weight models are proving that. Fireworks is not giving you anything for free when they're selling a million tokens for $3 (Kimi) or $1.20 (MiniMax). Anthropic is pricing based on R&D cost, not inference expenses.
