ippsav


I love tinygrad, but with our megakernel you can go to 415 tok/s in decoding speed 🚄
the tiny corp@__tinygrad__
We set out to replicate Kimi's 193 tok/s with Qwen3.5-0.8B on an M3 Max. Our baseline is already 178 tok/s, beating LMStudio (160) and llama.cpp (140) out of the box, but with tinygrad's custom kernel feature Claude cranked it to 195.7!

Workspace agents are surprisingly powerful. Powered by Codex under the hood, the same implementation we have open-sourced here: github.com/openai/codex
OpenAI@OpenAI
Introducing workspace agents in ChatGPT—shared agents that can handle complex tasks and long-running workflows across tools and teams.

@ludwigABAP thinking of getting one soon, saw some tok/s benchmarks and it's goooooooood

On the recent Framework laptop announcement: I have committed to an M5 Max with 128 GB of RAM. #NoRagrets

ippsav reposted

@ippsav It's a Mac app designed to run coding and other agents in a better, safer way. Details out soon.
If you're interested, I can DM you our Discord; the f&f website is hopefully coming soon.

A dev I know spent months building a perfectly optimized, over-engineered system. The team ignored it.
Another dev shipped a simple, readable solution and got a standing ovation in the code review. You could see the craftsmanship in that PR.
Young engineer, I rebuke any over-engineered solution in your codebase.
