

Dan Fu
@realDanFu
VP, Kernels @togethercompute Assistant Professor @ucsd_cse Looking for talented kernel engineers and performance engineers!


Composer 2 is now available in Cursor.


Personal AI should run on your personal devices. So, we built OpenJarvis: a personal AI that lives, learns, and works on-device. Try it today and top the OpenJarvis Leaderboard for a chance to win a Mac Mini! Collab w/ @Avanika15, John Hennessy, @HazyResearch, and @Azaliamirh. Details in thread.



Together Research has produced FlashAttention, ATLAS, ThunderKittens and more. This week at AI Native Conf: seven more releases, all coming to production soon. Thread → #ainativeconf #ainativecloud





DoubleAI's WarpSpeed just beat a decade of expert-engineered GPU kernels, every single one of them.

cuGraph is one of the most widely used GPU-accelerated libraries in the world. It spans dozens of graph algorithms, each written and continuously refined by some of the world's top performance engineers. @_doubleAI_'s WarpSpeed autonomously rewrote and re-optimized these kernels across three GPU architectures (A100, L4, A10G). Today, we released the hyper-optimized version on GitHub; install it with no change to your code.

The numbers:
- 3.6x average speedup over human experts
- 100% of kernels see a speedup
- 55% see more than 2x improvement

But hasn't AI already achieved expert-level status, winning gold medals at the IMO and outperforming top programmers on Codeforces? Not quite. Those wins share three hidden crutches: abundant training data, trivial validation, and short reasoning chains. Where all three hold, today's AI shines. Remove any one of them and it falls apart (as Shai Shalev-Shwartz wrote in his post).

GPU performance engineering breaks all three. Data is scarce. Correctness is hard to validate. And performance comes from a long chain of interacting choices: memory layout, warp behavior, caching, scheduling, graph structure. Even state-of-the-art agents like Claude Code, Codex, and Gemini CLI fail dramatically here, often producing incorrect implementations even when handed cuGraph's own test suite.

Scaling alone can't break this barrier. It took new algorithmic ideas: our Diligent framework for learning from extremely small datasets, our PAC-reasoning methodology for verification when ground truth isn't available, and novel agentic search structures for navigating deep decision chains.
This is the beginning of Artificial Expert Intelligence (AEI): not AGI, but something the world needs more, systems that reliably surpass human experts in the domains where expertise is rarest, slowest, and most valuable. If AI can surpass the world's best GPU engineers, which domain falls next?

Links:
- Full blog: doubleai.com/research/doubl…
- cuGraph: docs.rapids.ai/api/cugraph/st…
- Winning Gold at IMO 2025: arxiv.org/abs/2507.15855
- Codeforces benchmarks: rdworldonline.com/openai-release…
- @shai_s_shwartz post: x.com/shai_s_shwartz…
- From Reasoning to Super-Intelligence: A Search-Theoretic Perspective: arxiv.org/abs/2507.15865
- Artificial Expert Intelligence through PAC-reasoning: arxiv.org/abs/2412.02441
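The headline stats in the post (average speedup, fraction of kernels improved, fraction above 2x) can be reproduced from per-kernel timings. A minimal sketch; the kernel names and millisecond timings below are made up for illustration, not the actual cuGraph measurements:

```python
# Summarizing per-kernel speedups the way the post reports them.
# Timings here are illustrative, not real cuGraph benchmark data.
baseline_ms = {"bfs": 12.0, "pagerank": 40.0, "louvain": 95.0, "sssp": 18.0}
optimized_ms = {"bfs": 4.0, "pagerank": 11.0, "louvain": 50.0, "sssp": 6.0}

speedups = {k: baseline_ms[k] / optimized_ms[k] for k in baseline_ms}

avg = sum(speedups.values()) / len(speedups)               # mean speedup
improved = sum(s > 1.0 for s in speedups.values()) / len(speedups)
over_2x = sum(s >= 2.0 for s in speedups.values()) / len(speedups)

print(f"avg speedup: {avg:.2f}x, improved: {improved:.0%}, >=2x: {over_2x:.0%}")
```

With these toy numbers the script reports a 2.88x mean, 100% improved, and 75% at 2x or better; the post's 3.6x / 100% / 55% figures come from the real kernel suite.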

I've been working on a new LLM inference algorithm. It's called Speculative Speculative Decoding (SSD) and it's up to 2x faster than the strongest inference engines in the world. Collab w/ @tri_dao @avnermay. Details in thread.
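SSD's details are in the linked thread; as background, here is a toy sketch of the vanilla speculative decoding it builds on. Both "models" below are deterministic stand-in functions (made up for illustration, not real LLMs): a cheap draft model proposes k tokens, the expensive target model verifies them, keeping the longest agreed prefix plus one token of its own, so the output is provably identical to decoding with the target alone.

```python
# Toy sketch of vanilla (greedy) speculative decoding with stand-in models.

def target_next(ctx):
    # Stand-in for the expensive target model's next-token choice.
    return (sum(ctx) + 1) % 7

def draft_next(ctx):
    # Stand-in for the cheap draft model: agrees with the target except
    # when the context sum is divisible by 5, so rejections do happen.
    t = (sum(ctx) + 1) % 7
    return (t + 1) % 7 if sum(ctx) % 5 == 0 else t

def speculative_step(ctx, k=4):
    """Draft k tokens, verify against the target, return accepted tokens."""
    draft = []
    for _ in range(k):
        draft.append(draft_next(ctx + draft))
    accepted = []
    for tok in draft:
        if target_next(ctx + accepted) == tok:
            accepted.append(tok)                          # draft verified
        else:
            accepted.append(target_next(ctx + accepted))  # correct and stop
            return accepted
    # All k drafts verified; the target's next token comes along for free.
    accepted.append(target_next(ctx + accepted))
    return accepted

seq = [1]                      # prompt token
while len(seq) < 12:
    seq += speculative_step(seq)
print(seq)
```

In a real engine the verification loop is a single batched forward pass over all drafted positions, which is where the speedup comes from; the sketch only shows the accept/reject logic.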


Data mixing - determining ratios across your training datasets - matters a lot for model quality. While building Olmo 3, we learned it’s hard to set up a method that finds a strong mix, and hard to maintain that mix as datasets change throughout development. Introducing Olmix👇
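The basic knob being tuned here is simple to state: sample each training batch across datasets according to mixture weights. A minimal sketch; the dataset names and ratios below are illustrative, not Olmo 3's actual mix:

```python
# Sampling training examples according to a data mix.
# Dataset names and ratios are made up for illustration.
import random

datasets = {
    "web":  [f"web_doc_{i}" for i in range(1000)],
    "code": [f"code_doc_{i}" for i in range(1000)],
    "math": [f"math_doc_{i}" for i in range(1000)],
}
mix = {"web": 0.6, "code": 0.3, "math": 0.1}   # ratios sum to 1

def sample_batch(n, rng):
    names = list(mix)
    weights = [mix[k] for k in names]
    picks = rng.choices(names, weights=weights, k=n)
    return [(src, rng.choice(datasets[src])) for src in picks]

rng = random.Random(0)
batch = sample_batch(10_000, rng)
counts = {k: sum(1 for src, _ in batch if src == k) for k in mix}
print(counts)   # roughly proportional to the mix ratios
```

The hard part the post points at is upstream of this loop: choosing the weights so the trained model is strong, and keeping them valid as the underlying datasets change during development.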


Adaption has raised $50M to build adaptive AI systems that evolve in real time. Everything intelligent adapts. So should AI.


The End of GPU Scaling? Compute & The Agent Era

My conversation with @Tim_Dettmers of @allen_ai and @realDanFu of @togethercompute about their blog posts on AGI and compute (links in replies) and agents in 2026.

00:00 – Intro
01:06 – Two essays, two frameworks on AGI
01:34 – Tim's background: quantization, QLoRA, efficient deep learning
02:25 – Dan's background: FlashAttention, kernels, alternative architectures
03:38 – Defining AGI: what does it mean in practice
08:20 – Tim's case: computation is physical, diminishing returns, memory movement
11:29 – "GPUs won't improve meaningfully": the core claim and why
16:16 – Dan's response: utilization headroom (MFU) + "models are lagging indicators"
22:50 – Pre-training vs post-training (and why product feedback matters)
25:30 – Convergence: usefulness + diffusion (where impact actually comes from)
29:50 – Multi-hardware future: NVIDIA, AMD, TPUs, Cerebras, inference chips
32:16 – Agents: did the "switch flip" yet?
33:19 – Dan: agents crossed the threshold (kernels as the "final boss")
34:51 – Tim: "use agents or be left behind" + beyond coding
36:58 – "90% of code and text should be written by agents" (how to do it responsibly)
39:11 – Practical automation for non-coders: what to build and how to start
43:52 – Dan: managing agents like junior teammates (tools, guardrails, leverage)
48:14 – Education and training: learning in an agent world
52:44 – What Tim is building next (open-source coding agent; private repo specialization)
54:44 – What Dan is building next (inference efficiency, cost, performance)
55:58 – Mega-kernels + Together Atlas (speculative decoding + adaptive speedups)
58:19 – Predictions for 2026: small models, open-source, hardware, modalities
1:02:02 – Beyond transformers: state-space and architecture diversity
1:03:34 – Wrap



