We’re building an LLM chip that delivers much higher throughput than any other chip while also achieving the lowest latency. We call it the MatX One.

The MatX One chip is based on a splittable systolic array, which has the energy and area efficiency that large systolic arrays are famous for, while also getting high utilization on smaller matrices with flexible shapes. The chip combines the low latency of SRAM-first designs with the long-context support of HBM. These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs. Higher throughput and lower latency give you smarter and faster models for your subscription dollar.

We’ve raised a $500M Series B to wrap up development and quickly scale manufacturing, with tapeout in under a year. The round was led by Jane Street, one of the most tech-savvy Wall Street firms, and Situational Awareness LP, whose founder @leopoldasch wrote the definitive memo on AGI. Participants include @sparkcapital, @danielgross and @natfriedman’s fund, @patrickc and @collision, @TriatomicCap, @HarpoonVentures, @karpathy, @dwarkesh_sp, and others. We’re also welcoming investors across the supply chain, including Marvell and Alchip.

@MikeGunter_ and I started MatX because we felt that the best chip for LLMs should be designed from first principles with a deep understanding of what LLMs need and how they will evolve. We are willing to give up on small-model performance, low-volume workloads, and even ease of programming to deliver on such a chip. We’re now a 100-person team with people who think about everything from learning rate schedules, to Swing Modulo Scheduling, to guard/round/sticky bits, to blind-mated connections—all in the same building. If you’d like to help us architect, design, and deploy many generations of chips in large volume, consider joining us.
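Why a splittable array helps is easiest to see with a toy utilization model: a single large systolic array wastes processing elements on edge tiles whenever a matmul is small or oddly shaped, while the same silicon split into independently schedulable sub-arrays keeps more of them busy. The Python sketch below is a back-of-envelope illustration only; the 256x256 and 128x128 array sizes and the example matrix shapes are hypothetical assumptions, not MatX specs.

```python
import math

def utilization(m, n, array_rows, array_cols):
    """Fraction of PEs doing useful work when an m x n output is tiled
    onto an array_rows x array_cols systolic array (edge tiles run
    partially empty). Ignores pipeline fill/drain for simplicity."""
    tiles = math.ceil(m / array_rows) * math.ceil(n / array_cols)
    useful = m * n                                  # outputs that need computing
    provisioned = tiles * array_rows * array_cols   # PE-slots actually occupied
    return useful / provisioned

# Same silicon budget: one 256x256 array vs. four independent 128x128 arrays.
for m, n in [(4096, 4096), (128, 128), (96, 512)]:
    mono = utilization(m, n, 256, 256)
    split = utilization(m, n, 128, 128)  # each sub-array works on its own tiles
    print(f"{m}x{n}: monolithic {mono:.0%}, split {split:.0%}")
```

On the large 4096x4096 matmul both configurations reach 100%, but on the smaller, flexibly shaped cases the split configuration roughly doubles utilization or better, which is the trade the post is describing.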








Join us LIVE at MCP's first birthday kickoff at 10 am PT today! 🎂 Don't miss out on details about the celebration from the co-hosts, @Gradio and @AnthropicAI. 🔥 We've also got an exciting lineup of speakers from @Huggingface, @OpenAI, @GoogleDeepMind, @modal, @blaxelAI, @SambaNovaAI, and @nebiustf ready to share their insights.





🚨 New drop: @OpenAI-OSS 120B on SambaCloud at over 700 t/s
✅ US-built, Apache 2.0: own it, control it, trust it
✅ Low cost: $0.22/$0.59 per 1M input/output tokens
✅ Deploy wherever you need & fine-tune the model with your data
✅ Inference at over 700 t/s
Build here: bit.ly/4nqGlYh
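For anyone following the link, SambaNova Cloud serves an OpenAI-compatible API, so a minimal streaming client is a few lines of Python. This is a sketch under stated assumptions: the base URL and the gpt-oss-120b model id are taken from SambaNova's public docs and should be verified, and the API key is a placeholder.

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; confirm against SambaNova's current docs.
client = OpenAI(
    base_url="https://api.sambanova.ai/v1",
    api_key="YOUR_SAMBANOVA_API_KEY",  # placeholder
)

# Assumed model id for the OSS 120B deployment.
stream = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}],
    stream=True,  # streaming makes the decode rate (t/s) directly visible
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```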


Introducing DeepSeek-V3.1: our first step toward the agent era! 🚀
🧠 Hybrid inference: Think & Non-Think — one model, two modes
⚡️ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528
🛠️ Stronger agent skills: Post-training boosts tool use and multi-step agent tasks
Try it now — toggle Think/Non-Think via the "DeepThink" button: chat.deepseek.com
1/5
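Beyond the "DeepThink" button in the web UI, DeepSeek's OpenAI-compatible API exposes the two modes of the same V3.1 weights as separate model ids. A minimal sketch, assuming the deepseek-chat (Non-Think) and deepseek-reasoner (Think) ids from DeepSeek's docs; verify both before relying on them, and note the API key is a placeholder.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
)

question = [{"role": "user", "content": "Which is larger, 9.11 or 9.9?"}]

# Non-Think mode: fast, direct answer.
fast = client.chat.completions.create(model="deepseek-chat", messages=question)

# Think mode: the model reasons before answering, trading latency for accuracy.
deep = client.chat.completions.create(model="deepseek-reasoner", messages=question)

print("non-think:", fast.choices[0].message.content)
print("think:", deep.choices[0].message.content)
```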

🐋 New drop: @DeepSeek_ai V3.1 on SambaCloud @ 200+ t/s
Beats Claude Opus 4 & earlier DeepSeek versions in coding
Hybrid Thinking Mode = reasoning when needed, raw speed when not
Open-source + deploy privately at lower cost




The @xAI Grok 2.5 model, which was our best model last year, is now open source. Grok 3 will be made open source in about 6 months. huggingface.co/xai-org/grok-2
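Since the weights are on Hugging Face, fetching them is one call to huggingface_hub. A quick sketch: the repo id comes from the link in the post, the local directory is an arbitrary choice, and the checkpoint is reportedly very large (hundreds of GB), so plan disk space and download time accordingly.

```python
from huggingface_hub import snapshot_download

# Repo id taken from the link in the post; local_dir is an arbitrary choice.
path = snapshot_download(
    repo_id="xai-org/grok-2",
    local_dir="./grok-2",
)
print("weights downloaded to", path)
```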







