

David Luan
@jluan
retired!! former cofounder of @adeptailabs, vp engineering @openai and @amazon, and @google LLMs lead. all about type II fun.

We’re building an LLM chip that delivers much higher throughput than any other chip while also achieving the lowest latency. We call it the MatX One.

The MatX One chip is based on a splittable systolic array, which has the energy and area efficiency that large systolic arrays are famous for, while also getting high utilization on smaller matrices with flexible shapes. The chip combines the low latency of SRAM-first designs with the long-context support of HBM. These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs. Higher throughput and lower latency give you smarter and faster models for your subscription dollar.

We’ve raised a $500M Series B to wrap up development and quickly scale manufacturing, with tapeout in under a year. The round was led by Jane Street, one of the most tech-savvy Wall Street firms, and Situational Awareness LP, whose founder @leopoldasch wrote the definitive memo on AGI. Participants include @sparkcapital, @danielgross and @natfriedman’s fund, @patrickc and @collision, @TriatomicCap, @HarpoonVentures, @karpathy, @dwarkesh_sp, and others. We’re also welcoming investors across the supply chain, including Marvell and Alchip.

@MikeGunter_ and I started MatX because we felt that the best chip for LLMs should be designed from first principles with a deep understanding of what LLMs need and how they will evolve. We are willing to give up on small-model performance, low-volume workloads, and even ease of programming to deliver on such a chip.

We’re now a 100-person team with people who think about everything from learning rate schedules, to Swing Modulo Scheduling, to guard/round/sticky bits, to blind-mated connections—all in the same building. If you’d like to help us architect, design, and deploy many generations of chips in large volume, consider joining us.

I enjoyed chatting with Amazon's @jluan about what he has been up to since kickstarting its AGI / agents research lab last year. David has seen it all and is refreshingly candid. theverge.com/decoder-podcas…


Nova Act is now ⚡️enterprise ready⚡️ and we've added new capabilities to our preview to help you take your prototype to production—with 90%+ reliability across our early enterprise customer use cases!

Meet Amazon Nova Act — an effortless way to build AI agents that can reliably use browsers 🧑💻 With our new model, compose robust steps into complex workflows; handle everything from bookings to QA testing. Getting started takes just 3 lines of code. See what Nova Act can do 🧵👇

Meet Gemini Robotics: our latest AI models designed for a new generation of helpful robots. 🤖 Based on Gemini 2.0, they bring capabilities such as better reasoning, interactivity, dexterity and generalization into the physical world. 🧵 goo.gle/gemini2-roboti…

Today, we are excited to announce Thinking Machines Lab (thinkingmachines.ai), an artificial intelligence research and product company. We are scientists, engineers, and builders behind some of the most widely used AI products and libraries, including ChatGPT, Character.ai, PyTorch, and Mistral.

Our mission is to make artificial intelligence work for you by building a future where everyone has access to the knowledge and tools to make AI serve their unique needs. We are committed to open science through publications and code releases, while focusing on human-AI collaboration that serves diverse domains. Our approach embraces co-design of research and products to enable learning from real-world deployment and rapid iteration.

This work requires three core foundations: state-of-the-art model intelligence, high-quality infrastructure, and advanced multimodal capabilities. We are committed to building models at the frontier of capabilities to deliver on this promise.

If you’re interested in joining our team, consider applying here: 6wajk07p.paperform.co


Super-excited about what's ahead. Want to move the AI research frontier? Join us! amazon.science/blog/amazon-op… AGI-SFLab-Jobs@amazon.com

!! @pabbeel and I are building a new AI research lab in SF for Amazon! We’re focused on the remaining major problems to build generally intelligent agents and are looking for a few dozen intrinsically motivated people to join our team and work with the Adept folks here. DM me!

