MatX @MatXComputing

High-throughput chips for LLMs.

Mountain View, California · Joined July 2023
42 Following · 3.6K Followers

39 posts
Pinned Tweet
MatX @MatXComputing
Our goal is to make the best chips physically possible for the large model needs of frontier labs. The MatX One chip delivers higher throughput than any announced product while also achieving the lowest latency. @reinerpope joined @BloombergTV this morning to discuss:
Reiner Pope @reinerpope

We’re building an LLM chip that delivers much higher throughput than any other chip while also achieving the lowest latency. We call it the MatX One.

The MatX One chip is based on a splittable systolic array, which has the energy and area efficiency that large systolic arrays are famous for, while also getting high utilization on smaller matrices with flexible shapes. The chip combines the low latency of SRAM-first designs with the long-context support of HBM. These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs. Higher throughput and lower latency give you smarter and faster models for your subscription dollar.

We’ve raised a $500M Series B to wrap up development and quickly scale manufacturing, with tapeout in under a year. The round was led by Jane Street, one of the most tech-savvy Wall Street firms, and Situational Awareness LP, whose founder @leopoldasch wrote the definitive memo on AGI. Participants include @sparkcapital, @danielgross and @natfriedman’s fund, @patrickc and @collision, @TriatomicCap, @HarpoonVentures, @karpathy, @dwarkesh_sp, and others. We’re also welcoming investors across the supply chain, including Marvell and Alchip.

@MikeGunter_ and I started MatX because we felt that the best chip for LLMs should be designed from first principles with a deep understanding of what LLMs need and how they will evolve. We are willing to give up on small-model performance, low-volume workloads, and even ease of programming to deliver on such a chip.

We’re now a 100-person team with people who think about everything from learning rate schedules, to Swing Modulo Scheduling, to guard/round/sticky bits, to blind-mated connections—all in the same building. If you’d like to help us architect, design, and deploy many generations of chips in large volume, consider joining us.

6 replies · 12 reposts · 83 likes · 31.5K views
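
A rough way to see why a splittable systolic array helps utilization (an illustrative Python sketch with hypothetical array and matrix sizes, not a description of MatX's actual design, and ignoring pipeline fill/drain effects): a monolithic array running a matmul smaller than itself leaves most multipliers idle, while the same silicon split into independent tiles can run several small matmuls at full occupancy.

# Illustrative utilization math for a systolic array (hypothetical
# sizes, not MatX's architecture; ignores pipeline fill/drain).

def utilization(array_dim: int, matrix_dim: int) -> float:
    """Fraction of PEs doing useful work when an array_dim x array_dim
    systolic array multiplies matrix_dim x matrix_dim matrices."""
    used = min(array_dim, matrix_dim) ** 2
    return used / array_dim ** 2

# A monolithic 256x256 array running a 64x64 matmul: ~6% of PEs busy.
print(utilization(256, 64))   # 0.0625

# Split the same silicon into sixteen independent 64x64 tiles, each
# running its own 64x64 matmul: every tile is fully busy.
print(utilization(64, 64))    # 1.0 per tile, across all 16 tiles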
MatX retweeted
John Collison @collision
Reiner Pope (@MatXComputing) just raised a $500m round led by @leopoldasch and Jane Street to build faster AI chips. I enjoyed having him on Cheeky Pint so I could ask all my questions about how chip design actually works, where the speed-up comes from, and how the industry will evolve.

00:00:15 Google’s AI revival
00:07:54 MatX
00:17:11 AI supply chain
00:21:48 Designing chips
00:37:11 TSMC
00:44:17 Token pricing
00:44:55 RL-ing chip design
00:49:26 Design to production
00:56:05 MatX culture
01:02:57 Rust
01:05:21 Cuckoo hashing
01:09:35 Unexplored model architectures
21 replies · 39 reposts · 424 likes · 49.9K views
MatX retweeted
Reiner Pope @reinerpope
@blip_tm @theaustinlyons Put your KVs in HBM! Contexts are growing ~infinitely long, and HBM is the best place to fit them. Definitely won't be underutilized: *everyone* is running out of HBM capacity.
4 replies · 1 repost · 17 likes · 1.9K views
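
Some back-of-envelope arithmetic on why long contexts push KV caches out of SRAM and into HBM (all model parameters below are hypothetical, chosen only to show the scale):

# Hypothetical KV-cache sizing (illustrative numbers only).
n_layers   = 80         # transformer layers
n_kv_heads = 8          # KV heads (with GQA)
head_dim   = 128
bytes_elem = 2          # bf16
context    = 1_000_000  # tokens of context

# K and V each store n_kv_heads * head_dim values per layer per token.
bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_elem
cache_gb = bytes_per_token * context / 1e9

print(f"{bytes_per_token} B/token -> {cache_gb:.0f} GB for {context:,} tokens")
# 327680 B/token -> 328 GB: far beyond on-chip SRAM (hundreds of MB),
# comfortable territory for HBM (hundreds of GB per node).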
MatX retweeted
tensorpro @tensorpro
We trained models with MXFP4-quantized attention, but it turns out this can break causal modeling. Our latest post explains why this happens and how to fix it. matx.com/research/leaky…
1 reply · 17 reposts · 97 likes · 29.6K views
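
The failure mode is easy to reproduce in miniature with any block-scaled format: if a quantization block spans sequence positions, a token's quantized value depends on later tokens in the same block, so the "past" silently sees the future during training. The toy below uses a shared max-based scale rather than real MXFP4, and sketches the general mechanism, not the specific analysis or fix in the linked post.

import numpy as np

def quantize_block(x, n_levels=8):
    """Toy block quantizer: one shared scale per block (MX formats
    share a scale across 32 elements), symmetric integer grid."""
    scale = np.abs(x).max() / (n_levels - 1)
    return np.round(x / scale) * scale

past = np.array([0.9, -0.4, 0.2, 0.7])

# Block contains only past tokens:
q1 = quantize_block(past)

# Same past tokens, but the block now also spans a large future token:
q2 = quantize_block(np.concatenate([past, [3.0]]))[:4]

print(q1)  # quantized past, scale set by the past alone
print(q2)  # same past tokens, different values: the future token moved
           # the shared scale, leaking future information into the past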
MatX retweeted
James Hill-Khurana @jtvhk
I'll be in Toronto and Waterloo over the next week; I'd love to chat and tell you a bit more about what we're doing at MatX (and say hi). Please feel free to reach out!
4 replies · 5 reposts · 65 likes · 5.9K views
MatX retweeted
James Hill-Khurana @jtvhk
Excited to say I joined @MatXComputing late last year! The team is exceptionally thoughtful and the problems are both difficult and fun: from µarch, compilers, and models, to the systems we are building.
26 replies · 5 reposts · 159 likes · 8.5K views
MatX retweeted
Reiner Pope @reinerpope
MatX hardware will maximize intelligence per dollar for the world’s largest models. We are a team of 50+ and growing quickly. If you are passionate about building the best chips for LLMs, consider joining us. matx.com/jobs
0 replies · 2 reposts · 23 likes · 5.3K views
MatX retweeted
Reiner Pope @reinerpope
MatX is designing chips and systems to 10x the computing power for the world’s largest AI workloads. Today, we are pleased to announce the closing of a >$100M Series A funding round led by @sparkcapital, with participation from @JaneStreetGroup, @danielgross and @natfriedman, @TriatomicCap, @HarpoonVentures, and @adamdangelo.

In two years, we proved out all our technical bets across ML numerics, chip design and implementation, software, and system design—and secured all the necessary partnerships—to develop our chip. With this round of investment, we are now sufficiently funded to bring our systems to market.
16 replies · 20 reposts · 183 likes · 34.8K views
MatX retweeted
Reiner Pope @reinerpope
1. Breakdown of DeepSeek V3 efficiency vs Llama 3:
- Better: 11x fewer FLOPs per token, thanks to MoE [37B vs 405B activated params]
- Better: 2x faster numerics [fp8 vs bf16 training]
- Worse: 0.5x FLOPs utilization [16% vs 33% end-to-end MFU]
- Neutral: similar hardware platform [H800 and H100 both have 2 Pflops/s dense fp8]
- Neutral: same training data volume [14.8T vs 15T tokens]

Llama 3’s design was obviously and intentionally conservative: dense model (not MoE), bf16 training (not fp8), GQA attention (not cheaper alternatives). DeepSeek benefited by being aggressive on all these fronts, at the cost of being later to market.

2. The core algorithmic improvements were already known; the closed-source LLM labs were probably already doing similar things. DeepSeek’s improvements are real, but far more modest than the Llama comparison would suggest; my wild guess is closer to 1.5x improvement. MoE was published in 2017; in 2021 Switch Transformer reported 7x speedups vs dense models, similar to DeepSeek’s 11x. OpenAI is widely rumored to have been using MoE models for years. NVIDIA published their fp8 training paper in 2022.

3. NVIDIA’s stock price is down 15% after DeepSeek. Should it be? LLM compute is like a gas: it expands to fill the available budget. Over the last 3 years the labs have grown their budgets, despite algorithms and hardware improving. There’s no reason to expect this to change now: you win by making the best model, not by shrinking your budget.

The more meaningful question: do algorithmic improvements like DeepSeek’s mean that margins will shift from hardware vendors to labs? Hard to see why. Algorithmic improvements are quickly copied from one lab to another, making it hard for them to maintain technological differentiation. Hardware improvements take much longer to copy.
7 replies · 41 reposts · 385 likes · 68.1K views
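
Multiplying the factors in the thread out (a quick check using the tweet's own numbers, nothing else assumed):

# Combining the tweet's factors into an end-to-end training-cost ratio.
flops_per_token = 405 / 37   # ~11x fewer FLOPs/token (405B dense vs 37B activated MoE)
numerics        = 2.0        # fp8 vs bf16: 2x faster math
utilization     = 16 / 33    # ~0.5x: 16% vs 33% end-to-end MFU
data            = 14.8 / 15  # ~same token count

speedup = flops_per_token * numerics * utilization / data
print(f"{speedup:.1f}x")     # ~10.8x cheaper per training run: roughly the
                             # 11x FLOPs saving, with numerics and MFU
                             # largely cancelling each other out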
MatX retweeted
Mike Gunter @MikeGunter_
I really enjoyed talking about the process and business of semiconductor design with @tracyalloway and @TheStalwart on the Odd Lots podcast. Joe and Tracy were wonderful hosts: They put me at ease and guided the conversation with the lightest of touch.

We talked about what doing semiconductor design is like, why LLMs are hungry for as many FLOPS/$ as they can get, how @MatXComputing can provide that, and how NVIDIA's moat might be bridged. I particularly liked that @reinerpope got to communicate some of the sense of beauty that I also feel about good design.

Helping lead MatX's engineering team (and meeting with our customers) is a humbling honor: It's talking with people who are the world experts in what we're chatting about. Being on Odd Lots was talking with grandmasters at conversation.
Joe Weisenthal @TheStalwart

NEW ODD LOTS: Two Veteran Chip Designers Have A Plan To Take On Nvidia @tracyalloway and I talked to @reinerpope and @MikeGunter_, both formerly of Alphabet, about their new company MatX that's aiming to build the ultimate semiconductor just for LLMs bloomberg.com/news/articles/…

3 replies · 6 reposts · 33 likes · 27K views
MatX retweeted
Reiner Pope @reinerpope
MatX will be at MLSys. Come join us at our After Hours in Santa Clara to talk about chips, compilers, partitioning, and optimizing ML models for future hardware. Many of us will be there, including me and @mikegunter_. Tuesday May 14th at 4pm, see matx.com/meetmatx.
0 replies · 2 reposts · 21 likes · 3.2K views
MatX retweeted
Reiner Pope @reinerpope
We’re releasing seqax, a research-focused LLM codebase that is simple, explicit, and performs well on up to ~100 GPUs/TPUs. Everything you need to edit, from the math, to parallelism, to memory footprint, is all there in 500 lines of JAX code. 🧵 github.com/MatX-inc/seqax
9 replies · 48 reposts · 272 likes · 43.4K views
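
For a sense of the style being described (an illustrative toy in the same spirit, not code from the seqax repo): a simple, explicit JAX training step where the parameters, the loss, the gradient, and the update are all visible in a few lines.

# Illustrative toy: explicit params, explicit loss, explicit update.
# Not taken from the seqax codebase.
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]   # a one-layer "model"
    return jnp.mean((pred - y) ** 2)

@jax.jit
def train_step(params, x, y, lr=1e-2):
    grads = jax.grad(loss_fn)(params, x, y)
    # Plain SGD: subtract lr * grad from every leaf of the param tree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (4, 2)), "b": jnp.zeros(2)}
x, y = jnp.ones((8, 4)), jnp.zeros((8, 2))
params = train_step(params, x, y)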
MatX @MatXComputing
Introducing MatX: we design hardware tailored for LLMs, to deliver an order of magnitude more computing power so AI labs can make their models an order of magnitude smarter. Our hardware would make it possible to train GPT-4 and run ChatGPT, but on the budget of a small startup. Our founding team has designed chips at Google and Amazon, and we’ve built chips with 1/10 the team size typically needed. Here’s how we’re approaching the problem of inefficient and insufficient compute.

While other chips treat all models equally, we dedicate every transistor to maximizing performance on the world’s largest models. Our goal is to make the world’s best AI models run as efficiently as allowed by physics, bringing the world years ahead in AI quality and availability. A world with more widely available intelligence is a happier and more prosperous world—picture people of all socioeconomic levels having access to an AI staff of specialist MDs, tutors, coaches, advisors, and assistants.

Our design focuses on cost efficiency for high-volume pre-training and production inference for large models. This means:
1/ We’ll support training and inference. Inference first.
2/ We optimize for performance-per-dollar first (we’ll be best by far), and for latency second (we’ll be competitive).
3/ We offer excellent scale-out performance, supporting clusters with hundreds of thousands of chips.
4/ Peak performance is achieved for these workloads: large Transformer-based models (both dense and MoE), ideally 20B+ parameters, and inference having thousands of simultaneous users.
5/ We give you low-level access to the hardware. We believe that the best hardware is designed jointly by ML hardware experts and LLM experts.

Everyone on the MatX team, from new grad to industry veteran, is exceptional. Our industry veterans have built ML chips, ML compilers, and LLMs, at Google or Amazon or various startups. Our CEO, @reinerpope, was Efficiency Lead for Google PaLM, where he designed and implemented the world’s fastest LLM inference software. Our CTO, @mikegunter_, was Chief Architect for one of Google’s ML chips (at the time, Google’s fastest) and was an Architect for Google’s TPUs. Our CDO Silicon, @avinashgmani, has over 25 years of experience in building products and world-class engineering teams in silicon and software at Amazon, Innovium and Broadcom.

We’re backed by $25M of investment from specialist investors and operators who share our vision, including: @danielgross and @natfriedman (lead investors, and experts in the AI space), @rkhemani (CEO at Auradine), @amasad (CEO at Replit), @outsetcap, @homebrew, @svangel. Additionally we have investment from leading AI and LLM researchers including @IrwanBello, @jekbradbury, @achowdhery, @liamfedus, and @hardmaru.
25 replies · 76 reposts · 461 likes · 142.1K views
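
One way to make the "performance-per-dollar first" priority in the list above concrete (a generic formula with made-up numbers, not MatX figures or pricing): serving cost per token is simply the system's cost rate divided by its aggregate throughput.

# Generic serving-cost arithmetic with hypothetical numbers (not MatX data).
system_cost_per_hour = 20.0      # $/hour for the serving system
tokens_per_second    = 50_000    # aggregate throughput across all users

cost_per_m_tokens = system_cost_per_hour / (tokens_per_second * 3600) * 1e6
print(f"${cost_per_m_tokens:.3f} per million tokens")   # ~$0.111
# Doubling perf-per-dollar halves this number directly, which is why it
# ranks above latency for high-volume inference.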
MatX @MatXComputing
@gangl_daniel We'll have chips in hand next year.
1 reply · 0 reposts · 1 like · 232 views