
Homebrew
@homebrew
Seed stage VC by @hunterwalk & @satyap. $$ in @chime, @Habi_co_, @crowdbotics, @noyoHQ, @plaid, @boweryfarming, @gustohq, @shieldaitech, @tryfinch & more!


Introducing MatX: we design hardware tailored for LLMs, delivering an order of magnitude more computing power so AI labs can make their models an order of magnitude smarter. Our hardware would make it possible to train GPT-4 and run ChatGPT, but on the budget of a small startup. Our founding team has designed chips at Google and Amazon, and we've built chips with 1/10 the team size typically needed.

Here's how we're approaching the problem of inefficient and insufficient compute. While other chips treat all models equally, we dedicate every transistor to maximizing performance on the world's largest models. Our goal is to make the world's best AI models run as efficiently as physics allows, bringing the world years ahead in AI quality and availability. A world with more widely available intelligence is a happier and more prosperous world: picture people of all socioeconomic levels having access to an AI staff of specialist MDs, tutors, coaches, advisors, and assistants.

Our design focuses on cost efficiency for high-volume pre-training and production inference for large models. This means:

1/ We'll support training and inference, with inference first.
2/ We optimize for performance-per-dollar first (we'll be best by far) and for latency second (we'll be competitive).
3/ We offer excellent scale-out performance, supporting clusters with hundreds of thousands of chips.
4/ Peak performance is achieved for large Transformer-based models (both dense and MoE), ideally 20B+ parameters, with inference serving thousands of simultaneous users.
5/ We give you low-level access to the hardware.

We believe the best hardware is designed jointly by ML hardware experts and LLM experts. Everyone on the MatX team, from new grad to industry veteran, is exceptional. Our industry veterans have built ML chips, ML compilers, and LLMs at Google, Amazon, and various startups.

Our CEO, @reinerpope, was Efficiency Lead for Google PaLM, where he designed and implemented the world's fastest LLM inference software. Our CTO, @mikegunter_, was Chief Architect for one of Google's ML chips (at the time, Google's fastest) and an Architect for Google's TPUs. Our CDO of Silicon, @avinashgmani, has over 25 years of experience building products and world-class engineering teams in silicon and software at Amazon, Innovium, and Broadcom.

We're backed by $25M of investment from specialist investors and operators who share our vision, including @danielgross and @natfriedman (lead investors and experts in the AI space), @rkhemani (CEO at Auradine), @amasad (CEO at Replit), @outsetcap, @homebrew, and @svangel. We also have investment from leading AI and LLM researchers, including @IrwanBello, @jekbradbury, @achowdhery, @liamfedus, and @hardmaru.

1/ Today, we're proud to announce that @finix's payments solution for merchants, marketplaces, SaaS platforms, and software companies is officially available in Canada 🇨🇦. In 2024, you might assume that would be simple, but here's why it's such a big deal: THREAD 🧵

Introducing CoinTracker 2.0 🎉 Two powerful products merge into one: get optimized crypto tax reports and track your portfolio, all in one place. ⬇️ Learn more about our biggest launch in history ⬇️ cointracker.io/blog/introduci…
