Will Reed

1.8K posts

@willreed

gp @sparkcapital

Marin County, CA · Joined October 2015
1.6K Following · 4.1K Followers
Pinned Tweet
Will Reed@willreed·
she’s a good pup
4 replies · 0 reposts · 16 likes · 4.4K views
Will Reed retweeted
Baseten@baseten·
Cold starts for large models are one of the hardest problems in AI inference infrastructure. Today we're launching the Baseten Delivery Network (BDN) to tackle it: 2–3x faster cold starts for large models at scale via optimizations at the pod, node, and cluster levels.
Rachel Rapp@rapprach

x.com/i/article/2034…

2 replies · 1 repost · 29 likes · 1.9K views
Will Reed retweeted
Stephanie Palazzolo@steph_palazzolo·
Most data center developers have to raise hundreds of millions of dollars. This one has just raised $8m, and is generating $50m in revenue per quarter. I chatted with Giga Energy CEO @lohstroh about the company's approach: theinformation.com/newsletters/ai…
0 replies · 5 reposts · 38 likes · 6K views
Will Reed retweeted
Clay Fisher@claymfisher·
@latent_health is one of the biggest and most strategic opportunities in healthcare, sitting between providers, payors, patients, and pharma, and $200B of pharma spend. @sparkcapital is delighted to support Rish and Sri and this consequential company, and to work with @saranormous and Mike Dixon.
Latent@latent_health

Excited to share that Latent has raised $80M to build the clinical reasoning engine that closes the gap between diagnosis and treatment. This round is co-led by @sparkcapital and @transformcptl, with participation from @Conviction, @MCK_Ventures, @generalcatalyst, and @ycombinator.

For the first time, AI makes it possible to reason through patient data, interpret drug criteria, extract key evidence, and orchestrate clinical workflows at scale. Latent is that reasoning layer.

Today, over 45 of the top U.S. health systems, including Yale New Haven Health, UCSF Health, UCLA Health, Mount Sinai Health System, and Vanderbilt University Medical Center, use Latent to perform high-stakes clinical knowledge work. We've helped over 2 million patients access life-saving medications faster and reduced denials by more than 30%.

We're expanding our clinical reasoning engine across every process where clinical knowledge must be translated into action, and building a team to match the scale of the problem.

3 replies · 3 reposts · 16 likes · 2.8K views
Will Reed retweeted
Baseten@baseten·
Live from Jensen's keynote remarks at GTC: "The inflection point of inference has arrived. AI now has to think. In order to think, it has to inference. AI now has to do. In order to do, it has to inference. AI has to read. In order to do so, it has to inference. It has to reason. It has to inference. Every part of AI, every time it has to think, it has to reason, it has to do, it has to generate tokens, it has to inference. It's way past training now. It's in the field of inference. So the inference inflection has arrived."
Baseten tweet media
3 replies · 4 reposts · 20 likes · 1.7K views
Will Reed retweeted
Garrett Lord@GarrettLord·
Agree minus inference. Data and post-training converge not because it's the same people but because it's the same loop. What you train on determines what improves. What improves determines what you need next. That cycle only compounds when the loop is tight. More distance between data and training means slower iterations and worse signal. Inference is chips and software optimization. Different game.

Also: post-training for enterprise is about to accelerate. Open source tooling, published methods, dropping compute costs. Every barrier is falling except the data itself. More companies training means more demand for verified expert signal. Synthetic scales generation. It doesn't scale verification. The bottleneck narrows toward human judgment, not away from it.
abhijay@abhijaymrana

All training data, inference, and RL-as-a-service companies will be doing the ~ exact same thing within 6 months. This convergence is already in motion.

5 replies · 3 reposts · 50 likes · 9.6K views
Will Reed retweeted
Claude@claudeai·
1 million context window: Now generally available for Claude Opus 4.6 and Claude Sonnet 4.6.
Claude tweet media
1.2K replies · 2K reposts · 25.1K likes · 5.5M views
Will Reed retweeted
Claude@claudeai·
Introducing Code Review, a new feature for Claude Code. When a PR opens, Claude dispatches a team of agents to hunt for bugs.
2.1K replies · 5.2K reposts · 62.9K likes · 22.6M views
Will Reed retweeted
Garrett Lord@GarrettLord·
Someone on our subreddit just posted this. Paid off their student loans. $235K earned on HAI since July. Now they want to send their mom to Italy and start a college fund for their nephew. This is the part of building an AI company that never makes the headlines.
Garrett Lord tweet media
7 replies · 14 reposts · 188 likes · 22.8K views
Will Reed retweeted
Base Power@basepowerco·
We’re excited to announce a new agreement with @CoServ, deploying 100 MW of residential battery storage across their North Texas service territory. The program is Base’s largest collaboration to date and one of the largest distributed residential energy storage programs led by a Texas electric cooperative. This marks Base’s fifth utility collaboration in Texas, building on a proven model for rapidly bringing new capacity online.
4 replies · 8 reposts · 58 likes · 36.1K views
Will Reed retweeted
John Collison@collision·
Reiner Pope (@MatXComputing) just raised a $500m round led by @leopoldasch and Jane Street to build faster AI chips. I enjoyed having him on Cheeky Pint so I could ask all my questions about how chip design actually works, where the speed-up comes from, and how the industry will evolve.
00:00:15 Google’s AI revival
00:07:54 MatX
00:17:11 AI supply chain
00:21:48 Designing chips
00:37:11 TSMC
00:44:17 Token pricing
00:44:55 RL-ing chip design
00:49:26 Design to production
00:56:05 MatX culture
01:02:57 Rust
01:05:21 Cuckoo hashing
01:09:35 Unexplored model architectures
21 replies · 39 reposts · 424 likes · 49.9K views
Will Reed retweeted
Abridge@AbridgeHQ·
Shiv Rao Named to Forbes’ Inaugural List of America’s 250 Greatest Living Innovators

@Forbes has named Dr. @ShivdevRao, CEO and Co-Founder of Abridge, to its inaugural list of America’s 250 Greatest Living Innovators, part of the American250 series recognizing leaders shaping the future of the country. Forbes describes him as an “AI pioneer building virtual helpers to rescue doctors from paperwork.”

What started as a belief that the clinical conversation is the most important moment in healthcare has grown into an enterprise-grade AI platform projected to support more than 80 million conversations this year across more than 250 of the largest and most complex health systems in the country.
Abridge tweet media
0 replies · 4 reposts · 7 likes · 1K views
Will Reed retweeted
Coen Armstrong@coen_armstrong·
Reiner, Mike & the extraordinary MatX team have made prescient technical bets in deeply co-optimising chips with large models, and can significantly push the frontier on the cost & quality of intelligence. It’s a privilege to work with them; I’m very excited for what’s next.
Reiner Pope@reinerpope

We’re building an LLM chip that delivers much higher throughput than any other chip while also achieving the lowest latency. We call it the MatX One.

The MatX One chip is based on a splittable systolic array, which has the energy and area efficiency that large systolic arrays are famous for, while also getting high utilization on smaller matrices with flexible shapes. The chip combines the low latency of SRAM-first designs with the long-context support of HBM. These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system, while simultaneously matching the latency of SRAM-first designs. Higher throughput and lower latency give you smarter and faster models for your subscription dollar.

We’ve raised a $500M Series B to wrap up development and quickly scale manufacturing, with tapeout in under a year. The round was led by Jane Street, one of the most tech-savvy Wall Street firms, and Situational Awareness LP, whose founder @leopoldasch wrote the definitive memo on AGI. Participants include @sparkcapital, @danielgross and @natfriedman’s fund, @patrickc and @collision, @TriatomicCap, @HarpoonVentures, @karpathy, @dwarkesh_sp, and others. We’re also welcoming investors across the supply chain, including Marvell and Alchip.

@MikeGunter_ and I started MatX because we felt that the best chip for LLMs should be designed from first principles with a deep understanding of what LLMs need and how they will evolve. We are willing to give up on small-model performance, low-volume workloads, and even ease of programming to deliver on such a chip.

We’re now a 100-person team with people who think about everything from learning rate schedules, to Swing Modulo Scheduling, to guard/round/sticky bits, to blind-mated connections—all in the same building. If you’d like to help us architect, design, and deploy many generations of chips in large volume, consider joining us.

1 reply · 1 repost · 25 likes · 2.3K views
Will Reed retweeted
Sholto Douglas@_sholtodouglas·
Reiner taught me much of what I know - goes without saying that I trust him to make the best chip in the world.
Reiner Pope@reinerpope

(Quoted tweet: Reiner Pope’s MatX One $500M Series B announcement, reproduced in full above.)

12 replies · 10 reposts · 412 likes · 50.9K views
Will Reed retweeted
Yasmin Razavi@YasminRazavi·
The @MatXComputing team is ambitiously building the compute infrastructure for our AGI future. It’s been a pleasure working with @reinerpope, @MikeGunter_ , and the team this past year. Incredible milestone and the best is still ahead.
Reiner Pope@reinerpope

(Quoted tweet: Reiner Pope’s MatX One $500M Series B announcement, reproduced in full above.)

8 replies · 6 reposts · 132 likes · 61.4K views
Will Reed retweeted
Bloomberg@business·
An AI chip startup founded by two Google alumni has raised more than $500 million in a new round to compete with Nvidia bloomberg.com/news/articles/…
13 replies · 33 reposts · 289 likes · 184.9K views
Will Reed retweeted
Dannie Herzberg@DannieHerz·
At Baseten, we’re proud to serve category-defining companies like Abridge, Cursor, Clay, OpenEvidence, Gamma, and Writer. A glimpse into their impact 👇
1 reply · 1 repost · 25 likes · 2.7K views
Will Reed retweeted
Claude@claudeai·
Introducing Claude Code Security, now in limited research preview. It scans codebases for vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix issues that traditional tools often miss. Learn more: anthropic.com/news/claude-co…
1.9K replies · 5.8K reposts · 50K likes · 26M views
Will Reed retweeted
Baseten@baseten·
"No other product lets you launch ten different training jobs on four different datasets." –Head of Clinical NLP, OpenEvidence

Over 40% of U.S. physicians trust @EvidenceOpen's platform for fast, accurate medical information. Their secret: custom, specialized models built on Baseten Training. Here's how we helped them save $1.9M via model training and improved their latency 23x to power 100M+ clinical consultations per year. baseten.co/resources/cust…
Baseten tweet media
0 replies · 7 reposts · 29 likes · 16.5K views
Will Reed retweeted
Abridge@AbridgeHQ·
The head of @AnthropicAI's biology and life sciences business recently spoke about the company’s approach to building and partnering in healthcare. “In ambient AI, for example, Abridge is already a partner,” said Eric Kauderer-Abrams, Head of Biology and Life Sciences at Anthropic. “Our perspective is that we develop products where we see a gap. If there’s a great product already serving certain use cases, such as Abridge, we don’t need to reinvent the wheel.”
Abridge tweet media
2 replies · 5 reposts · 19 likes · 1.8K views