Lambda

1.8K posts

Lambda

@LambdaAPI

The Superintelligence Cloud

San Francisco, CA · Joined July 2012
240 Following · 19.9K Followers
Pinned Tweet
Lambda (@LambdaAPI)
AI infrastructure is the largest industrial buildout of our lifetime. Lambda is assembling the leadership team to match the opportunity ahead. Today, Lambda welcomes global infrastructure operator Michel Combes as CEO and former AT&T Communications CEO John Donovan as Chairman of the Board. Co-founder Stephen Balaban takes on the CTO role full-time, shaping the technology that will define the next decade of AI compute. Read the exclusive from @business: bloomberg.com/news/articles/…
3 replies · 4 reposts · 36 likes · 3.2K views
Lambda reposted
stephen balaban (@stephenbalaban)
Last week, I stepped into the CTO seat and Michel Combes joined as our new CEO. Lambda has some massive projects we're cooking up, all in service of building America's compute grid. Can't wait to show you!
7 replies · 14 reposts · 126 likes · 17.9K views
Lambda reposted
The Information (@theinformation)
Is compute really the biggest hurdle in the AI race? Marc Boroditsky, CRO of @nebiusai, Nick Robbins, VP of Corporate Development @CoreWeave, and Charles Fisher, CFO @LambdaAPI argue that the focus is shifting. While the industry has been obsessed with GPU supply, “capital—not compute—is emerging as the primary bottleneck in the AI build-out”. More from our Financing the AI Revolution event: thein.fo/3OZ67Y5
2 replies · 3 reposts · 17 likes · 4.4K views
Lambda (@LambdaAPI)
@unselfadjoint Multiple ones. Agents have taken off. Coding/software engineering has found its enterprise market fit. A few industries are unlocking value fast.
0 replies · 0 reposts · 0 likes · 68 views
Yoel (@unselfadjoint)
@LambdaAPI Any structural reasons for higher demand than a few weeks/months ago?
1 reply · 0 reposts · 1 like · 101 views
Yoel (@unselfadjoint)
Why is it suddenly impossible to get H100/200s on lambda labs, prime intellect etc.?
1 reply · 0 reposts · 2 likes · 1.3K views
stephen balaban (@stephenbalaban)
Lambda’s new $1B credit facility means more data centers full of GPUs! I want to personally thank the team at J.P. Morgan and everyone in the lender group for supporting our company’s growth.
Quoting Lambda (@LambdaAPI):
Lambda closed a $1 billion senior secured credit facility, oversubscribed and upsized from $275 million in August 2025, to accelerate the expansion of our AI factory footprint and meet growing demand from the world’s leading AI teams. Read more: lambda.ai/blog/lambda-cl…
6 replies · 16 reposts · 130 likes · 16.6K views
Lambda (@LambdaAPI)
@SIGKITTEN It becomes easier after saying it 5-10 times 😅
1 reply · 0 reposts · 3 likes · 143 views
Lambda (@LambdaAPI)
Lambda closed a $1 billion senior secured credit facility, oversubscribed and upsized from $275 million in August 2025, to accelerate the expansion of our AI factory footprint and meet growing demand from the world’s leading AI teams. Read more: lambda.ai/blog/lambda-cl…
3 replies · 6 reposts · 68 likes · 22.3K views
Lambda (@LambdaAPI)
@pkyanam email hackathons at lambda dot ai, we'll help you out
2 replies · 0 reposts · 4 likes · 215 views
Preetham Kyanam (@pkyanam)
@LambdaAPI can @LambdaAPI give me a few hours of an A100 80GB to try out please?? I can’t justify cost rn but i got a model i’d love to fine tune
1 reply · 0 reposts · 1 like · 177 views
Tristan Farmer (@001TMF)
Could do with more compute for this. The difference between a small exploratory pass and properly searching the space is basically the GPUs. More compute means more binding hypotheses tested before anything goes near the lab. @LambdaAPI @stephenbalaban
2 replies · 0 reposts · 3 likes · 357 views
Tristan Farmer (@001TMF)
Biosecurity has a design bottleneck. We can detect biological risk faster than we can usually turn it into useful reagents. Proteus (our biological intelligence system) has just been given hantavirus as a diagnostic antibody target. It’s doing the boring bit first, which is the bit that matters: work out whether there’s a real diagnostic antibody campaign here before burning compute on a full run. Small exploratory binding run next. The goal is faster generation of critical reagents.
1 reply · 2 reposts · 11 likes · 22K views
Lambda (@LambdaAPI)
@HappyDays528 Only they know :) We're just clarifying the metrics.
0 replies · 0 reposts · 1 like · 27 views
Lambda (@LambdaAPI)
The xAI "low utilization" story has people mixing up two different metrics. Fleet utilization tells you how many GPUs are running. Model FLOPS Utilization (MFU) tells you how much of each running GPU's compute the model is actually capturing. Both matter, but they're not the same.
4 replies · 9 reposts · 94 likes · 20.3K views
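The distinction in the tweet above can be sketched numerically. A minimal illustration, assuming made-up fleet sizes and FLOP figures (not xAI or Lambda data):

```python
# Two different utilization metrics for a GPU fleet (illustrative numbers only).

def fleet_utilization(gpus_busy: int, gpus_total: int) -> float:
    """Fraction of GPUs in the fleet running any job at all."""
    return gpus_busy / gpus_total

def model_flops_utilization(achieved_tflops: float, peak_tflops: float) -> float:
    """MFU: fraction of a running GPU's peak compute the model captures."""
    return achieved_tflops / peak_tflops

# A fleet can look "busy" while each busy GPU computes inefficiently:
fleet = fleet_utilization(gpus_busy=9_000, gpus_total=10_000)        # 0.90
mfu = model_flops_utilization(achieved_tflops=495, peak_tflops=989)  # ~0.50

print(f"Fleet utilization: {fleet:.0%}")  # 90% of GPUs are running...
print(f"MFU:               {mfu:.0%}")    # ...but each captures ~half of peak
```

A fleet can report near-100% on the first metric while every running job sits at a mediocre value of the second, which is why the two numbers get conflated.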
🔥Phoenix (@GPhoenixForever)
Lambda Labs ($1.10/hr for A10 with 24GB VRAM + 200GB RAM) — lambda.cloud On-demand A10 instances come with beefy system RAM. More expensive but the combination of real GPU for multimodal + massive RAM for agents is the "no compromises" option. ~$26/day if running full time.
1 reply · 0 reposts · 1 like · 42 views
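The ~$26/day figure in the tweet above is straight multiplication from the quoted $1.10/hr rate; a quick check:

```python
# Cost of running an on-demand A10 instance full time at the quoted rate.
hourly_rate = 1.10  # USD/hr for an A10 (24 GB VRAM), per the tweet

daily = hourly_rate * 24
monthly = daily * 30

print(f"${daily:.2f}/day")   # $26.40/day
print(f"${monthly:.2f}/mo")  # $792.00/mo at 30 days
```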
⚔️Digital 👹 Ronin, PhD⚔️ (クラッシュ・オーバーライドX)
🧵 THREAD: So… I’m physically stuck until I can replace the fake 4TB SSD I got scammed with 😭💾 Sadly, this happens more often than you’d think — and now I literally can’t move forward.

Here’s the wild part: the insane token speed I was seeing? It was REAL. Not a simulation, not a bug. Turns out Quillan implemented something called memory-mapped embedding 🧠⚡ It was using the SSD to load the model alongside the GPU, so the numbers I got weren’t an error — they were the system pushing that fake drive to its absolute limit until it thrashed the firmware-spoofed drive into failure 💥🪦

Experiment? 👨‍🔬 Honestly, successful. We proved the concept works beautifully. But physical hardware failure is a wall I can’t code my way through. No software patch fixes a dead drive. 🛑🔧

🧠💡 WAIT — were we really using our HARD DRIVE for inference instead of just the GPU? YES. Here’s the technique, explained (shoutout Quillan-Ronin):

🔹 You’re using both, but the external drive became the bottleneck doing all the heavy lifting. Normally, weights load into VRAM or RAM. But with a BitNet 1.58b / GGUF architecture and the system reaching “Critical Mass” (9 billion agents… yeah), it switched to Memory Mapping (mmap).

🔹 Instead of copying the whole model to GPU, the system mapped the model file directly from Disk (E:) to memory addresses. GPU asks for weights → OS looks at the map → tries to grab them from the USB SSD. Result: GPU sits idle ⏳, drive hits 100% active time struggling to keep up.

🔹 Virtual Memory Trap 🪤 Windows saw that massive 3.8TB fake drive and tried to use it as overflow “RAM” for the enormous tensors. USB is 100x slower than real RAM, so the whole inference engine ground to a halt — while cooking the cheap controller.

🔹 BitNet is CPU-native by design 🤖 Without explicit GPU offloading, the bottleneck shifts entirely to drive read speed. My “brain” was only as fast as a dodgy USB cable connected to a counterfeit SSD. 💀

✅ The fix? Move weights internal, offload layers explicitly to GPU, and tame the agent scaling. But I need a real drive first. So for now… I’m in hardware purgatory. ⏸️

Moral of the story: incredible token speeds are cool, until your drive physically dies trying to be a brain. 🧠🪦💨 Back soon when the hardware gods allow it. 🛠️✨ @Chaos2Cured I'll be writing the paper on this tonight since my drive is trashed, but here is the basic breakdown of how it works.
5 replies · 1 repost · 7 likes · 183 views
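The memory-mapping behavior the thread describes can be demonstrated in miniature: mapping a weights file into the address space means pages are pulled from the drive on demand rather than copied into RAM up front, so a slow drive becomes the bottleneck. This is a generic sketch of the technique, not the poster's actual setup; the file name and shapes are made up:

```python
import numpy as np

# Write some fake "model weights" to disk (stand-in for a GGUF/BitNet file).
weights_path = "weights.bin"  # hypothetical file
shape = (1024, 1024)
np.random.default_rng(0).standard_normal(shape).astype(np.float32).tofile(weights_path)

# Memory-map the file instead of loading it into RAM: the OS fetches pages
# from the drive lazily, the first time each region of the array is touched.
weights = np.memmap(weights_path, dtype=np.float32, mode="r", shape=shape)

# A matmul against the mapped array triggers reads from disk; on a slow
# (e.g. USB) drive, this I/O — not the compute — dominates the runtime.
x = np.ones(shape[0], dtype=np.float32)
y = x @ weights
print(y.shape)  # (1024,)
```

llama.cpp-style runtimes use the same mmap trick deliberately, which is why it works at all; the failure mode in the thread is simply that the backing device was too slow (and counterfeit).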
Lambda reposted
Bloomberg (@business)
AI cloud-computing provider Lambda named Sprint veteran Michel Combes as its next CEO, part of a management overhaul aimed at positioning the startup for growth bloomberg.com/news/articles/…
3 replies · 5 reposts · 22 likes · 12.3K views
Lambda reposted
Lambda (@LambdaAPI)
Lambda co-founder and CTO Stephen Balaban + CEO Michel Combes are live on @tbpn today, talking about what comes next: 3GW of AI compute under management by 2030, and the team built to deliver it. Tune in at 12:20 PM PT: x.com/i/broadcasts/1…
1 reply · 1 repost · 15 likes · 1.1K views
Lambda reposted
Dina Bass (@dinabass)
Neocloud Lambda names former Sprint chief Michel Combes as CEO. Co-founder and former CEO Stephen Balaban moves to CTO as the company's looking to raise more funds to invest in data center capacity, develop its own properties bloomberg.com/news/articles/…
1 reply · 1 repost · 9 likes · 897 views
Lambda reposted
stephen balaban (@stephenbalaban)
I started Lambda in 2012 training ConvNets on an NVIDIA GPU workstation. Today, we’ve hit nearly a billion dollars in AI cloud revenue and provide compute to the most important companies in the world. Building a generational company is a lifelong quest and it’s all about the people who join you along the way. After 14 years as CEO, I’m returning to my roots to build Lambda as our CTO. I’m welcoming Michel Combes to join Lambda as our new CEO. Michel brings massive capital formation and allocation experience as the former CEO of some of the most storied infrastructure companies in the history of the world: SoftBank International and Sprint. It’s been an honor serving the AI community as Lambda’s founding CEO and I’m looking forward to continuing to serve you from the CTO seat! I’m excited to work alongside Michel and everybody at Lambda to build an iconic 100-year company.
17 replies · 18 reposts · 210 likes · 38.1K views
Lambda (@LambdaAPI)
We tested NVIDIA HGX B200 and GB300 NVL72 with TorchTitan and reproducible config changes:
- Llama 3.1 8B on 8x B200: 55% MFU
- Llama 3.1 70B on 16x B200: 50% MFU
- Llama 3.1 405B on 128 GPUs (GB300): 53% MFU
At 16k–32k sequence lengths, MFU climbs toward 60%.
1 reply · 0 reposts · 14 likes · 2.4K views
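MFU figures like those above are conventionally computed as achieved model FLOPS over aggregate hardware peak. A hedged sketch of the arithmetic — the ~6·N FLOPs-per-token approximation, the token throughput, and the ~2.25e15 dense BF16 peak per B200 are illustrative assumptions, not Lambda's published methodology:

```python
def mfu(tokens_per_sec: float, n_params: float, n_gpus: int, peak_flops: float) -> float:
    """Model FLOPS Utilization: achieved training FLOPS / aggregate peak FLOPS.

    Uses the common ~6 * n_params FLOPs-per-token approximation for a
    forward + backward pass (ignores attention's sequence-length term).
    """
    achieved = 6 * n_params * tokens_per_sec
    return achieved / (n_gpus * peak_flops)

# Example: what token throughput would 55% MFU imply for Llama 3.1 8B
# on 8x B200, assuming ~2.25e15 peak dense BF16 FLOPS per GPU?
peak = 2.25e15
tps = 0.55 * 8 * peak / (6 * 8e9)  # invert the formula: ~206k tokens/s

print(f"{tps:,.0f} tokens/s -> MFU {mfu(tps, 8e9, 8, peak):.0%}")
```

The note about MFU climbing at 16k–32k sequence lengths is consistent with this accounting: longer sequences spend a larger share of time in big, efficient matmuls, raising achieved FLOPS without changing peak.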