Cuong Le
@cuongleqq

Engineering leader | Writing about high-performance Rust, Go, and database internals

120 posts · Joined September 2017
139 Following · 236 Followers
Cuong Le@cuongleqq·
> Non-technical teams are now shipping production code
> AI-generated code has rigorous human review

Oh man. Imagine seeing dozens of 10k-line PRs the next morning… from non-technical teams. Then when you review, they feed your comments back to the agents, which commit another 1k-line change. You end up opening your own agent and asking it to review RIGOROUSLY 😂
Brian Armstrong@brian_armstrong

This is an email I sent earlier today to all employees at Coinbase:

Team,

Today I've made the difficult decision to reduce the size of Coinbase by ~14%. I want to walk you through why we're doing this now, what it means for those affected, and how this positions us for the future.

Why now

Two forces are converging at the same time. We need to be front-footed to respond to both.

First, the market. Coinbase is well-capitalized, has diversified revenue streams, and is well-positioned to weather any storm. Crypto is also on the verge of the next wave of adoption, with stablecoins, prediction markets, tokenization, and more taking off. However, our business is still volatile from quarter to quarter. While we've managed through that cyclicality many times before and come out stronger on the other side, we're currently in a down market and need to adjust our cost structure now so that we emerge from this period leaner, faster, and more efficient for our next phase of growth.

Second, AI is changing how we work. Over the past year, I've watched engineers use AI to ship in days what used to take a team weeks. Non-technical teams are now shipping production code and many of our workflows are being automated. The pace of what's possible with a small, focused team has changed dramatically, and it's accelerating every day.

All of this has led us to an inflection point, not just for Coinbase, but for every company. The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast, and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core.

What this means

To get there, we are not just reducing headcount and cutting costs, we're fundamentally changing how we operate: rebuilding Coinbase as an intelligence, with humans around the edge aligning it. What does this mean in practice?

- Fewer layers, faster decisions: We are flattening our org structure to 5 layers max below CEO/COO. Layers slow things down and create coordination tax. The future is small, high-context teams that can move quickly. Leaders will own much more, with as many as 15+ direct reports. Fewer layers also means a leaner cost structure that is built to perform through all market cycles.
- No pure managers: Every leader at Coinbase must also be a strong and active individual contributor. Managers should be like player-coaches, getting their hands dirty alongside their teams.
- AI-native pods: We'll be concentrating around AI-native talent who can manage fleets of agents to drive outsized impact. We'll also be experimenting with reduced pod sizes, including "one person teams" with engineers, designers, and product managers all in one role.

In short: AI is bringing a profound shift in how companies operate, and we're reshaping Coinbase to lead in this new era. This is a new way of working, and we need to leverage AI across every facet of our jobs.

To those who are affected

I know there are real people behind these decisions, talented colleagues who have poured themselves into this company and our mission. To those of you who will be leaving: thank you. You've helped build Coinbase into what it is today, and I am sincerely grateful for everything you've done.

All impacted team members will receive an email to their personal account in the next hour with more information, and an invitation to meet with an HRBP and a senior leader in your organization. Coinbase system access has been removed today. I know this feels sudden and harsh, but it is the only responsible choice given our duty to protect customer information.

To those affected, we will be providing a comprehensive package to support you through this transition. US employees will receive a minimum of 16 weeks base pay (plus 2 weeks per year worked), their next equity vest, and 6 months of COBRA. Employees on a work visa will get extra transition support. Those outside of the US will receive similar support, based on local factors and subject to any consultation requirements.

Coinbase prides itself on talent density. Our employees are among the most talented people in the world, and I have no doubt that your skills and experience will be highly sought after as you pursue your next chapters.

How we move forward

To the team that is staying, I know this is a difficult day. We're saying goodbye to colleagues and friends you've been in the trenches with. But here's what I want you to know as we move forward together: Over the past 13 years, we have weathered four crypto winters, gone public, and built the most trusted platform in our industry. We've made it this far by making hard decisions and by always staying focused on our mission. This time will be no different; nothing has changed about the long-term outlook of our company or industry. And most importantly, our mission has never been more important for the world. Increasing economic freedom requires a new financial system, and we're building it. The Coinbase that emerges from this will be more capable than ever to achieve our mission.

Brian

Cuong Le@cuongleqq·
Cool video. Postgres handles large scans differently -- and I find this one equally clever.

Its shared_buffers cache uses clock-sweep instead of an LRU list. Each buffer slot has a usage_count. When Postgres needs a victim, it sweeps through slots until it finds one:
• pinned: skip
• usage_count > 0: decrement and skip
• usage_count == 0 and unpinned: victim found

For large scans (table blocks > 1/4 of shared_buffers slots), Postgres allocates a small ring buffer -- just 256KB, roughly 32 slots for 8KB pages. The scan fills victims into this ring, then cycles back to reuse them. If a buffer in the ring has become hot (pinned or usage_count > 1), clock-sweep finds a replacement from the main pool instead.

Net effect: a full table scan mostly churns through a tiny ring, typically around 256KB, instead of blowing out the whole shared_buffers cache. Simple idea, surprisingly effective.
Arpit Bhayani@arpit_bhayani

Table pages are cached, and this is how databases get a performance boost. But a single full table scan can destroy that cache -- here's how MySQL cleverly prevents this...

When a full table scan happens, it can flush out all valuable cached pages and replace them with pages that are often never needed again, since every single page that is accessed is put into the cache. This 'cache pollution' forces subsequent queries to hit slow disk storage.

MySQL solves this with the particularly simple yet clever midpoint insertion strategy, and I have explained this approach in my video. Give it a watch. Hope this helps.

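The clock-sweep loop described above is small enough to model directly. A minimal sketch in Rust (the `Buffer` and `ClockSweep` types are invented for illustration, not Postgres source; the real implementation also errors out if every buffer is pinned, which is omitted here):

```rust
// Miniature model of clock-sweep eviction as described in the post.
#[derive(Clone)]
struct Buffer {
    usage_count: u8,
    pinned: bool,
}

struct ClockSweep {
    slots: Vec<Buffer>,
    hand: usize, // position of the clock hand
}

impl ClockSweep {
    fn new(n: usize) -> Self {
        Self {
            slots: vec![Buffer { usage_count: 0, pinned: false }; n],
            hand: 0,
        }
    }

    // Sweep until a victim is found: skip pinned slots, decrement
    // positive usage counts, and pick the first unpinned slot whose
    // count has already reached zero.
    fn find_victim(&mut self) -> usize {
        loop {
            let i = self.hand;
            self.hand = (self.hand + 1) % self.slots.len();
            let b = &mut self.slots[i];
            if b.pinned {
                continue;
            }
            if b.usage_count > 0 {
                b.usage_count -= 1;
                continue;
            }
            return i;
        }
    }
}
```

Note how recently-touched buffers survive a full sweep (their count is merely decremented), which is what gives clock-sweep its approximate-LRU behavior without maintaining a linked list.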
Cuong Le@cuongleqq·
@real_redp Fair point. I don't. People have always submitted patches they don't fully understand (but that's also how they learn the project). AI just makes it much easier to do that at scale. But it removes the learning part, and shifts more of the cost onto reviewers.
red plait@real_redp·
@cuongleqq why do you think that they understand non-ai generated patches?
Cuong Le@cuongleqq·
Something is seriously wrong with how people contribute to open source these days.

People submit AI-generated code they clearly don't understand. When reviewers comment, they feed it back to the AI and submit again. The loop never ends. A low-effort PR requires extremely high effort to review. That math is never fair.

You think this is how you build a reputation in open source? No, no, and no. You're just burning out maintainers.

---

Use AI to understand the codebase instead. Ask it to explain things. Build your own judgment. It's OK to use AI as a tool for coding, but make sure you filter out any AI slop before submitting.

You gain the skills. You gain the reputation. The project actually benefits. Is it that hard?
Cuong Le@cuongleqq·
If you've been using Go for years and feel stuck at the intermediate level, I know the feeling. It's not you, it's your project. CRUD + business logic + REST APIs -- no matter how big the codebase, this stack won't teach you advanced Go.

But these projects will:
• VictoriaMetrics / VictoriaLogs (custom storage engine, block compression, zero-alloc hot paths)
• etcd (Raft consensus, WAL durability, election timeouts)
• containerd (OCI runtime stack, cgroups, plugin architecture)
• Prometheus (TSDB, memory-mapped chunks, query engine)

There are many more. Pick one, read the code, send a PR. These projects can't afford slow code, so you'll learn escape analysis, zero-alloc patterns, and real concurrency whether you plan to or not.
Cuong Le reposted
FFmpeg@FFmpeg·
FFmpeg is moving to Rust 🦀 Our use of C and Assembly in FFmpeg has been an unacceptable violation of safety. FFmpeg will be running 10x slower - but we're doing it for your safety. All your videos will appear green - safety first, working software later.
Aliaksandr Valialkin@valyala·
@brankopetric00 I prefer writing a Makefile rule, which can be debugged locally, and then calling it as a one-liner at GitHub Actions. This saves a lot of time on debugging and maintenance.
Branko@brankopetric00·
Debugging a GitHub Actions workflow is just committing a file with 'test' in the message 47 times until the YAML gods accept your offering.
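The pattern Valialkin describes -- keep the real logic in a Makefile target you can run and debug locally, and have CI call it as a one-liner -- might look like this (the `ci-test` target and file names are invented for illustration; `go test` fits his Go projects but any command works):

```makefile
# Makefile: all the real logic lives here, so it can be run
# and debugged locally with plain `make ci-test`
.PHONY: ci-test
ci-test:
	go test -race -count=1 ./...
```

```yaml
# .github/workflows/ci.yml: the workflow is a thin wrapper
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make ci-test
```

Anything that fails in CI can then be reproduced with the same `make ci-test` locally, instead of pushing "test" commits to rerun the workflow.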
Cuong Le@cuongleqq·
Contributing to Rust has been a dream of mine for a long time. Last December, I finally did it: a small but real contribution to Rust Clippy. Here's what I learned that no tutorial will teach you.

1. "It works" ≠ "it belongs here"

My code had zero logic errors from the first submission. Tests passed. The lint fired correctly. It still took multiple review rounds before it was accepted. Each round, the reviewer flagged something I hadn't considered: a different API, a different factoring, a different order of checks. None were correctness issues. All were belonging issues.

Mature codebases have a shape. A vocabulary. Conventions no doc can fully capture. And when that codebase operates on AST and HIR, the learning curve is steeper than most. Getting your code to work is step one. Getting it to fit is step two. Only the first is obvious to newcomers.

2. Performance is a first-class design constraint

I spent years building web and backend systems where readability almost always beats micro-optimization. Clippy changed that. The reviewer pushed back on how I structured my checks. His version was slightly harder to read, but it was faster. I had optimized for clarity. He optimized for performance. Both were valid, but only one was right for Clippy.

The reasoning: this code runs on every file, every build, for every Rust developer in the world. Nanoseconds x millions of daily builds = something real. Readability-first is a fine default. But in performance-critical code like compilers, runtimes, and toolchains, scale flips the equation. Know which world you're writing for.

3. Don't let ego get in the way of the reviewer

I was surprised by how many specific changes were requested. It felt strict at first. Then I paused and thought about it. Every request was reasonable. The reviewer has lived with this codebase. He understands every design decision and genuinely cares about keeping it in good shape. His knowledge was the most valuable resource I had access to, and I almost let pride get in the way of using it.

In a review from someone who knows the codebase far better than you, every comment is a gift. Receive it that way. The flip side: if you're ever reviewing a first contribution, that same dynamic works in reverse. Your patience, or lack of it, is the whole experience for them.

4. A good test suite makes new contributors brave

My biggest fear as a first-timer: breaking something silently. Clippy's golden-file tests killed that fear. The exact expected output is checked in alongside the code. Shift a diagnostic message even slightly and the test fails, showing you exactly what changed. Clippy even runs its own lints on itself.

Knowing the safety net was that solid made me willing to experiment instead of playing it safe. A comprehensive test suite isn't just quality assurance. For anyone contributing to an open source project, it's a confidence multiplier: the difference between experimenting freely and playing it safe.

---

That was my first time contributing to a real, large-scale open source project. Small contribution. Big lessons. More to come as I go deeper.
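The golden-file testing idea from point 4 generalizes well beyond Clippy. A minimal sketch of the technique in Rust (a generic illustration, not Clippy's actual test harness; the function name and `update` flag are invented here):

```rust
use std::fs;
use std::path::Path;

// Golden-file check: compare actual output against a checked-in
// expectation file. Passing `update = true` regenerates the file
// instead, which is how expectations get (re)blessed.
fn check_golden(path: &Path, actual: &str, update: bool) -> Result<(), String> {
    if update {
        fs::write(path, actual).map_err(|e| e.to_string())?;
        return Ok(());
    }
    let expected = fs::read_to_string(path)
        .map_err(|e| format!("missing golden file {}: {e}", path.display()))?;
    if expected == actual {
        Ok(())
    } else {
        Err(format!(
            "output drifted:\n--- expected ---\n{expected}\n--- actual ---\n{actual}"
        ))
    }
}
```

Because the failure message shows the exact diff between expected and actual output, even a one-character change to a diagnostic is caught and displayed, which is what makes newcomers brave.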
Cuong Le@cuongleqq·
Rust beats C++ by a mile: security.googleblog.com/2025/11/rust-i…

The numbers are crazy IMO:
• Rust: ~0.2 memory bugs per million lines; C/C++: ~1000 per million
• Rust code reviews move ~25% faster
• ~20% fewer review revisions
• Rust changes get reverted far less often, roughly 4x better

Google is pushing Rust into more of Android: drivers, firmware, the whole stack.
Cuong Le@cuongleqq·
Couldn't agree more with Jon Gjengset in this interview: youtube.com/watch?v=nOSxua… Rust adoption isn't held back by the learning curve. It's the switching costs of existing codebases. Honestly, learning to write safe concurrent C++ is even harder.
Cuong Le@cuongleqq·
I just read Cloudflare's blog about rewriting their core proxy in Rust and wow, the results are impressive:
• 25% performance boost overall
• Half the CPU, way less memory
• No more memory bugs -- crashes now mean hardware failures
• Ships features in 48hrs vs weeks (!)
• 100+ devs, 130+ modules, zero drama
• 10ms faster median response time

This is what Rust at Internet scale looks like. It just works.

Source: blog.cloudflare.com/20-percent-int…
cache crab@cachecrab·
you guys aren't using rust enums enough
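For readers wondering what the enum quip is about: Rust enums carry data per variant, and `match` forces every variant to be handled, so illegal states become unrepresentable. A small sketch (the `ConnState` type is invented for this example):

```rust
// Each variant carries exactly the data that state needs,
// so there is no "connected but no session id" state to mishandle.
enum ConnState {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { session_id: u64 },
}

// `match` is exhaustive: adding a new variant later makes this
// function fail to compile until the new case is handled.
fn describe(state: &ConnState) -> String {
    match state {
        ConnState::Disconnected => "disconnected".to_string(),
        ConnState::Connecting { attempt } => format!("connecting (attempt {attempt})"),
        ConnState::Connected { session_id } => format!("connected as session {session_id}"),
    }
}
```

Compare this with a struct holding an `is_connected` bool plus optional fields, where nothing stops the combinations that make no sense.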
Cuong Le@cuongleqq·
I wrote a post covering advanced Pattern Matching and best practices in Rust. It received a lot of likes on Reddit. Check it out here: blog.cuongle.dev/p/level-up-you…