
On this week's episode, @DylanBourque is back! He joins @skriptble and @sudomateo to discuss the UUID proposal, which Dylan thinks should have been the least contentious proposal in the history of Go! fallthrough.fm/ep/62

We did a year-in-review wrap-up episode called Stack Trace! We pulled a bunch of stats from the first year of Fallthrough. In this episode, @skriptble, @sudomateo, and @DylanBourque talk through those stats and how they feel about Fallthrough's first year. fallthrough.fm/ep/52

🧠 What if you could design Go systems the way nature designs organisms? In “Goroutines and Cells: Lessons in Goal-Directed Systems,” @carlisia shows how biological principles like graded signaling and decentralized coordination can inspire more adaptive, scalable, and fault-tolerant Go concurrency. Walk away with fresh mental models to level up your concurrent programming! Don’t miss this talk at #GopherCon: gophercon.com/agenda/session… #RoadToGopherCon #TheGophersTakeManhattan #golang
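For a flavor of what "graded signaling" might look like in Go, here is a purely illustrative sketch (my own, not code from the talk): workers scale their effort to a signal's strength instead of reacting to a binary on/off switch, and coordinate only through a shared channel with no central scheduler.

```go
package main

import (
	"fmt"
	"sync"
)

// respond scales a worker's effort to the observed signal strength --
// the "graded" part of graded signaling: a proportional response
// rather than a binary on/off switch.
func respond(level float64) int {
	return int(level * 10)
}

func main() {
	signals := make(chan float64)
	results := make(chan int)

	// Three independent workers coordinate only through the shared
	// signal channel, with no central scheduler (decentralized
	// coordination).
	var wg sync.WaitGroup
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for level := range signals {
				results <- respond(level)
			}
		}()
	}

	go func() {
		for _, level := range []float64{0.2, 0.5, 1.0} {
			signals <- level
		}
		close(signals)
		wg.Wait()
		close(results)
	}()

	total := 0
	for units := range results {
		total += units
	}
	fmt.Println("total work units:", total) // prints "total work units: 17"
}
```

The sum is deterministic even though the workers race for signals, since every signal is consumed exactly once.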




And we've published episode 11! In this episode, @DylanBourque is joined by @skriptble, @sudomateo, and guest @kelseyhightower to discuss communities, AI, and retirement! This episode has bonus content for Fallthrough subscribers. Happy listening! zurl.co/B78kA




Can you tell the difference between the end-to-end latencies of a multi-region Bufstream cluster and a single-region one? On the left is the single-region cluster from our last blog post. On the right is a new test we just completed: 100 GiB/s of writes replicated through two (!) cloud regions entirely through GCS. A detailed blog post is coming soon, but to share some details:

• 108 n2d-standard-32 brokers with Tier 1 networking.
• Dual-region GCS bucket with Turbo Replication.
• 9-node multi-region Spanner cluster with read-write nodes in us-west1 and us-west2; witness nodes in us-west3.

This setup is enough to handle 100 GiB/s writes and 300 GiB/s reads with active-active producers and consumers running in two regions simultaneously. On top of it all, it provides the best RPO/RTO guarantees in the industry, backed by Spanner's 99.999% multi-region uptime SLA and GCS's p99 15-minute replication SLA.

A single-region Apache Kafka cluster handling this load would be challenging. A stretch cluster is unthinkable. We will leave the cost of that to your imagination 😊

