Shvm
@final_const
553 posts

Building DynamoDB on Cloudflare, senior staff engineer, chess addict

A sync engine solves many of the accidentally complex parts of building fullstack applications. Why are you not using one again?

The Rust community is pretty annoying, but the anti-Rust community is on a whole other level of insufferable. Guys, grow up, there's more to life. 😅

We’re introducing Dynamic Workers, which allow you to execute AI-generated code in secure, lightweight isolates. This approach is 100 times faster than traditional containers. cfl.re/4c2NvPl

There are two ways to build real-time collaboration - either everything goes through a central server, or you go for a P2P mesh.

Assume a collaborative canvas, like Figma, Canva, or Miro, with 10 users. When you route every cursor movement through a central server, 10 users generate 60 pointer updates each second, which means 600 messages arriving at the server, which then fans them out to 9 recipients each. That is 5,400 messages per second, per session, just for mouse tracking.

The alternative is a P2P mesh - every client connects directly to every other client, and the server never touches these high-frequency packets at all. But the mesh has its own problem - connections grow as n × (n - 1) / 2. With 4 users, 6 connections. With 10, it is 45. With 20, it becomes 190. In other words, each individual browser holds open n - 1 simultaneous WebRTC connections. The server load goes to zero, but the client complexity grows quadratically.

So when does mesh make sense? Use mesh topology when the data is high-frequency, low-stakes, and latency-sensitive - cursor positions, live selections, drawing strokes. Losing one update is fine; the next one arrives in 16 ms anyway. The server genuinely adds no value on this path. Do not use it for writes that matter - document saves, access control changes, conflict resolution. Those still go through the server.

A better way to think about mesh topology is as a way to offload a specific class of traffic. Here's something worth remembering - not all real-time data is the same. Cursor positions and committed state have completely different requirements. Treating them identically - routing both through the server - is what creates the bottleneck in the first place. Split the traffic by its tolerance for loss and latency, and the architecture becomes obvious. Hope this helps.
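The arithmetic above can be sketched as two small functions (a sketch only - the 60 updates/sec figure is just the post's example rate):

```python
def server_fanout(users: int, updates_per_user: int = 60) -> tuple[int, int]:
    """Star topology: every update is ingested once by the server,
    then fanned out to the other (users - 1) clients.
    Returns (inbound msgs/sec, outbound msgs/sec)."""
    inbound = users * updates_per_user
    outbound = inbound * (users - 1)
    return inbound, outbound

def mesh_connections(users: int) -> int:
    """P2P mesh: one WebRTC connection per unordered pair of clients,
    i.e. n * (n - 1) / 2 total."""
    return users * (users - 1) // 2

print(server_fanout(10))   # (600, 5400) - the numbers from the post
print(mesh_connections(4), mesh_connections(10), mesh_connections(20))  # 6 45 190
```

The crossover is visible in the shapes of the two formulas: server load is linear in inbound rate but multiplied by fan-out, while mesh cost is quadratic in participants but zero on the server.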

whoa. so i was evaluating durable objects for a cool internal discord bot that i'm working on, and i just learned that the durable objects sqlite api lets you use the fts5 extension (and a couple of others). this is fucking perfect for my use case.
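A quick local way to see what FTS5 buys you. The SQL is the part that carries over; Python's sqlite3 module (whose bundled SQLite has FTS5 compiled in on most builds) is only a stand-in for the Durable Object's embedded database, and the `messages` table and its columns are made up for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# FTS5 virtual table: both columns are full-text indexed.
db.execute("CREATE VIRTUAL TABLE messages USING fts5(author, body)")
db.executemany(
    "INSERT INTO messages (author, body) VALUES (?, ?)",
    [
        ("alice", "deploying the bot to production tonight"),
        ("bob", "the search index rebuild finished"),
        ("alice", "full-text search over discord history works now"),
    ],
)

# MATCH runs the full-text query; bm25() ranks hits by relevance.
rows = db.execute(
    "SELECT author, body FROM messages WHERE messages MATCH ? "
    "ORDER BY bm25(messages)",
    ("search",),
).fetchall()
print(rows)  # the two rows containing the token "search"
```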

DynamoDB has an interesting constraint: one writer per partition. Durable Objects work in a similar way. Each object is a single-threaded authority over its own state. That similarity got me thinking. What if we map DynamoDB partitions directly to DOs and just see what happens?
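A minimal sketch of that mapping, with every name hypothetical: the router plays the role the namespace's idFromName lookup plays for Durable Objects, so a given partition key always resolves to the same single-threaded owner.

```python
class PartitionObject:
    """One writer per partition: all writes for this partition key
    are applied in order by this single object."""
    def __init__(self, key: str):
        self.key = key
        self.items: dict[str, dict] = {}  # sort key -> item
        self.applied = 0                  # count of writes applied here

    def put(self, sort_key: str, item: dict) -> None:
        self.items[sort_key] = item
        self.applied += 1

class PartitionRouter:
    """Stand-in for deriving an object id from a name: the same
    partition key always yields the same object, so there is never
    more than one writer for a partition."""
    def __init__(self):
        self._objects: dict[str, PartitionObject] = {}

    def get(self, partition_key: str) -> PartitionObject:
        return self._objects.setdefault(
            partition_key, PartitionObject(partition_key)
        )

router = PartitionRouter()
router.get("user#42").put("order#1", {"total": 30})
router.get("user#42").put("order#2", {"total": 12})
# Same key, same authority - both writes landed on one object.
assert router.get("user#42") is router.get("user#42")
```

The interesting property is that serialization falls out of placement: because routing is deterministic, per-partition write ordering needs no locks at all.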

