
Decided to make public my Alpine-based, Crystal development container image: hydrofoil-crystal github.com/luislavena/hyd… Quick demo of Radix library using it to automatically run specs on changes. Have fun! @CrystalLanguage #crystallang

Occasionally new machines joining Uncloud's cluster would lock themselves out of the network, permanently 💀

This week I fixed a subtle race condition in the cluster join flow. When a new machine joins, it needs to configure WireGuard peers and sync the distributed database (Corrosion) with one of the peers. The problem is that these two steps depend on each other:

- The WireGuard controller watches the DB for new peers to configure
- The DB needs the network peers to be configured to sync

In some cases, WireGuard was reading from the DB before the initial sync finished. It was getting an empty peer list, misconfiguring the network, and locking the machine out without a chance to recover.

The fix is simple but required rethinking the startup sequence. Start WireGuard early so the DB can reach its peers, but delay everything that reads from the DB until sync completes:

1. Start WireGuard + API server
2. Wait for DB sync
3. Start WireGuard peer updates, DNS, Docker, Caddy, etc.

We pass the latest DB version (Lamport logical clock) during the join handshake. This is how the joining machine knows when the DB is caught up enough.

Now the startup machinery is much easier to reason about and evolve. And I'm so glad I pushed back hard on Claude with all its overengineered ideas and crutches to resolve this issue.

Also added automatic gRPC retries with exponential backoff for transient failures. "Temporarily unavailable" in a cluster just means a peer hasn't finished its own startup yet.
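The three-phase startup above can be sketched roughly like this. This is an illustrative toy, not Uncloud's actual code: the `Node` class, service names, and the simulated sync loop are all assumptions, with the Lamport version from the join handshake acting as the "caught up enough" threshold.

```python
import threading

class Node:
    def __init__(self, target_version):
        # target_version stands in for the Lamport DB version the peer
        # advertises during the join handshake.
        self.target_version = target_version
        self.local_version = 0
        self.synced = threading.Event()
        self.started = []

    def start_wireguard_and_api(self):
        # Phase 1: bring up WireGuard + the API server early so the
        # distributed DB can reach its peers at all.
        self.started += ["wireguard", "api"]

    def db_sync(self):
        # Simulated sync: advance the local Lamport version until it
        # reaches the version received in the handshake, then signal.
        while self.local_version < self.target_version:
            self.local_version += 1
        self.synced.set()

    def start_db_readers(self):
        # Phase 3: anything that *reads* from the DB (peer updates,
        # DNS, Docker, Caddy) waits for sync to complete first.
        self.synced.wait()
        self.started += ["peer-updates", "dns", "docker", "caddy"]

node = Node(target_version=42)
node.start_wireguard_and_api()
sync = threading.Thread(target=node.db_sync)
sync.start()
node.start_db_readers()
sync.join()
print(node.started)
# → ['wireguard', 'api', 'peer-updates', 'dns', 'docker', 'caddy']
```

The key property is that `start_db_readers` can never observe an empty peer list: it blocks on the sync event, so the ordering bug is ruled out by construction rather than by timing luck.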
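A minimal sketch of the retry idea, under the assumption that only "temporarily unavailable"-style errors are worth retrying (the `Unavailable` class here is a stand-in for gRPC's UNAVAILABLE status, and `retry_unavailable` is a hypothetical helper, not Uncloud's API):

```python
import time

class Unavailable(Exception):
    """Stand-in for a transient gRPC UNAVAILABLE error."""

def retry_unavailable(call, attempts=5, base_delay=0.01, sleep=time.sleep):
    # Retry only transient failures, doubling the delay each time.
    delay = base_delay
    for attempt in range(attempts):
        try:
            return call()
        except Unavailable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            sleep(delay)
            delay *= 2  # exponential backoff

# Usage: a peer that rejects the first two calls because its own
# startup hasn't finished yet.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Unavailable("peer still starting")
    return "ok"

delays = []  # capture sleeps instead of actually waiting
print(retry_unavailable(flaky, sleep=delays.append))
# → ok
print(delays)
# → [0.01, 0.02]
```

Injecting `sleep` keeps the helper testable; in production it would just be `time.sleep`, and real gRPC stacks can express the same policy declaratively via a service config.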