Luis Lavena

7.6K posts

@luislavena

Creator of RubyInstaller for Windows, rake-compiler and many other Ruby tools for devs. Job: @AREA_17_ Opinions: Mine

Paris, France · Joined October 2008
60 Following · 1.4K Followers

Pinned Tweet

Luis Lavena @luislavena
Last year I released the container image I use to compile @CrystalLanguage apps. You can use it to build and test against different versions: 1.2, 1.3, or the latest 1.5.1. This kicked off a series of experiments on things I've been working on since then... 1/ 🧵
Luis Lavena @luislavena

Decided to make public my Alpine-based Crystal development container image: hydrofoil-crystal github.com/luislavena/hyd… Quick demo of the Radix library using it to automatically run specs on changes. Have fun! @CrystalLanguage #crystallang

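A quick way to try what the pinned thread describes is to loop over version tags. A minimal sketch, assuming the image is published as ghcr.io/luislavena/hydrofoil-crystal with tags matching the versions named above (1.2, 1.3, latest):

    # Run a project's specs against several Crystal versions.
    # Image path and tag names are assumptions based on the posts above.
    for tag in 1.2 1.3 latest; do
      docker run --rm -v "$PWD":/src -w /src \
        ghcr.io/luislavena/hydrofoil-crystal:"$tag" crystal spec
    done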

Luis Lavena retweeted
Pasha Sviderski @psviderski
Uncloud just got its first university adoption! 🎓 Radboud's Faculty of Science is rolling it out to manage Docker clusters for their research groups. They're going for:
→ 300+ websites across a dozen machines
→ 100+ people who need to self-serve
→ each research group gets a VM or cluster they fully control
→ self-host the entire web stack including DB + automated S3 backups
→ GitLab environments for deploy-on-green workflows
→ platform team out of the loop
"It looks very much like docker compose or the (semi-deprecated) docker swarm, and there is good documentation"
And a huge thanks to @miekg for digging in, reporting issues and contributing fixes over the past few weeks 🙏 Can't wait to see it running in production!

Luis Lavena retweeted
Pasha Sviderski @psviderski
Just shipped Uncloud v0.18 🎉
→ Deploy plan got a glow-up - much nicer to read and understand 👇
→ Pre-deploy hooks - run DB migrations before new code goes live
→ System 'ssh' is now the default - faster commands and full SSH config support
→ Pin compose.yaml to cluster with 'x-context' (rough sketch below)
📚 New docs!
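
The release note doesn't show the 'x-context' syntax, so the following is only a guess at its shape; check the new docs for the real key semantics:

    # Hypothetical sketch: pin this compose.yaml to one Uncloud cluster.
    # Key placement and value format are assumptions, not confirmed syntax.
    cat > compose.yaml <<'EOF'
    x-context: production        # assumed: name of the target cluster context
    services:
      app:
        image: ghcr.io/example/app:latest   # placeholder image
    EOF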

Luis Lavena retweeted
Pasha Sviderski @psviderski
🎙️ I went on go podcast() this week to talk about why I'm building Uncloud - Docker Compose for production, bridging the gap between Docker and Kubernetes. We covered:
- why most projects don't need k8s and what alternatives we have
- a hybrid setup mixing cloud VMs and on-prem
- WireGuard mesh networking
- unregistry - direct image push to servers without a registry
Give it a listen on any podcast platform and let me know what you think! 🎧👇

Luis Lavena retweeted
Pasha Sviderski @psviderski
In the next version of Uncloud: detailed deployment plan with style ✨ The plan shows exactly how your services will be rolled out:
- what changes
- which containers get replaced
- where and in what order
- whether there's a downtime risk
Review, then deploy. Before → After 👇

Luis Lavena @luislavena
@aarondfrancis For me, the advantage of worktrees is that I don't need to cd into the other copy and git pull it multiple times to get updates; I just switch and rebase from the same remote update. It's not about space efficiency but about performance and memory (did I remember to pull on this clone?)
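
A minimal sketch of that workflow (repository URL and branch names are placeholders): one local clone, one fetch, and every worktree rebases against the same freshly fetched refs.

    # One clone, many worktrees: a single 'git fetch' updates them all.
    git clone https://github.com/example/project.git   # placeholder URL
    cd project
    git worktree add ../project-feature feature/api    # placeholder branch
    git fetch origin              # one fetch, shared by every worktree
    cd ../project-feature
    git rebase origin/main        # no second clone to remember to pull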

Aaron Francis @aarondfrancis
Why do people like git worktrees over discrete checkouts? (This isn't bait, it's research)

Luis Lavena retweeted
Pasha Sviderski @psviderski
Just released Uncloud 0.17 🚀 Think multi-machine Docker Compose for production. Rolling deployments now monitor crashes + health checks after new containers start. If a container keeps restarting or turns unhealthy, the deployment rolls it back to prevent downtime. Enjoy safer deploys by default.
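
The "unhealthy" signal presumably comes from container health checks. As an illustration, a standard Compose healthcheck (service, image, and endpoint are placeholders; whether Uncloud reads exactly this key is an assumption based on the post) gives a rollout something concrete to watch:

    # Standard Compose healthcheck syntax; placeholder service and image.
    cat > compose.yaml <<'EOF'
    services:
      web:
        image: ghcr.io/example/web:latest
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
          interval: 10s
          timeout: 3s
          retries: 3
          start_period: 15s
    EOF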

Luis Lavena retweeted
Pasha Sviderski @psviderski
TIL that K8s rolling deployments are not zero-downtime by default.

To achieve zero downtime during rollout (stateless workloads), old containers should typically be terminated the following way:
1. Remove the container endpoint from the ingress/load balancer to stop sending new requests to it but continue processing the active ones.
2. Gracefully stop and remove the container. On stop, the process in the container should stop accepting new requests and finish serving all active ones, then terminate. The updated ingress will guarantee that new requests won't be sent to the stopping container.
The strict order is essential. Alternatively, if updating the ingress configuration is not practical, the ingress should be able to automatically detect unhealthy endpoints and retry the failing requests with healthy endpoints (aka passive health checks).

The problem is that K8s doesn't coordinate pod termination with the ingress, nor do popular ingress controllers implement retries for failed requests by default. It simply terminates a pod, which triggers the ingress update and process shutdown simultaneously. If the process happens to start terminating before the ingress removes the endpoint, new requests will get 5xx - downtime.

The workaround is to add a wonky sleep(10s) in the app's shutdown handler or as a preStop hook in the pod (see the sketch below). This ensures the changes are propagated to the ingress before actually stopping the process in the container.

It's a shame that the behaviour the majority would expect is not the default. Sensible defaults matter. UX first please.
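
For reference, the preStop workaround described above looks roughly like this in a Deployment spec. A minimal sketch, not a recommendation; the name, image, and port are placeholders:

    # Delay shutdown so the ingress can drop the endpoint before the
    # process stops accepting requests; the sleep must fit inside the
    # termination grace period.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                               # placeholder name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          terminationGracePeriodSeconds: 30   # must exceed the preStop sleep
          containers:
            - name: web
              image: ghcr.io/example/web:latest   # placeholder image
              ports:
                - containerPort: 8080
              lifecycle:
                preStop:
                  exec:
                    command: ["sleep", "10"]  # let the ingress update first
    EOF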

Luis Lavena @luislavena
Using @aarondfrancis's counselors to build it (thank you!); then the judge evaluates the results (baseline vs. improved version) and comes back with a verdict of success/failure. Still playing with it, though.

Luis Lavena @luislavena
Has anyone worked on a testing/validation framework for AI skills? Been setting one up to validate improvements on my own skills but that might be biased, so would love a second opinion😅

Nate Berkopec @nateberkopec
You’re about to see a lot more “port of X to Y” software. It hits the sweet spot of what agents are good at: translation, and looping against a huge, known spec (the previous implementation)

Nate Berkopec @nateberkopec
Interesting observation: none of the "humanizer" skills I've tried (and even the ones I've coded myself) have been able to "defeat" @pangramlabs in humanizing a 100% ai-written original text. However, they do successfully "humanize" text which was originally dictated by voice.

Luis Lavena @luislavena
... hours later, after tackling a bunch of other issues ... Claude: Multiline plain scalars are complex. Let me look at a simpler validation error first. 🙈🤦

Luis Lavena @luislavena
Me: Here are some errors, fix them. Claude: This test case involves multiline plain scalars, which is a complex issue. Let me look at other failing tests to find easier wins first. 🤖🐓

Luis Lavena retweeted
David Cramer @zeeg
How did you verify your code worked before AI? You still need to do that.

Luis Lavena retweeted
Pasha Sviderski @psviderski
Finally sent my first newsletter: A year of building Uncloud (a simpler Kubernetes alternative for small teams and solo developers). What happened, where we are now, and what's coming in 2026 👇

Luis Lavena @luislavena
@nateberkopec So the agent connects to Slack via MCP and has access to the conversation? Are you deploying one agent per client? Or having an orchestrator that dispatches them when needed?

Nate Berkopec @nateberkopec
gist.github.com/nateberkopec/d… We're actively using this to serve Japanese clients now (yes, I live in Japan; no, my Japanese is not nearly good enough for full-on ビジネス敬語, i.e. formal business Japanese).

Nate Berkopec @nateberkopec
My Ruby on Rails performance retainer service is now available in any (human) language, anywhere.

Luis Lavena retweeted
Pasha Sviderski @psviderski
Shipped this race condition fix in Uncloud v0.16 🚀 Also:
- new 'uc wg show' command for troubleshooting WireGuard connectivity issues and mesh topology
- cap_add, cap_drop, sysctls support in Compose files (sketch after the quoted post below)
Pasha Sviderski @psviderski

Occasionally new machines joining Uncloud's cluster would lock themselves out of the network, permanently 💀 This week I fixed a subtle race condition in the cluster join flow.

When a new machine joins, it needs to configure WireGuard peers and sync the distributed database (Corrosion) with one of the peers. The problem is that these two steps depend on each other:
- WireGuard controller watches the DB for new peers to configure
- DB needs the network peers to be configured to sync
In some cases, WireGuard was reading from the DB before the initial sync finished. It was getting an empty peer list, misconfiguring the network, and locking the machine out without a chance to recover.

The fix is simple but required rethinking the startup sequence. Start WireGuard early so the DB can reach its peers, but delay everything that reads from the DB until sync completes:
1. Start WireGuard + API server
2. Wait for DB sync
3. Start WireGuard peer updates, DNS, Docker, Caddy, etc.
We pass the latest DB version (a Lamport logical clock) during the join handshake. This is how the joining machine knows when the DB is caught up enough.

Now the startup machinery is much easier to reason about and evolve. And I'm so glad I pushed back hard on Claude with all its overengineered ideas and crutches to resolve this issue.

Also added automatic gRPC retries with exponential backoff for transient failures. "Temporarily unavailable" in a cluster just means a peer hasn't finished its own startup yet.

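Back to the v0.16 release note above: cap_add, cap_drop, and sysctls are standard Compose keys, so a service using them could look like this (service name, image, and values are placeholders), with 'uc wg show' as the new troubleshooting entry point:

    # Standard Compose keys now supported per the release note above.
    cat > compose.yaml <<'EOF'
    services:
      app:
        image: ghcr.io/example/app:latest   # placeholder image
        cap_add:
          - NET_ADMIN                       # placeholder capability
        cap_drop:
          - ALL
        sysctls:
          net.ipv4.ip_forward: 1            # placeholder sysctl
    EOF
    # Inspect WireGuard connectivity and mesh topology (output not shown here)
    uc wg show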