Max Sanna
@MaxSanna

47.4K posts

Photography. Cloud architecture. Techno. Employee experience. IoT. Not necessarily in this order.

Berlin, Germany · Joined January 2009
941 Following · 2.2K Followers
Max Sanna@MaxSanna·
@livingdevops You forgot the case where a node disappears and all the pods were on it, so the app goes down. So you add a PDB. Now the node can’t be restarted because evicting the pods would violate the PDB. So you add a topology spread constraint; now your pods stay available and nodes can churn.
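The fix described above can be sketched as two pieces of config: a PodDisruptionBudget that keeps a minimum number of pods up during voluntary disruptions (node drains), and a topology spread constraint that keeps replicas off a single node so a drain can actually proceed. This is a minimal sketch; names, label values, and the image are placeholders, not from any specific setup.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb            # placeholder name
spec:
  minAvailable: 1          # never evict below one running pod
  selector:
    matchLabels:
      app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # spread replicas across nodes
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: registry.example.com/web:1.0.0  # placeholder image
```

With replicas spread across nodes, draining any single node can satisfy the PDB, so node churn no longer deadlocks against availability.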
Akhilesh Mishra@livingdevops·
Kubernetes is beautiful. Every concept has a story, you just don't know it yet.

In k8s, you run your app as a pod. It runs your container. Then it crashes, and nobody restarts it. It is just gone. So you use a Deployment. One pod dies and another comes back. You want 3 running, it keeps 3 running.

Every pod gets a new IP when it restarts. Another service needs to talk to your app but the IPs keep changing. You cannot hardcode them at scale. So you use a Service. One stable IP that always finds your pods using labels, not IPs. Pods die and come back. The Service does not care.

But now you have 10 services and 10 load balancers. Your cloud bill does not care that 6 of them handle almost no traffic. So you use Ingress. One load balancer, all services behind it, smart routing. But Ingress is just rules and nobody executes them. So you add an Ingress Controller. Nginx, Traefik, AWS Load Balancer Controller. Now the rules actually work.

Your app needs config so you hardcode it inside the container. Wrong database in staging. Wrong API key in production. You rebuild the image every time config changes. So you use a ConfigMap. Config lives outside the container and gets injected at runtime. Same image runs in dev, staging and production with different configs.

But your database password is now sitting in a ConfigMap unencrypted. Anyone with basic kubectl access can read it. That is not a mistake. That is a security incident. So you use a Secret. Sensitive data stored separately with its own access controls. Your image never sees it.

Some days 100 users, some days 10,000. You manually scale to 8 pods during the spike and watch them sit idle all night. You cannot babysit your cluster forever. So you use HPA. CPU crosses 70 percent and pods are added automatically. Traffic drops and they scale back down. You are not woken up at 2am anymore.

But now your nodes are full and new pods sit in Pending state. HPA did its job. Your cluster had nowhere to put the pods. So you use Karpenter. Pods stuck in Pending and a new node appears automatically. Load drops and the node is removed. You only pay for what you actually use.

One pod starts consuming 4GB of memory and nobody told Kubernetes it was not supposed to. It starves every other pod on that node and a cascade begins. One rogue pod with no limits takes down everything around it. So you use Resource Requests and Limits. Requests tell Kubernetes the minimum your pod needs to be scheduled. Limits make sure no pod can steal from everything around it. Your cluster runs predictably.
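The first two steps of the story can be sketched as a pair of manifests: a Deployment that keeps three replicas alive, and a Service that finds them by label rather than by IP. A minimal sketch; the name, image, and ports are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp              # placeholder name
spec:
  replicas: 3              # the desired state the controller maintains
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp         # the label the Service selects on
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp             # stable virtual IP, label-based discovery
  ports:
    - port: 80
      targetPort: 8080
```

Pods restart with new IPs, but the Service's selector keeps resolving to whatever pods currently carry the `app: myapp` label.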
Max Sanna@MaxSanna·
@robertocommit Inngest or Temporal are great tbh. Inngest even has steps for AI inference, with durability, retries, fan-out, etc.
Roberto@robertocommit·
what are you guys using for long agentic tasks? is AutoGPT the best option out there?
[media attached]
Max Sanna@MaxSanna·
@runaway_vol One might say not enough! XP did everything the government needed. Floppies? ✅ Fax support? ✅ Printer support? Also ✅
Max Sanna@MaxSanna·
@AndrewCurioso @lemire Try context7, an absolute life saver; I use it for everything. I’m pretty sure it would have taught Claude Code to do this CI task quite quickly.
Daniel Lemire@lemire·
You thought AI would save you?

I maintain a relatively popular Java project, RoaringBitmap. For historical reasons, Java engineers expect libraries to be found on something called Maven Central. It is a hosting site run by a company that sells security-related services. There are far easier alternatives like JitPack, which just works. But Java has this "enterprise computing" vibe today. The Java engineers are really smart and productive, but they tend to accept conventions as they are. "We have always done it this way." Oracle has done a great job pushing Java forward, but the ecosystem evolves painfully slowly.

Unfortunately, the hosting service is awkward to use. Unlike most other software repositories, you can't just release and push. There is a song and dance involved. For moderately complex projects, it may become harder to release code than to complete a PhD.

Releasing and publishing software can be simple or complicated. When it is complicated, as it is in the case of Java libraries, you want to automate the process as much as possible. Thus, I expect to just trigger a workflow on something like GitHub and get the release out (after a time). After months of work and with help from kind folks, I got it working for RoaringBitmap. All it takes is a few lines of YAML (maybe 20) and some Kotlin configuration code (again, maybe 20 lines).

"Use Claude, Gemini, Grok...," people might think. Well. Yeah. Doesn't work. At all. Don't get me wrong, I so wish that AI could do it. Effectively, the solution we ended up with was the result of a massive trial-and-error process where different solutions were tried, and they failed. And so forth.

So what is happening? Why is something so silly at the frontier of what AI can do? I mean, I hear that AI can solve the toughest math problems, right? So it should be able to get 20 lines of YAML right. And there you get at the core of the issue. Mathematics is extensively documented. The rules of the game have been finely tuned. But when you do engineering, you don't know what the rules are. Will this flag work safely? The documentation, if there is one, spans 2 sentences and doesn't tell you. You can scan Reddit or random GitHub issues, but what is true and what is false? So you need to try. And try. And try.

Of course, you might answer: AI can try many things too. And so it can. But the search space is vast, the errors are not instructive. Effectively, the answers cannot be found online. There is no good example that you can apply to your use case. Writing YAML is sometimes akin to doing advanced research.

Will this all get solved in the next round of model updates? I don't think so. I wish it were so, but I don't think it will work. It is not chess or Go, or programming, that is the sign of human superiority, but YAML. YAML is the ultimate challenge.
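For a sense of what "a few lines of YAML" for a manually triggered release workflow can look like, here is a hypothetical sketch. It is not the actual RoaringBitmap workflow: the Gradle tasks assume the gradle-nexus publish plugin, and the secret names are invented placeholders.

```yaml
# Hypothetical release workflow: triggered by hand, publishes via Gradle.
# Secret names and plugin tasks are assumptions, not the real setup.
name: release
on:
  workflow_dispatch:

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Publish to Maven Central
        # Tasks from the gradle-nexus publish plugin (assumed here)
        run: ./gradlew publishToSonatype closeAndReleaseSonatypeStagingRepository
        env:
          ORG_GRADLE_PROJECT_sonatypeUsername: ${{ secrets.SONATYPE_USERNAME }}
          ORG_GRADLE_PROJECT_sonatypePassword: ${{ secrets.SONATYPE_PASSWORD }}
          ORG_GRADLE_PROJECT_signingKey: ${{ secrets.SIGNING_KEY }}
          ORG_GRADLE_PROJECT_signingPassword: ${{ secrets.SIGNING_PASSWORD }}
```

The hard part Lemire describes is not this file's length but discovering, by trial and error, which combination of plugin, staging, and signing configuration the hosting service will actually accept.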
[media attached]
Max Sanna@MaxSanna·
@baaadl @kmanojkumar @brankopetric00 When that happens I let the image update itself first, then I change the env config. The app should work without the new env var, or with the old value of an existing one, without breaking; otherwise it should be fixed in code first.
Branko@brankopetric00·
You implement 'GitOps' using ArgoCD. Behavior:
1. Developer pushes code.
2. CI builds image.
3. CI commits new image tag to the Helm chart repo.
4. ArgoCD sees change and syncs.

Problem: You have 50 services. If 5 developers push at once, they create a merge-conflict loop on the Helm chart repo, causing the CI pipeline to fail on the 'git push' step. How do you decouple the image build from the config update to prevent this race condition?
Max Sanna@MaxSanna·
@kmanojkumar @brankopetric00 This is the way. ArgoCD Image Updater is perfect, as you can then have a clear boundary between CI and CD. I’m not a fan of having GitHub Actions committing to the GitOps repo.
K Manoj Kumar@kmanojkumar·
The right way is to stop CI from touching the config repo entirely. ArgoCD Image Updater watches your container registry directly. Sees a new tag, updates the manifest, commits. One process writing to the repo instead of 50 pipelines fighting over it. CI builds and pushes the image. That's it. The config update becomes ArgoCD's problem.
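The setup described above is configured through annotations on the ArgoCD Application. A hedged sketch follows; the repo URL, image name, and update strategy are placeholders, not a specific production setup.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service          # placeholder name
  namespace: argocd
  annotations:
    # watch the registry for new tags of this image
    argocd-image-updater.argoproj.io/image-list: app=registry.example.com/team/my-service
    # pick the highest semver tag
    argocd-image-updater.argoproj.io/app.update-strategy: semver
    # commit the new tag back to the config repo: one writer, no CI race
    argocd-image-updater.argoproj.io/write-back-method: git
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops.git   # placeholder config repo
    targetRevision: main
    path: charts/my-service
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
```

Because only the Image Updater commits to the config repo, fifty CI pipelines can push images concurrently without ever racing on a git push.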
Max Sanna@MaxSanna·
@RDarrylR I’ve been using this pattern for years, with a root app that scans for *-app.yaml in the repo, and it works well, but I think Argo is progressively moving towards ApplicationSets now. I haven’t migrated to them yet, partly because they add templating complexity.
Darryl Ruggles@RDarrylR·
Managing multiple apps across Kubernetes clusters can get complicated. Using a GitOps approach with ArgoCD and the "App of Apps" pattern offers a clean way to bootstrap everything. This uses one parent manifest including add-ons, workloads, and everything else, and you let Git be the source of truth. I find this approach works really well. You apply one YAML file, and Argo CD handles creating namespaces, deployments, and services for all your child apps. Changes go through Git, sync automatically, and drift gets corrected without manual intervention. Stéphane Noutsa shows the full setup on EKS, from Helm install to repo structure and watching replicas scale via a Git push. Check it out! lckhd.eu/dUiXyZ @nexus_share
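A minimal sketch of the "App of Apps" pattern: one parent Application points at a directory whose files are themselves child Application manifests. The name, repo URL, and paths are placeholders, not the setup from the linked article.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root               # the single parent you apply by hand
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops.git   # placeholder repo
    targetRevision: main
    path: apps/            # each file here is itself an Application
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true          # deleted child apps are removed
      selfHeal: true       # drift gets corrected automatically
```

After `kubectl apply -f root.yaml`, Argo CD discovers every child Application in `apps/` and reconciles them all from Git.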
[media attached]
Tera@FlowFinderCat·
@steipete @MaxSanna @Mehuljd @Yuchenj_UW I’m new to the OpenClaw game and am studying your docs. I’m in love with your masterpiece! Thank you. ❤️ Allow me one question: 10 Codex in parallel, with 10 different APIs on one account, or 10 different accounts with the max subscription?
Yuchen Jin@Yuchenj_UW·
The creator of Clawdbot/Moltbot/OpenClaw @steipete, pushes 144 commits per day on average. Pre-AI, this was impossible. He ships code he never reads. He’s a conductor. GPT and Claude are his orchestra. 5–10 AI agents run in parallel under his command. One person is now an army.
Max Sanna@MaxSanna·
@steipete @Mehuljd @Yuchenj_UW Yeah, when I can I also parallelise, but I don’t feel like I can always do it. They have to be separate features that don’t depend on each other in the same project. If it’s multiple projects then I forget wtf I’m doing and really get confused trying to keep up.
Max Sanna@MaxSanna·
@steipete @Mehuljd @Yuchenj_UW Every time I use Claude Code I remember your rants about it and switch to Codex, then get bored of waiting and I end up watching funny videos for 20 minutes 😅
Max Sanna@MaxSanna·
@brankopetric00 I value not depending on vendor cloud primitives like ECS and Cloud Run. Also, it’s bad practice to set CPU limits (but recommended for memory): CPU is compressible, memory isn’t.
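The guidance above translates into a resources block like the following sketch: a CPU request but no CPU limit (throttling a compressible resource hurts latency for no safety gain), and memory request equal to limit (memory is not compressible; overruns mean OOM kills). The values and names are illustrative placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api                # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.2.3   # placeholder image
      resources:
        requests:
          cpu: "250m"      # guaranteed share used for scheduling
          memory: "512Mi"
        limits:
          memory: "512Mi"  # hard cap; no cpu limit on purpose
```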
Branko@brankopetric00·
Kubernetes was built by Google to run millions of containers across a global fleet. You're running a Django app with 500 users on a $3,000/month EKS cluster. Stop it.

I've seen 3-person startups hire "platform teams" to babysit their cluster. Six-month migrations for apps that ran perfectly fine on a $50 VPS. Engineers debugging YAML at 3 AM for problems that didn't need to exist.

You don't have a scaling problem. You have a resume-driven development problem. Docker Compose exists. Railway exists. A single VPS exists.

"But we might scale!" You won't. And if you do, migrating later is easier than maintaining Kubernetes without the team for it.

And if you actually need Kubernetes? You're still probably doing it wrong:
- No resource limits (one pod kills the node)
- Using the 'latest' docker image tag
- No health checks (traffic routing to dead pods)
- Running as root (security? never heard of it)
- Single replicas in production
- Secrets in plain env vars

Full breakdown with fixes:
Branko@brankopetric00

x.com/i/article/2017…
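Several items on that checklist show up in one pod template. A hedged sketch addressing pinned tags, health checks, non-root execution, and multiple replicas follows; the name, image, port, and probe path are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # placeholder name
spec:
  replicas: 2              # no single replica in production
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      securityContext:
        runAsNonRoot: true # refuse to run as root
        runAsUser: 10001
      containers:
        - name: web
          image: registry.example.com/web:1.4.2   # pinned tag, not :latest
          readinessProbe:              # keep traffic off pods that aren't ready
            httpGet:
              path: /healthz           # placeholder endpoint
              port: 8080
          livenessProbe:               # restart wedged pods
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
```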

Max Sanna@MaxSanna·
@AndreaVenanzoni Oh come on, soon they'll discover Teams and OneDrive and then it's all over. Documents in the cloud spied on by the Russians, and instant messaging just like the kind hackers use.
Andrea Venanzoni@AndreaVenanzoni·
Magistrates, public-sector executives, and journalists discovering things
[media attached]
𝔸𝕟𝕥𝕙𝕠𝕟𝕪👾
Go to your ChatGPT and send this prompt: “Create an image of how I treat you”. Share your image result. 😂
[media attached]
Max Sanna@MaxSanna·
@DrApocalypse Mamma mia, how butthurt those replies are. And the comments using the formal "lei", as if they were replying from the civil registry office.
ApocaFede@DrApocalypse·
Someone tell #LauraPausini that no, this is not how you run a social media page. It's staggering to have to read dozens upon dozens of passive-aggressive replies from her "staff" to anyone voicing dissent about her disastrous cover of Mengoni's #DueVite
[media attached]
Max Sanna@MaxSanna·
@aledeniz Literally what I thought when I saw the article earlier. ‘Only 10 minutes per train?!’
Alessandro Riolo@aledeniz·
On the Northern Italian 🇮🇹 elites' high expectations: here they are complaining because the yearly cumulative delay of high-speed railways – over 90,000 trains, roughly 50 million miles – is just a couple of months short of two years. That’s an average of 10 minutes per train, or a couple of minutes every hundred miles. True, Italy 🇮🇹 could do even better, but have they looked at their beloved Germany 🇩🇪 for a point of comparison?
[media attached]
La Stampa@LaStampa

In its starkness, the figure is objectively striking: the Frecce trains that crossed the country over the last twelve months accumulated a total delay of almost two years (676 days to be exact, equal to one year and ten months). A number which, applied to the more than 90,000 trains considered among Frecciarossa, Frecciargento and Frecciabianca, i.e. all the Trenitalia trains running on the high-speed lines, translates into an average of ten extra minutes per train over the scheduled arrival time. The study, by Claudia Calore, is published by Europa Radicale, is allusively titled "Altra velocità" ("Other Speed"), and offers a broad overview of the time lost in passenger transport in Italy: "What emerges is that our rail network cannot cope and that there are sections that frequently and literally collapse," explains Igor Boni. Franco Giubilei writes about it for La Stampa. #Treni #Frecciarossa #trenitalia
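The averages quoted in the thread check out. A quick arithmetic verification using the figures from the posts (676 days of cumulative delay, ~90,000 trains, ~50 million miles):

```python
# Verify the per-train and per-distance averages quoted above.
total_delay_min = 676 * 24 * 60        # 676 days of delay, in minutes
trains = 90_000
miles = 50_000_000

per_train = total_delay_min / trains           # minutes of delay per train
per_100_miles = total_delay_min / miles * 100  # minutes per hundred miles

print(round(per_train, 1), round(per_100_miles, 1))  # ≈ 10.8 and ≈ 1.9
```

Roughly ten minutes per train and about two minutes per hundred miles, matching both the La Stampa figure and Riolo's "couple of minutes every hundred miles".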

Max Sanna@MaxSanna·
@scottbuscemi @hassankhan @nicksainato Which brings us back to the original post :) I think passkeys are a good way to increase security for the broad % of users who’d use them instead. It’s just that most implementations suck and are tedious to use.
hk@hassankhan·
I am moderately to above-average tech savvy and I do not at all understand how passkeys work. What an absolute failure of UX.