Mr_Freezeex

2.3K posts

@Mr_Freezeex

Joined August 2013
1.5K Following · 101 Followers

Mr_Freezeex retweeted
can
can@can·
your db query is slow? just add this! boom, now you are ai-native instead of lazy!
[image]
92 replies · 812 reposts · 14.4K likes · 424.4K views
Mr_Freezeex retweeted
dax
dax@thdxr·
i really don't care about using AI to ship more stuff. it's really hard to come up with stuff worth shipping. i want to ship the same amount of stuff with higher quality, both in product and code
114 replies · 131 reposts · 2.4K likes · 68.6K views
Mr_Freezeex retweeted
Theo - t3.gg
Theo - t3.gg@theo·
I’ve been a Browser Company hater for a year. I’ve been an Atlassian hater for a decade. I was formed in the fires of hell specifically for the task of covering this. Video later today.
18 replies · 6 reposts · 927 likes · 33.7K views
Mr_Freezeex retweeted
INKU
INKU@Inku_fr·
[OFFICIAL] A NEW MADE IN ABYSS ANIME FILM HAS BEEN ANNOUNCED! 🧗‍♀️ See you in 2026!
[image]
66 replies · 266 reposts · 2.9K likes · 302K views
Mr_Freezeex retweeted
TOKANIM
TOKANIM@Tokanim_FR·
🚨 RUMOR: SEASON 2 OF 86 EIGHTY-SIX IS EXPECTED TO START PRODUCTION SOON!
[image]
33 replies · 133 reposts · 1.2K likes · 89.2K views
Mr_Freezeex retweeted
K Srinivas Rao
K Srinivas Rao@sriniously·
The story of Go's garbage collector is one of the most remarkable engineering transformations in modern computing that I have ever studied. When Go launched in 2009, its garbage collector would freeze applications for literal seconds, making it unusable for any serious production system. Rob Pike and the team knew this was a problem but chose garbage collection anyway, because manual memory management was simply too error-prone for the kind of systems they wanted people to build. The early implementation was brutally simple: stop everything, mark all reachable objects, sweep away the garbage, then resume. This conservative mark-and-sweep approach couldn't even distinguish between actual pointers and random integers that happened to look like memory addresses, so it would accidentally keep dead objects alive. Production teams at Twitter were seeing 300 to 400 millisecond pauses regularly, which made any reasonable latency target mathematically impossible.

Rick Hudson joined the Go team in 2014 and immediately recognized that incremental improvements wouldn't cut it: the entire collector needed to be reimagined from scratch. Hudson had previously worked on the Train and Sapphire algorithms and understood that the secret wasn't just making the collector faster, but making it concurrent. The breakthrough came with implementing Dijkstra's tri-color marking algorithm from 1978, which had been mostly theoretical until that point. The tri-color approach is conceptually beautiful: objects start white (unvisited), become gray when discovered but not fully scanned, and turn black when completely processed. You can do most of this work while the application keeps running, as long as you maintain the invariant that black objects never directly reference white ones. Write barriers track when the application modifies pointers during collection, keeping the algorithm consistent.
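The tri-color scheme described above can be sketched as a toy program. This is not the Go runtime's actual implementation (the real collector works on raw heap spans, runs concurrently, and its barrier is a hybrid of Dijkstra and Yuasa barriers); the `object`, `mark`, and `writeBarrier` names here are illustrative inventions, modeling objects as nodes in a graph:

```go
// Toy model of Dijkstra-style tri-color marking. Objects start white,
// turn gray when discovered, and black once all their children are scanned.
package main

import "fmt"

type color int

const (
	white color = iota // not yet visited; garbage if still white at the end
	gray               // discovered, references not yet scanned
	black              // fully scanned
)

type object struct {
	name string
	refs []*object
	c    color
}

// mark runs tri-color marking from the given roots. The gray objects form
// the worklist; when it drains, everything reachable is black and whatever
// stayed white can be swept.
func mark(roots []*object) {
	worklist := append([]*object(nil), roots...)
	for _, r := range roots {
		r.c = gray
	}
	for len(worklist) > 0 {
		obj := worklist[len(worklist)-1]
		worklist = worklist[:len(worklist)-1]
		for _, child := range obj.refs {
			if child.c == white {
				child.c = gray
				worklist = append(worklist, child)
			}
		}
		// All children are now at least gray, so blackening obj
		// preserves the invariant: black never points to white.
		obj.c = black
	}
}

// writeBarrier models an insertion barrier: when the mutator stores a
// pointer during collection, the new target is shaded gray so a black
// object can never end up referencing a white one.
func writeBarrier(dst, src *object, worklist *[]*object) {
	dst.refs = append(dst.refs, src)
	if src.c == white {
		src.c = gray
		*worklist = append(*worklist, src)
	}
}

func main() {
	a := &object{name: "a"}
	b := &object{name: "b"}
	dead := &object{name: "dead"} // unreachable, stays white
	a.refs = []*object{b}

	mark([]*object{a})
	fmt.Println(a.c == black, b.c == black, dead.c == white)

	// Mutator stores a pointer mid-cycle; the barrier shades the target.
	var wl []*object
	c := &object{name: "c"}
	writeBarrier(a, c, &wl)
	fmt.Println(c.c == gray)
}
```

The key property the sketch demonstrates is why the barrier matters: without it, the mutator could hide a white object behind an already-black one, and the collector would free live memory.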
Go 1.5, shipped in August 2015, introduced this concurrent collector and immediately dropped pause times from hundreds of milliseconds to 30-40 milliseconds. But the team wasn't satisfied. They systematically eliminated every remaining bottleneck: Go 1.6 removed operations that scaled with heap size during stop-the-world phases, Go 1.7 eliminated stack-scanning pauses, and by Go 1.8 they had achieved consistent 100 to 200 microsecond pauses regardless of heap size.

The interesting part of the whole narrative is not what the team did, but what they chose not to do. Most modern collectors are generational, based on the observation that young objects tend to die quickly. Go deliberately avoided this complexity because its escape analysis already keeps short-lived objects on the stack. They also chose a non-moving collector, which prevents compaction but supports C interoperability and simplifies the implementation dramatically. The engineering team bet that predictable low latency mattered more than peak throughput, and that simplicity was worth more than theoretical efficiency. This shows up in their pacer algorithm, which uses feedback control to decide when to start collection based on allocation rate and remaining heap space. The GOGC parameter lets you trade memory for CPU: setting it to 200 doubles memory usage but halves garbage collection overhead. More recently, GOMEMLIMIT in Go 1.19 adapts collection frequency to memory pressure, preventing out-of-memory crashes in containers while maintaining performance, and the experimental Green Tea collector processes memory in larger chunks rather than individual objects, improving cache locality and showing 10 to 50 percent improvements on memory-intensive workloads. Tracing the whole story is impressive AF: a thousandfold improvement from multi-second pauses to sub-millisecond consistency.
Production teams went from desperate workarounds like forwarding requests away during garbage collection to barely thinking about memory management at all. Twitter saw a 100x improvement, Pusher achieved sub-10ms pauses on 200MB heaps, and benchmarks demonstrate successful scaling to 200+ gigabyte heaps with microsecond-level pause times. This transformation enabled an entire class of applications that were previously impossible in garbage-collected languages: real-time systems, high-frequency trading, game engines, and latency-sensitive microservices all became viable in Go. The collector works so well by default that most developers never need to tune it, yet it scales to massive heaps when necessary. The Go team could have chased maximum throughput or theoretical elegance, but they chose predictable performance and operational simplicity. They proved that breakthrough improvements are possible in mature systems when you're willing to fundamentally rethink the problem rather than just optimize around the edges.
23 replies · 150 reposts · 1.1K likes · 77.7K views
Mr_Freezeex retweeted
Sam Lambert
Sam Lambert@samlambert·
@bytebot we will open source it as a complete project after it is in production with a number of real workloads.
3 replies · 2 reposts · 69 likes · 6.4K views
Mr_Freezeex
Mr_Freezeex@Mr_Freezeex·
@guilhemlettron Nice! Don't hesitate if you have questions / want to chat about MCS :D
1 reply · 0 reposts · 1 like · 78 views
⛵ Guilhem Lettron
⛵ Guilhem Lettron@guilhemlettron·
@Mr_Freezeex Today we (linode) have one cluster per DC. We're thinking about a multi-cluster communication solution for our customers. MCS could be the interface we'd offer our customers :)
1 reply · 0 reposts · 3 likes · 113 views
Mr_Freezeex retweetledi
Dan Garfield
Dan Garfield@todaywasawesome·
Kubernetes anti patterns that belong in the 🗑️ in 2025 🧵 (#10 will trigger many) 1) Secret injection in manifests. Come on, what are we doing here? We literally have external secrets, the problem is solved. Stop shoving this stuff into manifests.
1 reply · 12 reposts · 129 likes · 11.4K views
Mr_Freezeex retweeted
PA 🕺🏻
PA 🕺🏻@Domingo·
New arc: meet the whole team at bootcamp Monday at 8pm! 🫡
623 replies · 3.4K reposts · 31.1K likes · 3.3M views
Mr_Freezeex retweeted
Amine 😞🌧
Amine 😞🌧@AmineMaTue·
KINGS WORLD CUP ⚽️ A 7v7 WORLD CUP, 32 teams in Mexico. Our goal: to be on top of the world. Get ready, the cup's lineup is going to be monstrous. Starting MAY 26 on my channel 😀
2.4K replies · 28.1K reposts · 125.6K likes · 28.3M views
simply joel
simply joel@Jseguillon·
@katia_tal Yes, it's a real test for the CNCF! If Flux keeps being maintained, that will be further proof of the usefulness of this governance body. If not, we'll have to ask ourselves some painful questions :)
2 replies · 0 reposts · 2 likes · 101 views
Mr_Freezeex retweeted
FreeBSD Frau
FreeBSD Frau@freebsdfrau·
ssh has secrets. Too many to share in one tweet. One of which is how it acts as a serial-line processor for secret keyboard functionality you probably never knew about. For example, why, when you press ENTER and then ~ immediately after, does the ~ not appear right away? Thread…
46 replies · 667 reposts · 4K likes · 560.9K views
Mr_Freezeex retweeted
小島秀夫
小島秀夫@Kojima_Hideo·
[image]
11 replies · 182 reposts · 3.4K likes · 211.6K views
Alexandre Derumier🇧🇪🇨🇵🐧
@Mr_Freezeex Thanks! But I imagine you have to authorize the signatures by hand? I'd like something dynamic: allow a user to pull an image from some sketchy registry only once I've been able to scan the image with various tools.
1 reply · 0 reposts · 0 likes · 183 views
Alexandre Derumier🇧🇪🇨🇵🐧
K8s question: what's the best way to force mirroring (via Harbor, for example) of any external registry and to allow an image only after a vulnerability scan? (and have it be transparent)
4 replies · 2 reposts · 6 likes · 3.9K views
Mr_Freezeex
Mr_Freezeex@Mr_Freezeex·
@_Akanoa_ @RobertStphane19 I don't know if it helps your use case, but on the kubespray project, for example, we make sure to have a name and/or resources tagged with the CI job name so we can find them and destroy them afterwards "without having to keep any state".
1 reply · 0 reposts · 0 likes · 196 views
Noa 🦀
Noa 🦀@_Akanoa_·
Hi @RobertStphane19 :) I'm playing with Ansible Molecule and I want to create a database in the prepare step and destroy it in the cleanup step. I have a module that makes an API call to Clever to order the DB and returns a UUID, a UUID I need to retrieve in my cleanup step to deprovision the resource. Do you know a clean way to do this, i.e. keep the ID without writing a file to a /tmp directory? Thanks in advance :D
2 replies · 0 reposts · 2 likes · 2.8K views
Mr_Freezeex retweeted
Jono Bacon
Jono Bacon@jonobacon·
Checks out.
[image]
4 replies · 8 reposts · 43 likes · 2.8K views