Mr. Lightning bolt

36.6K posts


@CruzaderTurbo

Engineer, Otaku, Gamer. The opinions here are mine and I'm not lending them to you :v

Joined November 2010
362 Following · 256 Followers
Pinned Tweet
Mr. Lightning bolt
Mr. Lightning bolt@CruzaderTurbo·
Verify this one for me
Portuguese
0
0
2
498
Phuong Le
Phuong Le@func25·
Go is simple, so I ended up writing an 865-page book about how it works internally, just to see how it maintains that simplicity 😇
Phuong Le tweet media
English
48
162
2.2K
88.2K
Mr. Lightning bolt
Mr. Lightning bolt@CruzaderTurbo·
The Asian dancer girls have dropped such good tracks into my playlist. God bless them
Spanish
0
0
0
11
Mr. Lightning bolt
Mr. Lightning bolt@CruzaderTurbo·
No, Mom! Don't look up Rei Ami's music videos from before Huntrix! Mom: Aaaaaaaaaaaaahhhhhh!
Mr. Lightning bolt tweet media
Spanish
0
0
0
37
Mr. Lightning bolt
Mr. Lightning bolt@CruzaderTurbo·
Looking for an opportunity to say: Ah! Maniaca!
Spanish
0
0
0
12
Mr. Lightning bolt retweeted
忘れられない動画
忘れられない動画@tanatana5252·
ラプラス toddling along with those tiny steps is way too cute
Japanese
155
9K
58.9K
1.3M
Mr. Lightning bolt retweeted
Akshay Shinde
Akshay Shinde@ConsciousRide·
GoLang interviewers love tricky concurrency & performance questions. Sharing 7 such advanced GoLang interview questions that you should be prepared to answer:

1. How do you handle race conditions and implement thread-safe data structures using channels and sync primitives?
2. Explain the difference between goroutines and OS threads. How would you design a worker pool with context cancellation?
3. What are the trade-offs of using sync.Mutex vs sync.RWMutex vs channels for high-throughput services?
4. How would you implement graceful shutdown, panic recovery, and rate limiting in a production Go HTTP server?
5. Describe your approach to memory management, garbage collection tuning, and pprof profiling in a latency-sensitive Go service.
6. How do you handle JSON unmarshaling efficiently for large payloads and implement custom Marshal/Unmarshal for performance?
7. In a distributed Go system, how would you implement distributed tracing, structured logging, and error handling with context?
English
1
15
180
8.7K
Mr. Lightning bolt retweeted
El Rayado Objetivo
El Rayado Objetivo@objetivo_mty·
Everyone waiting for the México vs Portugal match to start. Some dude out of nowhere in the Estadio Azteca box seats:
Spanish
37
950
13K
243K
Mr. Lightning bolt retweeted
アムロ・メイ
アムロ・メイ@Chris_novas·
STAND ALONE COMPLEX
English
48
3.5K
16.4K
542.6K
Mr. Lightning bolt retweeted
Akhilesh Mishra
Akhilesh Mishra@livingdevops·
Kubernetes is beautiful. Every concept has a story, you just don't know it yet.

In k8s, you run your app as a pod. It runs your container. Then it crashes, and nobody restarts it. It is just gone. So you use a Deployment. One pod dies and another comes back. You want 3 running, it keeps 3 running.

Every pod gets a new IP when it restarts. Another service needs to talk to your app but the IPs keep changing. You cannot hardcode them at scale. So you use a Service. One stable IP that always finds your pods using labels, not IPs. Pods die and come back. The Service does not care.

But now you have 10 services and 10 load balancers. Your cloud bill does not care that 6 of them handle almost no traffic. So you use Ingress. One load balancer, all services behind it, smart routing. But Ingress is just rules and nobody executes them. So you add an Ingress Controller. Nginx, Traefik, AWS Load Balancer Controller. Now the rules actually work.

Your app needs config so you hardcode it inside the container. Wrong database in staging. Wrong API key in production. You rebuild the image every time config changes. So you use a ConfigMap. Config lives outside the container and gets injected at runtime. Same image runs in dev, staging and production with different configs.

But your database password is now sitting in a ConfigMap unencrypted. Anyone with basic kubectl access can read it. That is not a mistake. That is a security incident. So you use a Secret. Sensitive data stored separately with its own access controls. Your image never sees it.

Some days 100 users, some days 10,000. You manually scale to 8 pods during the spike and watch them sit idle all night. You cannot babysit your cluster forever. So you use HPA. CPU crosses 70 percent and pods are added automatically. Traffic drops and they scale back down. You are not woken up at 2am anymore.

But now your nodes are full and new pods sit in Pending state. HPA did its job. Your cluster had nowhere to put the pods. So you use Karpenter. Pods stuck in Pending and a new node appears automatically. Load drops and the node is removed. You only pay for what you actually use.

One pod starts consuming 4GB of memory and nobody told Kubernetes it was not supposed to. It starves every other pod on that node and a cascade begins. One rogue pod with no limits takes down everything around it. So you use Resource Requests and Limits. Requests tell Kubernetes the minimum your pod needs to be scheduled. Limits make sure no pod can steal from everything around it. Your cluster runs predictably.
English
86
339
2.8K
273K
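Several of the steps in that story (Deployment replicas, label-based Service discovery, ConfigMap injection, requests and limits) meet in a single Deployment manifest. A minimal sketch follows; every name, image, and value here is a hypothetical example, not something from the post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                        # hypothetical name
spec:
  replicas: 3                      # "you want 3 running, it keeps 3 running"
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app                   # a Service finds pods by this label, not by IP
    spec:
      containers:
        - name: app
          image: example/app:1.0   # hypothetical image
          envFrom:
            - configMapRef:
                name: app-config   # config injected at runtime, not baked into the image
          resources:
            requests:              # minimum the scheduler must reserve for the pod
              cpu: 250m
              memory: 256Mi
            limits:                # cap so one rogue pod cannot starve the node
              cpu: "1"
              memory: 512Mi
```

With requests set, the scheduler only places the pod on a node with that much spare capacity; with limits set, a container that exceeds its memory limit is OOM-killed instead of dragging down its neighbors.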
Mr. Lightning bolt retweeted
Trung Phan
Trung Phan@TrungTPhan·
Costco CEO Ron Vachris did the “CEO eats his own product” challenge by destroying a hot dog (and confirms the Costco hot dog combo is staying at $1.50 forever). Legend.
English
1.1K
2.5K
43.2K
4.7M
Mr. Lightning bolt
Mr. Lightning bolt@CruzaderTurbo·
Damn, so is there going to be a token podcast tomorrow?
Spanish
0
0
0
19
Mr. Lightning bolt retweeted
🇨🇳 大牛🇨🇳
🇨🇳 大牛🇨🇳@Appraiser008·
Lin Daiyu, arms instructor of the 800,000 Imperial Guards.
Chinese
377
4.2K
46.6K
2.5M
プー‎
プー‎@puutin_cos·
プーティア has been updated❕ Still Valentine's 🍫
プー‎ tweet media
Japanese
52
390
9K
250.5K