Akhilesh Mishra
@livingdevops
5.3K posts
Founder LivingDevOps | DevOps Lead | Educator | Mentor | Tech Writer
Noida, India · Joined February 2023
272 Following · 21.2K Followers

Pinned Tweet
Akhilesh Mishra @livingdevops ·
Before you learn Kubernetes, understand why to learn Kubernetes. Or should you?

25 years back, if you wanted to run an application, you bought a $50,000 physical server. You did the cabling. Installed an OS. Configured everything. Then ran your app. Need another app? Buy another $50,000 machine. Only banks and big companies could afford this. It was expensive and painful.

Then came virtualization. You could take 10 physical servers and split them into 50 or 100 virtual machines. Better, but you still had to buy and maintain all that hardware.

Around 2005, Amazon had a brilliant idea. They had data centers worldwide but weren't using full capacity. So they decided to rent it out. For startups, this changed everything. Launch without buying a single server. Pay only for what you use. Scale when you grow. Netflix was one of the first to jump on this.

But this solved only the server problem. How people built applications was still broken. In the early days, companies built one big application that did everything. Netflix had user accounts, video player, recommendations, and payments all in one codebase. Simple to build. Easy to deploy. But it didn't scale well.

In 2008, Netflix had a major outage. They realized that if they were getting downtime with just US users, how would they scale worldwide? So they broke their monolith into hundreds of smaller services. User accounts, separate. Video player, separate. Recommendations, separate. They called it microservices. Other companies started copying this approach, even when they didn't really need it.

But microservices created a massive headache. Every service needed different dependencies. Python 2.7 for one service, Python 3.6 for another. Different libraries. Different configs. Setting up a new developer's machine took days: install this database version, that Python version, these specific libraries, configure environment variables.

And then came the most frustrating phrase in software development: "But it works on my machine." A developer would test their code locally. Everything worked perfectly. They'd deploy to staging. Boom. Application crashed. Why? Different OS version. Missing dependency. Wrong configuration. Teams spent hours debugging environment issues instead of building features.

Then Docker came along in 2013. Google had been using containers for years with their Borg system, but it was too complex for anyone outside Google's top engineers. Docker made containers accessible to everyone. Package your app with all its dependencies in one container: the exact Python version, the exact libraries, the exact configuration. Run it on your laptop. Works. Run it on staging. Works. Run it in production. Still works. No more "works on my machine" problems. No more spending days setting up environments. By 2014, millions of developers were running Docker containers.

But running one container is easy. Running 10,000 containers? That's a nightmare. Microservices meant managing 50+ services manually. Services kept crashing with no auto-restart. Scaling was difficult. Services couldn't find each other when IPs changed. People used custom shell scripts. It was error-prone and painful. Everyone struggled with the same problems: auto-restart, auto-scaling, service discovery, load balancing. AWS launched ECS to help, but managing 100+ microservices at scale was still a pain.

This is exactly what Kubernetes solved. Google saw an opportunity. They were already running millions of containers on Borg. In 2014, they rebuilt it as Kubernetes and open-sourced it. But here's the smart move: they also launched GKE, a managed service that made running Kubernetes so easy that companies started choosing Google Cloud just for it. AWS and Azure panicked and quickly built EKS and AKS. People jumped ship, moving from running k8s clusters on-prem to managed Kubernetes in the cloud.

A decade on, Kubernetes runs the bulk of production container infrastructure. Netflix, Uber, OpenAI, Medium — they all run on it. Advanced Kubernetes skills now pay big bucks.

Why did Kubernetes win? Perfect timing. Docker had made containers popular. Netflix had made microservices popular. Millions of people needed a way to manage these complex microservices at scale. Kubernetes solved that exact problem. It handles everything: deploying services, auto-healing when things crash, auto-scaling based on traffic, service discovery, health monitoring, and load balancing.

Then AI happened, and Kubernetes became even more critical. AI startups need to run thousands of ML training jobs simultaneously. They need GPU scheduling. They need to scale inference workloads based on demand. Companies like OpenAI, Hugging Face, and Anthropic run their AI infrastructure on Kubernetes: training models, running inference APIs, orchestrating AI agents, all on k8s. The AI boom made Kubernetes essential, not just for traditional web apps, but for AI/ML workloads too.

Understanding this story is more important than memorizing kubectl commands. Now go learn Kubernetes already. And don't take the "Kubernetes is dead" articles seriously; the people writing them are doing it for views and clicks. They may have never used k8s.
Akhilesh Mishra retweeted
Akhilesh Mishra @livingdevops ·
Kubernetes is beautiful. Every concept has a story, you just don't know it yet.

In k8s, you run your app as a pod. It runs your container. Then it crashes, and nobody restarts it. It is just gone. So you use a Deployment. One pod dies and another comes back. You want 3 running, it keeps 3 running.

Every pod gets a new IP when it restarts. Another service needs to talk to your app but the IPs keep changing. You cannot hardcode them at scale. So you use a Service. One stable IP that always finds your pods using labels, not IPs. Pods die and come back. The Service does not care.

But now you have 10 services and 10 load balancers. Your cloud bill does not care that 6 of them handle almost no traffic. So you use Ingress. One load balancer, all services behind it, smart routing. But Ingress is just rules and nobody executes them. So you add an Ingress Controller: Nginx, Traefik, AWS Load Balancer Controller. Now the rules actually work.

Your app needs config, so you hardcode it inside the container. Wrong database in staging. Wrong API key in production. You rebuild the image every time config changes. So you use a ConfigMap. Config lives outside the container and gets injected at runtime. Same image runs in dev, staging and production with different configs.

But your database password is now sitting in a ConfigMap unencrypted. Anyone with basic kubectl access can read it. That is not a mistake. That is a security incident. So you use a Secret. Sensitive data stored separately with its own access controls. Your image never sees it.

Some days 100 users, some days 10,000. You manually scale to 8 pods during the spike and watch them sit idle all night. You cannot babysit your cluster forever. So you use an HPA. CPU crosses 70 percent and pods are added automatically. Traffic drops and they scale back down. You are not woken up at 2am anymore.

But now your nodes are full and new pods sit in Pending state. HPA did its job. Your cluster had nowhere to put the pods. So you use Karpenter. Pods stuck in Pending and a new node appears automatically. Load drops and the node is removed. You only pay for what you actually use.

One pod starts consuming 4GB of memory and nobody told Kubernetes it was not supposed to. It starves every other pod on that node and a cascade begins. One rogue pod with no limits takes down everything around it. So you use Resource Requests and Limits. Requests tell Kubernetes the minimum your pod needs to be scheduled. Limits make sure no pod can steal from the pods around it. Your cluster runs predictably.
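The escalation above can be sketched as a single set of manifests. A minimal, illustrative sketch only: the app name `web`, the `nginx` image, and every number here are assumptions for the example, not anything from the thread.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                        # keep 3 pods running; replace any that crash
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }           # the Service finds pods by this label, not by IP
    spec:
      containers:
        - name: web
          image: nginx:1.27
          envFrom:
            - configMapRef: { name: web-config }  # config injected at runtime
            - secretRef: { name: web-secret }     # sensitive values kept separate
          resources:
            requests: { cpu: 100m, memory: 128Mi } # minimum needed to be scheduled
            limits: { cpu: 500m, memory: 256Mi }   # no pod can starve its neighbours
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                         # one stable address in front of changing pod IPs
  ports:
    - port: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # scale out past 70% CPU, back down after the spike
```

The Ingress, its controller, and Karpenter sit outside these manifests, but the pattern is the same: each object exists because the previous one left a gap.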
Akhilesh Mishra @livingdevops ·
Dear recruiters, if you are looking for:
- Go, Python, Bash, PowerShell
- Kubernetes, Docker, OpenShift
- GCP, AWS, Azure
- Linux & Windows system administration
- Jenkins, GitHub Actions, Azure DevOps
- ELK, Prometheus, Grafana, Datadog, Splunk
- Terraform, CDK, Bicep
- Argo CD, Flux, Helm
- SonarQube, Trivy, and 20 other DevSecOps tools

That's not a DevOps engineer. That's a DevOps team for 5 different companies.
Akhilesh Mishra tweet media
Piyush @piyush784066 ·
No disrespect to Linus Torvalds, but this guy is the greatest geek alive 🫡 Created UNIX in 1971 when he was 28 years old. Created Go in 2009 when he was 66 years old 😲 He also developed the B programming language (which led to C), co-created UTF-8 encoding (making international text possible online), and designed essential tools like grep that developers still rely on daily. He also helped develop Multics (which led to UNIX), Plan 9 from Bell Labs, and the Inferno operating system. That's 4 operating systems in total... Most people don't even use that many OSes. Pretty impressive resume, right? 🔥 And it's a shame that many people, even those in the IT and tech industry, don't know him. Ken Thompson.... Remember the name 🙏
Piyush tweet media
Akhilesh Mishra @livingdevops ·
Most engineers can’t design for small. They jump straight to Kubernetes clusters, microservices mesh, and multi-AZ RDS before asking one simple question: what are we actually solving for?

100 users don’t need 12 microservices. 100 users don’t need a Kubernetes cluster humming 24/7. 100 users need a monolith, one database, and a deploy that doesn’t cost $4000/month.

The best engineers I’ve seen in production didn’t flex complexity. They asked:
→ How many users today?
→ What’s the growth curve?
→ What breaks first if we scale?

Then they built the simplest thing that survives. Kubernetes is not a solution. It’s a tool for a specific problem at a specific scale. Knowing when NOT to use it is the real skill.
Akhilesh Mishra @livingdevops ·
Storytelling is one of the most powerful skills you can develop.
Akhilesh Mishra @livingdevops ·
@__karnati Fix connectivity first (S3 gateway endpoint, NAT if it's a private instance, outbound on 443). If it's an auth issue, check the instance profile IAM permissions.
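Since "access denied" with no network errors usually points at IAM, here is a minimal sketch of the kind of instance-profile policy that would grant the app access. The bucket name `my-app-bucket` and the action list are illustrative assumptions, not from the thread; scope them to what the app actually does.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAppBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::my-app-bucket",
        "arn:aws:s3:::my-app-bucket/*"
      ]
    }
  ]
}
```

Note the two resource ARNs: `s3:ListBucket` matches the bucket itself, while `s3:GetObject`/`s3:PutObject` match the objects under it — missing either one is a common cause of this exact "access denied".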
Sri @__karnati ·
Your app running on EC2 cannot access S3. No network errors. No timeout. Just access denied. Where would you debug?
andy @andyteecf ·
@livingdevops Did you make this? Really good illustration.
Akhilesh Mishra retweeted
Akhilesh Mishra @livingdevops ·
🔥 Learn how a Linux machine boots in 60 seconds
Akhilesh Mishra @livingdevops ·
No disrespect to Linus Torvalds and Dennis Ritchie, but Ken Thompson might be the biggest geek who ever lived. And almost nobody knows his name.

At 28, he created Unix.
> The OS that inspired every modern operating system on the planet.

At 66, the age when most engineers retire, he co-created Go.
> A language millions of developers love, used to build most modern DevOps tools like Kubernetes, Terraform, Prometheus, and Grafana.

But that is still not the full story.
- Dennis Ritchie built on Thompson’s B to create C.
- Linus built Linux, inspired by Thompson’s Unix.
- He co-invented UTF-8, the encoding behind every website you visit.
- He built grep, a tool developers still use daily in 2024.

The internet you are scrolling right now exists because of him. Ken Thompson. Remember the name.
Akhilesh Mishra tweet media
Akhilesh Mishra retweeted
Akhilesh Mishra @livingdevops ·
90% of the world's devices (servers, PCs, phones, IoT) run Linux. Linus wrote the Linux kernel in C. Dennis Ritchie invented C. Every line of Linux is built on Ritchie's foundation.