Akhilesh Mishra

5.3K posts


@livingdevops

Founder LivingDevOps | DevOps Lead | Educator | Mentor | Tech Writer

Noida, India · Joined February 2023
272 Following · 21.1K Followers
Pinned Tweet
Akhilesh Mishra @livingdevops ·
Before you learn Kubernetes, understand why to learn Kubernetes. Or should you?

25 years back, if you wanted to run an application, you bought a $50,000 physical server. You did the cabling. Installed an OS. Configured everything. Then ran your app. Need another app? Buy another $50,000 machine. Only banks and big companies could afford this. It was expensive and painful.

Then came virtualization. You could take 10 physical servers and split them into 50 or 100 virtual machines. Better, but you still had to buy and maintain all that hardware.

Around 2005, Amazon had a brilliant idea. They had data centers worldwide but weren't using their full capacity. So they decided to rent it out. For startups, this changed everything. Launch without buying a single server. Pay only for what you use. Scale when you grow. Netflix was one of the first to jump on this.

But this solved only the server problem. "How do people build applications?" was still broken. In the early days, companies built one big application that did everything. Netflix had user accounts, video player, recommendations, and payments all in one codebase. Simple to build. Easy to deploy. But it didn't scale well.

In 2008, Netflix had a major outage. They realized that if they were getting downtime with just US users, how would they scale worldwide? So they broke their monolith into hundreds of smaller services. User accounts, separate. Video player, separate. Recommendations, separate. They called it microservices. Other companies started copying this approach, even when they didn't really need it.

But microservices created a massive headache. Every service needed different dependencies. Python 2.7 for one service, Python 3.6 for another. Different libraries. Different configs. Setting up a new developer's machine took days. Install this database version. That Python version. These specific libraries. Configure environment variables.
And then came the most frustrating phrase in software development: "But it works on my machine." A developer would test their code locally. Everything worked perfectly. They'd deploy to staging. Boom. Application crashed. Why? Different OS version. Missing dependency. Wrong configuration. Teams spent hours debugging environment issues instead of building features.

Then Docker came along in 2013. Google had been using containers for years with their Borg system, but it was too complex for anyone outside Google's top engineers. Docker made containers accessible to everyone. Package your app with all its dependencies in one container. The exact Python version. The exact libraries. The exact configuration. Run it on your laptop. Works. Run it on staging. Works. Run it in production. Still works. No more "works on my machine" problems. No more spending days setting up environments. By 2014, millions of developers were running Docker containers.

But running one container is easy. Running 10,000 containers? That's a nightmare. Microservices meant managing 50+ services manually. Services kept crashing with no auto-restart. Scaling was difficult. Services couldn't find each other when IPs changed. People used custom shell scripts. It was error-prone and painful. Everyone struggled with the same problems: auto-restart, auto-scaling, service discovery, load balancing. AWS launched ECS to help, but managing 100+ microservices at scale was still a pain.

This is exactly what Kubernetes solved. Google saw an opportunity. They were already running millions of containers using Borg. In 2014, they rebuilt it as Kubernetes and open-sourced it. But here's the smart move: they also launched GKE, a managed service that made running Kubernetes so easy that companies started choosing Google Cloud just for it. AWS and Azure panicked and quickly built EKS and AKS. People jumped ship, moving from running k8s clusters on-prem to managed Kubernetes in the cloud.
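The "package your app with all its dependencies" idea can be sketched as a minimal Dockerfile. This is illustrative only: the Python version, requirements file, and app.py entrypoint are hypothetical stand-ins for whatever a given service needs.

```dockerfile
# Pin the exact interpreter version the service was built against
FROM python:3.6-slim

WORKDIR /app

# requirements.txt pins exact library versions, so every machine
# (laptop, staging, production) builds the identical environment
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

CMD ["python", "app.py"]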
A decade on, Kubernetes runs the vast majority of production container infrastructure. Netflix, Uber, OpenAI, Medium, they all run on it. Advanced Kubernetes skills now pay big bucks.

Why did Kubernetes win? Perfect timing. Docker had made containers popular. Netflix had made microservices popular. Millions of people needed a solution to manage these complex microservices at scale. Kubernetes solved that exact problem. It handles everything: deploying services, auto-healing when things crash, auto-scaling based on traffic, service discovery, health monitoring, and load balancing.

Then AI happened, and Kubernetes became even more critical. AI startups need to run thousands of ML training jobs simultaneously. They need GPU scheduling. They need to scale inference workloads based on demand. Companies like OpenAI, Hugging Face, and Anthropic run their AI infrastructure on Kubernetes. Training models, running inference APIs, orchestrating AI agents, all on K8s. The AI boom made Kubernetes essential, not just for traditional web apps but for all AI/ML workloads.

Understanding this story is more important than memorizing kubectl commands. Now go learn Kubernetes already. And don't take the people who write "Kubernetes is dead" articles seriously. They are doing it for views and clicks, and they might have never used k8s.
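The auto-healing and health-monitoring behavior described above is what a Kubernetes Deployment manifest expresses. A minimal sketch, assuming a hypothetical `recommendations` service and image name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendations          # hypothetical microservice name
spec:
  replicas: 3                    # Kubernetes keeps 3 pods running; a crashed pod is replaced automatically
  selector:
    matchLabels:
      app: recommendations
  template:
    metadata:
      labels:
        app: recommendations
    spec:
      containers:
        - name: recommendations
          image: registry.example.com/recommendations:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:         # health monitoring: restart the container if this check fails
            httpGet:
              path: /healthz
              port: 8080
```

Declaring the desired state (3 replicas, a passing health check) and letting the control loop enforce it is exactly the auto-restart problem people once solved with hand-rolled shell scripts.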
Akhilesh Mishra @livingdevops ·
🔥 Learn how a Linux machine boots in 60 seconds
Akhilesh Mishra @livingdevops ·
No disrespect to Linus Torvalds and Dennis Ritchie, but Ken Thompson might be the biggest geek who ever lived. And almost nobody knows his name.

At 28, he created Unix. > The OS that inspired every modern operating system on the planet.

At 66, the age when most engineers retire, he co-created Go. > A language millions of developers love, used to build most modern DevOps tools: Kubernetes, Terraform, Prometheus, Grafana, etc.

But that is still not the full story.
- Dennis Ritchie built on Thompson's B to create C.
- Linus built Linux, inspired by Thompson's Unix.
- He co-invented UTF-8, the encoding behind every website you visit.
- He built grep, a tool developers still use daily in 2024.

The internet you are scrolling right now exists because of him. Ken Thompson. Remember the name.
Akhilesh Mishra retweeted
Akhilesh Mishra @livingdevops ·
90% of the world's devices (servers, PCs, phones, IoT) run Linux. Linus wrote the Linux kernel in C. Dennis Ritchie invented C. Every line of Linux is built on Ritchie's foundation.
Yoshik K @AskYoshik ·
Linus Torvalds started Linux in 1991 without AWS, Kubernetes, Docker, or any cloud platform.
- No cloud credits.
- No managed services.
- No CI/CD pipelines.
- Just a PC, a terminal, and curiosity.

He built an operating system in his bedroom. Today, it runs everything. AWS. Azure. GCP. Every Kubernetes cluster. Every container you deploy. Most servers on the internet.
- Docker runs on it.
- Kubernetes runs on it.
- Your cloud runs on it.

If you have ever deployed anything in DevOps, you did it on top of Linux.
Akhilesh Mishra @livingdevops ·
Dennis Ritchie created C in the early 1970s without Google, Stack Overflow, GitHub, or any AI assistant (Claude, Cursor, Codex).
- No VC funding.
- No viral launch.
- No TED talk.
- Just two engineers at Bell Labs. A terminal. And a problem to solve.

He built a language that fit in kilobytes. 50 years later, it runs everything. Linux kernel. Windows. macOS. Every iPhone. Every Android. NASA's deep space probes. The International Space Station.
> Python borrowed from it.
> Java borrowed from it.
> JavaScript borrowed from it.

If you have ever written a single line of code in any language, you did it in Dennis Ritchie's shadow. He died in 2011, the same week as Steve Jobs. Jobs got the front pages. Ritchie got silence. This legend deserves to be celebrated.
Akhilesh Mishra @livingdevops ·
People think DevOps engineering is: ➜ Automating everything with a single click
➜ Deploying code seamlessly in seconds
➜ Spinning up servers in milliseconds with infinite scalability
➜ Achieving 99.999% uptime without breaking a sweat
➜ Living the dream of “Infrastructure as Code” with zero issues
➜ Sipping coffee while watching flawless CI/CD pipelines

It actually is:

➜ Debugging why the deployment failed… again
➜ Arguing with developers about why root access isn’t a good idea
➜ Staring at Terraform errors that make no sense
➜ Explaining (for the 100th time) that “99.999% uptime” isn’t free
➜ Waking up at 3 AM because the monitoring system finally decided to alert you
➜ Wondering why the same script works in staging but breaks in production
➜ Figuring out which “small config change” took the whole system down
Akhilesh Mishra @livingdevops ·
Every DevOps engineer memorizes kubectl commands. Almost none of them know why Kubernetes exists. Before you learn Kubernetes, understand why it was built. Or ask yourself if you even need it.

25 years ago, running an app meant buying a $50,000 physical server. Cabling. OS install. Configuration. Then run your app. Need another app? Buy another machine. Only banks and big companies could afford this.

Then came virtualization. One physical server could run 50 virtual machines. Better, but you still owned the hardware.

Around 2005, Amazon had a brilliant idea. They had data centers sitting half empty worldwide, so they rented them out. AWS was born. For startups, everything changed. Launch without buying a single server. Pay only for what you use. Netflix jumped on this early.

But the server problem was only half the battle. Early apps were monoliths. One giant codebase doing everything. Simple to build. Easy to deploy. But impossible to scale. In 2008, Netflix had a major outage. If they were struggling with just US users, worldwide scale would kill them. So they broke everything into smaller independent services. User accounts, separate. Video player, separate. Recommendations, separate. Microservices were born. Everyone copied them, even teams that did not need them.

But microservices created a new headache. Different Python versions. Different libraries. Different configs. Setting up a developer machine took days. Then came the most frustrating phrase in software history: "It works on my machine." Code worked locally. Crashed in staging. Teams spent more time debugging environments than building features.

Then Docker arrived in 2013 and fixed that. But running one container is easy. Running 10,000 is a nightmare. Services crashed with no auto-restart. Scaling was manual and painful. Teams wrote hacky shell scripts to manage everything.

This is exactly what Kubernetes solved. Google had been running containers internally for years with their Borg system. In 2014 they rebuilt it, called it Kubernetes, and open-sourced it. Then they launched GKE. AWS and Azure panicked and quickly built EKS and AKS.

Today Kubernetes runs the vast majority of production container infrastructure. Netflix, Uber, OpenAI, Medium. All of them. Then AI happened and Kubernetes became even more critical. Thousands of ML training jobs. GPU scheduling. Scaling inference on demand. OpenAI, Hugging Face, and Anthropic all run on it.

Understanding this story matters more than memorizing kubectl commands. And ignore the "Kubernetes is dead" articles, written for clicks by people who have probably never run it in production. Now go learn Kubernetes.

What part of Kubernetes clicked for you first? Drop it below.
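The "scaling inference on demand" part is declarative too. A minimal HorizontalPodAutoscaler sketch, assuming a hypothetical `inference-api` Deployment; the replica bounds and CPU target are illustrative, not a recommendation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-api            # hypothetical workload name
spec:
  scaleTargetRef:                # which Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: inference-api
  minReplicas: 2
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses 70%
```

This replaces the manual "watch traffic, add servers" loop: Kubernetes grows and shrinks the pod count between the bounds as load changes.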
Akhilesh Mishra retweeted
Akhilesh Mishra @livingdevops ·
Most DevOps interviews ask what the monitoring setup in your project is. And most people say the same thing: "I use Prometheus and Grafana." That is not wrong. But it is not enough.

The interviewer wants to know how your logs get collected. Where they go after CloudWatch. Why you use Kinesis Firehose in between. How long you retain data. How metrics are scraped. How SLA is tracked. That is the difference between someone who has used the tools and someone who has actually set this up in production.

Here is how I set up monitoring in an enterprise banking environment on EKS. We had two workload types in my project: microservices on Fargate and a stateful application running as a StatefulSet. Both need monitoring, but the approach for each is completely different.

For Fargate workloads, I use the OpenTelemetry add-on. It automatically picks up logs from all Fargate pods and ships them to CloudWatch. Simple, and it works.

For the StatefulSet, I use Fluent Bit as a sidecar container. It sits inside the pod, reads logs from a shared volume, and ships them to CloudWatch. This gives me full control over formatting and filtering, which matters in a regulated environment.

From CloudWatch the logs go through a formatter Lambda, then into Kinesis Firehose, which batches and writes to OpenSearch every 20 minutes. We keep 7 days in OpenSearch, 30 days in CloudWatch, and everything backed up to S3.

For metrics, I use Prometheus with a ServiceMonitor for each application. Prometheus scrapes the /metrics endpoint every 30 seconds. Grafana visualizes everything: Prometheus, CloudWatch, and OpenSearch all as data sources in one dashboard.

And SLA becomes a real number you track. 99.1% uptime means your application can be down for at most about 6.5 hours in a month. You track that in Grafana, and your stakeholders can see it in real time.

That is the story you tell in your interview. Not just "Prometheus and Grafana." The whole pipeline, with reasons behind every decision.
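The SLA arithmetic is worth being able to do on the spot. A quick sketch (the function name is mine, not from any library): downtime budget = (1 − uptime fraction) × hours in the period.

```python
def downtime_budget_hours(sla_percent: float, days: int = 30) -> float:
    """Maximum allowed downtime, in hours, for a given uptime SLA over a period."""
    total_hours = days * 24
    return (1 - sla_percent / 100) * total_hours

# 99.1% uptime over a 30-day month leaves roughly 6.5 hours of downtime
print(round(downtime_budget_hours(99.1), 2))
```

The same function shows why each extra "nine" matters: 99.9% leaves only about 43 minutes a month.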