Puneet Patwari

969 posts

@system_monarch

Principal @Atlassian | Helping engineers reach Staff/Principal | 1:1 Mentorship & Mock Interviews | 90+ System design fundamentals - https://t.co/Ots2nRhO5f

Hyderabad · Joined December 2025
121 Following · 8.2K Followers
Puneet Patwari@system_monarch·
Need some honest opinions from people who have seen both colleges closely. Which would you choose today for CSE, and why: MIT Manipal main campus, or VIT Vellore main campus CSE in Category 4? Thanks 🙏
6 · 0 · 12 · 2.9K
Amin Tai@aminnnn_09·
@system_monarch tbh, the fix lies in token scoping and metadata. Issue tokens with a "purpose" claim (human, ci, agent) and let the gateway enforce limits per purpose. A stolen CI token then can't run inference like a human one. Ever layered in device fingerprinting on top?
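The purpose-claim idea can be sketched in a few lines. Everything here is hypothetical (the claim name, the limit numbers, the function shape); it only illustrates a gateway enforcing a different budget per token purpose:

```python
# Hypothetical per-purpose policy: a stolen "ci" token can't spend like a human one.
LIMITS = {
    "human": {"req_per_min": 60,  "usd_per_day": 50.0},
    "ci":    {"req_per_min": 600, "usd_per_day": 5.0},
    "agent": {"req_per_min": 120, "usd_per_day": 20.0},
}

def authorize(claims: dict, spent_today_usd: float, reqs_last_min: int) -> bool:
    """Gateway-side check: enforce rate and spend limits by the token's 'purpose' claim."""
    policy = LIMITS.get(claims.get("purpose"))
    if policy is None:                      # unknown or missing purpose: deny by default
        return False
    if reqs_last_min >= policy["req_per_min"]:
        return False
    return spent_today_usd < policy["usd_per_day"]
```

With a policy like this, the $140K-overnight scenario is capped at the CI tier's daily budget even though the token itself is "valid".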
1 · 0 · 1 · 681
Puneet Patwari@system_monarch·
System design round at Anthropic for Sr. SWE ($350K+ CTC) : Your platform issues API tokens to 400,000 developers. A stolen token just ran $140,000 worth of inference overnight. Your logs show a valid token. Your auth system sees nothing wrong. The token could belong to a developer, a CI pipeline, or an autonomous AI agent. Your gateway has no way to tell which one it is talking to and applies the same trust level to all three. How do you redesign your token system to distinguish between a human, a pipeline, and an AI agent without breaking 400,000 existing integrations?
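One hedged way to answer the "without breaking 400,000 integrations" constraint: classify tokens at the gateway instead of reissuing them. The prefixes, registry, and caps below are invented for illustration; the point is that legacy opaque tokens fall into a conservative default tier while newly issued tokens carry their purpose:

```python
# Hypothetical gateway shim: classify existing opaque tokens server-side,
# so no client integration has to change.
PURPOSE_BY_TOKEN = {"tok_ci_123": "ci"}   # backfilled from issuance records (illustrative)

def classify(token: str) -> str:
    # Tokens issued after the migration embed a purpose prefix; older
    # opaque tokens fall back to a conservative "legacy" tier.
    if token.startswith("sk-human-"):
        return "human"
    if token.startswith("sk-ci-"):
        return "ci"
    if token.startswith("sk-agent-"):
        return "agent"
    return PURPOSE_BY_TOKEN.get(token, "legacy")

# Made-up daily spend caps per tier; "legacy" is deliberately tight.
DAILY_USD_CAP = {"human": 200.0, "ci": 10.0, "agent": 50.0, "legacy": 25.0}

def allowed(token: str, spent_today_usd: float) -> bool:
    return spent_today_usd < DAILY_USD_CAP[classify(token)]
```

Old tokens keep working (nothing breaks), but an unclassified stolen token can no longer run unbounded inference; the backfill registry then migrates known tokens into the right tier over time.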
5 · 6 · 70 · 7.5K
UniqueOne@Unique_O1·
@system_monarch @anishmoonka @grok Yeah, that makes sense. The video felt more like a solid system design walkthrough than spilling proprietary secrets. Atlassian’s blogs are pretty open anyway.
1 · 0 · 2 · 747
Anish Moonka@anishmoonka·
A software engineer at Atlassian got laid off in March after 8 years. His response: a 38-minute YouTube video showing how the company's entire tech works, free for anyone to copy. That same quarter, Atlassian's revenue hit $1.79 billion, a record.

His name is Vasilios Syrakis. He worked in Sydney on Atlassian's digital plumbing: the system that handles the company's web traffic, made up of about 2,000 programs running across 13 regions of the world. Every time someone clicks on Atlassian's software, the system Syrakis worked on decides which of those servers answers. Atlassian's own engineering blog wrote about his team's work in February 2025. On Sunday, Syrakis walked through the whole architecture on YouTube, every box on the diagram.

The financial picture doesn't fit the layoff story. Atlassian's cloud business grew 29% year over year last quarter. The company has 350,000 customers, including 80% of the Fortune 500. None of that looks like a company that needs to cut a tenth of its staff to "self-fund AI investment," as the CEO put it in March.

In the six months before the layoffs, CEO Mike Cannon-Brookes sold 866,145 of his own shares for roughly $134 million. Co-founder Scott Farquhar sold exactly the same number on the same schedule. The board also approved spending $2.5 billion to buy back Atlassian stock from the market, a move that props up the share price. The shares still fell 56% this year.

Investors think AI lets companies do more work with fewer employees, and Atlassian charges its customers per employee. Sam Altman called this practice "AI washing" in February. Of the 1.2 million American jobs cut in 2025, only 55,000 blamed AI. The rest had different reasons, or none at all.

The engineer who helped build Atlassian's plumbing is now teaching the internet how it works, for free, because he no longer has a paycheck to protect.
Ed Andersen@edandersen

Incredible video by a randomly sacked Atlassian engineer telling all about the entire company. Love this genre, like the LinkedIn green banner with zero fcks given

67 · 748 · 8.1K · 1.7M
UniqueOne@Unique_O1·
@anishmoonka But is it legal to do that and reveal everything to the public? Can't Atlassian file a case against him for what he did? @grok your views on this.
2 · 0 · 10 · 30.2K
yoshitha@yoshitha_dev·
@system_monarch Almost 2 years into my career and starting to feel stuck. Most of my work is tool-based, with frequent context switching and limited development exposure. I want to transition into Java backend roles and focus on real engineering work. Would appreciate any advice
1 · 0 · 0 · 139
bekku@Areddy4493·
@system_monarch Why does almost every tweet start with "principal eng with 12 yrs exp"? You could state your message without it
1 · 0 · 2 · 426
Puneet Patwari@system_monarch·
I'm a Principal with 12 years of experience. If I were coaching you to crack system design rounds for Sr to Staff+ AI/ML roles at companies like Meta, Google, Salesforce, Amazon, etc., I would 100% ask you to work on these fundamentals before we start talking about interviews.

Because AI system design is still system design. The only difference is that your bottlenecks are no longer just databases, caches, and queues. They are tokens, context windows, retrieval quality, inference cost, hallucinations, model latency, evals, and user trust.

Here are the fundamentals I would start with:

➤ LLM Basics
↬ Tokens ↬ Context Window ↬ Prompt Design ↬ System Prompts ↬ Temperature ↬ Top-p Sampling ↬ Structured Outputs ↬ JSON Mode ↬ Function Calling ↬ Tool Calling ↬ Agents ↬ Memory ↬ Guardrails ↬ Hallucinations ↬ Model Latency ↬ Model Routing ↬ Small vs Large Models ↬ Fine-tuning vs Prompting ↬ Open-source vs Closed Models

➤ RAG & Retrieval
↬ Embeddings ↬ Vector Search ↬ Vector Databases ↬ Chunking ↬ Chunk Overlap ↬ Metadata Filtering ↬ Hybrid Search ↬ Keyword Search ↬ Semantic Search ↬ Reranking ↬ Retrieval Recall ↬ Retrieval Precision ↬ Query Rewriting ↬ Document Freshness ↬ Permission-aware Retrieval ↬ Citation Grounding ↬ Evidence Selection ↬ Context Packing ↬ Missing Information Detection

➤ AI System Architecture
↬ API Gateway ↬ Request Routing ↬ Model Gateway ↬ Prompt Service ↬ Inference Service ↬ Retrieval Service ↬ Ranking Service ↬ Feature Store ↬ Offline Pipelines ↬ Online Serving ↬ Async Processing ↬ Queueing ↬ Streaming Responses ↬ Rate Limiting ↬ Fan-out/Fan-in ↬ Batch Inference ↬ Real-time Inference ↬ Human-in-the-loop Systems ↬ Fallback Workflows

➤ Cost & Performance
↬ Token Budgeting ↬ Prompt Compression ↬ Prompt Caching ↬ Semantic Caching ↬ Response Caching ↬ Batch Requests ↬ Model Quantization ↬ Distillation ↬ Latency Budgets ↬ Cold Starts ↬ GPU Utilization ↬ Throughput ↬ Cost per Query ↬ Cost per User ↬ Model Selection ↬ Inference Scaling ↬ Backpressure ↬ Load Shedding

➤ Evaluation & Quality
↬ Offline Evals ↬ Online Evals ↬ Golden Dataset ↬ Human Review ↬ LLM-as-Judge ↬ A/B Testing ↬ Regression Testing ↬ Answer Relevance ↬ Factual Accuracy ↬ Faithfulness ↬ Groundedness ↬ Toxicity Checks ↬ Safety Checks ↬ Drift Detection ↬ Feedback Loops ↬ Confidence Scoring ↬ Escalation Criteria ↬ Quality Monitoring

➤ Reliability & Security
↬ Timeouts ↬ Retries ↬ Circuit Breakers ↬ Failover ↬ Model Fallbacks ↬ Graceful Degradation ↬ Observability ↬ Tracing ↬ Prompt Logs ↬ Token Metrics ↬ Error Budgets ↬ PII Redaction ↬ Data Privacy ↬ Access Control ↬ Prompt Injection ↬ Jailbreak Defense ↬ Audit Logs ↬ Compliance
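To make one of these fundamentals concrete, here is a minimal response-caching sketch. The class and method names are mine, not from any library: identical prompts (after trivial normalization) hash to the same key, so repeat questions skip paid inference entirely:

```python
import hashlib

class ResponseCache:
    """Cache LLM responses keyed by a normalized (model, prompt) hash,
    so repeated identical questions never reach the paid inference call."""
    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, model: str, prompt: str) -> str:
        # Cheap normalization: lowercase and collapse whitespace.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(f"{model}:{normalized}".encode()).hexdigest()

    def get_or_compute(self, model: str, prompt: str, infer):
        k = self._key(model, prompt)
        if k in self._store:
            self.hits += 1
        else:
            self._store[k] = infer(prompt)   # the expensive call happens only once
        return self._store[k]
```

Semantic caching extends the same idea by keying on embedding similarity instead of an exact hash, which is where retrieval fundamentals and cost fundamentals meet.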
12 · 68 · 391 · 36.6K
viswaMitra007@VMitra007·
@system_monarch What's your suggestion for someone who is not interested in RAG? Only interested in solving neural nets for SDEs? Do any such roles exist in India? SDE = Stochastic Differential Equations
1 · 0 · 0 · 870
Puneet Patwari@system_monarch·
If you notice carefully, most AI system design questions are just combinations of these fundamentals. Design ChatGPT? You need model serving, memory, tool calling, safety, and cost control. Design enterprise search? You need RAG, permissions, reranking, freshness, and grounding. Design AI customer support? You need retrieval, confidence scoring, escalation, evals, and fallback paths. Design an LLM router? You need model selection, quality gates, cost tracking, and failure handling. So before jumping into mock interviews, I would build depth here first.
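The LLM-router example can be sketched as a cheapest-model-that-clears-the-bar loop. The model names, quality scores, and per-token prices below are made up for illustration; failure handling is modeled by skipping unavailable models:

```python
# Hypothetical LLM router: pick the cheapest model that clears a quality
# bar for the request, and fall back past models that are down.
MODELS = [  # (name, quality score 0-1, $ per 1K tokens) - illustrative numbers only
    ("small-model", 0.60, 0.0002),
    ("mid-model",   0.80, 0.0020),
    ("big-model",   0.95, 0.0150),
]

def route(required_quality: float, unavailable=()) -> str:
    """Return the first (cheapest) model meeting the quality gate."""
    for name, quality, _cost_per_1k in MODELS:   # list is ordered cheapest-first
        if quality >= required_quality and name not in unavailable:
            return name
    raise RuntimeError("no available model meets the quality bar")
```

Real routers add cost tracking per tenant and online quality evals, but the selection core is this one loop over a sorted model table.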
Puneet Patwari@system_monarch

1 · 8 · 50 · 3.8K
Puneet Patwari@system_monarch·
Also, I’ve already created a system design fundamentals guide for senior and staff engineers who are preparing for interviews or want to go after better opportunities. Check it out here: puneetpatwari.in
1 · 0 · 2 · 1.4K
Shubh Jain@shubh19·
@system_monarch going deep means understanding why you chose the DB and the exact trade-offs involved
1 · 0 · 1 · 152
Puneet Patwari@system_monarch·
I keep getting this question regularly in my mentoring sessions: "What does 'going deep' actually look like in a system design interview at staff/principal level?" I'll try to answer with a popular example.

Interviewer: "Design a notification system."

Mid-level: "We'll use a message queue like Kafka and push notifications to devices."

Senior: "We'll partition by user_id, use priority queues for urgent vs marketing, add exponential backoff for retries, and deduplicate using a Redis set with a 24h TTL."

Staff/Principal: "Before we pick infrastructure, let's talk about the delivery guarantees we actually need. Transactional notifications like OTPs need at-least-once with idempotent delivery and a fallback channel (SMS if push fails within 30s). Marketing can tolerate batching and best-effort. That distinction changes everything downstream. Priority queues alone won't cut it because we need separate processing pipelines with different SLOs, different retry policies, and different cost envelopes. The dedup strategy also depends on whether we own the last mile or delegate to APNs/FCM, because their idempotency semantics are different. I'd also want to talk about how we handle a million users going silent on push tokens and what that does to our delivery metrics and cost."

The difference isn't knowing more technologies; it's asking better questions before picking any.
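The Senior answer's Redis-set-with-24h-TTL dedup can be illustrated in-process. The class below is a toy stand-in for Redis, with the clock injectable for clarity; it is a sketch of the idea, not production code:

```python
import time

class Deduper:
    """Suppress duplicate notification sends within a TTL window
    (the Redis-set-with-24h-TTL idea, kept in-process for illustration)."""
    def __init__(self, ttl_s: float = 24 * 3600):
        self.ttl_s = ttl_s
        self._seen = {}  # dedup_key -> timestamp of the send we let through

    def should_send(self, dedup_key: str, now=None) -> bool:
        now = time.time() if now is None else now
        first = self._seen.get(dedup_key)
        if first is not None and now - first < self.ttl_s:
            return False          # duplicate inside the window: suppress
        self._seen[dedup_key] = now
        return True
```

The Staff/Principal caveat in the dialogue still applies: once delivery is delegated to APNs/FCM, this local guarantee no longer covers the last mile.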
4 · 6 · 41 · 2.9K
Yoshik@AskYoshik·
As a Junior DevOps / SRE trying to move towards Mid-level, I'll tell you bluntly: Knowing how to use Docker, Kubernetes, and CI/CD is not enough.

At Junior level, you're expected to follow existing docs and keep things running. At Mid level, you're expected to understand why the system is built that way, choose the right tools, and reduce future operational pain.

If you can already deploy stuff and write basic pipelines but still feel stuck at Junior, spend the next 3–6 months building these Mid-level DevOps muscles.

Environment & Infra Foundations
↬ Linux basics (permissions, processes, networking) ↬ Shell scripting you're not scared to edit later ↬ Understanding VPCs, subnets, security groups ↬ Basic DNS, TLS, certificates ↬ SSH, bastion hosts, jump boxes ↬ Packaging apps: Docker images, image hygiene ↬ Container runtime basics (cgroups, namespaces) ↬ IaC fundamentals (Terraform/CloudFormation) ↬ Understanding cloud pricing basics ↬ Reading infra diagrams and drawing your own ↬ Knowing what is "state" and where it lives ↬ Secrets management basics (KMS, Vault, SSM)

CI/CD & Release Engineering
↬ Building reliable pipelines (GitHub Actions, GitLab CI, Argo, Jenkins) ↬ Build cache, artifacts, and image registries ↬ Branching strategies and release channels ↬ Rollbacks vs rollforwards ↬ Blue/green and canary basics ↬ Promotion across envs (dev/stage/prod) ↬ Test stages: unit, integration, e2e ↬ Handling flaky tests ↬ Environment parity and drift ↬ Deploying DB changes safely (migrations) ↬ Approval flows and change management ↬ Pipeline observability (logs, metrics)

Kubernetes & Orchestration
↬ Pods, Deployments, Services, Ingress ↬ Requests/limits and basic capacity planning ↬ ConfigMaps, Secrets, and env separation ↬ Liveness/readiness probes ↬ HPA basics (what metric, what threshold) ↬ Rolling updates and rollout strategies ↬ Debugging with logs, exec, port-forward ↬ Basic network policies ↬ When not to use Kubernetes ↬ Helm/Kustomize fundamentals ↬ Understanding cluster responsibility boundaries

Observability & Reliability
↬ Metrics, logs, traces: what each is for ↬ SLI/SLO basics, not just alerts ↬ Alert fatigue vs useful alerts ↬ Dashboards that answer "what broke?" quickly ↬ On-call handoff and escalation paths ↬ Incident communication in Slack/Zoom ↬ Writing and following runbooks ↬ Postmortem basics (timeline, impact, action items) ↬ Health checks and readiness checks ↬ Using tools: Prometheus/Grafana, Loki/ELK, Jaeger/Tempo ↬ Understanding dependency failure chains

Tooling & Automation Mindset
↬ Knowing your core stack well (AWS/Azure/GCP) ↬ Picking one main IaC tool and getting good at it ↬ Picking one main CI tool and mastering it ↬ Using CLIs efficiently (kubectl, aws, gcloud, az) ↬ Small scripts to remove repetitive work ↬ Writing reusable pipeline templates ↬ Standardising logging/metrics across services ↬ Adding safety rails: guardrails in pipelines, policies ↬ Keeping docs close to the code (READMEs, runbooks) ↬ Cleaning up unused infra and cruft

The Junior to Mid jump is not just "I can follow the docs and restart pods". It is: "I understand the system, the tools, and the tradeoffs well enough to make changes safely and reduce future ops pain for my team". That is the mindset shift.
Puneet Patwari@system_monarch

5 · 30 · 225 · 14.1K
Sagar@gayabprani·
@system_monarch The way you post, if this was YouTube, its bot would flag it as mass-produced 😁
1 · 0 · 0 · 507
Puneet Patwari@system_monarch·
As a Principal Backend Engineer with over 12 years of experience, I can tell you quite certainly: if you're still getting rejections in system design interviews after putting in good effort, your fundamentals are probably not strong enough. Dedicate 2-3 months to mastering these design fundamentals, then practice designing a few systems (and do plenty of mock interviews).

Scaling & Architecture
↬ CDN ↬ Caching ↬ Sharding ↬ Queueing ↬ Replication ↬ Partitioning ↬ API Gateway ↬ Rate Limiting ↬ CAP Theorem ↬ Microservices ↬ Load Balancing ↬ Fault Tolerance ↬ Database Scaling ↬ Service Discovery ↬ Consistency Models ↬ Eventual Consistency ↬ Distributed Transactions ↬ Monolith vs Microservices ↬ Leader Election

Databases & Storage
↬ Leader-Follower Replication ↬ WAL (Write Ahead Log) ↬ Asynchronous Processing ↬ Transaction Isolation ↬ Read/Write Patterns ↬ Consistent Hashing ↬ Redis/Memcached ↬ Backup & Restore ↬ Hot/Cold Storage ↬ Data Partitioning ↬ Object Storage ↬ SQL vs NoSQL ↬ Data Retention ↬ Data Modeling ↬ OLAP vs OLTP ↬ ACID & BASE ↬ Bloom Filters ↬ File Systems ↬ S3 Basics ↬ B+ Trees ↬ Indexing

Communication & APIs
↬ JWT ↬ CORS ↬ OAuth ↬ Throttling ↬ Serialization ↬ API Security ↬ Long Polling ↬ WebSockets ↬ API Gateway ↬ Idempotency ↬ Service Mesh ↬ Retry Patterns ↬ REST vs gRPC ↬ API Versioning ↬ Circuit Breaker ↬ API Rate Limits ↬ Fan-out/Fan-in ↬ Protocol Buffers ↬ Message Queues ↬ Dead Letter Queue

Reliability & Observability
↬ Metrics ↬ Alerting ↬ Failover ↬ Logging ↬ Rollbacks ↬ Monitoring ↬ Heartbeats ↬ Retry Logic ↬ Autoscaling ↬ SLO/SLI/SLA ↬ Load Testing ↬ Error Budgets ↬ Health Checks ↬ Circuit Breaker ↬ Incident Response ↬ Chaos Engineering ↬ Distributed Tracing ↬ Canary Deployments ↬ Graceful Degradation ↬ Blue-Green Deployment
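As a worked example of one item on this list, Rate Limiting, here is a standard token-bucket sketch (the class and parameter names are mine; the clock is passed in explicitly to keep it deterministic):

```python
class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity` requests,
    then refills at `rate_per_s` tokens per second; each request spends one
    token or is rejected."""
    def __init__(self, capacity: float, rate_per_s: float):
        self.capacity = capacity
        self.rate = rate_per_s
        self.tokens = capacity   # start full, so an initial burst is allowed
        self.last = 0.0          # timestamp of the previous call

    def allow(self, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

This is the shape behind most API rate limits: burst tolerance from the bucket size, steady-state throughput from the refill rate.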
25 · 304 · 2.2K · 306.7K
Puneet Patwari@system_monarch·
@hariommandloi4 Okay, and why is that? I have published a lot of posts, and not every other post has it. 😅
0 · 0 · 0 · 152
Jetha@hariommandloi4·
@system_monarch Puneet, you don’t need to start every other post by showing your designation.
1 · 0 · 0 · 990
yuvraj@yuvraj_io·
@system_monarch I want to study these now and I am not even an engineer
1 · 0 · 2 · 879
Kun Chen@kunchenguid·
since many people are getting tricked by Anthropic's sugar coating, let me further clarify what the change really is

programmatically invoking claude through "claude -p" (and equivalent) is how a lot of people have been using claude. this was an approved use case of the subscription quota since a year ago and people built various advanced tools and workflows with this

the change today makes it so that such usage can no longer draw from the original subscription quota. instead, they now get charged at API pricing which can be over 25x more expensive than subscription quota

if i keep using my claude subscription the same way, last month it cost me $200. next month it will cost me over $5500 - Anthropic will give me $200 credit back from the $5500 expense so I pay $5300

please don't get tricked into thinking this is a "new free credit pool". it's taking back something that's been part of the subscription value since the beginning
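Taking the post's arithmetic at face value (the dollar figures are the author's own estimates, not official pricing), the numbers are internally consistent:

```python
# Reproducing the cost math from the post - author's figures, not official pricing.
subscription_cost = 200        # $/month the same usage cost under the subscription quota
api_cost = 5500                # the same usage billed at API pricing, per the post
credit = 200                   # monthly credit granted back, per the post

multiplier = api_cost / subscription_cost   # how much pricier API billing is
out_of_pocket = api_cost - credit           # what the author would actually pay

print(multiplier, out_of_pocket)            # 27.5 5300
```

A 27.5x markup matches the post's "over 25x" claim, and $5500 minus the $200 credit gives the quoted $5300.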
22 · 12 · 113 · 13.4K
Kun Chen@kunchenguid·
it's official. Anthropic pulled the plug on ALL programmatic use of claude subscription

i've found myself increasingly bullish about OpenAI. a few key reasons -

1. Anthropic's only lead was on coding, and gpt 5.5 has flipped that already. coding with gpt 5.5 fast mode is an experience nothing else can match right now

2. OpenAI has voice, image, video - when building my AI tutor app, it became very clear OpenAI was the ONLY viable platform choice

3. it took an Anthropic to see the "Open"-ness of OpenAI. Anthropic is destroying its developer ecosystem with changes like this which tells their subscribers they can't use the best apps out there on Anthropic's models - they can only use Anthropic's apps. Codex on the other hand remained an open platform for developers to build on

not just trying to dunk on Anthropic - i genuinely hope they change their strategy and recognize that the world needs them to focus on making better, faster, and cheaper models, not locking everyone into their apps
ClaudeDevs@ClaudeDevs

Starting June 15, paid Claude plans can claim a dedicated monthly credit for programmatic usage. The credit covers usage of:
- Claude Agent SDK
- claude -p
- Claude Code GitHub Actions
- Third-party apps built on the Agent SDK

116 · 81 · 1.5K · 605.9K
David G@dagonzago·
@system_monarch Prerender the results, store them in a cache, and serve them from there. Then, depending on capacity, regenerate the results every x period, where x is a period of time your compute capacity can handle. Decoupling processing from serving.
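The decoupling described here can be sketched as a read path that only touches memory, plus a background loop that regenerates the value on its own schedule. Everything below is a toy illustration (names and structure are mine), not production code:

```python
import threading
import time

class PrecomputedView:
    """Serve a precomputed result from memory while a background loop
    regenerates it every `period_s` seconds - reads never hit the database."""
    def __init__(self, compute, period_s: float):
        self._compute = compute
        self._period = period_s
        self._value = compute()            # prerender once up front
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._value             # cheap, DB-free read path

    def _refresh_forever(self):
        while True:
            time.sleep(self._period)
            fresh = self._compute()        # only this loop touches the DB
            with self._lock:
                self._value = fresh

    def start(self):
        threading.Thread(target=self._refresh_forever, daemon=True).start()
```

With 3 million readers and a 30-second refresh, the database sees one regeneration per period instead of 3 million reads, which is exactly the processing/serving split the reply proposes.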
1 · 0 · 1 · 1.3K
Puneet Patwari@system_monarch·
System design round at Swiggy for Sr. SDE: You order during IPL finals. 3 million users open Swiggy at the same time. Every restaurant within 5km is getting hit simultaneously. Swiggy needs to show you real time delivery estimates, live stock availability, and surge pricing. All three change every 30 seconds. How do you serve accurate data to 3 million users at once without your database collapsing under 3 million reads every 30 seconds?
19 · 12 · 209 · 53.4K