Shikha Pandey

516 posts

@ShikhaPy36

SWE-II @AmericanExpress | 5.5+ YOE building scalable products | Ex-EY | Building Hirelcube (https://t.co/6JcFfRE1Ir) | EaselToScreen (https://t.co/FUS3rYjuW0)

Gurgaon · Joined August 2021
18 Following · 103 Followers
Shikha Pandey @ShikhaPy36
@__karnati Timeouts + circuit breakers first. Stop waiting on D once it’s slow. Then bulkheads to isolate thread pools, and retries with backoff (not hammering D). Optional: caching or fallback so A/B/C can degrade gracefully instead of failing.
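The timeout + circuit breaker idea from the reply can be sketched in a few lines. This is a minimal, illustrative implementation; the class name, thresholds, and `fallback` parameter are my own choices, not from the thread.

```python
import time

class CircuitBreaker:
    """Open after `max_failures` consecutive failures; probe again after `reset_after` seconds."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, fallback=None):
        # While open, fail fast instead of queuing threads behind the slow dependency.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback
            self.opened_at = None  # half-open: let one trial call through
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback
        self.failures = 0  # any success resets the count
        return result
```

In the A→B→C→D scenario, C would wrap its calls to D in something like this, so once D trips the breaker, C answers from the fallback immediately and its thread pool never fills up.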
Replies 0 · Reposts 1 · Likes 3 · Views 340
Sri @__karnati
🚨 Design scenario: You have a microservices architecture. Service A calls B, B calls C, C calls D. Service D is slow; latency jumped from 40ms to 4 seconds. Services A, B, and C are now all hanging on D. Thread pools fill up. A, B, and C start failing too. It's a full cascading failure. What design patterns would have prevented this? Bonus if you explain how those help. Drop your answer👇
Replies 4 · Reposts 8 · Likes 46 · Views 4.2K
Shikha Pandey @ShikhaPy36
@xoaanya You don’t really invalidate JWTs. You either keep them short-lived or use a blacklist / refresh token strategy.
Replies 0 · Reposts 0 · Likes 1 · Views 47
Aanya @xoaanya
Interviewer: If JWT tokens are stateless, how does a server invalidate them after logout?
Replies 32 · Reposts 9 · Likes 115 · Views 16.5K
Surendar @Surendar__05
Interviewer: You committed .env to GitHub. Why did your API keys stop working 5 minutes later?
Replies 16 · Reposts 4 · Likes 58 · Views 8.5K
Harvey Specter @Sparvey_Hecter
@ShikhaPy36 @brankopetric00 You should always run your build locally to ensure all tests pass. Make sure all profiles that are active on the CI/CD are enabled locally too. It's not the pipeline, it's the devs.
Replies 1 · Reposts 0 · Likes 1 · Views 134
Branko @brankopetric00
CI/CD pipeline took 45 minutes. Developer pushed a fix. Found a typo while waiting. Pushed another fix. First build cancelled. Second build had a flaky test. Pushed again. Three hours later, one line of code reached production.
Replies 27 · Reposts 6 · Likes 643 · Views 39.4K
Shikha Pandey @ShikhaPy36
@thesayannayak Because cache is fast but not reliable. Limited memory, no durability, eviction policies. Databases are built for correctness. Cache is built for speed.
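The "limited memory, eviction policies" point can be made concrete with a cache-aside sketch. A tiny LRU cache sits in front of a durable store; the capacity, class name, and dict-as-database are illustrative assumptions only.

```python
from collections import OrderedDict

class CacheAside:
    """Tiny LRU cache in front of a 'database' dict: fast, but bounded and evicting."""

    def __init__(self, db, capacity=2):
        self.db = db                  # durable source of truth
        self.cache = OrderedDict()    # volatile, limited memory
        self.capacity = capacity

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # LRU bookkeeping on a hit
            return self.cache[key]
        value = self.db[key]                # miss: fall back to the database
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return value
```

The key property: a value can silently vanish from the cache (eviction, restart), and correctness still holds only because the database keeps the authoritative copy.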
Replies 1 · Reposts 0 · Likes 1 · Views 84
Sayan @thesayannayak
Interviewer: If cache is faster than a database, why isn't everything stored in cache?
Replies 71 · Reposts 9 · Likes 115 · Views 17.6K
Shikha Pandey @ShikhaPy36
@_jaydeepkarale Because "who" and "how" you rate limit both matter. Shared IPs, NATs, or mobile networks can make multiple legit users look like one, and all of them get throttled together.
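A token-bucket limiter keyed per caller shows why the key choice matters: keyed by user ID, each user gets their own bucket; keyed by IP, everyone behind one NAT shares a bucket and gets throttled together. The class names and rates below are illustrative assumptions.

```python
import time

class TokenBucket:
    """Classic token bucket: refill at `rate` tokens/sec, hold at most `burst`."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class RateLimiter:
    """One bucket per key; keying by user ID avoids punishing everyone behind one IP."""

    def __init__(self, rate=5, burst=5):
        self.rate, self.burst = rate, burst
        self.buckets = {}

    def allow(self, key):
        bucket = self.buckets.setdefault(key, TokenBucket(self.rate, self.burst))
        return bucket.allow()
```

Calling `limiter.allow(user_id)` instead of `limiter.allow(client_ip)` is the one-line difference between throttling an abuser and throttling an entire mobile carrier's NAT pool.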
Replies 0 · Reposts 0 · Likes 1 · Views 17
Jaydeep @_jaydeepkarale
Interviewer: If rate limiting protects services from overload, why can poorly designed rate limits hurt legitimate users?
Replies 4 · Reposts 1 · Likes 21 · Views 2.4K
Shikha Pandey @ShikhaPy36
@0xlelouch_ MongoDB shines with:
• Flexible schema
• High write throughput
• Easy sharding

Postgres shines with:
• ACID guarantees
• Joins & relations
• Data integrity

Different trade-offs, different use cases.
Replies 0 · Reposts 0 · Likes 1 · Views 297
Abhishek Singh @0xlelouch_
If NoSQL scales horizontally so well, why not use MongoDB for everything instead of Postgres?
Replies 19 · Reposts 1 · Likes 99 · Views 26.5K
Shikha Pandey @ShikhaPy36
@vivoplt By not letting the server handle it directly.
• Client uploads in chunks
• Direct-to-object storage (pre-signed URLs)
• Stateless backend
• Async processing pipeline
Backend = coordinator, not bottleneck.
Replies 0 · Reposts 0 · Likes 1 · Views 70
Vivo @vivoplt
Interviewer: How does the backend handle a 5GB video upload without crashing the server?
Replies 48 · Reposts 6 · Likes 151 · Views 29.8K
SumitM @SumitM_X
Your branch is 200 commits behind main. What will you do: merge or rebase?
Replies 84 · Reposts 11 · Likes 459 · Views 162K
Shikha Pandey @ShikhaPy36
@AtharvaXDevs JWTs are stateless, so logout is a client + strategy problem, not a server-side one.
• Delete token on client
• Keep access tokens short-lived
• Revoke refresh tokens if needed
Anything else adds state back.
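The "revoke refresh tokens" bullet is the one piece that needs server-side state. A minimal sketch of a revocation store, assuming tokens carry a `jti` (token ID) and an expiry; entries can be dropped once the token would have expired anyway, so the state stays small.

```python
import time

class RefreshTokenStore:
    """Server-side revocation for refresh tokens; access tokens just expire quickly."""

    def __init__(self):
        self.revoked = {}  # jti -> expiry timestamp

    def revoke(self, jti, expires_at):
        # Called on logout: remember this token ID until its natural expiry.
        self.revoked[jti] = expires_at

    def is_revoked(self, jti, now=None):
        now = time.time() if now is None else now
        # Garbage-collect entries for tokens that have expired on their own.
        self.revoked = {j: exp for j, exp in self.revoked.items() if exp > now}
        return jti in self.revoked
```

Short-lived access tokens bound the damage window; the store only has to remember revocations for the refresh-token lifetime, which is why this scales where a full session table would not.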
Replies 0 · Reposts 0 · Likes 1 · Views 84
Atharva @AtharvaXDevs
Interviewer: If JWTs are stateless, how do you “log out” a user?
Replies 19 · Reposts 11 · Likes 142 · Views 23.5K
Shikha Pandey @ShikhaPy36
@SumitM_X It’s not scanning millions of users. Highly optimized indexes + sharded storage + in-memory caches. Basically a constant-time lookup with a globally distributed system behind it.
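Google's real implementation isn't public, but one common pattern for fast "is this name taken?" checks is a Bloom filter in front of the real index: a definite "no" answers instantly, and only "maybe" goes to storage. This from-scratch sketch uses illustrative sizes; production systems tune bit-array size to the expected item count.

```python
import hashlib

class BloomFilter:
    """Probabilistic set: 'definitely absent' or 'maybe present' (maybe-hits go to the real index)."""

    def __init__(self, size=1 << 16, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        # Derive `hashes` independent bit positions from one cryptographic hash.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))
```

Lookups touch a few bits of memory and never the database for names that are definitely free, which is most of them; false positives just cost one extra index lookup.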
Replies 0 · Reposts 0 · Likes 4 · Views 2.1K
SumitM @SumitM_X
As a developer, have you ever wondered: you type a Gmail username and the UI instantly shows "Username already taken"... There are millions of users globally. How is this check so fast?
SumitM tweet media
Replies 219 · Reposts 85 · Likes 4.6K · Views 2.6M
Shikha Pandey @ShikhaPy36
@_jaydeepkarale They’re logically similar but semantically different. DISTINCT is for deduplication, while GROUP BY is for aggregation. Optimizers often generate similar plans, but once you add aggregates (COUNT, SUM), GROUP BY becomes necessary.
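The "GROUP BY becomes necessary once you add aggregates" point is easy to demonstrate with an in-memory SQLite database; the table and data here are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, country TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("a", "IN"), ("b", "IN"), ("c", "US")])

# Pure deduplication: both forms return each country exactly once.
distinct = conn.execute(
    "SELECT DISTINCT country FROM users ORDER BY country").fetchall()
grouped = conn.execute(
    "SELECT country FROM users GROUP BY country ORDER BY country").fetchall()

# Aggregation is where GROUP BY stops being interchangeable with DISTINCT.
counts = conn.execute(
    "SELECT country, COUNT(*) FROM users GROUP BY country ORDER BY country").fetchall()
```

`distinct` and `grouped` come back identical, but there is no DISTINCT equivalent of the `COUNT(*)` query: grouping defines the buckets the aggregate runs over.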
Replies 0 · Reposts 0 · Likes 10 · Views 1.4K
Jaydeep @_jaydeepkarale
Two engineers wrote these queries:

SELECT DISTINCT country FROM users;

vs

SELECT country FROM users GROUP BY country;

Both return the same result. So why do both exist?
Replies 11 · Reposts 1 · Likes 59 · Views 13.8K
Shikha Pandey @ShikhaPy36
Loop unrolling is one of those optimizations that looks trivial but highlights how much modern CPU performance depends on pipeline efficiency and branch behavior. Fewer loop-control instructions + better ILP often give surprisingly measurable gains. Of course, the instruction cache trade-off is the part people usually forget.
Replies 0 · Reposts 0 · Likes 2 · Views 249
Arpit Bhayani @arpit_bhayani
Loop unrolling is one of those compiler tricks that feels like it should not work, but it does :) It is also super easy to implement.

The idea is simple: instead of iterating one element at a time, the compiler (or you, manually) writes the loop to process multiple elements per iteration. A loop that runs 1000 times becomes one that runs 250 times but does 4x the work per pass.

1. sum += arr[i]
2. sum += (arr[i] + arr[i+1] + arr[i+2] + arr[i+3])

Why does this help? Modern CPUs are deeply pipelined. The branch check at the end of every iteration - "are we done yet?" - is small, but it adds up. Fewer branches mean fewer pipeline stalls and more room for instruction-level parallelism.

Another improvement is memory prefetching. When you process four elements per iteration, the CPU has a cleaner, more predictable access pattern. Prefetchers love this and will load upcoming cache lines well before you need them.

The trade-off is code size and readability. Unrolling a loop 4x means 4x the instructions in the binary. This might blow the instruction cache, which can actually make things slower. Most compilers use heuristics to find the sweet spot.

If you want to, you can write this by hand. But in most cases, you should let the compiler do it for you. GCC and Clang will often do it automatically with `-O2` or `-O3`, and they may even leverage SIMD instructions to squeeze out more throughput.

By the way, you can code every single thing I mentioned in under 15 minutes and see it for yourself. Anyway, it's Friday, so give it a shot. It will be fun!
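The unrolling transformation itself looks like this. Note this Python version only demonstrates the shape of the rewrite (four elements per pass plus a remainder loop); the pipeline and prefetching wins the tweet describes come from compiled code, where GCC/Clang apply this at `-O2`/`-O3`.

```python
def sum_simple(arr):
    """One element per iteration: one loop-control branch per element."""
    total = 0
    for x in arr:
        total += x
    return total

def sum_unrolled4(arr):
    """Four elements per iteration; a scalar tail handles the len % 4 leftovers."""
    total = 0
    i = 0
    n = len(arr) - len(arr) % 4  # largest multiple of 4 we can unroll over
    while i < n:
        total += arr[i] + arr[i + 1] + arr[i + 2] + arr[i + 3]
        i += 4
    for x in arr[n:]:  # remainder loop for the last 0-3 elements
        total += x
    return total
```

The correctness requirement is that both versions compute identical results for every length, including the awkward remainders, which is exactly what compilers must also guarantee.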
Replies 11 · Reposts 4 · Likes 170 · Views 12.4K
Shikha Pandey @ShikhaPy36
@arpit_bhayani Love this one. Robin Hood hashing is such a neat idea. That tiny swap rule reducing variance in probe length is what makes it powerful in practice, especially for dense tables. Simple change. Big impact.
Replies 0 · Reposts 0 · Likes 0 · Views 126
Arpit Bhayani @arpit_bhayani
There is a very interesting hashing approach called Robin Hood Hashing. It is one of those ideas that is both simple and elegant. Hear me out...

In standard open-addressing hash tables, when a collision happens, you move forward to find an open slot. The problem is that some keys end up very close to their ideal slot, while others get placed far away. This creates long probe sequences for unlucky keys, and this degrades lookup performance.

Robin Hood Hashing fixes this with a very small change in the insertion strategy. When inserting a new key, if it is farther from its ideal slot than the key it is displacing, it takes the spot, and the displaced key continues probing. So, essentially, you are stealing from the 'rich' (keys sitting comfortably near home) and giving to the 'poor' (keys that have drifted far). Hence the name, Robin Hood.

The result is that the variance in probe length across all keys stays very low. No key gets left too far behind.

This algorithm can be found in Rust's standard HashMap implementation, in several high-performance database indexes, and in memory systems where cache efficiency matters. It's also popular in hash tables where you need consistent lookup time, not just a good average case.

It doesn't change the worst case on paper, but in practice, it brings the distribution of keys closer to the ideal slot, and lookups are noticeably faster for dense tables.

It is pretty interesting that one small rule change at insert time has such a high impact on the overall performance of the hash table.
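The "tiny swap rule" is small enough to sketch directly. This is a from-scratch illustration, not any library's code: the table is a fixed-size list of `(key, distance)` slots, it assumes the table is never full, and the pluggable `hash_fn` exists only so a test can force collisions.

```python
def robin_hood_insert(table, key, hash_fn=hash):
    """Open addressing where the probing key steals the slot of any 'richer' resident."""
    size = len(table)
    idx = hash_fn(key) % size
    dist = 0  # how far `key` has drifted from its ideal slot so far
    while True:
        slot = table[idx]
        if slot is None:
            table[idx] = (key, dist)
            return
        resident_key, resident_dist = slot
        if resident_dist < dist:
            # Resident is closer to home than we are: steal its slot ("rob the rich")
            # and continue probing on behalf of the displaced key.
            table[idx] = (key, dist)
            key, dist = resident_key, resident_dist
        idx = (idx + 1) % size  # linear probing
        dist += 1
```

Every swap trades a short probe for a longer one held by someone who could afford it, which is exactly how the variance in probe length gets squeezed down.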
Replies 10 · Reposts 14 · Likes 301 · Views 19.1K
Shikha Pandey @ShikhaPy36
@e_opore Single gateway = single point of failure. Add multi-provider failover, idempotent retries, and circuit breakers. And don't show "payment failed" instantly; gracefully degrade the UX. Resilience is tech + user experience.
Replies 0 · Reposts 0 · Likes 1 · Views 8
Dhanian 🗯️ @e_opore
DATABASE REPLICATION IN SYSTEM DESIGN
→ Database Replication is the process of copying and maintaining database data across multiple servers.
→ It ensures high availability, fault tolerance, and scalability in distributed systems.

WHY DATABASE REPLICATION MATTERS
→ Improves system availability
→ Enables read scalability
→ Provides disaster recovery
→ Reduces single point of failure
→ Enhances performance for global users

HOW DATABASE REPLICATION WORKS
→ Client → Write Request → Primary (Master) Database
→ Primary → Records change in log (WAL / Binlog)
→ Primary → Sends updates to Replica(s)
→ Replica → Applies changes → Stays synchronized
→ Read Requests → Routed to Replicas

TYPES OF DATABASE REPLICATION

1. MASTER–SLAVE (PRIMARY–REPLICA)
→ One Primary handles writes
→ Multiple Replicas handle reads
→ Simple to implement
→ Common in web applications

2. MASTER–MASTER (ACTIVE–ACTIVE)
→ Multiple nodes handle both reads and writes
→ Data synchronized between nodes
→ Higher complexity
→ Requires conflict resolution strategies

3. SYNCHRONOUS REPLICATION
→ Write confirmed only after replicas acknowledge
→ Strong consistency
→ Higher latency

4. ASYNCHRONOUS REPLICATION
→ Primary commits write immediately
→ Replicas update later
→ Lower latency
→ Risk of temporary inconsistency

REPLICATION STRATEGIES IN SYSTEM DESIGN
→ Read Scaling → Route read-heavy traffic to replicas
→ Geo-Replication → Place replicas in multiple regions
→ Failover Mechanism → Promote replica if primary fails
→ Load Balancer → Distribute read queries efficiently

DATABASE REPLICATION & HIGH AVAILABILITY
→ Primary failure → Automatic failover to replica
→ Health checks → Monitor replication lag
→ Backup nodes → Prevent downtime
→ Reduces service disruption

REPLICATION CHALLENGES
→ Replication Lag → Delay between primary and replicas
→ Data Conflicts (in multi-master setups)
→ Network Partition Issues
→ Increased operational complexity

REPLICATION VS SHARDING
→ Replication → Copies same data across nodes (availability + read scaling)
→ Sharding → Splits data across nodes (write + storage scaling)
→ Large-scale systems → Use both together

BEST PRACTICES
→ Monitor replication lag continuously
→ Automate failover
→ Use read/write separation carefully
→ Combine replication with backups
→ Test disaster recovery plans

Quick Tip
→ Database replication improves availability and performance
→ Enables read scaling and disaster recovery
→ Requires careful consistency management
→ Essential for scalable and resilient system design

📘 Grab the System Design Handbook: codewithdhanian.gumroad.com/l/ntmcf
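The primary/replica split with asynchronous replication can be sketched as a toy in-memory model. Everything here (class name, dicts as databases, an explicit `replicate()` step standing in for log shipping) is illustrative only; it exists to show read/write routing and replication lag, not real database mechanics.

```python
import itertools

class ReplicatedStore:
    """Writes go to the primary; reads round-robin over replicas (which may lag)."""

    def __init__(self, n_replicas=2):
        self.primary = {}
        self.replicas = [dict() for _ in range(n_replicas)]
        self._rr = itertools.cycle(range(n_replicas))

    def write(self, key, value):
        self.primary[key] = value  # commit on the primary immediately (async model)

    def replicate(self):
        # Stand-in for the WAL/binlog apply step that runs some time after the write.
        for replica in self.replicas:
            replica.update(self.primary)

    def read(self, key):
        # Read scaling: each read hits the next replica, never the primary.
        return self.replicas[next(self._rr)].get(key)
```

The gap between `write()` and `replicate()` is exactly the "risk of temporary inconsistency" in the asynchronous section above: a read issued in that window sees stale (here, missing) data.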
Dhanian 🗯️ tweet media
Replies 15 · Reposts 89 · Likes 404 · Views 9.9K
Shikha Pandey @ShikhaPy36
@0xlelouch_ Single gateway = single point of failure. Add multi-provider failover, idempotent retries, and circuit breakers. And don't show "payment failed" instantly; gracefully degrade the UX. Resilience is tech + user experience.
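The multi-provider failover + idempotent retry combination can be sketched like this. The function, the gateway-as-callable interface, and `request_id` as an idempotency key are illustrative assumptions; real payment providers expose idempotency keys through their own APIs.

```python
import time

def charge_with_retry(gateways, request_id, amount, attempts=3, base_delay=0.0):
    """Try each provider in turn, reusing one idempotency key so retries can't double-charge."""
    last_error = None
    for attempt in range(attempts):
        for gateway in gateways:
            try:
                # The same `request_id` goes to every provider and every retry,
                # letting the receiving side deduplicate repeated charges.
                return gateway(request_id, amount)
            except Exception as exc:
                last_error = exc  # this provider failed; fail over to the next
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff between rounds
    raise last_error
```

Retrying without the stable key is what turns "payment service flaked for a minute" into "we charged the customer twice", which is the quieter half of the resilience problem.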
Replies 0 · Reposts 0 · Likes 0 · Views 141
Abhishek Singh @0xlelouch_
Your payment service goes down for a few minutes during a Christmas sale, and you lose $200K in sales. Users saw "payment failed" and abandoned their carts. How would you make this more resilient? [Real incident at Shopify]
Replies 9 · Reposts 5 · Likes 82 · Views 30.7K