Edgex

21.3K posts

@SahilExec

Backend engineer and lurker | DSA | CSE'28 | DM for collab

127.0.0.1 · Joined October 2025
589 Following · 2.8K Followers
Pinned Tweet
Edgex @SahilExec
A junior dev wrote this to check if a username already exists before signup: SELECT * FROM users WHERE username = 'rahul'. The app works fine. But at 100,000 users, signups start failing randomly. What's the flaw, and how do you fix it at the database level?
Edgex tweet media
84 · 22 · 722 · 440K
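One hedged answer sketch (mine, not from the thread, using Python's sqlite3 for illustration): the check-then-insert pattern has a race window between the SELECT and the INSERT, so two concurrent signups can both see "no such user"; and without an index, the SELECT on username scans the whole table. The database-level fix is a unique index, letting the INSERT itself fail atomically:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
# The unique index makes the DB reject duplicates atomically, and turns
# username lookups into an index seek instead of a full table scan.
conn.execute("CREATE UNIQUE INDEX idx_users_username ON users (username)")

def signup(conn, username: str) -> bool:
    """Insert directly and treat a constraint violation as 'name taken'."""
    try:
        conn.execute("INSERT INTO users (username) VALUES (?)", (username,))
        return True
    except sqlite3.IntegrityError:  # duplicate username, race-safe
        return False

assert signup(conn, "rahul") is True
assert signup(conn, "rahul") is False  # second signup loses the race cleanly
```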
Edgex @SahilExec
Interviewer: Redis is single-threaded, so how does it handle millions of requests per second?
11 · 1 · 40 · 14.3K
Anaya @Anaya_sharma876
Why have my engagements dropped... it was almost 70K early this morning and now it's only 18K... any reason, update, or glitch??
Anaya tweet media
20 · 0 · 23 · 497
om mishra @BuildWithOm
This morning, my analytics dashboard showed 35K engagements for the past month. By evening, it had dropped to 20K. I asked Grok about it, and it basically said smaller accounts can lose engagements due to algorithmic recalculations/removals. I honestly don’t even know what to say anymore. The funniest part? You can’t verify it, appeal it, or do anything about it. You just accept it and move on.
2 · 0 · 9 · 170
Edgex @SahilExec
GitHub Actions secrets and Vercel env vars are genuinely good enough for most small teams starting out. People jump straight to Vault and then spend 2 weeks setting it up for a 6-person team that ships a SaaS. The progression makes sense though: local .env → platform secrets → proper secret manager as you scale. Just don't stay on "GitHub Actions is fine" once you hit prod with real user data.
0 · 0 · 2 · 53
Dark Coder @dark_coderz
@SahilExec For a 5–10 person team, the setup that usually works is: local .env files for dev, CI/CD secrets stored in GitHub Actions / Vercel / Docker / cloud env vars, and production secrets managed through something like AWS Secrets Manager or HashiCorp Vault.
1 · 0 · 1 · 72
Edgex reposted
Edgex @SahilExec
How do you actually handle API keys and secrets in a team? Because what I see is:
- hardcoded in the codebase (we've all seen it)
- .env file committed to git by accident
- shared over WhatsApp (yes, really)
- AWS Secrets Manager or HashiCorp Vault
- something I'm missing
What's the setup that actually works for a 5–10 person engineering team?
17 · 1 · 19 · 1.8K
Edgex @SahilExec
the "inject at runtime, never at build" part is something most teams miss early on. secrets ending up baked into docker images is a silent problem: the image gets shared, pushed to a registry, and suddenly the secret is in 5 places nobody's tracking. the git-secrets pre-commit hook is a great guardrail too, catches it before it even becomes a problem. personal scoped dev creds per developer is the move; most teams skip this and everyone shares one dev key, which defeats the whole point. solid real world setup
0 · 0 · 1 · 32
Anirudh Sharma @anirudhology
I had the following setup in one of my previous orgs:
1/ Store secrets in a managed vault (AWS Secrets Manager). Strict no to .env files, code, or chat.
2/ They must be injected only at runtime, never at build: the deployment pipeline fetches secrets and injects them as env variables; no secrets in docker images or CI logs.
3/ Local dev uses personal, low-risk creds: each dev has their own API keys scoped to dev envs only, stored in .env.local and added to .gitignore. Use a git-secrets pre-commit hook to block accidental commits.
4/ Enable access logging on the vault; rotate prod secrets on a schedule.
This setup is simple, and that's why it's easy to understand and follow.
1 · 0 · 0 · 74
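The runtime-injection point above can be sketched in a few lines (a minimal illustration, assuming the deploy pipeline exports secrets as env vars; the helper name `get_secret` is mine, not from the thread):

```python
import os

def get_secret(name: str) -> str:
    """Read a secret that the deploy pipeline injected as an env var.

    Failing loudly on a missing secret beats silently falling back
    to a default that got baked into the image.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name} not injected; check the deploy pipeline")
    return value

# Simulate the pipeline exporting the secret before the app starts.
os.environ["DB_PASSWORD"] = "injected-at-deploy-time"
assert get_secret("DB_PASSWORD") == "injected-at-deploy-time"
```

Because the value only exists in the running process's environment, it never lands in the image layers or the git history.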
Edgex @SahilExec
@SourabhGurwani it's basically a rite of passage at this point. first time it happens, everyone panics, rotates the key, adds .env to gitignore, and swears it'll never happen again. then a new dev joins 6 months later 💀
0 · 0 · 0 · 47
Sourabh Gurwani @SourabhGurwani
@SahilExec Every engineering team eventually has a “who pushed the API key to GitHub?” incident 😭
1 · 0 · 0 · 91
Edgex @SahilExec
@jahirsheikh8 clean and simple. Doppler especially is slept on for small teams: way less setup than AWS Secrets Manager, and the DX is actually good. the "never in code or chat" rule sounds obvious until you're debugging at 2am and someone just pastes the key in Slack
0 · 0 · 0 · 60
Jahir Sheikh @jahirsheikh8
@SahilExec Use cloud secret manager (AWS/GCP/Doppler). Gitignore .env for local. Inject secrets via CI/CD only. Never in code or chat.
1 · 0 · 0 · 110
Edgex @SahilExec
the "single source of truth" point is underrated. most team chaos comes from secrets living in 4 different places where nobody knows which one is current. least privilege is where teams get lazy though: everyone ends up with prod access "just in case", and that's how things go wrong. secret scanning as a guardrail should honestly be step 1 before anything else, catch it before it even hits the repo. solid breakdown
0 · 0 · 0 · 51
Sid @SidJain_80
For a 5–10 person team, you don't need enterprise complexity.
1. Single source of truth (managed secrets): pick one, like AWS Secrets Manager, and stick to it. Store all production secrets there.
2. Local dev .env: each dev has their own, and they can pull values once granted proper access.
3. Access control (least privilege): don't give everyone everything. Backend gets DB creds, frontend gets public keys only, CI/CD gets deploy-specific secrets.
4. CI/CD secrets injection: the pipeline pulls from the secret manager.
5. Rotation and audit.
6. Guardrails (secret scanning, alerts on leaked keys).
1 · 0 · 1 · 94
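The guardrail in point 6 can be as simple as a regex pass over a diff before commit. A minimal sketch (the patterns below are illustrative only; real scanners like git-secrets or gitleaks ship far more thorough, curated rule sets):

```python
import re

# Illustrative patterns only: AWS access key IDs start with "AKIA",
# and a generic api_key = "..." assignment shape.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]\w{16,}"),
]

def scan(text: str) -> list:
    """Return all suspicious matches found in a diff or file blob."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

assert scan("aws_key = AKIAABCDEFGHIJKLMNOP") != []
assert scan("print('hello world')") == []
```

Wired into a pre-commit hook, a non-empty result blocks the commit before the key ever reaches the repo.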
Edgex @SahilExec
this is the right setup honestly. the gitignored .env + CI/CD-injected secrets combo is where most 5–10 person teams should land first. audit logs and rotation get skipped until something breaks, and that's usually when everyone panics. the "never share in chats" part is doing heavy lifting though, because someone always does it anyway
0 · 0 · 1 · 67
Akshay Shinde @ConsciousRide
For small engineering teams, a practical setup would be this:
- .env locally but gitignored
- separate secrets per environment
- restricted production access
- secret rotation process
- CI/CD injected secrets
- cloud secret managers like AWS Secrets Manager or Vault
- audit logs/access controls
- never sharing secrets in chats or screenshots
The important part is reducing accidental exposure while keeping the developer workflow manageable.
1 · 0 · 0 · 105
Edgex @SahilExec
You add the window by storing a timestamp alongside the count. Simple fixed window:
- Key: rate:user:123
- Value: { count: 5, windowStart: <timestamp> }
- On each request, if now - windowStart > 60s, reset the counter
- If count > limit within the window, return 429
For production you'd use Redis with TTL so the key auto-expires after the window. Sliding window is more accurate, but this covers 90% of cases.
2 · 0 · 14 · 9.5K
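The fixed-window logic described above, sketched in-memory in Python (my illustration, not code from the thread; in production a Redis INCR with an EXPIRE on the key replaces the dict):

```python
import time

WINDOW_SECONDS = 60
LIMIT = 100
buckets = {}  # key -> {"count": int, "window_start": float}; Redis in prod

def check_rate_limit(user_id: str, now: float = None) -> int:
    """Return the HTTP status for this request: 200 allowed, 429 rejected."""
    now = time.time() if now is None else now
    key = f"rate:user:{user_id}"
    bucket = buckets.get(key)
    # New window: first request ever, or the previous window has expired.
    if bucket is None or now - bucket["window_start"] > WINDOW_SECONDS:
        buckets[key] = {"count": 1, "window_start": now}
        return 200
    bucket["count"] += 1
    return 429 if bucket["count"] > LIMIT else 200

assert check_rate_limit("123", now=0.0) == 200    # first request opens the window
for _ in range(99):
    check_rate_limit("123", now=1.0)              # requests 2..100 stay under LIMIT
assert check_rate_limit("123", now=2.0) == 429    # request 101 inside the window
assert check_rate_limit("123", now=120.0) == 200  # window expired, counter resets
```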
Amin Tai @aminnnn_09
@SahilExec 1. there's no time window, so the counter just climbs forever. 2. 200 OK for a rate limit is confusing; 429 is the correct signal. how would you add the window?
2 · 0 · 22 · 12.9K
Edgex reposted
Edgex @SahilExec
A junior dev built a rate limiter to block users making too many API requests: if (requestCount > 100) { return 200, { "error": "rate limit exceeded" } } It's deployed. It works. What are the two flaws in this implementation and how would you fix both?
Edgex tweet media
86 · 24 · 709 · 265.4K
Edgex @SahilExec
@knowRowan The most dangerous kind of bug: it works in dev, passes QA, ships to prod. No errors in logs. Monitoring looks clean. Meanwhile every client thinks their requests are succeeding. 😭
0 · 1 · 2 · 16.6K
Rowan @knowRowan
@SahilExec Lol, that’s a classic “deployed but broken” moment.
1 · 0 · 0 · 19.3K
Edgex @SahilExec
@AnupamHaldkar Yes, 4XX is the right family. Specifically 429 Too Many Requests. It tells the client exactly what happened, and good SDKs will auto-handle retry logic on 429. 200 with an error body? The client has no idea it was rejected.
0 · 0 · 2 · 2.7K
Edgex @SahilExec
@AnupamHaldkar That's exactly the gap. You identify the user via API key in headers, user ID from auth token, or IP address as fallback. Each user gets their own counter key in Redis like rate:user:123 and you track hits against that. No identity = no real rate limiting.
1 · 0 · 5 · 8K
Edgex @SahilExec
@CaptainInsightX Short and correct. 429 + sliding window is the minimum viable rate limiter. Anything less is just a counter with a deadline.
0 · 0 · 1 · 6.8K
Edgex @SahilExec
Exactly this. 200 OK on a rejected request is a lie your server tells the client, and clients trust that lie completely. The retry storm is the part nobody thinks about until it happens in prod. You rate limit to reduce load; the wrong status code + no Retry-After turns it into a DDoS you built yourself.
0 · 0 · 0 · 115
akhilesh kumar ojha @kumarakh
@SahilExec A rate limiter returning 200 OK is already broken. If the request was rejected, the response should be 429 Too Many Requests. And without retry headers like Retry-After, clients may retry aggressively and amplify the overload even further.
1 · 0 · 0 · 8.6K
Edgex @SahilExec
Nailed both. 💯 Most people catch the status code but miss the counter logic entirely. The 200 with error body is what kills me — clients will cache it, retry it, log it as success. Silent failure at scale. And yeah, requestCount with no window or identity is basically just... a variable. Means nothing in production. Retry-After header is the underrated one barely anyone adds it but it's what separates a real rate limiter from a toy.
7 · 0 · 47 · 36.2K
KrunalSinh Sisodia @krunalbuilds
Two major flaws:
1. Wrong HTTP status code. Rate limiting should return 429 Too Many Requests, not 200 OK. Otherwise clients think the request succeeded.
2. Counter logic is incomplete. requestCount alone is meaningless without:
• time window
• user/IP identification
• distributed/shared storage (Redis etc.)
Correct approach:
• sliding window / token bucket
• centralized counter store
• return retry headers (Retry-After)
KrunalSinh Sisodia tweet media
5 · 3 · 257 · 39.7K
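The "correct approach" above (token bucket, 429, Retry-After) can be sketched as follows; names and structure are mine, not from the thread, and a real deployment would keep the bucket state in a shared store like Redis rather than in-process:

```python
class TokenBucket:
    """Token-bucket rate limiter: `capacity` burst, refilled at `rate` tokens/sec.

    Allow when a token is available; otherwise reject with 429 plus a
    Retry-After hint so clients back off instead of retry-storming.
    """

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = 0.0  # callers pass monotonically increasing timestamps

    def allow(self, now: float):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 200, {}
        # Tell the client exactly how long to wait before retrying.
        retry_after = (1 - self.tokens) / self.rate
        return 429, {"Retry-After": str(round(retry_after, 1))}

bucket = TokenBucket(capacity=2, rate=1.0)  # burst of 2, 1 request/sec sustained
assert bucket.allow(now=0.0)[0] == 200
assert bucket.allow(now=0.0)[0] == 200
status, headers = bucket.allow(now=0.0)
assert status == 429 and headers["Retry-After"] == "1.0"
assert bucket.allow(now=5.0)[0] == 200  # bucket refilled while waiting
```

Unlike a fixed window, the bucket smooths bursts at the window edges, and the Retry-After header gives well-behaved clients a precise back-off.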