Edgex · 21.3K posts

Edgex
@SahilExec
Backend engineer and lurker | DSA | CSE'28 | DM for collab
127.0.0.1 · Joined October 2025
589 Following · 2.8K Followers

Pinned Tweet

This morning, my analytics dashboard showed 35K engagements for the past month.
By evening, it had dropped to 20K, with no explanation.
I asked Grok about it, and it basically said smaller accounts can lose engagements due to algorithmic recalculations/removals.
I honestly don’t even know what to say anymore.
The funniest part?
You can’t verify it, appeal it, or do anything about it.
You just accept it and move on.

GitHub Actions secrets and Vercel env vars are genuinely good enough for most small teams starting out
people jump straight to Vault and then spend 2 weeks setting it up for a 6-person team shipping a SaaS
the progression makes sense though: local .env → platform secrets → proper secret manager as you scale
just don't stay on "GitHub Actions is fine" once you hit prod with real user data
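
A minimal sketch of the first rung, assuming a Node/TypeScript app with the dotenv package; the key name is invented:

```typescript
// Local dev: load variables from a gitignored .env file.
// On Vercel / GitHub Actions the platform injects env vars directly,
// so this import is effectively a no-op there.
import "dotenv/config";

// STRIPE_API_KEY is a placeholder name, not from the thread.
const apiKey = process.env.STRIPE_API_KEY;
if (!apiKey) {
  throw new Error("STRIPE_API_KEY not set: check .env locally or platform env vars in CI/prod");
}
```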

@SahilExec For a 5–10 person team, the setup that usually works is:
local .env files for dev,
platform secrets in GitHub Actions / Vercel / Docker / cloud env vars, and
production secrets managed through something like AWS Secrets Manager or HashiCorp Vault.
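
If you land on Secrets Manager, the runtime fetch is short. A sketch using the official @aws-sdk/client-secrets-manager package; the region and secret name are placeholders:

```typescript
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({ region: "us-east-1" });

// "prod/myapp/db" is an invented secret name for illustration.
async function getDbSecret(): Promise<string> {
  const res = await client.send(
    new GetSecretValueCommand({ SecretId: "prod/myapp/db" })
  );
  if (!res.SecretString) throw new Error("secret has no string value");
  return res.SecretString;
}
```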
Edgex retweeted

How do you actually handle
API keys and secrets in a team
because what I see is:
- hardcoded in the codebase (we've all seen it)
- .env file committed to git by accident
- shared over WhatsApp (yes, really)
- AWS Secrets Manager or HashiCorp Vault
- something I'm missing
what's the setup that actually works
for a 5–10 person engineering team

the "inject at runtime, never at build" part is something most teams miss early on
secrets ending up baked into docker images is a silent problem image gets shared, pushed to a registry, and suddenly the secret is in 5 places nobody's tracking
the git-secrets pre-commit hook is a great guardrail too, catches it before it even becomes a problem
personal scoped dev creds per developer is the move most teams skip this and everyone shares one dev key which defeats the whole point
solid real world setup

I had the following setup in one of my previous orgs:
1/ store secrets in a managed vault (AWS Secrets Manager). Strict no to .env files, code, or chat.
2/ inject them only at runtime, never at build: the deployment pipeline fetches secrets and injects them as env variables, so no secrets end up in docker images or CI logs (sketch below).
3/ local dev uses personal, low-risk creds: each dev has their own API keys scoped to dev envs only, stored in .env.local and gitignored. a git-secrets pre-commit hook blocks accidental commits.
4/ enable access logging on the vault and rotate prod secrets on a schedule.
The setup is simple, and that's exactly why it's easy to understand and follow.
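
A minimal sketch of step 2, assuming the pipeline runs a small Node wrapper that pulls from AWS Secrets Manager and hands values to the app as env vars; the secret name and entrypoint are invented:

```typescript
import { spawn } from "node:child_process";
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

async function main() {
  const client = new SecretsManagerClient({});
  // "prod/myapp/env" is a placeholder; assume it stores a JSON object
  // of KEY -> value pairs.
  const res = await client.send(
    new GetSecretValueCommand({ SecretId: "prod/myapp/env" })
  );
  const secrets: Record<string, string> = JSON.parse(res.SecretString ?? "{}");

  // Inject into the child process env only: nothing written to disk,
  // baked into an image, or echoed into CI logs.
  const child = spawn("node", ["dist/server.js"], {
    env: { ...process.env, ...secrets },
    stdio: "inherit",
  });
  child.on("exit", (code) => process.exit(code ?? 1));
}

main();
```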

@SourabhGurwani it's basically a rite of passage at this point
first time it happens everyone panics, rotates the key, adds .env to gitignore, and swears it'll never happen again
then a new dev joins 6 months later 💀

@SahilExec Every engineering team eventually has a “who pushed the API key to GitHub?” incident 😭

@jahirsheikh8 clean and simple
Doppler especially is slept on for small teams: way less setup than AWS Secrets Manager, and the DX is actually good
the "never in code or chat" rule sounds obvious until you're debugging at 2am and someone just pastes the key in Slack

@SahilExec Use a cloud secret manager (AWS/GCP/Doppler).
Gitignore .env for local.
Inject secrets via CI/CD only.
Never in code or chat.

the "single source of truth" point is underrated
most team chaos comes from secrets living in 4 different places with nobody knowing which one is current
least privilege is where teams get lazy though: everyone ends up with prod access "just in case", and that's how things go wrong
secret scanning as a guardrail should honestly be step 1 before anything else, catch it before it even hits the repo
solid breakdown

For a 5–10 person team, you don't need enterprise complexity:
1. Single source of truth (managed secrets)
Pick one, like AWS Secrets Manager, and stick to it.
Store all production secrets there.
2. Local dev .env
Each dev has their own, and they can pull shared values once they've been granted the right access.
3. Access control (least privilege)
Don't give everyone everything:
Backend gets DB creds
Frontend gets public keys only
CI/CD gets deploy-specific secrets
4. CI/CD secret injection
Pipeline pulls from the secret manager; the app should fail fast if anything's missing (sketch below)
5. Rotation and audit
6. Guardrails (secret scanning, alerts on leaked keys)
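
A cheap companion to point 4: have the app fail fast at boot if the pipeline forgot to inject something. A sketch; the variable names are placeholders:

```typescript
// Fail fast if an expected secret never arrived from CI/CD.
// These names are illustrative, not from the thread.
const REQUIRED = ["DATABASE_URL", "STRIPE_API_KEY", "JWT_SECRET"];

const missing = REQUIRED.filter((name) => !process.env[name]);
if (missing.length > 0) {
  // Log names only, never values.
  console.error(`missing required env vars: ${missing.join(", ")}`);
  process.exit(1);
}
```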

this is the right setup honestly
the gitignored .env + CI/CD injected secrets combo is where most 5–10 person teams should land first
audit logs and rotation get skipped until something breaks, and that's usually when everyone panics
the "never share in chats" part is doing heavy lifting though, because someone always does it anyway

For small engineering teams, a practical setup looks like this:
- .env locally, but gitignored
- separate secrets per environment (naming sketch below)
- restricted production access
- a secret rotation process
- CI/CD-injected secrets
- cloud secret managers like AWS Secrets Manager or Vault
- audit logs/access controls
- never sharing secrets in chats or screenshots
The important part is reducing accidental exposure while keeping the developer workflow manageable.
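
For the per-environment separation, one common approach is a naming convention in the secret manager so access policies can be scoped per prefix. A sketch; the path format is just an assumed convention:

```typescript
type Env = "dev" | "staging" | "prod";

// Convention: <app>/<env>/<key>, e.g. "myapp/prod/DATABASE_URL".
// Policies can then grant devs myapp/dev/* while only CI/CD
// and on-call get myapp/prod/*.
function secretName(app: string, env: Env, key: string): string {
  return `${app}/${env}/${key}`;
}

console.log(secretName("myapp", "prod", "DATABASE_URL"));
```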

You add the window by storing a timestamp alongside the count.
Simple fixed window:
- Key: rate:user:123
- Value: { count: 5, windowStart: <timestamp> }
- On each request, if now - windowStart > 60s, reset the counter
- If count > limit within the window, return 429
For production you'd use Redis with a TTL so the key auto-expires after the window (sketch below). A sliding window is more accurate, but this covers 90% of cases.
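
A sketch of the Redis version, assuming the ioredis client; the limit and window are illustrative:

```typescript
import Redis from "ioredis";

const redis = new Redis(); // assumes Redis on localhost:6379

const LIMIT = 100;     // max requests per window (illustrative)
const WINDOW_SEC = 60; // fixed window length

// true = allowed, false = rate limited (caller returns 429).
async function allow(userId: string): Promise<boolean> {
  const key = `rate:user:${userId}`;
  const count = await redis.incr(key);
  if (count === 1) {
    // First hit of the window: set the TTL so the key self-expires.
    await redis.expire(key, WINDOW_SEC);
  }
  return count <= LIMIT;
}
```

There's a small race between INCR and EXPIRE here; in practice you'd wrap the pair in a Lua script or MULTI, but this shows the shape.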

@SahilExec 1. there's no time window, so the counter just climbs forever.
2. 200 OK for a rate limit is confusing; 429 is the correct signal.
how would you add the window?
Edgex retweeted

@knowRowan The most dangerous kind of bug:
it works in dev, passes QA, ships to prod.
No errors in logs. Monitoring looks clean.
Meanwhile every client thinks their requests are succeeding. 😭

@AnupamHaldkar Yes, 4XX is the right family.
Specifically 429 Too Many Requests.
It tells the client exactly what happened, and good SDKs will auto-handle retry logic on 429 (sketch below).
200 with an error body? The client has no idea it was rejected.
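
Roughly what "auto-handle retry" looks like client-side, sketched with plain fetch; the backoff numbers are arbitrary:

```typescript
// Retry on 429, honoring Retry-After (seconds form only; the HTTP-date
// form is ignored in this sketch).
async function fetchWithRetry(url: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429 || attempt >= maxRetries) return res;

    const header = res.headers.get("Retry-After");
    const parsed = header ? Number(header) : NaN;
    const delaySec = Number.isFinite(parsed) ? parsed : 2 ** attempt; // fallback: exponential backoff
    await new Promise((resolve) => setTimeout(resolve, delaySec * 1000));
  }
}
```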

@AnupamHaldkar That's exactly the gap.
You identify the user via an API key in headers, a user ID from the auth token, or the IP address as a fallback.
Each user gets their own counter key in Redis, like rate:user:123, and you track hits against that (sketch below).
No identity = no real rate limiting.
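
A sketch of that fallback chain; the header name and request shape are assumptions, not tied to any framework:

```typescript
// Minimal request shape for illustration only.
interface Req {
  headers: Record<string, string | undefined>;
  authUserId?: string; // set by auth middleware, if present
  ip: string;
}

// Prefer API key, then authenticated user id, then IP as last resort.
function rateLimitKey(req: Req): string {
  const apiKey = req.headers["x-api-key"];
  if (apiKey) return `rate:key:${apiKey}`;
  if (req.authUserId) return `rate:user:${req.authUserId}`;
  return `rate:ip:${req.ip}`;
}
```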

@SahilExec How is it detected that the same user is hitting the endpoint?

@CaptainInsightX Short and correct.
429 + a sliding window is the minimum viable rate limiter (sketch below).
Anything less is just a counter with a deadline.
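
A sketch of the sliding window using a Redis sorted set (ioredis again); limit and window are illustrative:

```typescript
import Redis from "ioredis";

const redis = new Redis();
const LIMIT = 100;
const WINDOW_MS = 60_000;

// Each hit is stored with its timestamp as the score; old entries
// are evicted, and the remaining count is compared to the limit.
async function allowSliding(userId: string): Promise<boolean> {
  const key = `rate:sliding:${userId}`;
  const now = Date.now();

  await redis.zremrangebyscore(key, 0, now - WINDOW_MS); // drop expired hits
  const count = await redis.zcard(key);
  if (count >= LIMIT) return false;

  await redis.zadd(key, now, `${now}:${Math.random()}`); // unique member per hit
  await redis.pexpire(key, WINDOW_MS); // let idle keys clean themselves up
  return true;
}
```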

Exactly this.
200 OK on a rejected request is a lie your server tells the client. And clients trust that lie completely.
The retry storm is the part nobody thinks about until it happens in prod. You rate limit to reduce load; the wrong status code plus no Retry-After turns it into a DDoS you built yourself.

@SahilExec A rate limiter returning 200 OK is already broken.
If the request was rejected, the response should be 429 Too Many Requests.
And without retry headers like Retry-After, clients may retry aggressively and amplify the overload even further.
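
Server-side, that's a couple of lines. A sketch with Express; the route and window are invented, and allow() stands in for one of the Redis checks above:

```typescript
import express from "express";

// Stub: swap in a real check like the Redis sketches above.
async function allow(id: string): Promise<boolean> {
  return true;
}

const app = express();
const WINDOW_SEC = 60;

app.get("/api/data", async (req, res) => {
  if (!(await allow(req.ip ?? "unknown"))) {
    // 429 says "rejected"; Retry-After says when to come back.
    res
      .status(429)
      .set("Retry-After", String(WINDOW_SEC))
      .json({ error: "rate limit exceeded" });
    return;
  }
  res.json({ ok: true });
});

app.listen(3000);
```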

Nailed both. 💯
Most people catch the status code but miss the counter logic entirely.
The 200 with an error body is what kills me: clients will cache it, retry it, log it as success. Silent failure at scale.
And yeah, requestCount with no window or identity is basically just... a variable. It means nothing in production.
The Retry-After header is the underrated one: barely anyone adds it, but it's what separates a real rate limiter from a toy.

Two major flaws:
1. Wrong HTTP status code
Rate limiting should return:
```http
429 Too Many Requests
```
not 200 OK.
Otherwise clients think the request succeeded.
2. Counter logic is incomplete
requestCount alone is meaningless without:
• time window
• user/IP identification
• distributed/shared storage (Redis etc.)
Correct approach:
• sliding window/token bucket (sketch below)
• centralized counter store
• return retry headers (Retry-After)
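
A sketch of the token bucket variant, kept in-memory for clarity even though a centralized store is what the point above actually calls for; capacity and refill rate are illustrative:

```typescript
// Token bucket: tokens refill at a steady rate, each request spends one.
// Allows short bursts up to `capacity` while capping the average rate.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private capacity = 100,    // max burst size (illustrative)
    private refillPerSec = 10, // sustained requests/sec (illustrative)
  ) {
    this.tokens = capacity;
  }

  allow(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;

    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

// One bucket per identity, e.g. keyed like the Redis counters above.
const buckets = new Map<string, TokenBucket>();
function allowToken(id: string): boolean {
  let bucket = buckets.get(id);
  if (!bucket) {
    bucket = new TokenBucket();
    buckets.set(id, bucket);
  }
  return bucket.allow();
}
```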
