K Manoj Kumar
@kmanojkumar
Building @VegaStack
Bengaluru, India · Joined July 2014


@brankopetric00 Most OS resolvers prefer IPv6 when both A and AAAA records exist - it's the default in RFC 6724. Your app asks for the API's IP, DNS returns both, resolver picks IPv6 first. Old VPC can't route it, so socket fails.

Your application connects to an external API.
Suddenly, you get `SocketException: Address family not supported by protocol`.
Nothing changed in your code.
However, the external API provider just added an AAAA (IPv6) record to their DNS.
Your servers are on an old VPC that doesn't support IPv6.
Why did your application prefer the IPv6 address by default, and how do you force it back to IPv4 without code changes?
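To illustrate the two fixes, here is a minimal Python sketch: restricting `getaddrinfo()` to `AF_INET` forces IPv4 at the application level, while the no-code-change route on Linux is to raise the precedence of IPv4-mapped addresses in `/etc/gai.conf` (noted in the comments). The function name is illustrative, not from the original post.

```python
import socket

def resolve_ipv4_only(host, port):
    """Return only IPv4 (A-record) addresses for a host.

    RFC 6724 ordering normally sorts IPv6 results first when both
    A and AAAA records exist; restricting getaddrinfo() to AF_INET
    sidesteps that in code. The no-code-change equivalent on Linux
    is adding this line to /etc/gai.conf so IPv4-mapped addresses
    sort first:
        precedence ::ffff:0:0/96  100
    """
    infos = socket.getaddrinfo(host, port,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return [sockaddr[0] for *_, sockaddr in infos]
```

For example, `resolve_ipv4_only("127.0.0.1", 443)` returns `["127.0.0.1"]`; a dual-stack hostname would likewise yield only its A records.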

Approach A - your web server holds the connection open for the entire upload + processing time. A 500MB file on a slow connection could take minutes. That thread/worker is blocked and can't serve other requests. Ten users upload simultaneously and your server is choking.
Approach B decouples it. A presigned URL means the upload bytes go straight to S3; your server just generates the URL. Lambda picks up processing asynchronously. Backpressure is solved because your web server never touches the heavy work - it stays free to handle normal traffic.

We need to process uploaded CSV files.
Approach A: User uploads file -> Server saves to disk -> Server processes it -> Returns response.
Approach B: User uploads to S3 (Presigned URL) -> S3 triggers Lambda -> Lambda processes it -> Updates DB.
Why is Approach A an 'Availability Risk' for your web servers if the files are large (e.g., 500MB), and how does Approach B solve the 'Backpressure' problem?
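The Lambda side of Approach B can be sketched roughly as below. The event shape follows S3 put-notifications; `fetch_object` is injected here purely for testability (a real handler would call `s3.get_object(...)["Body"].read()` via boto3), and the function name is an assumption, not from the original post.

```python
import csv
import io

def handle_s3_upload(event, fetch_object):
    """Sketch of the Lambda in Approach B: process a CSV off the
    web server's request path. `event` is shaped like an S3
    put-notification; `fetch_object(bucket, key) -> bytes` stands
    in for s3.get_object(...)["Body"].read()."""
    rows_processed = 0
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = fetch_object(bucket, key).decode("utf-8")
        for row in csv.DictReader(io.StringIO(body)):
            # ...validate the row, write it to the DB, etc.
            rows_processed += 1
    return {"rows": rows_processed}
```

The key design point: the web server's only job is signing a URL (milliseconds), so upload duration never ties up a worker.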

The tricky part is that these two signals can contradict each other. Enthusiasm fading but ideas still left. Or super curious but genuinely stuck with nothing new to try.
I've found the second case is actually fine - take a break, read something unrelated, and the ideas come back. But low enthusiasm with plenty of options left... that's the real quit signal. You're not stuck on the problem, you've just stopped caring about solving it.

There's this myth that you pound away forever and never give up—that if you just persevere long enough, success will come.
But that's just not right.
The question I get asked all the time - and one I ask myself too - is: how do you know when "this isn't working" means "stop doing this" versus "we just need to find the thing that unlocks it"?
It's really hard.
There's no specific point.
What I look at is two things:
First, my enthusiasm level.
Usually a problem is so intriguing that I'm not caught up in whether it's working or not.
Each time I try something and it doesn't work, I've learned something more about the problem, and it becomes more and more fascinating. There's a positive curve in terms of wanting to get out of bed first thing in the morning and rush in to try something new. But eventually, that begins to fade.
The second thing is that you genuinely begin to run out of things to try. At that point, you start asking yourself: Am I spending my time chasing something I'm no longer that interested in—and that I'm increasingly unlikely to figure out?
Life is too short.
The time you spend chasing something you're slowly realizing isn't going to happen is opportunity cost—time you could be going after something new that excites you.
Finding that balance is the crucial part. And the bad news is there's no easy way to know. It's going to have to be a gut call.

Teams default to minimum memory thinking it’s cheapest. The math doesn’t work that way. Lambda charges per GB-second - if 8x memory cuts duration by 10x, you’re paying less overall.
The catch is this only applies to CPU-bound work. If your function spends most of its time waiting on an API or db, more memory just means paying more to sit idle.

You are optimizing an AWS Lambda function.
Memory: 128MB.
Duration: 10 seconds.
Cost: $[X].
You increase the Memory to 1024MB.
The Duration drops to 1 second (because Lambda allocates CPU proportional to Memory).
The total cost decreases.
Why is paying for more memory sometimes cheaper than paying for less memory in the serverless world?
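The arithmetic behind this is simple enough to check. A rough sketch, using an illustrative per-GB-second price (actual AWS pricing varies by region and architecture, and excludes the per-request charge):

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative; check current AWS pricing

def lambda_compute_cost(memory_mb, duration_s, invocations=1_000_000):
    """Compute cost in dollars: Lambda bills GB-seconds, i.e.
    allocated memory (in GB) times billed duration."""
    gb_seconds = (memory_mb / 1024) * duration_s * invocations
    return gb_seconds * PRICE_PER_GB_SECOND

small = lambda_compute_cost(128, 10.0)   # 0.125 GB x 10 s = 1.25 GB-s per call
large = lambda_compute_cost(1024, 1.0)   # 1.0 GB x 1 s = 1.0 GB-s per call
```

Eight times the memory but one tenth the duration means 1.0 GB-s instead of 1.25 GB-s per invocation - about 20% cheaper, exactly because CPU scales with memory and the work is CPU-bound.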

I see this surprise people regularly. `sensitive = true` is cosmetic - it hides the value in terminal output and plan logs. That's it.
Terraform writes the actual value to state because it needs it for diffing on every run.
Encryption at rest protects the file on disk. But if someone has read permissions, they see everything in plain text.
Keep secrets out of Terraform entirely - use AWS Secrets Manager, Vault, etc., store secrets there, and reference them at runtime. Terraform manages infra, not secrets.

An attacker gains access to a read-only S3 bucket containing your Terraform State file.
The state file is encrypted at rest.
However, the attacker opens the JSON file and finds:
`"password": "super-secret-db-password"`
in plain text.
Why does Terraform store sensitive output values in plain text in the state file even if they are marked as `sensitive = true` in the code?
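You can see this for yourself with a few lines of Python. The state snippet below is fabricated, but its shape matches real `terraform.tfstate` files: attribute values sit in plain JSON regardless of the `sensitive` flag. The scanner function is illustrative, not a real tool.

```python
import json

# Fabricated miniature of a terraform.tfstate file. Note that
# "sensitive = true" in the HCL does not change how the value
# is serialized here -- it is plain JSON.
STATE = json.loads("""
{
  "resources": [
    {
      "type": "aws_db_instance",
      "name": "main",
      "instances": [
        {"attributes": {"username": "app",
                        "password": "super-secret-db-password"}}
      ]
    }
  ]
}
""")

def find_plaintext_values(state, needle):
    """Return resource.attribute paths whose value contains `needle`."""
    hits = []
    for res in state.get("resources", []):
        for inst in res.get("instances", []):
            for attr, value in inst.get("attributes", {}).items():
                if isinstance(value, str) and needle in value:
                    hits.append(f'{res["type"]}.{res["name"]}.{attr}')
    return hits
```

Here `find_plaintext_values(STATE, "super-secret")` returns `["aws_db_instance.main.password"]` - anyone with read access to the state file can do the same grep.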

@namyakhann The problem is most designers built careers on execution. The strategic layer was someone else's job
The ones thriving right now were already operating at that level. For everyone else it's not "keep doing what you're doing" - it's a whole skill reset

What AI Will Never Replace:
- Design + strategy
- Problem-solving
- User understanding
- Human-centered brand work
- Product design that considers context
- UX that requires empathy
Employers don't want more AI-generated designs.
They want designers who can:
- Understand business problems
- Think critically about solutions
- Communicate with stakeholders
- Lead design decisions
- Direct and refine AI outputs
These skills are hard to automate. Roles that combine design + strategy are projected to GROW, not shrink.

@marcrandolph The bar for engagement dropped so low that basic thoughtfulness looks exceptional. Half the replies now are obviously AI or one-word reactions.
Actually reading the post before responding is a differentiator.

@peer_rich Also works for hiring. Oversell the role and they quit in 3 months when reality hits. Show the mess upfront and whoever joins is already committed to it

@gregisenberg The ones who survive won't be SaaS companies that added agents. It'll be agent-native companies that never thought in SaaS terms to begin with
Same thing happened with mobile 10-15 years ago. The winners weren't desktop apps with mobile versions; they were mobile-first companies

@vitddnv "Fire quickly if needed" is where most founders fail. The decision is usually obvious 3 months before you act on it
Every week you wait, your good people are watching

@sweatystartup The third option most managers pick - keep both and let the high performer carry the low performer's work. Until they burn out and leave anyway

@brankopetric00 The real reason is nobody wants to be on-call for a dead hard drive at 3am
You're paying AWS to hold the pager. That's the actual value proposition

@livingdevops The irony is job posts still say "5 years Kubernetes experience required", then wonder why they hired someone who only knows Kubernetes
Companies create the problem then complain about it

There are zero successful DevOps engineers who have only used one tool per use case in their careers.
Zero.
Most have used dozens of tools across the stack.
Tools don’t matter. They have no bearing on how valuable you are.
More importantly, companies don’t care if you’re a Terraform expert or an Ansible master.
They don’t care if you swear by Jenkins or GitHub Actions.
They care about:
- Can you solve their problems?
- Can you automate their deployments?
- Can you reduce their costs?
- Can you make their infrastructure reliable?
That’s it.
I’ve seen DevOps engineers obsess over Kubernetes or Jenkins, build entire identities around Ansible.
Then the company switches to ECS.
Or migrates to GitLab CI.
Or decides serverless is the future.
And suddenly, their expertise feels worthless.
Things that actually matter:
- Understanding systems to their core.
- Knowing how to troubleshoot.
- Being able to learn new tools quickly.
- The ability to figure things out, and ship solutions.
I started with Linux, shell scripts, and Ansible, then moved to AWS. Then Python, Terraform, Docker, Kubernetes.
Now I use whatever solves the problem.
Stop building your career around tools. Build it around problem-solving.
Because the tools will change.
But the ability to automate, optimize, and deliver never goes out of style.

@jjen_abel The rizz folks close first meetings. The reading-between-the-lines folks close deals
Problem is the first type interviews well because... rizz. So they keep getting hired

@SahilBloom Energy is finite though. You can't bring it to every room
Part of the skill is picking which rooms actually deserve it

@hnshah The tricky part is knowing which line to hold. Not everything is worth the fight
I've seen leaders burn goodwill defending standards that didn't matter, then have nothing left when the real ones came up

@LBacaj The hard part is AI makes you feel productive when you're just adding noise
Deleting AI-generated text feels wrong because it took effort to prompt. No it didn't. Cut it anyway

@asmartbear Now it's - tell the AI what to do, then verify the AI told the computer what you actually meant
The human-to-human part hasn't changed. Someone still has to read, review and maintain whatever the AI produced

@ishaansehgal Crashing is fine if you're crashing fast. The problem is spending 3 weeks on something then finding out a library does it in 2 lines
A quick scan of what exists before building saves more time than any course
