Dmytro Obodowsky 🇺🇲🇺🇦🇮🇱
2.1K posts


Dmytro Obodowsky 🇺🇲🇺🇦🇮🇱 retweeted

@lawyer4SMBs If you are in good physical shape and you don’t hate it, do your chores. Especially if you are a white-collar worker sitting in a chair all day. Go out, mow your lawn, enjoy the outdoors.

I remember a while back I said that if you make over $200k a year and still mow your own lawn, you're doing it wrong.
Now that the average quote I'm getting for my lawn is $400 a month, I think I'm changing my mind on that statement.
What are folks running for lawnmowing equipment on a 1 acre mow these days?

@thdxr This is the first time, imo, that cleaning up tech debt doesn't require convincing execs. You see complexity and inefficiency: just spin up some agents and validate the result. Ship it.

@javarevisited Dude, stop it. “Only one with the encryption key” is far-fetched.

A Senior Lead was laid off after 8 years. "It’s just a numbers game," they told him.
He handed over his laptop, shook hands, and didn't say a word.
Two hours later, the CEO realized that the "silent" developer was the only one with the encryption keys to the production database.
The manager called him, expecting him to help for "the sake of the team."

@bluewmist Just convince yourself that the only commitment is crossing the gym door. The rest is not a commitment. After 10 minutes of warmup you're firing on all cylinders.
Dmytro Obodowsky 🇺🇲🇺🇦🇮🇱 retweeted

@Oblivious9021 One plausible answer would be to separate memory access. Threads can share memory within a single process; interprocess memory sharing is possible too, but works differently.
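A minimal Python sketch of the distinction, using only the standard library (`threading` for in-process sharing, `multiprocessing.shared_memory` for the explicit cross-process mechanism):

```python
import threading
from multiprocessing import shared_memory

# 1) Threads share the parent process's address space: every thread
#    mutates the very same dict object, no copying or setup needed.
counter = {"n": 0}
lock = threading.Lock()

def bump():
    with lock:
        counter["n"] += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["n"])  # 4

# 2) Separate processes have separate address spaces; sharing needs an
#    explicit OS-level segment that each side attaches to by name.
shm = shared_memory.SharedMemory(create=True, size=8)
shm.buf[0] = 42
peer = shared_memory.SharedMemory(name=shm.name)  # "another process" attaching
seen = peer.buf[0]
print(seen)  # 42
peer.close()
shm.close()
shm.unlink()
```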

Here's the technical interview question I'd actually ask.
Suppose you have 5 MB of text, how would you count the words? What about 5 GB, TB, PB, EB? When would your approach change and why?
What if you had to do it once, or once a week, day, hour, or second?
Raj Dabre@prajdabre
Technical interview question: Suppose you have 5 TB worth of text data and you want to count the total number of words, how will you do this?
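A sketch of the small end of that ladder in Python: a streaming counter whose memory footprint is bounded by vocabulary size, so the same loop works from 5 MB up to whatever one disk can stream. Past a single machine, this loop becomes the "map" step of a sharded MapReduce-style job (count per chunk, then sum the counters); the sample input here is illustrative.

```python
from collections import Counter

def count_words_streaming(lines):
    """Count words one line at a time -- memory stays O(vocabulary
    size), not O(file size), so 5 MB and 5 GB use the same code."""
    total = 0
    vocab = Counter()
    for line in lines:
        words = line.split()
        total += len(words)
        vocab.update(words)
    return total, vocab

# In real use, `lines` would be an open file handle streamed lazily.
total, vocab = count_words_streaming(["the quick fox", "the lazy dog"])
print(total)         # 6
print(vocab["the"])  # 2
```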

@0xlelouch_ If a message backlog is already sitting in the existing partitions, does adding more partitions reshuffle the existing backlog across the additional partitions, or are the new partitions used only for new messages? If the answer is the second option, you need to vertically scale your existing workers to catch up.

@0xlelouch_ If you have a huge backlog in this single partition, adding more partitions wouldn't help. I haven't worked with Kafka for a long time, but you would need to repartition this single one into many and run parallel consumers.
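A toy Python sketch of why adding partitions doesn't touch the backlog: a record's partition is chosen at produce time, so already-written records stay where they landed, and only new keyed traffic gets remapped. The `pick_partition` hash here is a stand-in for illustration, not Kafka's actual murmur2 partitioner.

```python
import zlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Stand-in for Kafka's default key-based partitioner:
    # partition is a pure function of (key, partition count).
    return zlib.crc32(key) % num_partitions

keys = [b"user-1", b"user-2", b"user-3", b"user-4"]

before = {k: pick_partition(k, 3) for k in keys}   # topic had 3 partitions
after  = {k: pick_partition(k, 6) for k in keys}   # scaled to 6

# Keys whose NEW messages now land in a different partition.
# The old backlog is NOT moved: it stays in the original 3 partitions,
# which is why adding partitions doesn't speed up draining it.
moved = [k for k in keys if before[k] != after[k]]
print(moved)
```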

@brankopetric00 From what I read here, your HPA always scales up to the max, since 70% is the threshold and even at low rps the CPU is still at 85%.
I recommend finding the rps level where your CPU consumption is in the low 40s.
Use Fortio, for example, to apply load to a single node.
Also check the Garbage Collector.

HPA is scaling but performance isn't improving.
The situation:
- HPA target: 70% CPU
- Current replicas: 15 (max: 20)
- CPU utilization: 85% average
- Request latency: still high
Observations:
- Pods scale up quickly when load increases
- Latency doesn't improve with more pods
- Individual pod CPU shows 85% even with low request count
- Application is CPU-bound computation
Pod resources:
- requests: 500m CPU, 512Mi memory
- limits: 500m CPU, 512Mi memory
Node capacity: 4 CPU per node, plenty available.
Why isn't horizontal scaling helping?
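Back-of-the-envelope arithmetic for the scenario above, as a Python sketch: with `limits.cpu` pinned at 500m, each CPU-bound request has a hard latency floor that no number of replicas can lower, because replicas add throughput, not per-request speed. The 300 ms figure is an illustrative assumption, not a measurement from the post.

```python
def min_wall_time_ms(cpu_ms_per_request: float, cpu_limit_cores: float) -> float:
    """Lower bound on per-request latency when the pod is capped by a
    CFS quota (limits.cpu). Adding replicas raises total throughput,
    but this per-request floor never drops."""
    return cpu_ms_per_request / cpu_limit_cores

# A request needing 300 ms of CPU work under a 500m (0.5 core) limit:
print(min_wall_time_ms(300, 0.5))   # 600.0 ms -- at least double the CPU time

# Raising the limit (vertical scaling) is what moves the floor:
print(min_wall_time_ms(300, 2.0))   # 150.0 ms
```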

@SumitM_X Split the traffic between read and write workloads. Scale them differently, and of course add read replicas for the persistence layer.

Quite frankly, the most effective solution is to host the database in the client.
Let me elaborate. “Client” is singular in the post, not plural. The client needs all the data: 1M users. I'm assuming “efficiently” means fast and cost-effective.
SumitM@SumitM_X
Backend interview question that is being asked a lot these days: You are implementing an API that returns users to the client, but the database has 1M (1,000,000) users. How will you return data to the client efficiently?
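A small Python sketch of the "ship the database" idea using only the standard library: dump the users table into a single SQLite file, compress it, and serve it as a static object the client downloads once and queries locally. The schema and 1,000-row size are illustrative stand-ins for the 1M users in the question.

```python
import gzip
import os
import sqlite3
import tempfile

# Build a single-file database instead of paginating rows over an API.
path = os.path.join(tempfile.mkdtemp(), "users.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany(
    "INSERT INTO users VALUES (?, ?)",
    ((i, f"user-{i}") for i in range(1000)),  # 1M rows in real life
)
db.commit()
db.close()

# Compress the artifact; in production this blob would be served as a
# static file / CDN object, and the client attaches to it with SQLite.
with open(path, "rb") as f:
    blob = gzip.compress(f.read())
print(len(blob))
```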