Vaibhav
@angrykabootar

243 posts

Engineer, photographer, cricket buff | Principal Engineer @nielsen

Joined February 2012
319 Following · 77 Followers
Vaibhav@angrykabootar·
I learnt this after getting burnt 2–3 times: truncate -s 0 <filename>. Log rotation is the ideal solution, but truncate is handy in emergencies.
Akhilesh Mishra@livingdevops

Early in my DevOps career, I deleted a 5GB log file from a production server that was running out of space. I ran df -h expecting to see the disk usage drop. It didn’t. Still showed 100% full. No errors, no warnings. Just the same disk usage as before I deleted anything.

That’s when I learned that deleting a file doesn’t always free up space immediately. In Linux, what we think of as a “file” is actually two separate things: the filename (which is just a pointer) and the inode (which contains the actual data and metadata). When you delete a filename, you’re only removing the pointer. The inode and its data remain on disk as long as any process still has the file open.

In my case, the web server was still writing to that log file. Even though I had deleted the filename, the server process kept its file handle open. The inode stayed alive, invisible to normal file listings but still consuming disk space. The space was only freed when I restarted the web server, which closed all its file handles.

This is why you need different commands to see the full picture:

df -h              # check filesystem usage
du -sh /var/log/*  # check actual directory sizes
lsof +L1           # find deleted files still open by processes

The du command shows you what’s actually using space in directories, while df shows filesystem-level usage. When they don’t match, you often have deleted files still held open by running processes.

This is also why proper log rotation doesn’t just delete files. Tools like logrotate rename files and send signals to processes so they can close and reopen their file handles cleanly.

Three key takeaways:
1. Filenames are just pointers to inodes
2. Deletion only happens when no processes reference the inode
3. Always check both df and du when troubleshooting disk space

It’s a small detail, but understanding it can save you from confusing production incidents.
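A minimal shell sketch of both posts' points: the deleted-but-open inode described here, plus the truncate -s 0 emergency fix from the quoting tweet above. The /tmp/app.log path is made up for the demo; run it somewhere disposable.

# scenario 1: rm while a writer still holds the file open
( while true; do date; sleep 1; done >> /tmp/app.log ) &
WRITER=$!
rm /tmp/app.log      # removes only the name (the pointer)
df -h /tmp           # usage does not drop
lsof +L1             # shows the deleted-but-open inode (link count 0)
kill "$WRITER"       # last handle closed, space actually freed

# scenario 2: truncate instead of rm; the writer's handle stays valid
( while true; do date; sleep 1; done >> /tmp/app.log ) &
WRITER=$!
truncate -s 0 /tmp/app.log   # empties the file in place, space freed now
kill "$WRITER"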

0 · 0 · 0 · 30
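A minimal logrotate sketch of the rename-and-signal flow the post above describes. The path, pid file, and signal are assumptions; daemons differ (nginx reopens logs on SIGUSR1, many others use SIGHUP).

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    postrotate
        # ask the daemon to close and reopen its log handles
        # so it writes to the fresh file instead of the renamed one
        kill -USR1 "$(cat /var/run/myapp.pid)"
    endscript
}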
Vaibhav@angrykabootar·
At my first job we used to run a small 3-node Kafka cluster on EC2s processing over 10B msgs daily. The whole setup cost us a total of $5,000/yr. In 5 years we spent less than 48 hours in total on maintenance and upgrades, with zero downtime. Confluent quoted $150k yearly.
Jeremy Howard@jeremyphoward

TIL some folks nowadays consider running a normal Linux web server, like we've all done for decades, to be a "non-standard stack". Feels like some kind of learned helplessness or something.

0 · 0 · 1 · 99
Prathamesh Avachare@onlyprathamesh·
If Virat Kohli hadn't gotten out of form, he would have had 100 international centuries to his name by now. 😕
134 · 92 · 1.8K · 88.6K
Vaibhav@angrykabootar·
What is it with Sonnet 4 sneakily refactoring random logic while making completely unrelated changes? Happening way too frequently @AnthropicAI
0 · 0 · 0 · 25
Vaibhav reposted
Saurav Varma@Saurav_Varma·
🚀 We're hiring EMs, SDE3s and SDE2s! 🚀 Healthcare shouldn’t be a mystery. Your lab reports, prescriptions, and vitals should be understandable, actionable, and work for you. That’s what we’re building at Health Records & Insights: a personal AI-powered health system. 🔗Apply👇🏼
2 · 4 · 7 · 910
Vaibhav@angrykabootar·
Here is an idea to put all the EVM manipulation debate to rest. @ECISVEEP should publish voting data with timestamps for all machines online (masked, obviously). Let independent analysts verify patterns & spot irregularities themselves. Transparency builds trust. #MaharashtraElection2024
0 · 0 · 0 · 26
Vaibhav@angrykabootar·
When and why did we as Indians accept that having a bad drainage system is completely normal in modern society? It impacts every class of people, so why is there no outrage around cities getting flooded after every rainfall? I am sure it's not rocket science.
0 · 1 · 1 · 62
Vaibhav reposted
Saurav Varma@Saurav_Varma·
For everyone who answered the surveys, thank you so much. Presenting the first version of our redesigned Health Records. Beta starts rolling out today. If you have more suggestions or features that you as a user would want, please let me know. I’d love to help you be healthier!
Saurav Varma@Saurav_Varma

If you're someone who prioritises your health, whether it's through sleep, diet, or working out, then I'm building something just for you! I'd really like to know how we can help. Please fill out this (slightly long) survey for a chance to get early access: form.typeform.com/to/a1asqecq

1 · 2 · 8 · 2.1K
Vaibhav@angrykabootar·
@sandeepssrin Customer care called and said they are blocking the card.
0 · 0 · 1 · 186
Vaibhav@angrykabootar·
Disable all transactions on your Axis card immediately!! An international transaction for ₹50 was attempted on my Magnus card but failed because I had already disabled it. Looks like a breach at some third party.
Sandeep Srinivasa@sandeepssrin

Something is VERY WRONG at @AxisBank credit card. One of my cards was used for a fraudulent txn at Uber Eats Canada. Card was reissued - suddenly fraudulent txns on the new card WHICH I DON'T HAVE YET. Investigation SAK 0000 233 6061 raised at Axis Bank. They REFUSED to investigate.

1 · 2 · 3 · 1.4K
chronark@chronark·
Lots of people suggest or offer storing logs and stuff in S3, but can I actually query that without loading everything into memory first? Or is it just a cheap way to archive data?
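For what it's worth, S3 can filter a single object server-side via S3 Select, so you don't pull the whole file down; for queries across many objects, Athena is the more general route. A hedged sketch, assuming S3 Select is still enabled on the account; the bucket and key names are hypothetical:

aws s3api select-object-content \
  --bucket my-log-archive \
  --key logs/2024-01-01.jsonl.gz \
  --expression "SELECT s.ts, s.msg FROM S3Object s WHERE s.level = 'error'" \
  --expression-type SQL \
  --input-serialization '{"JSON": {"Type": "LINES"}, "CompressionType": "GZIP"}' \
  --output-serialization '{"JSON": {}}' \
  /dev/stdout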
9 · 0 · 12 · 7.6K
Vaibhav@angrykabootar·
@championswimmer This thread makes me sad. How is someone using a piece of furniture even a point of discussion. :/
0 · 0 · 0 · 108
Arnav Gupta@championswimmer·
Oh, so you ask your guests to UPI you Rs 2 as a usage charge before they sit on your sofa? So much mental gymnastics to propagate untouchability in 2023. Take your mind out of the gutter bro.
4 · 9 · 171 · 20.7K
Vaibhav@angrykabootar·
@kozlovski Hey, been following your series. It’s really helpful and crisp. Would you be covering poison messages and dead-letter queues?
1 · 0 · 0 · 120
Stanislav Kozlovski@kozlovski·
Kafka consumer groups are amazing. Kafka consumer groups suck. Both can be true at the same time. Nothing works for everything.

The consumer group model works great for high throughput (scalability) and preserving message order per partition. 🔥 To ensure both, the consumer group protocol enforces a strict one-to-many mapping of consumer -> partitions, i.e. a consumer in a group has exclusive access to a given partition. At best, you can do a one-to-one mapping.

But. What happens if you have more consumers than partitions? You're stuck. 😬 Your consumer group scale-up runway is limited by the number of partitions you have for a given topic. You can never have more consumers than partitions, as any extra consumers won't have anything to read from! To prevent this, people usually over-partition their topics. This requires users to think about partitions. But some use cases literally don't care about ordering. Instead of having to hassle with how many partitions there are and who's consuming from where, they'd prefer to just read *any* new records.

Further? What if your consumer fails processing a few select messages out of the batch it pulled? 🤨 You have two options:
🔶 forget about them and advance the offset forward. This results in a lot of missed messages. 👎
🔶 keep the offset before the first failed record, and re-process. This results in a lot of duplicate processing. 👎

It just doesn't work. Kafka doesn't allow per-record consumption acknowledgement. ❌ It therefore doesn't easily allow per-record consumption retries.

The solution? It will be in your email inbox tomorrow :) ✅ Not subscribed? Check my profile for the link.
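A quick way to watch the consumer-vs-partition ceiling with Kafka's stock CLI tools; the broker address, topic, and group names below are made up for the sketch.

# a topic with 3 partitions supports at most 3 active consumers per group
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic orders --partitions 3 --replication-factor 1

# run this in four terminals: the fourth consumer joins the group but idles
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic orders --group order-readers

# the assignment table shows one member with no partition
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group order-readers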
7 · 38 · 210 · 31.5K
Vaibhav@angrykabootar·
Continuing on my previous post, I have tried to explain the challenges of ingesting data from IoT devices at Loconav. I have tried to keep things simple but might have lapsed a bit; let me know if you have any feedback or questions around it. Happy reading (: medium.com/@vaibhav-vg/art-of-ingesting-data-at-loconav-94bdacc27d6f
0 · 2 · 2 · 152
Vaibhav@angrykabootar·
@sandeepssrin We have been facing the same issue for us-west-2. But it was not specific to Airtel. We drilled down to some local ISP-level issues but never got a solid lead. I think it is fixed now. Can you tell me what the actual issue was?
0 · 0 · 0 · 32