@harshalstwt Exactly — it’s rarely one big mistake. Usually data transfer, background jobs, or RDS/storage changes quietly creeping up over time. How do you usually spot them?
@Alexand65601285 Same here. Most of the time it’s not one big mistake, but a few small changes that quietly add up. Finding the spike is easy; understanding why it happened takes the real effort.
Cloud cost problems usually aren’t about bad engineering. They’re about a lack of visibility.
I wrote a short case study on how teams like Magnitt, Voltlines, etc. use #MilkStrawAI to make sense of AWS costs while scaling.
No hype. Just real lessons.
🔗 harshalr.hashnode.dev/how-milkstraw-…
@harshalstwt Exactly, and the frustrating part is that AWS doesn’t really answer the “why” without a lot of digging.
Out of curiosity, what usually ends up causing the spikes when you look into them?
@sagarmenon98 Probably fewer than people think — but a lot were quietly slowed or forced to make bad tradeoffs because they didn’t understand where costs were coming from.
@CloudOpsStudio Orphaned snapshots show up a lot.
What surprises most people is how small things like this add up month over month without being obvious.
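For anyone who wants to check their own account, here’s a rough sketch of one way to flag orphaned snapshots (Python + boto3, assuming credentials and a region are already configured; the “source volume no longer exists” heuristic is just one signal, not a definitive cleanup list):

```python
import boto3

ec2 = boto3.client("ec2")

# Volume IDs that still exist in this region.
existing_volumes = {
    v["VolumeId"]
    for page in ec2.get_paginator("describe_volumes").paginate()
    for v in page["Volumes"]
}

# Snapshots owned by this account whose source volume is gone.
# Note: some of these may still back AMIs, so check before deleting.
for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if snap.get("VolumeId") not in existing_volumes:
            print(snap["SnapshotId"], snap["StartTime"], snap["VolumeSize"], "GiB")
```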
@tpschmidt_ This matches what I keep seeing — most “surprises” come from a small handful of services, not overall usage.
NAT Gateway and forgotten EBS volumes show up constantly.
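Forgotten EBS volumes are the easiest of the two to spot programmatically: anything with status “available” isn’t attached to an instance. A minimal sketch, assuming boto3 and configured credentials (NAT Gateway spend usually needs Cost Explorer or VPC flow analysis instead):

```python
import boto3

ec2 = boto3.client("ec2")

# Volumes in the "available" state are not attached to any instance
# and keep accruing storage charges every month.
pages = ec2.get_paginator("describe_volumes").paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for page in pages:
    for vol in page["Volumes"]:
        print(vol["VolumeId"], vol["Size"], "GiB", "created", vol["CreateTime"])
```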
My end-of-year AWS account cleanup checklist 🎄
December is when I take a few hours to audit my AWS organization and look for waste. This year I found some expensive surprises 😅
Here's what I check:
𝗖𝗼𝘀𝘁 𝗥𝗲𝗽𝗼𝗿𝘁𝘀 𝗮𝗻𝗱 𝗧𝗿𝗲𝗻𝗱𝘀
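First stop is month-over-month spend broken down by service, which is where most of the surprises show up. Here’s a rough sketch using the Cost Explorer API (Python + boto3; the date range is a placeholder, and Cost Explorer itself bills a small fee per API request):

```python
import boto3

ce = boto3.client("ce")

# Monthly unblended cost, grouped by service, for the last few months
# (adjust the Start/End dates to the window you want to audit).
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-10-01", "End": "2025-01-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in resp["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    groups = sorted(
        period["Groups"],
        key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
        reverse=True,
    )
    for g in groups[:10]:  # top 10 services for the month
        amount = float(g["Metrics"]["UnblendedCost"]["Amount"])
        print(f"  {g['Keys'][0]}: ${amount:,.2f}")
```

Comparing the top services across months is usually enough to spot the quiet creep: a NAT Gateway, snapshot storage, or data transfer line that grew a little every month.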