Ashish “Logmaster”

19.5K posts


@ashishlogmaster

Common sense, IT historian trying to prevent repeating mistakes from prior years... Lover of logs. Tesla, Apple, BloombergTV fan

Boston · Joined February 2011
1.2K Following · 887 Followers
Ashish “Logmaster” reposted
Rach (@rachpradhan)
We replaced urllib3 inside boto3 with a Zig HTTP client. One import line. Same API. Up to 115x faster with TurboAPI. import faster_boto3 as boto3 Here's what happened…
Rach tweet media
17 replies · 36 reposts · 540 likes · 47.8K views
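The "one import line, same API" claim above is the classic drop-in-replacement pattern. A minimal sketch of how that pattern works in Python, using a toy stand-in built from the stdlib `json` module (`faster_boto3` is the tweet's package; nothing here is its actual implementation):

```python
import sys
import types

# Toy stand-in: build a module that re-exports the exact API surface of
# an existing module, so call sites only have to change the import line.
import json as _orig

fast = types.ModuleType("fast_json")
fast.__dict__.update(_orig.__dict__)   # same functions, different module object
sys.modules["fast_json"] = fast        # now `import fast_json` resolves

import fast_json as json               # the one-line swap the tweet describes
print(json.dumps({"ok": True}))        # behaves identically to stdlib json
```

The real speedup would come from the replacement module backing those same names with a faster implementation; the alias keeps every call site unchanged.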
Akhilesh Mishra (@livingdevops)
Kubernetes is beautiful. Every concept has a story, you just don't know it yet.

In k8s, you run your app as a pod. It runs your container. Then it crashes, and nobody restarts it. It is just gone. So you use a Deployment. One pod dies and another comes back. You want 3 running, it keeps 3 running.

Every pod gets a new IP when it restarts. Another service needs to talk to your app but the IPs keep changing. You cannot hardcode them at scale. So you use a Service. One stable IP that always finds your pods using labels, not IPs. Pods die and come back. The Service does not care.

But now you have 10 services and 10 load balancers. Your cloud bill does not care that 6 of them handle almost no traffic. So you use Ingress. One load balancer, all services behind it, smart routing. But Ingress is just rules and nobody executes them. So you add an Ingress Controller. Nginx, Traefik, AWS Load Balancer Controller. Now the rules actually work.

Your app needs config so you hardcode it inside the container. Wrong database in staging. Wrong API key in production. You rebuild the image every time config changes. So you use a ConfigMap. Config lives outside the container and gets injected at runtime. Same image runs in dev, staging and production with different configs.

But your database password is now sitting in a ConfigMap unencrypted. Anyone with basic kubectl access can read it. That is not a mistake. That is a security incident. So you use a Secret. Sensitive data stored separately with its own access controls. Your image never sees it.

Some days 100 users, some days 10,000. You manually scale to 8 pods during the spike and watch them sit idle all night. You cannot babysit your cluster forever. So you use HPA. CPU crosses 70 percent and pods are added automatically. Traffic drops and they scale back down. You are not woken up at 2am anymore.

But now your nodes are full and new pods sit in Pending state. HPA did its job. Your cluster had nowhere to put the pods. So you use Karpenter. Pods stuck in Pending and a new node appears automatically. Load drops and the node is removed. You only pay for what you actually use.

One pod starts consuming 4GB of memory and nobody told Kubernetes it was not supposed to. It starves every other pod on that node and a cascade begins. One rogue pod with no limits takes down everything around it. So you use Resource Requests and Limits. Requests tell Kubernetes the minimum your pod needs to be scheduled. Limits make sure no pod can steal from everything around it. Your cluster runs predictably.
75 replies · 263 reposts · 2.3K likes · 208.3K views
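The thread's chain of concepts mostly lands in a single Deployment manifest. A minimal sketch, with all names and values illustrative rather than taken from the post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api              # hypothetical app name
spec:
  replicas: 3                 # "you want 3 running, it keeps 3 running"
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api         # a Service finds these pods by label, not IP
    spec:
      containers:
      - name: api
        image: example.com/demo-api:1.0
        envFrom:
        - configMapRef:
            name: demo-api-config   # config injected at runtime, not baked in
        resources:
          requests:           # minimum the scheduler must find on a node
            cpu: 250m
            memory: 256Mi
          limits:             # hard cap so one pod cannot starve its neighbors
            cpu: "1"
            memory: 512Mi
```

The Service, Ingress, Secret, and HPA from the thread would each be separate objects referencing this Deployment by its labels.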
Ashish “Logmaster” reposted
Richard Seroter (@rseroter)
Here we go. How does MCP get deployed in the real world? Enough of the vendor chatter and hype ("10 public MCP servers that will MELT YOUR FACE!"). Pinterest's eng team shares their "why", initial architecture, integrations, and security approach. medium.com/pinterest-engi…
Richard Seroter tweet media
7 replies · 16 reposts · 93 likes · 5K views
Ashish “Logmaster” reposted
the tiny corp (@__tinygrad__)
Few know this, but I (George) was the only person in history to get a perfect score in CMU compilers, which is likely the best compilers course in the world. Combine that with crazy low level knowledge of hardware from 10 years of hacking. Then add a team of people who are talented enough to push back on my dumb ideas and clean up the implementations of the good ones. The team who keeps this whole operation running, software, infrastructure, and product.

I love how there's no hype in deep learning compilers. It was one of the most annoying things about self driving cars, all the noobs who burned through billions on crap that was obviously dumb, and the companies who deserved to go bankrupt years ago if not for government bailouts (Tesla and China will devour them all).

In this space, the competition is @jimkxa at Tenstorrent, @clattner_llvm at Modular, and @JeffDean at Google. Three of the living legends of computer science. And companies like @nvidia and @AMD, who are definitely live players, making single chips that have more power than the whole Internet two decades ago.

This space is so fun to play in. If you haven't, read the tinygrad spec. It's all coming together beautifully.
Tom Benadryl (@olafwillocx)

Tinygrad (and others) are so far ahead, it's becoming clearer why they are the path forward. What they don't expose yet though, what is very important imo, is the graph structure of the machines themselves. Still need to have this secret mental picture in your head.

20 replies · 45 reposts · 1.3K likes · 85.1K views
Awni Hannun (@awnihannun)
Every company needs a CTO.
A chief tokens officer.
10 replies · 10 reposts · 145 likes · 9K views
Ashish “Logmaster” reposted
Allie K. Miller (@alliekmiller)
Yesterday, I met with Anthropic and OpenAI and Google. (Separately, of course.) And while the conversations were largely confidential, I do want to share some aggregated reflections on the day as well as general SF takeaways. ⬇️

1) Competitive advantage as a solo practitioner really does come from taking action and finding an area with a bit of friction and doubling down. Ex: memory management right now isn't perfect, but allocating an hour to improving that system gives you a ton of leverage over others.

2) SF continues to be the number one place for AI work. I know that's not surprising. I would put New York at a healthy second place. SF tends to be more about crazy agent experiments for the thrill of capability and discovery, and NYC tends to be more about kinda crazy agent experiments to find new ways to make money. Not saying either is better. But I met several people renting two apartments to straddle these worlds. You want the frontier of SF and the enterprise insights of NYC. It's one reason I travel between them so much.

3) All AI labs want to hear more from people. All of them. What are you using it for, what do you like, what do you hate, what do you need. Users have a TON of power on the direction of these tools. Keep testing and tweeting at them!!

4) There is very clearly a third customer cohort that is bubbling and underserved. It's not developers…it's not the business professional basic users…it's builders. Everyone can build now. It's marketing and sales folks vibe coding. It's legal folks building complex skills. It's a finance expert building a side project. This is a really undertapped customer base. They feel the Cursors of the world are too complex and the doc summarization tools of the world are too basic.

5) Not sure if it was just sample size, but far fewer people were wearing tech gear compared to when I lived in SF. Everyone was still dressed casually, but I used to see Splunk and Optimizely and Slack and VC gear everywhere. People seem more in stealth swag now.

6) We may soon have our world model moment.

7) Speed of iteration and shipping is faster than I've ever seen. We see the nonstop drops from Anthropic. We see that because of scale, providers can get a much faster feedback loop on products or features that aren't hitting. A lot of 2025 was experimentation, but ever since the OpenClaw moment over the holidays, the releases from all three labs have been more concentrated on…things that sorta look and feel like OpenClaw.

8) Small teams can pull off more than ever before. Small teams are the powerhouses of innovation right now. This means that finding new ways to share knowledge, break silos, and remove duplicate work is going to be even more important. AI agents functioning as actual teammates that support an entire system is key.

9) Build more Skills. Build better Skills.

10) Misinformation on AI tools and leaks spreads FAST. I've seen so many fake stories on these AI labs. Your company needs to actually TEST these tools on your actual use cases to know which models and tools are best, and you need to not make large-scale snap decisions based on a rumor of a rumor of a rumor. We will see more volatility. Plan for it.

11) You can feel the seriousness of this moment. Even during random conversations I had in line at a cafe. Lots of folks worried about job loss and lack of meaning.

12) Mac minis were sold out ;)
76 replies · 52 reposts · 517 likes · 80K views
Ashish “Logmaster” (@ashishlogmaster)
@sauravstwt Replatform to ECS. 22k RPS is like one EC2 server. Kube is massive over-engineering. The team deserves this pain.
0 replies · 0 reposts · 0 likes · 157 views
Saurav Chaudhary (@sauravstwt)
You call yourself “Senior DevOps”? Then debug this 👇

Time: 02:17 UTC
Region: us-east-1
Infra: EKS 1.28 + Cilium (eBPF) + Istio 1.20 + ALB → Ingress → Envoy → Go API → RDS + Redis
Traffic: 22k RPS steady

Symptoms
27% of requests slowed from 200 ms → 6 s. No 5xx spike, 99% success rate - just latency. Only api.example.com (external) affected, internal service is fine.

Timeline
01:48 Karpenter added 6 new nodes (m6i.large).
01:56 Istio EnvoyFilter updated (log format).
02:05 Prometheus agent upgrade.
02:12 Latency spike, HPA silent.

Key Findings
1. Slow pods run only on new nodes.
2. New nodes show lower CPU but higher latency.
3. Bimodal latency in istio_request_duration.
4. ss -s → hundreds of TCP connections in rto: 3–6 s.
5. tcpdump → SYN/ACK fast → DATA delayed + out-of-order.
6. ALB targets in subnet-b-az1 = slow ones.
7. cilium bpf metrics → CT_EVICTIONS, DROP_FRAG_NEEDED climbing.
8. ip link → old nodes MTU 9001 vs new 1500.
9. DNS, DB, Redis: normal.

Clues
Latency only between ALB → sidecar, never inside cluster.
No packet loss, probes all green.
New nodes = different AMI bootstrap.

Your Challenge
1. What's your first hypothesis and how do you disprove it fast?
2. Which one command per layer do you run next? • Network • Node • Mesh • App • ALB
3. Why didn't HPA scale even as users suffered?
4. How would you roll back/contain latency in 15 minutes?

Trap Answers
Not DNS. Not DB. Not the code.

If you can connect all the dots, node MTU mismatch → packet fragmentation → TCP retransmits → tail latency → no HPA trigger (it is CPU-based), you're senior.

Repost and comment your first 3 steps and the quickest mitigation you'd attempt. #DevOps #SRE #Kubernetes #Networking #InfraThrone
6 replies · 11 reposts · 101 likes · 10.8K views
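A back-of-envelope sketch (illustrative only, not from the thread) of why the MTU 9001 vs 1500 mismatch in the puzzle above produces tail latency: every jumbo-sized packet crossing a 1500-MTU hop must be fragmented (or dropped with DROP_FRAG_NEEDED when DF is set), and each extra fragment is another chance for loss, reordering, and retransmit timeouts:

```python
import math

def fragments_needed(payload_bytes: int, mtu: int, ip_header: int = 20) -> int:
    """How many IPv4 fragments a payload needs on a link with the given MTU."""
    per_fragment = mtu - ip_header      # room left for data after the IP header
    per_fragment -= per_fragment % 8    # fragment offsets count in 8-byte units
    return math.ceil(payload_bytes / per_fragment)

# A jumbo-frame-sized payload that fit in one MTU-9001 packet shatters
# when it hits a standard 1500-MTU hop:
print(fragments_needed(8960, 9001))   # 1 packet on the jumbo path
print(fragments_needed(8960, 1500))   # 7 fragments on the 1500-MTU path
```

Seven fragments per packet means one lost fragment stalls the whole packet behind a 3-6 s retransmission timeout, which matches the bimodal latency in the findings while CPU (the HPA signal) stays low.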
Ashish “Logmaster” reposted
Todd Saunders (@toddsaunders)
A Fortune 500 exec who runs one of the biggest blue collar companies in the country DM'd me yesterday. Gave me an idea that I'm starting to get really excited about. Build a version of YC for blue collar builders who use Claude Code. Essentially an accelerator for blue collar founders building for trades, construction, fleet, field services, etc. Whatever their domain expertise is. They offered to help fund the first batch, and we started to put together a list of incredible mentors. It's crazy how fast the power dynamic in software has shifted. But this could be very big.
120 replies · 32 reposts · 616 likes · 38.8K views
Ashish “Logmaster” reposted
Dan-O’s Seasoning (@danosseasoning)
Crispy, cheesy, juicy... these Birria Phyllo Bombs are the ultimate snack! 🔥
13 replies · 71 reposts · 784 likes · 18.3K views
Ashish “Logmaster” reposted
DuckDB (@duckdb)
We're excited to announce duckdb-skills, a DuckDB plugin for Claude Code! We think the embedded nature of DuckDB makes it a perfect companion for Claude in your local workflows. The skills supported include:

+ read-file and query – uses DuckDB's CLI to query data locally, unlocking easy access to any file that DuckDB can read.
+ read-memories – a clever idea to store your Claude memories in DuckDB and query them at blazing speed.

These are powered by two additional skills:

+ attach-db – gives Claude a mechanism to manage DuckDB state through a .sql file linked to your project.
+ duckdb-docs – uses a remote DuckDB full-text search database to query the DuckDB docs and answer all of your (and Claude's own) questions.

github.com/duckdb/duckdb-…
19 replies · 104 reposts · 687 likes · 53.2K views
Hot Aisle (@HotAisle)
Nobody at Nvidia asked where the $2.5B worth of GPUs were going?
18 replies · 2 reposts · 54 likes · 5.9K views
Ashish “Logmaster” reposted
عبدالعزيز المقبل
Morgan Stanley published this chart about massive disruptions in industries across the board
عبدالعزيز المقبل tweet media
22 replies · 914 reposts · 2.6K likes · 207.4K views
Ashish “Logmaster” reposted
Nacho Rovira (@GordoGeos)
🌎🔥 ARGENTINA LNG: LOGISTICS AS A REAL ADVANTAGE

YPF's map is not marketing. It is geopolitics applied to energy. Argentina doesn't just have gas. It has position + routes + timing. And in LNG, that is worth millions. 🛢️

⛴️ SHIPPING TIMES (round trip)
• 🇧🇷 Brazil: 10 days 👉 Immediate market. Near-instant regional arbitrage.
• 🇪🇸 Iberian Europe: 33 days
• 🇪🇺 Northwest Europe: 34 days 👉 Competitive against the US and Qatar. A clear window.
• 🇮🇳 India: 44 days 👉 Growing market + structural demand.
• 🇨🇳 🇯🇵 Asia: 54 days 👉 Farther away, but with clean, predictable routes.

🌍 THE KEY IS NOT JUST THE TIME, IT'S WHERE YOU SAIL:
❌ No Panama ❌ No Suez ❌ No Hormuz ❌ No geopolitical chokepoints
👉 Result: ✔ Lower risk ✔ Lower insurance ✔ Greater predictability ✔ Less logistical volatility

⚡ ADVANTAGES THE MARKET DOES WATCH
• 🛢️ Abundant gas (Vaca Muerta) • 🌊 Dual Atlantic-Pacific outlet • 🌤️ Stable operating weather • 🚢 Safe routes, no bottlenecks

🧠 TRANSLATED INTO BUSINESS
While others depend on: • conflicts in the Middle East • canal congestion • unpredictable logistics costs
Argentina plays with: 👉 continuous flow + clean routes + predictability
And in LNG, that translates into: 💵 contracts 💵 reliability premiums 💵 market share

🎯 CONCLUSION 👉 Argentina is geographically well positioned to sell energy to the world without passing through the chaos. And in an increasingly chaotic world, that is the best structural advantage possible.
Nacho Rovira tweet media
37 replies · 410 reposts · 2K likes · 50.2K views
Ashish “Logmaster” reposted
ALI TAJRAN (@alitajran)
Microsoft introduces Backup and Recovery for Microsoft Entra ID!

The Entra Backup and Recovery solution enables you to quickly recover from malicious attacks or accidental changes by reverting your core tenant objects to any previous state within the last 5 days. With automated backups and granular recovery capabilities, it ensures minimal downtime and supports your business continuity in the face of unexpected disruptions.

Entra automatically generates one backup per day, retaining the last 5 days of backup history. You can recover key properties of the following core tenant objects:
- Users
- Groups
- Applications
- Conditional access policies
- Service principals
- Organization
- Authentication methods
- Authorization policy
- Named locations

#EntraID #Microsoft365 #Microsoft
ALI TAJRAN tweet media
13 replies · 135 reposts · 607 likes · 85.5K views
Ashish “Logmaster” reposted
Millie Marconi (@MillieMarconnni)
Holy shit... AI search is eating Google's traffic and most websites have zero idea why they're invisible to ChatGPT and Perplexity. A developer just built geo-seo-claude to fix that. Point it at any URL. It runs a full GEO audit, scores your AI citation readiness, checks which AI crawlers can even access your site, and generates a client-ready PDF report. AI-referred traffic converts 4.4x higher than organic. Traditional SEO agencies haven't figured this out yet. This repo has. 100% open source, MIT license. Link in comments.
Millie Marconi tweet media
81 replies · 197 reposts · 2.3K likes · 290.4K views