Pinned Tweet
clovis
2.1K posts

clovis
@clovistb
Chasing Kubernetes wisdom • DevOps Engineer • IaC addict • Building platforms
Texas · Joined March 2011
1.3K Following · 1.7K Followers

Since morning yesterday. 22hrs here
Oluwatobi Bamidele 😍❤️@callmetobiloba
Light has not blinked here in 24 hrs...this country is a mess

@clovistb Well, Route 53 will create multiple DNS records for the same domain...one per AWS region. On every query it will return the record with the lowest latency from the user's location.
I could go into more detail but I'm not a paid X user so my character limit hits quick. Good question though. 😉
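The selection logic described above can be sketched in a few lines of stdlib Python. This is only an illustration of the concept, not Route 53's actual implementation; the latency figures and IPs below are made up.

```python
# Concept sketch of latency-based routing: one record per region for the
# same domain, and each query is answered with the lowest-latency region.
# All numbers and addresses here are hypothetical.

LATENCY_MS = {  # assumed measurements from a user in Sao Paulo
    "eu-central-1": 210,
    "us-east-1": 115,
    "sa-east-1": 18,
}

RECORDS = {  # one A record per region, same domain name
    "eu-central-1": "18.156.0.10",
    "us-east-1": "3.218.0.10",
    "sa-east-1": "177.71.0.10",
}

def resolve(latency_ms: dict, records: dict) -> str:
    """Return the record of the region with the lowest latency."""
    best_region = min(latency_ms, key=latency_ms.get)
    return records[best_region]

print(resolve(LATENCY_MS, RECORDS))  # the Sao Paulo user is sent to sa-east-1
```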

@CPUFXRnDMV Deal 🤝
I’ll bring the beers if you bring lower latency for Brazil

@clovistb Us network guys will do the plumbing, but you DB guys need to bring us a few beers and ask nicely how to fix it. 🤣🤭

@clovistb #Latency
The application server might be sitting right in 🇩🇪 with fast responses of ~80ms there, but 520ms in 🇧🇷 because of the distance the traffic has to cover. #Traffic #Distances
🇩🇪 > 🇧🇷 = Traffic Distances Covered
I want to learn Snr.

@clovistb Latency..
Your backend is hosted on a server very far from users in Brazil..
To fix it, spin up an instance of your backend close to users in Brazil..?

Geographic latency and the physical limitations of long-distance data transmission.
- Use a CDN with edge caching using Cloudflare, for example.
- Deploy a regional backend and run your app in AWS sa-east-1.
- Use a global load balancer to route users to the nearest healthy backend automatically.
- Place database read replicas in the region so the Brazil backend doesn't need to reach back to Germany for data.
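The physics behind the first line above is easy to check. A rough back-of-envelope, assuming a signal speed of about two-thirds of c in fiber and an approximate great-circle distance between Germany and Brazil, shows distance alone puts a hard floor under the round-trip time:

```python
# Why ~520 ms from Brazil is mostly geography: light in fiber travels at
# roughly 2/3 of c, and a round trip crosses the distance twice.
# The distance below is an approximate great-circle figure.

SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 0.67                  # signal speed in fiber vs. vacuum
FRANKFURT_TO_SAO_PAULO_KM = 9_800    # rough great-circle distance

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

print(round(min_rtt_ms(FRANKFURT_TO_SAO_PAULO_KM)))  # ~98 ms floor, before any routing/queuing
```

Real paths add routing detours, queuing, and TLS round trips on top of that floor, which is how 98 ms in theory becomes 500+ ms in practice.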

@clovistb That's easily network latency caused by routing and distance.
Implement CDN caching (CloudFront or Cloudflare) and use DNS routing...aka Route 53.
You should see much improved responses.

@clovistb Exactly for the long term, however the immediate fix would be to add a secondary CIDR block to the VPC to meet the requirements and plan for IPv6 accordingly
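One gotcha with attaching an extra CIDR block: the new range must not overlap the existing VPC range. Python's stdlib `ipaddress` module can sanity-check a candidate block; the CIDR values below are just example ranges.

```python
# Check a candidate secondary CIDR block against the existing VPC range.
# The ranges here are illustrative, not anyone's real VPC.
import ipaddress

existing = ipaddress.ip_network("10.0.0.0/16")
candidate = ipaddress.ip_network("10.1.0.0/16")

if existing.overlaps(candidate):
    print(f"{candidate} overlaps {existing} - pick another range")
else:
    print(f"{candidate} can be attached alongside {existing}")
```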

I recently asked:
“What would you do if your VPC is running out of IPs?”
A lot of answers were: “switch to IPv6”
Let's be clear: IPv6 is NOT a quick fix.
Switching to IPv6 is a full transformation 👇
1️⃣Upgrade your network
Routers, firewalls, load balancers, VPNs must support IPv6
2️⃣Redesign IP addressing
IPv6 is huge, but you still need structure.
Plan CIDR blocks (/56, /64)
3️⃣Validate OS & systems
Your servers, containers, and nodes must support IPv6
4️⃣Enable IPv6 in cloud
VPC, subnets, ALB/NLB must support IPv6
5️⃣Update DNS
Add AAAA records.
No AAAA = no IPv6 traffic
6️⃣Rethink security
No NAT in IPv6
- Everything becomes publicly reachable
- Rewrite firewall & security rules
7️⃣Fix your applications
Update configs, APIs, DB connections
8️⃣Choose a transition strategy
Dual-stack (most common)
NAT64 / DNS64
IPv6-only (rare)
9️⃣Upgrade observability
Logs, metrics, tracing must support IPv6.
Many tools still assume IPv4
1️⃣0️⃣Test everything
Connectivity, latency, failover, DNS.
Expect surprises
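Step 2️⃣ above (plan CIDR blocks) is concrete enough to sketch with stdlib Python: a /56 allocation splits cleanly into 256 /64 subnets, one per subnet/AZ. The prefix below is a made-up example, not a real allocation.

```python
# IPv6 addressing plan sketch: carve a hypothetical /56 VPC allocation
# into /64 subnets, the standard subnet size in IPv6.
import ipaddress

block = ipaddress.ip_network("2600:1f18:1234:5600::/56")  # example allocation
subnets = list(block.subnets(new_prefix=64))

print(len(subnets))    # 256 /64 subnets available
print(subnets[0])      # first subnet: 2600:1f18:1234:5600::/64
```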

@myonlinetrust @apparentorder IPv6 looks great, but the reality is still IPv4.
Most teams stay with dual-stack and NAT for now.

@apparentorder @clovistb I’ve been switching to IPv6-only (at least for public internet access), and doing *that* is kind of a pain.
But if I was willing to set up a few NAT gateways (the expensive ones or just the cheap fck-nat ones) it wouldn’t be difficult.

@apparentorder still need CIDR expansion or better subnet planning for that.

Re. security, on AWS you can use the Egress-only Internet Gateway, which mimics the "security" of a NAT Gateway (it does not allow inbound connections).
Configuring dual-stack gets you two important wins very quickly and with low risk: provide IPv6 to end users (better experience) and use IPv6 for (some) egress traffic, saving potentially a lot of NAT traffic charges.
Doesn’t help quickly with running out of VPC addresses though, admittedly.






