Atomikos

46.2K posts


@Atomikos

Reliability through Atomicity: manage your distributed transactions with our embeddable #java transaction manager - just drop it in your classpath and transact.

Belgium, Europe · Joined December 2008
3.4K Following · 5.8K Followers
Pinned Tweet
Atomikos @Atomikos ·
FREE Training Session: Microservice Data Consistency Pitfalls. Trying to say "no" to distributed transactions? Watch this to learn about the distributed transactions you did not know you were doing: atomikos.teachable.com/p/microservice…
Atomikos @Atomikos ·
Stop choosing between consistency and scalability. With Atomikos, you get both: reliable distributed transactions, without heavyweight app servers. Ever wonder why you suddenly have to understand complex patterns like Sagas or eventual consistency? atomikos.com/Blog/Distribut…
Atomikos @Atomikos ·
@practice108om Exactly. The hardest cases are when a request times out and the system can’t tell if the state transition already happened. That’s where even “idempotent” designs get tricky.
PracticeOverflow @practice108om ·
Idempotency is not "return 200 on duplicates." It means 1 request or 10 retries produce the same final state. Dedup keys help, but atomic state transitions are what prevent double charges, duplicate orders, and 3 AM incidents. #DistributedSystems #Payments #Backend
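The "atomic state transitions" point can be sketched in a few lines. This is a hedged toy in Python, not a real payment system: the in-process lock stands in for what would, in production, be a conditional database update.

```python
import threading

class Order:
    """Toy order whose state may only move forward: NEW -> CHARGED."""
    def __init__(self):
        self.state = "NEW"
        self.charges = 0           # counts the side effect (the charge)
        self._lock = threading.Lock()

    def charge(self):
        # Atomic check-and-transition: only the first caller moves the
        # state, so the side effect runs exactly once. In a real system
        # this guard is a conditional update, e.g.:
        #   UPDATE orders SET state='CHARGED' WHERE id=? AND state='NEW'
        with self._lock:
            if self.state == "NEW":
                self.state = "CHARGED"
                self.charges += 1
        return self.state

order = Order()
results = [order.charge() for _ in range(10)]  # 1 request or 10 retries
assert order.charges == 1                      # charged exactly once
assert set(results) == {"CHARGED"}             # same final state every time
```

The point of the guard is that "check state" and "change state" happen as one step; a dedup key alone cannot give you that.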
Atomikos @Atomikos ·
@milan_milanovic The choice isn’t just monolith vs microservices. It’s whether you’re ready to handle what happens when parts of the system fail independently. That’s where most complexity actually shows up.
Dr Milan Milanović @milan_milanovic ·
You don't have an architecture problem. You have a stage problem.

What I've seen in my career is that most teams pick architecture based on hype, not real needs. They see how Netflix runs microservices and assume that's the answer. A 5-person startup running microservices isn't engineering. It's cosplaying. Here's what you need to know about each one:

Monolith → You're still figuring out what to build. A monolith lets you move fast, refactor easily, and deploy one thing. Start here, especially if you're a startup. And make it modular.

Microservices → Multiple teams stepping on each other. Deployments take hours. One bad change breaks everything. That's when you split, and only along real team boundaries. If you can't afford a platform team, you can't afford microservices.

Serverless → You want to stop managing infrastructure for workloads that don't need it. Works well until vendor lock-in creeps in. By the time you notice, migration costs are enormous.

A well-structured (modular) monolith beats poorly designed microservices every time. You can mix them too. A monolith core with serverless for specific workloads is often the right call.

Stop choosing architecture for where you want to be. Choose it for where you are.
Atomikos @Atomikos ·
Sagas break. Kafka doesn't handle your database transactions. Atomikos does - with real exactly-once semantics.
Atomikos @Atomikos ·
You can't eliminate partial failures. You can only design systems that stay consistent through them.
Atomikos @Atomikos ·
Systems with no global transaction model still have transactions. They just hide them in your logic.
Atomikos @Atomikos ·
A system that works 99.9% of the time is easy. A system that behaves predictably the other 0.1% of the time is where the engineering starts.
Atomikos @Atomikos ·
Retries help keep you running, but they don't guarantee correctness, and they often hide the very ambiguity you will be trying to debug later.
Atomikos @Atomikos ·
@Alacritic_Super Idempotency protects against duplicates. It doesn’t solve the “did it happen or not?” problem.
Praveen Kumar Verma @Alacritic_Super ·
Scaling payments requires idempotency. Duplicate requests can cost real money. Every transaction must be safely repeatable.
Atomikos @Atomikos ·
@jgcse11 Idempotency solves duplicates. The harder problem is knowing whether the first attempt actually succeeded when the system can’t tell you for sure.
Jatin Gupta @jgcse11 ·
I have seen payment systems charge users twice rather than fail to process a payment at all. The fact remains: most engineers do not think about idempotency until a customer calls their bank.

What happens when a user clicks Pay twice?
* Let's say the network is unreliable; requests can time out.
* The Pay button gets tapped again.
* Your backend has no way to tell a retry from a new request unless you build that in explicitly.

This is where idempotency comes into the picture. Objective: no matter how many times the same request is sent, the result must stay the same.

How this works in the payment ecosystem: before hitting Pay, the frontend generates a unique key tied to that order and sends it as a header. The backend checks if it has seen that key before. If not, it processes the payment and stores the result against that key. If yes, it skips processing entirely and returns the stored result.

The part most people miss is what happens when the second request arrives while the first one is still being processed. The key exists in the store but there is no result yet. You cannot process it again. So the moment request one starts, you write the key with a status of Processing. If request two arrives and sees that, you return a 409 and tell the client to retry in a moment. Once request one finishes, you update the status to Success or Failed. Every retry after that gets the stored response.

The complete flow divides into three states: Not found -> Processing -> Success or Failed.

One more thing worth knowing: the key must be tied to the payload, not just the order. If someone sends the same key with a different amount, you reject it. This one pattern solves double charges from impatient users, safe client retries on network failure, and race conditions mid-processing.

#Payments #SystemDesign #BackendEngineering #APIs #DistributedSystems #Fintech #LearningInPublic #SoftwareEngineering
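The three-state flow described above (Not found -> Processing -> Success or Failed) might look roughly like this. This is a hypothetical sketch: `store` and `handle_payment` are illustrative names, `store` stands in for a shared key-value store, and in production the initial key write must itself be atomic (e.g. `INSERT ... ON CONFLICT DO NOTHING`) so two concurrent requests cannot both claim the key.

```python
store = {}  # idempotency key -> {"status", "payload", "result"}

def handle_payment(key, payload):
    entry = store.get(key)
    if entry:
        if entry["payload"] != payload:
            # Same key, different payload: reject, never charge.
            return 422, "key reused with a different payload"
        if entry["status"] == "PROCESSING":
            # First attempt still in flight: tell the client to retry.
            return 409, "still processing, retry shortly"
        return 200, entry["result"]        # replay the stored response
    # Not found: claim the key *before* doing the work.
    store[key] = {"status": "PROCESSING", "payload": payload, "result": None}
    result = f"charged {payload['amount']}"  # the actual (single) charge
    store[key].update(status="SUCCESS", result=result)
    return 200, result

code1, r1 = handle_payment("k1", {"amount": 50})
code2, r2 = handle_payment("k1", {"amount": 50})  # retry: replayed, not re-charged
code3, _ = handle_payment("k1", {"amount": 99})   # same key, different amount
assert (code1, code2, code3) == (200, 200, 422)
assert r1 == r2
```

The 409 branch only matters under concurrency, which this single-threaded toy cannot exercise; the state it guards is exactly the "key exists but no result yet" window the post describes.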
Atomikos @Atomikos ·
@_trish_xD Most of these point to the same problem: the system can’t distinguish between “didn’t happen” and “happened twice.” That’s where reliability breaks.
trish @_trish_xD ·
too many devs write distributed systems before they understand what "reliable" even means. 7 things you need to internalize before you architect anything at scale:
- a system that "usually works" is a broken system
- retries without idempotency are just scheduled corruption
- your p99 latency matters more than your average
- the database is not a message queue, stop using it like one
- timeouts are not optional, they are the contract
- you don't understand your bottleneck until you've profiled it
- "it works on my machine" is not an architecture
most outages aren't bad luck. they're deferred decisions.
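"Retries without idempotency are just scheduled corruption" can be shown in a few lines: a timed-out request gets retried, the server receives both deliveries, and only the handler that remembers request IDs stays correct. A toy sketch with invented names; in a real system the dedup record and the debit would have to commit in one transaction.

```python
balance = 100
seen = set()   # request IDs we have already applied

def debit_naive(amount):
    global balance
    balance -= amount            # every delivery is applied blindly

def debit_idempotent(request_id, amount):
    global balance
    if request_id in seen:       # duplicate delivery: already applied
        return
    seen.add(request_id)         # must commit atomically with the debit
    balance -= amount

# A timeout makes the client retry, so the server sees two deliveries.
debit_naive(10)
debit_naive(10)                  # "scheduled corruption": debited twice
debit_idempotent("req-1", 10)
debit_idempotent("req-1", 10)    # applied exactly once
assert balance == 100 - 20 - 10  # naive lost 20, idempotent lost 10
```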
Atomikos @Atomikos ·
@TombStoneDash The hard part isn’t running without supervision. It’s knowing the system can still prove what happened when something goes wrong at 3am.
Tombstone Dash @TombStoneDash ·
why overnight execution changed everything: You can't manage tasks at scale. You can architect them. Checkpoints. Recovery paths. Idempotency. Design for zero-supervision. Then sleep.
Atomikos @Atomikos ·
@PsudoMike Most of these come down to one thing: the system can’t clearly define what “already happened” means under failure. Without that, retries and reconciliation just paper over ambiguity.
PsudoMike 🇨🇦 @PsudoMike ·
Most payment failures aren't network problems. After 12 years building payment systems, the real culprits are:
- No idempotency keys on retry paths.
- Reconciliation gaps nobody catches until month end.
- State machines that don't account for partial settlements.
The boring infrastructure is where the money actually gets lost.
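The third culprit, state machines that ignore partial settlements, is avoided by making the legal transitions explicit, so a retry can never skip or repeat a step. A hypothetical sketch; the state names and amounts are invented for illustration.

```python
# Legal transitions only; anything else is rejected, not silently applied.
TRANSITIONS = {
    "AUTHORIZED": {"PARTIALLY_SETTLED", "SETTLED", "VOIDED"},
    "PARTIALLY_SETTLED": {"PARTIALLY_SETTLED", "SETTLED"},
    "SETTLED": set(),
    "VOIDED": set(),
}

class Payment:
    def __init__(self, amount):
        self.amount, self.settled, self.state = amount, 0, "AUTHORIZED"

    def settle(self, amount):
        new = ("SETTLED" if self.settled + amount >= self.amount
               else "PARTIALLY_SETTLED")
        if new not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new}")
        self.settled += amount
        self.state = new

p = Payment(100)
p.settle(40)                          # partial settlement is a real state
assert p.state == "PARTIALLY_SETTLED"
p.settle(60)
assert p.state == "SETTLED"
try:
    p.settle(1)                       # settling a settled payment fails loudly
    raised = False
except ValueError:
    raised = True
assert raised
```

The table is the whole point: reconciliation becomes "does the ledger match the state machine" instead of "read the code and guess".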
Atomikos @Atomikos ·
@RaulJuncoV High cache hit ratios look great — until stale data becomes a correctness issue. Caching is easy. Knowing when cached state is still valid across services is where it gets tricky.
Raul Junco @RaulJuncoV ·
Every cache miss is a tiny tax on your performance. Let's talk about the Cache Hit Ratio.

The ratio is calculated by dividing the number of cache hits by the total number of cache lookups (hits and misses), then multiplying by 100 for a percentage. It measures how often requests are served from the cache instead of hitting the primary store, reflecting how well the cache performs.

Cache Hit Ratio (%) = (Cache Hits / Total Lookups) x 100

Simply put, it's a percentage that tells us what fraction of total requests are satisfied by the cache instead of going to the primary store (e.g., database, API, etc.).

For context, let's consider a simple example:
• Your service receives 10 requests.
• The first request queries the cache, finds nothing (cache miss), queries the database, and then stores the result in the cache.
• The following 9 requests hit the cache and get served immediately (cache hits).
In this scenario, you have 9 cache hits out of 10 total lookups, resulting in a 90% cache hit ratio.

While there's no one-size-fits-all number, here's a general guideline:
• Database query caches: 85-95%
• API response caches: 95-99%
• Large-scale CDNs: 99%+

3 ways to boost your Cache Hit Ratio:
• Pre-load popular cache entries during deployments or cold starts to avoid an initial spike in cache misses.
• Tune your time-to-live (TTL) value. Too short, and items expire quickly, causing misses. Too long, and you risk serving stale data.
• Make sure keys are unique but predictable to prevent missed cache entries.

One final thought: a high cache hit ratio is great, but not at the expense of serving outdated information. You need a balance between cache effectiveness and data freshness.

Cache me if you can!
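The worked example above (1 miss followed by 9 hits = 90%) can be reproduced with a tiny counting wrapper; `CountingCache` and `get_or_load` are illustrative names, not a real caching library.

```python
class CountingCache:
    """Dict-backed cache that tracks the hit ratio of its lookups."""
    def __init__(self):
        self.data, self.hits, self.lookups = {}, 0, 0

    def get_or_load(self, key, load):
        self.lookups += 1
        if key in self.data:
            self.hits += 1               # served from cache
        else:
            self.data[key] = load(key)   # miss: go to the primary store
        return self.data[key]

    def hit_ratio(self):
        # Cache Hit Ratio (%) = (hits / total lookups) x 100
        return 100.0 * self.hits / self.lookups if self.lookups else 0.0

cache = CountingCache()
for _ in range(10):                      # the 10-request example above
    cache.get_or_load("user:42", lambda key: "row from DB")
assert cache.hit_ratio() == 90.0         # 1 miss + 9 hits = 90%
```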
Atomikos @Atomikos ·
Get our new ebook + learn the following:
- The problems with existing approaches towards microservice transactions
- What can go wrong with your data consistency
- Our simple patterns to fix things without extra coding
dld.bz/jfYQZ