Slagar

576 posts

@Slagar

Systems programming for fun and profit. Mastodon: @[email protected]

Chicago · Joined December 2008
687 Following · 80 Followers
Slagar retweeted
solst/ICE of Astarte
solst/ICE of Astarte@IceSolst·
- XZ utils backdoor: found by a guy debugging 200ms latency
- LiteLLM hack: found by a guy debugging an OOM issue

These could have been the most impactful compromises ever. Forget security vendors, weaponize your engineers’ autism.
56 replies · 479 reposts · 4.3K likes · 148.1K views
Slagar retweeted
Cloudflare
Cloudflare@Cloudflare·
Cloudflare’s Gen 13 servers double our compute throughput by rethinking the balance between cache and cores. Moving to high-core-count AMD EPYC™ Turin CPUs, we traded large L3 cache for raw compute density. By running our new Rust-based FL2 stack, we completely mitigated the latency penalty to unlock twice the performance. cfl.re/4uKJKp9
10 replies · 22 reposts · 236 likes · 25.9K views
Slagar retweeted
dex
dex@dexhorthy·
Here’s what’s gonna happen:
- you replace your code review with feedback loops (sentry, datadog, support tickets, etc)
- you stop reading the code
- software factory fixes everything
- one day something breaks at 3am, agent can’t fix it
- nobody’s read the code in 3 months
- you have 3 weeks of downtime trying to re-onboard and fix it
- you lose significant % of your contracts and users
- your company is now dead
dex@dexhorthy

@gregpr07 this may surprise you that this is coming from me but I think we’re in for a 1-3 year period where stuff might break at 3am and if you’re relying on loops to fix it and nobody understands what’s under the hood, you’re looking at an existential threat to your company

258 replies · 565 reposts · 6.8K likes · 594.8K views
Slagar retweeted
shaurya
shaurya@shauseth·
there is a rhetoric in ai rn that vibing and half-assing is the future of technology. do not fall for this psyop. the future is deep understanding and mastery. always has been
101 replies · 902 reposts · 7.9K likes · 138.2K views
fforres
fforres@fforres·
Average latency in @Cloudflare workers from our previous AWS DB, to @PlanetScale: from 255ms to 10ms (connection latency is also reduced, from 151ms to 3.7ms). Combined with @zero__ms, it's chef's kiss
fforres tweet media
18 replies · 7 reposts · 260 likes · 35.4K views
Slagar
Slagar@Slagar·
@stokry_45 @elithrar My team built this feature and I want to know in which way it's a "mess" so we can fix that.
0 replies · 0 reposts · 0 likes · 51 views
Slagar retweeted
Matt Silverlock 🐀
Matt Silverlock 🐀@elithrar·
You can now tell us exactly where your existing cloud infra is and we'll place your compute as close as possible. single-digit latency to your DB and legacy cloud infra. no guessing.
Matt Silverlock 🐀 tweet media
29 replies · 37 reposts · 622 likes · 58.2K views
Stokry
Stokry@stokry_45·
@elithrar Yeah, low latency is key. But AWS regions are still a mess for some. Better check that
1 reply · 0 reposts · 0 likes · 416 views
Slagar retweeted
Matt Silverlock 🐀
Matt Silverlock 🐀@elithrar·
winning combination: @cloudflare Worker placed right next to my @planetscale Postgres DB so that queries are fast (and connection pooled, for free):
Matt Silverlock 🐀 tweet media
9 replies · 11 reposts · 258 likes · 20K views
Slagar retweeted
Evgenii Ivanov
Evgenii Ivanov@eivanov89·
I fixed ann-benchmarks and got ~20× higher QPS — without touching the database. Here’s why many published DBMS results are unreliable 👇 blog.ydb.tech/are-published-…
1 reply · 4 reposts · 11 likes · 2.3K views
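The linked post argues that many published DBMS numbers actually measure the benchmark client, not the database. A minimal back-of-the-envelope sketch of that effect (all numbers here are hypothetical, and `measured_qps` is an illustrative helper, not anything from ann-benchmarks):

```python
def measured_qps(server_latency_s: float, client_overhead_s: float,
                 concurrency: int = 1) -> float:
    """QPS a closed-loop benchmark client observes: each in-flight
    request pays server latency plus the client's own per-request
    overhead (serialization, a single-threaded event loop, etc.)."""
    return concurrency / (server_latency_s + client_overhead_s)

# Hypothetical numbers: a 0.5 ms server behind a client that burns
# 9.5 ms per request. The slow client caps throughput near ~100 QPS
# no matter how fast the server is; fixing the client alone would
# report ~20x more QPS without touching the database.
slow_client = measured_qps(0.0005, 0.0095)  # client-bound, ~100 QPS
fast_client = measured_qps(0.0005, 0.0)     # server-bound, ~2000 QPS
print(slow_client, fast_client)
```

This is why a harness fix alone can multiply reported QPS: the database was never the bottleneck in the first place.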
Slagar retweeted
Gwen (Chen) Shapira
Gwen (Chen) Shapira@gwenshap·
Do you know about Postgres' HOT chains optimization? One more reason to be mindful about index creation.

When you UPDATE a row, Postgres creates a new version of the row and marks the old version as dead. This is fundamental to Postgres's MVCC. The dead rows will eventually get cleaned up by VACUUM. Naturally, Postgres also has to update all the indexes pointing to the dead row, adding overhead to the update operation and also resulting in more dead tuples (in the index) and more work for VACUUM. Already, this is a good reason to be mindful about index creation.

HOT chains, or Heap-Only Tuple chains, are an optimization introduced to reduce index bloat during updates. When you update a row AND the new version can fit on the same page as the old version, AND none of the indexed columns changed, Postgres can create a heap-only tuple. This new tuple is linked to the old one via a chain pointer. Crucially, the indexes don't need to be updated: the index still points to the old tuple location, and Postgres follows the chain to find the current version. This dramatically reduces write amplification, since updating one row doesn't require updating every index on that table.

HOT chain pruning is the process of cleaning up these chains. When Postgres needs space on a page or when VACUUM runs, it can prune HOT chains by removing intermediate dead tuples that are no longer visible to any transaction. This is much cheaper than full VACUUM because it doesn't need to touch indexes at all - it just compacts the heap page and updates the chain pointers.

However, there's a catch. For HOT to work, the new tuple must fit on the same page AND no indexed columns can change. Every index you add to a table is another column that, if updated, rules out the HOT optimization. If you have an index on column A and you update column B, that's fine for HOT. But if you have indexes on both A and B, updating either breaks HOT optimization.

Many developers reflexively create indexes on every column that appears in a WHERE clause, but each index has a hidden cost beyond storage and insert performance. Every additional index reduces the likelihood that updates can use HOT optimization. In workloads with frequent updates, this can lead to severe index bloat, as each update creates new index entries.

Best practices:
- Consider your query and update patterns when choosing what to index. If a table is frequently updated but rarely queried on certain columns, those columns are poor index candidates.
- Consider using covering indexes strategically rather than multiple separate indexes.
- Monitor HOT efficiency by querying pg_stat_user_tables for the ratio of n_tup_hot_upd to n_tup_upd - a low ratio indicates HOT isn't working well.
- Use pg_stat_user_indexes to find unused indexes, pg_qualstats to find popular WHERE columns, and HypoPG to check that new indexes will be useful.
8 replies · 19 reposts · 175 likes · 16.1K views
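The HOT-efficiency check from the thread can be sketched concretely. The SQL string below uses real pg_stat_user_tables columns (n_tup_upd, n_tup_hot_upd); the Python helper and the sample counts fed to it are illustrative, assuming you run the query through whatever Postgres client you use:

```python
# Query against the standard pg_stat_user_tables view: fraction of
# updates per table that were heap-only (no index maintenance needed).
HOT_RATIO_SQL = """
SELECT relname,
       n_tup_upd,
       n_tup_hot_upd,
       round(n_tup_hot_upd::numeric / NULLIF(n_tup_upd, 0), 3) AS hot_ratio
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC;
"""

def hot_update_ratio(n_tup_hot_upd: int, n_tup_upd: int) -> float:
    """Fraction of updates that were HOT; 0.0 if the table had no updates."""
    if n_tup_upd == 0:
        return 0.0
    return n_tup_hot_upd / n_tup_upd

# Hypothetical stats: 9,000 of 10,000 updates HOT is healthy; 500 of
# 10,000 suggests indexes (or full pages) are blocking HOT updates.
print(hot_update_ratio(9_000, 10_000))  # 0.9
print(hot_update_ratio(500, 10_000))    # 0.05
```

A persistently low ratio on an update-heavy table is the signal to revisit that table's indexes, per the best practices above.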
Slagar retweeted
zeb
zeb@zebassembly·
My team is hiring! We're looking for a full-stack engineer that can help us make observability for Workers better. If you're in the austin/london/lisbon area and looking to build the Cloudflare developer platform on the Cloudflare developer platform please check the replies!
21 replies · 28 reposts · 472 likes · 33.1K views
Slagar
Slagar@Slagar·
@cakemakerjake @CloudflareDev Hyperdrive eng here. We would love to take a look. If you'd like, please DM me your Hyperdrive ID, or head over to the Hyperdrive Discord channel with some details.
0 replies · 0 reposts · 2 likes · 65 views
Jakob Norlin
Jakob Norlin@cakemakerjake·
@CloudflareDev We followed that flow, and ended up with a warning about a drained connection pool. I don't think Hyperdrive is tuned for the new instances? We had to manually cap Hyperdrive connection limit afterwards
2 replies · 0 reposts · 0 likes · 628 views
Slagar retweeted
Brendan Dolan-Gavitt
Brendan Dolan-Gavitt@moyix·
Some sites ask for 2FA but then don’t actually validate the second factor, leaving you with effectively one factor. Computer scientists will immediately recognize this as an example of an auth-by-one error
1 reply · 4 reposts · 36 likes · 7.1K views
Slagar retweeted
Dane Knecht 🦭
Dane Knecht 🦭@dok2001·
. @Cloudflare has deployed a new WAF rule to protect customers from a Remote Code Execution vuln that impacts React Server Components (CVE-2025-55182) used in frameworks like @nextjs. No action needed; the rule is enabled by default. You can learn more in our blog post blog.cloudflare.com/waf-rules-reac…
React@reactjs

There is a critical vulnerability in React Server Components disclosed as CVE-2025-55182 that impacts React 19 and frameworks that use it. A fix has been published in React versions 19.0.1, 19.1.2, and 19.2.1. We recommend upgrading immediately. react.dev/blog/2025/12/0…

33 replies · 159 reposts · 1.6K likes · 232.7K views
Slagar retweeted
rare.jpg
rare.jpg@rare_jpg·
rare.jpg tweet media
77 replies · 7.1K reposts · 71.7K likes · 1.3M views
Slagar
Slagar@Slagar·
@Mindgamesnl Hyperdrive Tech Lead here. Love the blog! If it gives you any problems, please hit up the discord, we're happy to help.
1 reply · 0 reposts · 1 like · 29 views