Ubuntu HWE (Hardware Enablement) is a big turd: you think you're on a stable Ubuntu release, but you're silently getting the latest kernels. Bad news if you care about a stable environment or reproducible performance.
TIL: if you have isolated cores (isolcpus) and use numactl --cpunodebind without specifying --physcpubind, then the affinity mask of the created threads includes the isolated cores!
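A quick way to see this for yourself (a minimal sketch; the core numbers and the `ISOLATED` set are hypothetical, and `os.sched_getaffinity` is Linux-only):

```python
import os
import threading

# Hypothetical setup: suppose cores 2 and 3 were isolated via isolcpus=2,3.
ISOLATED = {2, 3}

def check_affinity():
    # os.sched_getaffinity(0) returns the affinity mask of the calling thread
    # (Linux only). Threads inherit the parent's mask, so a process started
    # with `numactl --cpunodebind=0` (without --physcpubind) can still end up
    # with isolated cores in its mask.
    mask = os.sched_getaffinity(0)
    leaked = mask & ISOLATED
    print("isolated cores in affinity mask:", sorted(leaked) or "none")
    return leaked

t = threading.Thread(target=check_affinity)
t.start()
t.join()
```

If this prints anything other than "none", new threads are eligible to run on cores you thought were isolated; `--physcpubind` pins the mask explicitly.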
Looking at a single histogram is useful to get an overall impression of latency behavior.
But interval histograms (for example, one every second) are far more useful because they reveal how latency evolves over time.
Luckily, HdrHistogram provides built-in support for this.
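The idea behind interval histograms, as a toy sketch (this is *not* the HdrHistogram API, just the reset-on-read pattern it implements far more efficiently):

```python
import random
from collections import Counter

class IntervalRecorder:
    """Toy reset-on-read recorder: each call to interval_histogram()
    returns the counts accumulated since the previous call, so you get
    one histogram per interval instead of one for the whole run."""

    def __init__(self):
        self._counts = Counter()

    def record(self, latency_us):
        # Bucket latencies into powers of two to keep the toy simple.
        self._counts[1 << latency_us.bit_length()] += 1

    def interval_histogram(self):
        # Swap in a fresh counter and hand back the old one.
        snapshot, self._counts = self._counts, Counter()
        return snapshot

rec = IntervalRecorder()
for _ in range(1000):
    rec.record(random.randrange(1, 10_000))

h1 = rec.interval_histogram()  # everything recorded so far
h2 = rec.interval_histogram()  # empty: counts were reset on the previous read
```

Dumping one such histogram per second makes latency spikes visible in time, where a single whole-run histogram would just smear them out.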
@epragt Can't you ask Claude to work on the plugin, look out for bugs, create a GitHub ticket, reply to Red Hat, make the fix, test it, make a PR, and post it on Twitter? Or could it be that's exactly what happened?

Working on the IntelliJ Astro plugin, which uses Red Hat's LSP4IJ. While upgrading to the latest version, Claude Code found a bug. It then created a GitHub issue. Red Hat asked if I could send a fix. So Claude forked the LSP4IJ repo, fixed the bug, tested it against my project, and made a PR to address it.
I find AI quite useful for exploratory redesign. It's easy to change large sections of code, see the outcome of the redesign, and figure out the roadblocks. And when actually doing the redesign, I can peek at the experimental branch.
For deep OS- and CPU-level design and debugging sessions, I find Claude significantly better than ChatGPT: it shows a much better understanding and misattributes root causes far less often. ChatGPT's primary advantage is its less restrictive usage limits.
@epragt It depends. On my HQPlayer frontend, I don't really care much; I just want something I can play music with. But on my toy Unix, I'm fully in charge of all aspects. I do enjoy the debugging sessions and design discussions... but in the end, nothing gets built without me understanding it.
I'm not sure what counts as a small project, or what your use case is, but it's probably less suitable for building a compiler than for building a React/Tailwind frontend, which is exactly what I use it for (plus the backend). You're welcome to review the output, but I think it's on the level of most OK engineers, just 100x faster. I tell it to refactor bits here and there, but in my experience it's quite consistent, which is quite important to me.
LLMs completely change the economics of software development. Instead of just producing AI slop, they enable structural improvements where the cost would normally outweigh the benefits; improvements in software design are now basically "free".
This creates massive opportunity in terms of code quality, which in turn makes it cheaper to deliver additional features.
As an example, today I migrated from UUIDs to autoincrement ids plus base58-encoded public IDs (which I use to expose data externally). The move from UUID to BIGSERIAL lowered storage requirements (-40%) and improved performance (about 5%). Not huge numbers, but refactoring with an LLM makes the effort a net positive, despite touching over 80 files and several hundred lines of code.
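The public-ID half of the scheme can be sketched like this (a hypothetical illustration of base58 over integer ids, using the common Bitcoin alphabet; the function names are mine, not from the actual codebase):

```python
# Base58 drops 0, O, I and l to avoid visually ambiguous public IDs.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def encode_id(n: int) -> str:
    """Encode a non-negative integer id (e.g. a BIGSERIAL) as base58."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n > 0:
        n, rem = divmod(n, 58)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

def decode_id(s: str) -> int:
    """Decode a base58 string back to the internal integer id."""
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    return n
```

The internal BIGSERIAL never leaves the database; only the short base58 string is exposed, so external IDs stay compact without leaking a raw sequential counter format.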
I finally have the kernel up and running with virtual memory. I tried segmentation-based 'virtual addresses' first, but that was a never-ending disaster, so I decided to take the leap and switch to proper paging.
Is it me, or are modern game consoles (PS5) just insanely complicated to use? Just enabling 2 players is ridiculously complex. I don't understand why anyone wants to pay for this garbage. Game consoles used to be extremely simple.
@jerrinot Terrible place. I recently helped someone there with an Aeron question related to BDP; it took me a while to decipher the question, but all the info was there. It had already been downvoted multiple times in the meantime.
My article dissecting a kernel bug is on the Hacker News frontpage and the very first comment happens to be positive! What happened to the famously mean and cynical HN crowd? 🤯
Do you know how even very smart people fail at performance testing? They often underestimate how many things — sometimes obvious ones — they simply don’t know. Active benchmarking helps, but it requires humility and a scientific mindset.
Two 'big' tasks I want to focus on during the Christmas holidays: (1) adding preemptive scheduling on top of the current cooperative-only scheduler, and (2) switching to musl/BusyBox.
Got a ton of stuff added to my toy Punix implementation: brk/sbrk, blocking via wait queues (so no more spinning), chdir/getcwd, a ton of fixes and cleanup in the scheduling logic, shrinking of the binaries, etc.
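The blocking-instead-of-spinning idea can be sketched in user space (a toy analogue only; the real kernel version would park the task in the scheduler, not use a condition variable — all names here are mine):

```python
import threading

class WaitQueue:
    """Toy user-space analogue of a kernel wait queue: instead of burning
    CPU spinning on a flag, waiters block until explicitly woken."""

    def __init__(self):
        self._cond = threading.Condition()
        self._ready = False

    def wait(self):
        with self._cond:
            while not self._ready:      # re-check: guards against spurious wakeups
                self._cond.wait()       # sleeps; releases the lock while blocked

    def wake_all(self):
        with self._cond:
            self._ready = True
            self._cond.notify_all()     # wake every blocked waiter
```

Same shape as the kernel pattern: the waiter sleeps on the queue, the event source wakes it, and the loop around the condition handles wakeups that race with the state change.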