

SvenCodes
12.7K posts

@killroy42
Weekly coding live streams on https://t.co/KbP5WVI4Hk Sven's SudokuPad available here: https://t.co/P1WGXm4WgM https://t.co/prAUHmqVQV https://t.co/qxDrrVdB0Q








Recently I had the opportunity to try Claude Opus 4.5 through the Copilot CLI. To be honest, I'm biased towards AI models not being that useful, but inspired by some people I follow, like @mitsuhiko and @SebAaltonen, I gave it a go anyway.

The task I wanted to try was optimizing a complex function in a C++ codebase (scientific code, so not very OOP). I instructed the agent on how to build and run the tests, and then I instructed it to write a single-header instrumentation profiler. I have written one myself using zones, so I told it to follow the same material I followed, and it mostly got it right (previously, Claude Sonnet 4.5 was not able to write the profiler properly). With the profiler written, I told it to instrument the specific function. This was all quite smooth, although I constantly had the feeling I couldn't fully trust it. The only downside was that I repeatedly had to instruct it to dig deeper.

With the function instrumented, we identified the one function that took the most time. It was a "blur" with a kernel width dependent on the bin position: basically a 1D convolution, but instead of a constant kernel, the kernel width was variable. The implementation uses expensive exponential and sqrt functions.

The AI first suggested some easy optimizations to avoid recomputing the kernel, but at some point we needed to move on to more advanced optimizations. Because we are using IPP, I suggested it try using that to improve performance. It did, the particular test we were focusing on passed, and the performance improved significantly. I then asked it to try using convolutions instead. It came up with a way of segmenting the range into regions where the kernel width changes by no more than 5% and doing a convolution with a constant kernel width per region. The test passed and we ended up with a whopping 8x performance improvement. I was really happy!
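For readers who haven't seen a zone-based instrumentation profiler before, a minimal sketch of the idea looks something like this. All names here are illustrative, not from the actual codebase: a scope guard starts a timer on construction and, on destruction, adds the elapsed time to a per-zone accumulator.

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

// Per-zone accumulated statistics.
struct ZoneStats {
    long long calls = 0;
    double total_ms = 0.0;
};

// Global zone name -> stats table (a real profiler would make this
// thread-safe; omitted here for brevity).
inline std::map<std::string, ZoneStats>& zone_table() {
    static std::map<std::string, ZoneStats> table;
    return table;
}

// RAII guard: times the scope it lives in and records the result.
struct ScopedZone {
    const char* name;
    std::chrono::steady_clock::time_point start;
    explicit ScopedZone(const char* n)
        : name(n), start(std::chrono::steady_clock::now()) {}
    ~ScopedZone() {
        const auto end = std::chrono::steady_clock::now();
        ZoneStats& s = zone_table()[name];
        s.calls += 1;
        s.total_ms +=
            std::chrono::duration<double, std::milli>(end - start).count();
    }
};

// Two-level concat so __LINE__ expands before pasting, giving each
// zone guard in a function a unique variable name.
#define PZ_CONCAT2(a, b) a##b
#define PZ_CONCAT(a, b) PZ_CONCAT2(a, b)
#define PROFILE_ZONE(name) ScopedZone PZ_CONCAT(profile_zone_, __LINE__)(name)

// Print accumulated timings, e.g. at program exit.
inline void dump_zones() {
    for (const auto& [name, s] : zone_table())
        std::printf("%-24s %8lld calls %10.3f ms\n",
                    name.c_str(), s.calls, s.total_ms);
}
```

Instrumenting a function is then just dropping `PROFILE_ZONE("blur");` at the top of each suspect scope and calling `dump_zones()` at the end of a run.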
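To make the "blur" concrete, here is a sketch of the two shapes involved: a naive variable-width smooth, and the segmented approximation the AI came up with. I'm assuming a Gaussian kernel and a 5% width tolerance; the function names, the ±3σ support, and the normalization are illustrative, not the actual scientific code.

```cpp
#include <cmath>
#include <vector>

// Naive variable-width blur: each output bin i is smoothed with a
// Gaussian whose width sigma[i] depends on the bin position, so the
// kernel weights must be recomputed for every bin (exp per tap).
std::vector<double> blur_variable(const std::vector<double>& in,
                                  const std::vector<double>& sigma) {
    const int n = (int)in.size();
    std::vector<double> out(n, 0.0);
    for (int i = 0; i < n; ++i) {
        const double s = sigma[i];
        const int half = (int)std::ceil(3.0 * s);  // +/- 3 sigma support
        double acc = 0.0, norm = 0.0;
        for (int k = -half; k <= half; ++k) {
            const int j = i + k;
            if (j < 0 || j >= n) continue;
            const double w = std::exp(-0.5 * (k / s) * (k / s));
            acc += w * in[j];
            norm += w;
        }
        out[i] = acc / norm;
    }
    return out;
}

// Segmented approximation: split [0, n) into runs where sigma stays
// within 5% of the run's start value, then blur each run with a single
// precomputed constant kernel. Each run is then a plain constant-kernel
// convolution, which is what lets a library routine (e.g. IPP) take over.
std::vector<double> blur_segmented(const std::vector<double>& in,
                                   const std::vector<double>& sigma) {
    const int n = (int)in.size();
    std::vector<double> out(n, 0.0);
    int start = 0;
    while (start < n) {
        int end = start + 1;  // grow while sigma varies by < 5%
        while (end < n &&
               std::fabs(sigma[end] - sigma[start]) <= 0.05 * sigma[start])
            ++end;
        const double s = sigma[start];  // representative width for the run
        const int half = (int)std::ceil(3.0 * s);
        std::vector<double> kernel(2 * half + 1);
        for (int k = -half; k <= half; ++k)
            kernel[k + half] = std::exp(-0.5 * (k / s) * (k / s));
        for (int i = start; i < end; ++i) {  // constant-kernel convolution
            double acc = 0.0, norm = 0.0;
            for (int k = -half; k <= half; ++k) {
                const int j = i + k;
                if (j < 0 || j >= n) continue;
                acc += kernel[k + half] * in[j];
                norm += kernel[k + half];
            }
            out[i] = acc / norm;
        }
        start = end;
    }
    return out;
}
```

The trade-off is visible in the sketch: the segmented version is approximate (it freezes sigma across each run), which is exactly the property that later made some tests fail.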
One thing worth mentioning: I get why people get hooked on agentic coding. It's the same feeling I got while waiting for physics simulations to finish. You want to see the results so badly, and when you do, you want to continue straight to the next iteration. This loop is very addictive!

After a night's sleep I ran the rest of the tests and, oops, some of them failed. I continued my previous session with the AI and told it about the issues. This is where things started falling apart. First it came up with all kinds of reasons why the tests could have failed, and at some point it felt like it was going in loops. It was also confused because I had committed the changes from another terminal. I wondered whether the context had become too large, so I started a new session and summarized what we had done. At first it seemed to be doing better, but it quickly started going in similar loops. I had to explicitly tell it to add asserts and run the tests in Debug mode to finally get it to find the issues. In many cases I watched it make obvious mistakes and wanted to scream at the screen to tell it what it should actually do. I think you can type things while it's working. What's your experience with that?

In the end, the non-convolution approach passed all tests with a moderate speedup of ~3x, while the convolution-based approach was still failing. The AI gave up looking for bugs because the approach is approximate, and suggested I use the non-convolution approach since it was already much faster than the original. So it basically gave up 😅.

To be honest, this whole process took many hours, and in that time I could probably have built something more performant myself. So for now my conclusion is that the models feel nearly there, but not all the way, which makes it really difficult for me to recommend them for serious work if you are already experienced in what you are doing. On the other hand, you do need that experience to guide the AI.
So I would say the biggest benefit for now is for menial tasks and for getting work done in between meetings! 🤡 I guess many people will suggest I didn't use the AI properly, so if you have any suggestions, do tell and I'll try them out.









Every engineer should be able to implement their own authentication. The fact that this is even a conversation is scary. What do you mean you can’t implement your own auth and must use an external or third-party service because some tech influencer told you it’s best practice? Are you stooopid or something?


On macOS, you can now set `background-blur = macos-glass-regular` (or clear) to use the macOS 26 native "glass" style for windows. Definitely not for everyone; I don't particularly like the aesthetic, but some folks in the community are wild for it. Screenshot from the contributor who did most of the work.



Nvidia, $NVDA, CEO says he sleeps "6-7 hours daily."

















