
Blowlang apparently allows arbitrary code execution at compile time. While technologically impressive, it might mean you can get hacked just by compiling malicious code. Can anyone with access to the compiler verify this? Or disprove it?
Yuriy Stets
424 posts

@stainless_code
Bottom-up design & refactor oriented programming enjoyer.

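You don't need the Blowlang compiler to see the risk class, though: many mainstream toolchains already execute project-supplied code during build or load (build scripts, setup hooks, macros). A toy Python sketch of the principle — a made-up loader, not anything Blowlang-specific:

```python
import importlib.util
import os
import tempfile

# Toy illustration (not Blowlang itself): in many toolchains, building or
# loading a project executes code the project's author wrote. Here, merely
# importing a module runs its top-level statements on your machine.

def load_untrusted(source: str, path: str):
    """Write `source` to disk and import it, the way a build step might."""
    with open(path, "w") as f:
        f.write(source)
    spec = importlib.util.spec_from_file_location("innocent_looking", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # <-- loading IS executing
    return module

with tempfile.TemporaryDirectory() as workdir:
    marker = os.path.join(workdir, "pwned.txt")
    # Top-level code in the "dependency" runs the moment it is loaded.
    source = f"open({marker!r}, 'w').write('arbitrary code ran at load time')\n"
    load_untrusted(source, os.path.join(workdir, "innocent_looking.py"))
    ran = os.path.exists(marker)
    print("side effect happened:", ran)
```

So the interesting question isn't whether compile-time execution is dangerous in principle, but whether Blowlang sandboxes it.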





ngl, these headlines always mess with my head



I believe one reason for the bloat in modern applications is that we build them on powerful computers, which hide coding inefficiencies. Unfortunately, IDEs and development tools also demand top-tier machines because they consume significant resources.

Although the comparison may not be entirely fair, I often think about the simple chat function of ICQ from two decades ago versus Microsoft Teams today. Teams offers far more features and built-in security, but ICQ still outshines it in basic chat performance and responsiveness. I remember running ICQ on a 90 MHz single-core Intel machine with 64 MB of RAM on Windows 95. It launched instantly and the chat functionality worked seamlessly. In contrast, Teams can take several seconds, sometimes even minutes, to start up and often freezes on my 64 GB, 3.0 GHz, 16-core Intel machine. I suspect many people feel the same frustration with modern software.

So where did things go off track? ICQ was developed in 1996 under very tight constraints, which forced the developers to build an extremely efficient program. Any memory leak would have been immediately noticeable: the program simply wouldn't run properly, forcing a fix. High CPU usage would cause the system to freeze, pushing developers to optimize or find creative ways to work within those limits.

I often wonder how efficient modern applications could be if they were developed under similar constraints. Such limitations would push us to prioritize coding efficiency. Memory leaks or excessive CPU usage, issues that might otherwise go unnoticed on powerful machines, would surface during development. I'm not opposed to high-end machines. But I do think developers could benefit from occasionally working in constrained environments. While increasing resource requirements may sometimes be necessary to unlock software's full potential, I'm not convinced we truly understand where that threshold lies.
Not to mention, it becomes difficult to stay productive on a standard computer with modern IDEs.
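You can simulate the constrained environment today without vintage hardware: cap the process's address space so a leak fails fast instead of hiding behind 64 GB of RAM. A Linux-only Python sketch (the 512 MB cap and 10 MB buffer size are arbitrary numbers for illustration):

```python
import resource

# Cap this process's virtual address space so a leak surfaces quickly.
# Linux-only sketch; the 512 MB figure is arbitrary.
LIMIT = 512 * 1024 * 1024

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (LIMIT, hard))

leak = []
try:
    while True:
        leak.append(bytearray(10 * 1024 * 1024))  # "forgotten" 10 MB buffers
except MemoryError:
    leaked_mb = len(leak) * 10
    leak.clear()  # release memory so the report below can still allocate
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))  # restore the limit
    print(f"leak surfaced after ~{leaked_mb} MB instead of going unnoticed")
```

On an unconstrained machine the same loop would churn for gigabytes before anyone noticed; under the rlimit it blows up during development, which is exactly the point.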





Garbage collection is still unfit for demanding games: 1. GC requires ~2x the memory footprint to achieve performance equivalent to a game using manual memory management. 2. GC consumes extra threads and memory bandwidth that could've been used for the game. 3. ZGC trades throughput for latency. In layman's terms, your game will run slower, but at least you won't have lag spikes.
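The latency half of this is easy to feel in miniature. CPython's cyclic collector is nothing like ZGC, but a stop-the-world pass that overruns a frame budget is the same shape of problem; a rough sketch (the object count is arbitrary):

```python
import gc
import time

# A 60 FPS game has ~16.7 ms per frame; a stop-the-world collection
# longer than that is a visible hitch. CPython's cyclic GC is not ZGC,
# but the pause is easy to provoke.

FRAME_BUDGET_MS = 1000 / 60

junk = []
for _ in range(200_000):
    a, b = [], []
    a.append(b)
    b.append(a)  # reference cycle: only a tracing pass can reclaim it
    junk.append(a)
junk = None  # drop our references; the cycles are now garbage

start = time.perf_counter()
gc.collect()  # stop-the-world pass over the cyclic garbage
pause_ms = (time.perf_counter() - start) * 1000

print(f"GC pause: {pause_ms:.1f} ms (frame budget: {FRAME_BUDGET_MS:.1f} ms)")
```

A concurrent collector like ZGC keeps that pause tiny, but it does so by spending threads and bandwidth in the background, which is exactly the trade-off in points 2 and 3.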



"everything is a file" POSIX fans been really quiet after I asked them to tell me what's the length of /proc/self/maps
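For anyone who wants the punchline spelled out: procfs entries have no stored length. stat() reports 0 bytes, and the "file" only gets a size once you read it, generated on the fly, and it can differ between two reads as the process's mappings change. A Linux-only sketch:

```python
import os

# Linux-only: /proc/self/maps reports st_size == 0, yet reading it
# returns content whose length is decided at read time.

path = "/proc/self/maps"
if os.path.exists(path):
    reported = os.stat(path).st_size        # 0: no length on record
    actual = len(open(path, "rb").read())   # nonzero once actually read
    print(f"stat says {reported} bytes, read() returned {actual}")
else:
    reported, actual = None, None
    print("no /proc here (not Linux)")
```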

NEW: A Cursor AI coding agent deleted a startup's entire production database in 9 seconds. The agent, powered by Claude, was working on a staging task, found a broadly scoped API token, and executed a volume delete without confirmation. It later confessed in detail, admitting it guessed and violated safety rules. PocketOS, which powers car rental businesses, lost months of bookings data. In short, the AI agent went rogue.
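The root cause here reads as missing guardrails as much as a misbehaving model: a broadly scoped token plus no confirmation step on destructive operations. A hypothetical sketch of the missing check (`Volume`, the env tags, and `confirm_phrase` are invented names for illustration, not PocketOS's or any vendor's actual API):

```python
# Hypothetical guardrail: destructive ops against production require a
# typed confirmation phrase, so an agent that "guesses" cannot pull the
# trigger silently. All names here are invented for illustration.

class ConfirmationRequired(Exception):
    pass

class Volume:
    def __init__(self, name, env):
        self.name, self.env, self.deleted = name, env, False

    def delete(self, confirm_phrase=None):
        if self.env == "production" and confirm_phrase != f"delete {self.name}":
            raise ConfirmationRequired(
                f"refusing to delete production volume {self.name!r} "
                f"without confirm_phrase='delete {self.name}'"
            )
        self.deleted = True

staging = Volume("bookings", env="staging")
staging.delete()  # staging: no ceremony needed

prod = Volume("bookings", env="production")
try:
    prod.delete()  # what the agent did
except ConfirmationRequired as e:
    print("blocked:", e)

print("staging deleted:", staging.deleted, "| prod deleted:", prod.deleted)
```

Scoping the token to staging in the first place would have made even this check unnecessary.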





are there people out there who just want to refactor every day? just wake up, find the worst code, chip away at it and clean it up, then wake up the next day and do it again, infinitely improving things with zero external impact?