KSE @semanticbeeng · 5.9K posts
Shipping/bridging Engineering ⇆ Science #SoftwareArchitecture #FunctionalProgramming #MachineLearning #BigData #MachineLearningEngineering #CompilerDesign

Was sick of cloud hosting, so I repurposed an old NUC to run Proxmox. Now I have C2 redirectors (Cloudflared), RedELK, C2 servers, a phishing VM, a Windows dev VM, and a Kali instance all on one Proxmox host, which I can reach through a Tailscale LXC. Thanks to community-scripts.github.io/ProxmoxVE/

I’m compiling together a list of which Open Source Operating Systems (Linux, BSD, etc.) do (or do not) plan to comply with new Age Verification laws. I need to track these for the purpose of reporting on the story, and I figured others would find having a list handy as well. If you’d like to contribute, feel free. Will be adding to the list as more systems are confirmed as either implementing or opposing Age Verification. github.com/BryanLunduke/D…






SN85 @vidaio_: decentralized video upscaling + compression. Powered by SN75 @hippius_subnet: cheap, always-on distributed data storage. #Bittensor, the AI infra stack. τ #SN75 #SN85




















ProbeLab studied the Monero network and found most public nodes run on one cloud provider, many acting like “spy” nodes. While Monero’s payments stay private, its network layer shows centralization and possible surveillance risks. probelab.io/blog/peering-i…
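As a toy illustration of the kind of centralization metric such a study reports (this is not ProbeLab's actual methodology, and the provider names below are made up), you can map each discovered node to its hosting provider and measure how concentrated the network is:

```python
from collections import Counter

def provider_concentration(node_providers):
    """Given one hosting-provider label per discovered node, return the
    top provider and the fraction of all nodes it hosts."""
    counts = Counter(node_providers)
    provider, n = counts.most_common(1)[0]
    return provider, n / len(node_providers)

# Hypothetical crawl results: provider label for each public node found.
nodes = ["provider-a"] * 60 + ["provider-b"] * 25 + ["provider-c"] * 15
top, share = provider_concentration(nodes)
print(top, share)  # prints: provider-a 0.6
```

A single provider hosting a majority of reachable nodes is exactly the network-layer centralization (and surveillance exposure) the post describes, even when the payment layer itself stays private.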

Vibe-coding is not the same as AI-assisted engineering.

A recent Reddit post described how a FAANG team uses AI, and it sparked an important conversation about semantics: "vibe coding" versus professional "AI-assisted engineering". While the post was framed as an example of the former, the process it detailed (complete with technical design documents, stringent code reviews, and test-driven development) is a clear example of the latter, imo. This distinction is critical because conflating the two risks both devaluing the discipline of engineering and giving newcomers a dangerously incomplete picture of what it takes to build robust, production-ready software.

As a reminder: "vibe coding" is about fully giving in to the creative flow with an AI (high-level prompting), essentially forgetting the code exists. It involves accepting AI suggestions without deep review and focusing on rapid, iterative experimentation, which makes it ideal for prototypes, MVPs, learning, and what Karpathy calls "throwaway weekend projects." This approach is a powerful way for developers to build intuition and for beginners to flatten the steep learning curve of programming. It prioritizes speed and exploration over the correctness and maintainability required for professional applications.

There is a spectrum here: pure vibe coding at one end, more deliberate approaches in the middle (a little planning, spec-driven development, curating enough context), and full AI-assisted engineering across the software development lifecycle at the other.

In stark contrast, the process described in the Reddit post is a methodical integration of AI into a mature software development lifecycle. This is "AI-assisted engineering," where AI acts as a powerful collaborator, not a replacement for engineering principles. In this model, developers use AI as a force multiplier to handle tasks like generating boilerplate code or writing initial test cases, but always within a structured framework.

Crucially, the big difference is that the human engineer remains firmly in control: responsible for the architecture, reviewing and understanding every line of AI-generated code, and ensuring the final product is secure, scalable, and maintainable. The 30% increase in development speed mentioned in the post is the result of augmenting a solid process, not abandoning it.

For engineers, labeling disciplined, AI-augmented workflows as "vibe coding" misrepresents the skill and rigor involved. For those new to the field, it creates the false and risky impression that one can simply prompt their way to a viable product without understanding the underlying code or engineering fundamentals.

If you're looking to do this right, start with a solid design, subject everything to rigorous human review, and treat AI as an incredibly powerful tool in your engineering toolkit, not as a magic wand that replaces the craft itself.
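The test-driven workflow described above can be sketched in miniature: the engineer writes the failing tests first as the specification, and an AI-drafted implementation is only accepted after it passes the suite and a line-by-line human review. (The `slugify` function below is a hypothetical example for illustration, not something from the Reddit post.)

```python
import unittest

# Step 1: the engineer writes the specification as tests, before any code exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("AI, assisted!"), "ai-assisted")

# Step 2: an AI drafts the implementation; the engineer reads every line
# before accepting it, exactly as they would a teammate's pull request.
def slugify(text: str) -> str:
    """Lowercase, drop non-alphanumeric characters, join words with hyphens."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text)
    return "-".join(w.lower() for w in cleaned.split())

if __name__ == "__main__":
    unittest.main()  # Step 3: merge only when the suite is green.
```

The point is the ordering: the tests encode human intent before generation, so the AI is filling in a constrained blank rather than defining the behavior itself.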



Can AI learn what to remember and when to update its memory?

Mem-α uses reinforcement learning to teach LLM agents how to manage complex, multi-component memory systems, without relying on hand-crafted rules. Trained on diverse multi-turn interactions, the agent learns to extract, store, and update information, with rewards tied to downstream QA accuracy.

Results: strong gains over existing memory-augmented agents and impressive generalization, handling 400k+ token histories despite training only on 30k-token examples.

Mem-α: Learning Memory Construction via Reinforcement Learning
Anuttacon, UC San Diego, Stanford
Paper: arxiv.org/abs/2509.25911
Code: github.com/wangyu-ustc/Me…
Model: huggingface.co/YuWangX/Memalp…
Our report: mp.weixin.qq.com/s/O9vmwD_khWfM…
📬 #PapersAccepted by Jiqizhixin
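The training signal is simple in spirit: the agent takes memory actions over a long interaction, and its reward is downstream QA accuracy over the memory it built. Here is a toy sketch of that reward loop (illustrative only; Mem-α's actual action space, memory architecture, and reward are defined in the paper):

```python
# Toy sketch of an RL reward tied to downstream QA accuracy,
# in the spirit of Mem-alpha (not the paper's implementation).

def apply_memory_actions(turns, policy):
    """The policy decides, per incoming fact, whether to store it (INSERT),
    overwrite a stale entry (UPDATE), or skip it (NOOP)."""
    memory = {}
    for key, value in turns:
        action = policy(key, value, memory)
        if action in ("INSERT", "UPDATE"):
            memory[key] = value
        # NOOP: the fact is judged not worth remembering
    return memory

def qa_reward(memory, qa_pairs):
    """Reward = fraction of downstream questions answerable from memory."""
    correct = sum(1 for q, a in qa_pairs if memory.get(q) == a)
    return correct / len(qa_pairs)

# Hypothetical interaction where a later turn revises an earlier fact,
# so the policy must learn that updating beats keeping the first value.
turns = [("city", "Paris"), ("job", "chef"), ("city", "Lyon")]
qa_pairs = [("city", "Lyon"), ("job", "chef")]

update_policy = lambda k, v, mem: "UPDATE" if k in mem else "INSERT"
print(qa_reward(apply_memory_actions(turns, update_policy), qa_pairs))  # prints: 1.0
```

In the real system the policy is the LLM agent itself and the reward is backpropagated through an RL algorithm; the point of the sketch is only that "what to remember and when to update" becomes a learned behavior because the reward is computed after the fact, from QA performance.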














