
molleweide
@molleweide
Very ambition. Much failure. og retardmaxxer

@drilllllllllllm I wrote it from scratch with straightforward procedural code instead of importing 10,000 libraries and gluing them together in deranged ways

I’m a Windows user, just installed Linux for the first time. What should I do first?

@neogoose_btw I thought it was this: github.com/dylanaraps/fff

@boneGPT @kenwheeler probably not enough. from ideas to pretty pictures and fun sounds, people love ai, and saying they don't is a dishonest fabrication

🚨 MIT proved you can delete 90% of a neural network without losing accuracy.

Researchers found that inside every massive model there is a “winning ticket”: a tiny subnetwork that does all the heavy lifting. They showed that if you find it, reset it to its original initialization, and retrain, it reaches the same accuracy as the full-size version.

But there was a catch that killed adoption instantly: you had to train the massive model first to find the ticket. Nobody wanted to train twice just to deploy once. It was a cool academic flex, but useless for production.

The original 2018 paper (Frankle & Carbin, “The Lottery Ticket Hypothesis”) was mind-blowing. But today, after 8 years… we finally have the silicon-level breakthrough we were waiting for: structured sparsity.

Modern GPUs (NVIDIA Ampere+) don’t just “simulate” pruning anymore. They have native support for fine-grained structured sparsity (2:4 patterns: two zeros in every block of four weights) built directly into the hardware. It’s not theoretical, it’s silicon-level acceleration.

The math is terrifyingly good: a 2:4-sparse network needs half the weight memory bandwidth and gets up to 2× compute throughput on the sparse tensor cores. Real speed, with little to no accuracy loss.

Three things just made this production-ready in 2026:
- pruning-aware training (you train sparse from day one)
- native support in recent PyTorch releases (semi-structured sparse tensors) and the Apple Neural Engine
- the realization that AI models are 90% redundant by design

Evolution over-parameterizes everything. We’re finally learning how to prune. The era of bloated, inefficient models is officially over. The tooling finally caught up to the theory, and the winners are going to be the ones who stop paying for 90% of weights they don’t even need. The future of AI is smaller, faster, and smarter.
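
The paper's procedure boils down to a train → prune → rewind loop. A minimal PyTorch sketch of that loop, assuming a `train_fn` callable you supply (a hypothetical placeholder) and pruning only `nn.Linear` weights; the real recipe tunes rounds, rates, and which layers to touch:

```python
import copy

import torch.nn as nn
import torch.nn.utils.prune as prune


def find_winning_ticket(model: nn.Module, train_fn, rounds: int = 5,
                        frac_per_round: float = 0.2) -> nn.Module:
    """Iterative magnitude pruning with weight rewinding, lottery-ticket style."""
    init_state = copy.deepcopy(model.state_dict())  # theta_0, saved before training

    for _ in range(rounds):
        train_fn(model)  # 1. train the (masked) network to convergence

        # 2. prune the lowest-magnitude surviving weights in each Linear layer;
        #    repeated calls stack, so each round prunes a fraction of survivors
        for module in model.modules():
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=frac_per_round)

        # 3. rewind surviving weights to their initial values; prune() renamed
        #    each pruned parameter from "weight" to "weight_orig"
        for name, param in model.named_parameters():
            if name.endswith("weight_orig"):
                param.data.copy_(init_state[name.replace("weight_orig", "weight")])

    return model  # mask + original initialization = the “winning ticket”
```

Five rounds at 20% each leaves roughly a third of the weights (0.8^5 ≈ 0.33); the paper iterates further to reach 90%+ sparsity.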
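
On the hardware side, recent PyTorch exposes 2:4 sparsity through the prototype `torch.sparse.to_sparse_semi_structured` API (needs an Ampere-or-newer GPU, fp16/bf16 weights, and shapes aligned to what the kernels accept). A rough sketch; the `make_24_mask` helper is my own illustration of magnitude-based 2:4 pruning, not a library function:

```python
import torch
from torch.sparse import to_sparse_semi_structured


def make_24_mask(w: torch.Tensor) -> torch.Tensor:
    """Keep the 2 largest-magnitude weights in every group of 4 along the last dim."""
    groups = w.abs().reshape(-1, 4)
    keep = groups.topk(2, dim=1).indices          # positions of the 2 survivors
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(1, keep, True)
    return mask.reshape(w.shape)


# fp16 weights on an Ampere+ GPU; dims kept at multiples of 64 for alignment
W = torch.randn(256, 256, dtype=torch.float16, device="cuda")
W_sparse = to_sparse_semi_structured(W * make_24_mask(W))  # packed 2:4 form

x = torch.randn(256, 128, dtype=torch.float16, device="cuda")
y = torch.mm(W_sparse, x)  # dispatched to the sparse tensor cores
```

In practice you would prune and then fine-tune (or train sparsity-aware from the start) before converting, so accuracy recovers; the conversion itself just packs the surviving weights plus index metadata into roughly half the memory.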

Harmonic isolation + phase-locking to fundamental-tracked sawtooth. Part of a much bigger project I'm working on :)
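
Whatever the actual signal chain here is, the two named operations have standard toy forms: integrate a fundamental-frequency track to get a phase-continuous (“phase-locked”) sawtooth, and isolate one harmonic with a brute-force FFT mask. A numpy sketch under those assumptions; the function names and parameters are illustrative:

```python
import numpy as np

SR = 48_000  # sample rate in Hz


def phase_locked_saw(f0_track: np.ndarray) -> np.ndarray:
    """Sawtooth whose phase follows a per-sample f0 track; integrating the
    instantaneous frequency keeps the phase continuous as the pitch moves."""
    phase = 2 * np.pi * np.cumsum(f0_track) / SR  # running phase in radians
    return (phase / np.pi) % 2 - 1                # wrap to a [-1, 1) ramp


def isolate_harmonic(x: np.ndarray, f0: float, k: int, bw: float = 20.0) -> np.ndarray:
    """Crude harmonic isolation for a steady pitch: zero every FFT bin farther
    than bw Hz from the k-th harmonic, then invert the transform."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    X[np.abs(freqs - k * f0) > bw] = 0
    return np.fft.irfft(X, n=len(x))


saw = phase_locked_saw(np.full(SR, 110.0))      # 1 s of a 110 Hz sawtooth
third = isolate_harmonic(saw, f0=110.0, k=3)    # pull out the 330 Hz partial
```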
