∿spencer.
@_ontologic
vice president of @conceptcountry // cohost of @ai_rebels // programmer, poet, poster

It's been almost 5 years. I'm still waiting for the copyright laundering scheme to somehow magically cure cancer.

i respect all the people who have never vibe coded, ever. they know their place; they know they're not technical, and they're not sheep that just blindly follow everyone else

Uh ok this is bad. Really bad tbh. Not for what it is now, but for the inevitable result: AI that can generate videos a human could never conceive of, probably unrecognizable in content, but inexplicably addictive. I mean something like literal hypnosis.

Entropy is H(p) = E_p[-log p(x)], your optimal expected code length when you know p. Shannon coding assigns symbol x a length of -log p(x), and on average you can't beat it.

Now suppose you don't know p. You believe it's q, so you build a code where x has length -log q(x). But the data is actually drawn from p, so your expected code length is:

H(p, q) = E_p[-log q(x)] = -sum over x of p(x) * log q(x)

That's cross-entropy: the bits you actually pay using q's codebook on p's data. KL is just the gap between what you pay and the optimum:

KL(p || q) = H(p, q) - H(p) = sum over x of p(x) * log(p(x) / q(x))

Or as a one-liner you could literally run:

kl = sum(p[x] * log(p[x] / q[x]) for x in support)

The story in three lines: H(p) is the floor, H(p, q) is what you actually pay with the wrong codebook, and KL is the penalty, which is always ≥ 0 because you can't beat the optimal code.

- Will Brown & Claude Opus 4.7
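
A minimal runnable sketch of that one-liner in Python, with hand-picked example distributions (the specific p and q below are assumptions for illustration, not from the post; log base 2 keeps the units in bits):

from math import log2

# true distribution p and mistaken belief q over the same three symbols
# (illustrative numbers only)
p = {"a": 0.5, "b": 0.25, "c": 0.25}
q = {"a": 0.25, "b": 0.25, "c": 0.5}
support = p.keys()

# H(p): the floor, your optimal expected code length when you know p
entropy = -sum(p[x] * log2(p[x]) for x in support)

# H(p, q): what you actually pay using q's codebook on p's data
cross_entropy = -sum(p[x] * log2(q[x]) for x in support)

# KL(p || q): the penalty, always >= 0
kl = sum(p[x] * log2(p[x] / q[x]) for x in support)

print(entropy, cross_entropy, kl)  # 1.5 1.75 0.25 for these particular p and q
assert abs(cross_entropy - (entropy + kl)) < 1e-12  # H(p, q) = H(p) + KL(p || q)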