

q(Alex Kiefer | everything else)
@exilefaker
I'm nothing i̵f̵ ̵n̵o̵t̵ ̵a̵ ̵m̵i̵n̵i̵m̵a̵l̵i̵s̵t̵ • ML / philosophy research @ M3CS • 🎶 @ exileFaker • dad

The bottleneck of current AI is simple: the techniques we use are still predicated on pattern memorization and retrieval, and thus they need *someone* to tell them which patterns to memorize (training data, RL envs...) That role cannot yet be played by AI in a truly open-ended and autonomous way. We can't yet remove the humans in the loop. In that sense, current AI is still purely a reflection of human cognition (both in terms of which tasks/goals it pursues and the patterns it uses to solve them). It isn't yet its own thing.

i knowww this take will be universally hated but i negatively update on the iq of anyone who believes in qualia or the hard problem of consciousness

Reading the paper, I now understand how their approach forms a thermodynamic computer of sorts. It literally thermalizes toward low-energy states via Gibbs dynamics! So the whole chip is effectively a massively parallel Gibbs sampler that continuously relaxes toward the equilibrium distribution of the energy function encoded by its weights!
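A toy illustration of that picture (my own sketch, not the chip's actual dynamics — the model, couplings, and temperature here are all invented for the example): single-site Gibbs updates on a small Ising-style energy model. Repeatedly resampling each spin from its conditional distribution makes the state relax toward the Boltzmann distribution p(s) ∝ exp(−E(s)) of the energy encoded by the weights.

```python
import numpy as np

# Hypothetical energy model: E(s) = -0.5 * s^T W s - b^T s,
# spins s_i in {-1, +1}, symmetric couplings W, biases b.
rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2          # symmetric couplings
np.fill_diagonal(W, 0.0)   # no self-coupling
b = rng.normal(scale=0.1, size=n)

def energy(s):
    return -0.5 * s @ W @ s - b @ s

def gibbs_sweep(s):
    # One sweep: resample each spin from its exact conditional
    # given all the others. P(s_i = +1 | rest) = sigmoid(2 * field_i).
    for i in range(n):
        field = W[i] @ s + b[i]                     # local field on spin i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

s = rng.choice([-1.0, 1.0], size=n)
for _ in range(200):   # "thermalization": repeated sweeps approach equilibrium
    s = gibbs_sweep(s)
print("final energy:", energy(s))
```

The chip does this in parallel analog hardware rather than in a Python loop, but the relaxation principle is the same: each local update only needs the local field, so the whole system drifts toward its equilibrium distribution without any global coordination.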

Ultra-short thread on LaMDA because I don't have time for anything else. 1. The problem of other minds (inferring consciousness/sentience from behavior) is one of the oldest in philosophy; it's astonishing that @Google, @CNN and @GaryMarcus have all managed to solve it easily
