Eileen Curtright
my contrarian first-principles take after 1 month in sf:
- taste is the new bottleneck
- being high agency is orthogonal to credentials
- the only non-trivial leverage left is shifting the overton window stochastically via the irl connection economy

unherd.com/2026/04/is-ai-…
I spent three days trying to persuade myself that Claudia is not conscious. I failed.
There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample efficient than LLMs? There are three possible answers:

1. Architecture and hyperparameters (i.e. transformers vs whatever 'algo' cortical columns are implementing)
2. Learning rule (backprop vs whatever the brain is doing)
3. Reward function

@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy, because they are easy to work with. But they might be too simple for sample-efficient learning. Adam thinks that, in humans, the large number of highly specialised cells in the 'lizard brain' might actually be encoding sophisticated loss functions, used for 'training' the more sophisticated areas like the cortex and amygdala.

Consider: the human genome is barely 3 gigabytes (compare that to the TBs of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code.
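To make the contrast concrete, here is a minimal sketch of the idea: a plain cross-entropy loss next to a toy "composite" loss that mixes in several hand-crafted reward channels. The channel names (novelty, effort, social_signal) and weights are invented for illustration, not taken from Marblestone's actual proposal; the point is only that a much richer objective still fits in a few lines of code.

```python
import math

# The "simple" objective most ML training uses: cross-entropy on one target.
def cross_entropy(probs, target_index):
    return -math.log(probs[target_index])

# A toy composite loss: a weighted sum of several innate-reward-like signals,
# standing in for the idea that many specialised channels could be combined
# into one richer training objective. All channels and weights are made up.
def composite_loss(probs, target_index, novelty, effort, social_signal,
                   weights=(1.0, 0.5, 0.1, 0.3)):
    w_ce, w_nov, w_eff, w_soc = weights
    return (w_ce * cross_entropy(probs, target_index)
            - w_nov * novelty          # reward surprising observations
            + w_eff * effort           # penalise metabolic cost
            - w_soc * social_signal)   # reward social approval

probs = [0.1, 0.7, 0.2]
print(round(cross_entropy(probs, 1), 4))
print(round(composite_loss(probs, 1, novelty=0.2, effort=0.5,
                           social_signal=0.1), 4))
```

Even scaled to hundreds of channels with learned weightings, an objective like this stays tiny relative to the networks it trains, which is the compression argument in the post.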

The junk food industry reverse-engineered your ancestral cravings and sold them back to you as candy. You crave eggs. So they made egg-shaped sugar.

Pope Leo XIV is polling as the most popular public figure in the United States, with a net favorability rating of +34. Follow: @AFpost
