
Pour one out tonight for the reply bots. This week is going to hurt 🪦



The future of $PUMP

We have burned ALL bought-back $PUMP tokens, around $370M worth of purchases (~36% of circulating supply), to build trust with our community. On top of that, we have initiated a programmatic buyback *and burn* scheme at 50% of revenue for the next year, to instill trust, predictability, and sustainability in the underlying ecosystem, and to remove as much of the supply from circulation as possible. $PUMP is changing for the better, for token holders, the team, and the ecosystem. Learn more about why we’ve made these decisions and where we’re headed next 👇
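(For readers unfamiliar with how a revenue-funded burn works, here is a toy Python sketch of the mechanism described above. The function name, revenue figure, and token price are hypothetical placeholders; only the 50%-of-revenue share comes from the post.)

BUYBACK_SHARE = 0.50  # half of revenue goes to buybacks, per the announcement

def tokens_burned(revenue_usd, token_price_usd):
    # Hypothetical helper: spend BUYBACK_SHARE of revenue buying $PUMP
    # at the current price, then burn (permanently remove) those tokens.
    spend = revenue_usd * BUYBACK_SHARE
    return spend / token_price_usd

# Toy numbers, purely illustrative (not real revenue or price figures).
print(tokens_burned(revenue_usd=1_000_000, token_price_usd=0.005))  # 100000000.0 tokens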

There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample-efficient than LLMs?

There are three possible answers:
1. Architecture and hyperparameters (aka transformers vs whatever ‘algorithm’ cortical columns are implementing)
2. Learning rule (backprop vs whatever the brain is doing)
3. Reward function

@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy. These are easy to work with, but they might be too simple for sample-efficient learning. Adam thinks that, in humans, the large number of highly specialised cells in the ‘lizard brain’ might actually be encoding information for sophisticated loss functions, used for ‘training’ the more sophisticated areas like the cortex and amygdala.

Consider: the human genome is barely 3 gigabytes (compare that to the TBs of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code, which is small enough to fit in a genome-sized description.
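(Purely as an illustrative sketch, not taken from the post or from Marblestone's work: below is a toy contrast, in Python with NumPy, between the "simple" cross-entropy loss ML typically uses and a hypothetical richer loss with extra hand-written terms. Everything beyond plain cross-entropy, including the novelty bonus, the energy penalty, and their weights, is a made-up placeholder; the only point is that even the richer version is a few lines of code.)

import numpy as np

def cross_entropy(probs, target_idx):
    # The "simple" loss ML usually optimises: -log p(correct class).
    return -np.log(probs[target_idx] + 1e-12)

def composite_loss(probs, target_idx, novelty, energy_cost,
                   w_novelty=0.1, w_energy=0.01):
    # Hypothetical richer loss: the task term plus auxiliary signals
    # (a curiosity/novelty bonus and a metabolic-style cost penalty).
    # Even with extra terms it is still only a handful of lines.
    task_term = cross_entropy(probs, target_idx)
    return task_term - w_novelty * novelty + w_energy * energy_cost

# Toy usage with made-up numbers.
p = np.array([0.1, 0.7, 0.2])
print(cross_entropy(p, 1))                                  # ~0.357
print(composite_loss(p, 1, novelty=0.5, energy_cost=2.0))   # ~0.327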



Paul Tudor Jones: "2000 was the easiest bear market I've ever seen in my whole life. It's got so many similarities to right now."



@bitbitcrypto I'm a professional "I told you so" trader and I can tell you my PnL is -2M since 2023 (NGMI)

This seems really bad and I don't know what to do about it: not so much the differences in political attitudes, that's fine, but there's a strong gender divide in belief on straightforward factual questions like "is nuclear energy low-carbon?" yougov.com/en-gb/articles…

Voice notes are massive in some countries but not the UK. This is why. bbc.in/4t0zffb