alex rudloff
401 posts

@alexrudloff
Business + Product + Tech + Art

Much of the disillusionment in crypto comes from realizing that projects did the 5% of the work that was easy (whitepapers, marketing to retail), and the other 95% of the work (building a secure, well-governed, well-designed, usable financial system) is really fucking hard.



Reid Hoffman, co-founder of LinkedIn, on AI-driven meeting analysis: "Basically every organization should be saying, we're recording all of our meetings, and we're running an AI on the recording of the meetings, not just for the transcript, but also to do all of the suggested follow-ups. It's like, hey, did you mention this, you should probably let Nikolai know and make sure that that's the case, or, you should make sure that you get approval from Satya on the following thing, or this other group is doing this. All of that kind of thing is already here; the technology is there to go." --- From "Norges Bank Investment Management" YT Channel (link in comment)

we want to build tools to augment and elevate people, not entities to replace them.


Sorry but that just isn’t true—distillation attacks are illicit activity, not an industry standard. They are against the terms of service of all frontier AI labs. There is a reason OpenAI, Anthropic, and Google all put out reports warning about it: none of them do it.

Is AI actually helping us solve problems, or are we just addicted to the slot-machine dopamine hit of the prompt box?






$AMZN AWS CEO pushed back on the idea that AI is killing software jobs, saying Amazon is hiring as many developers as ever. He said AI agents are "exploding" across every industry & moving faster than expected, changing the developer job rather than eliminating it.

There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample-efficient than LLMs?

There are three possible answers:
1. Architecture and hyperparameters (transformers vs. whatever 'algo' cortical columns are implementing)
2. Learning rule (backprop vs. whatever the brain is doing)
3. Reward function

@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy. These are easy to work with, but they might be too simple for sample-efficient learning. Adam thinks that, in humans, the large number of highly specialized cells in the 'lizard brain' might actually be encoding information for sophisticated loss functions, used for 'training' in the more sophisticated areas like the cortex and amygdala.

Consider: the human genome is barely 3 gigabytes (compare that to the terabytes of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code.
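As a rough illustration of that last point, here's a minimal Python sketch contrasting a one-line cross-entropy loss with a hypothetical composite loss built from several innate reward terms. The specific terms and weights (curiosity, social feedback, effort) are made up for illustration, not taken from any neuroscience result; the point is only that a much richer loss is still just a few lines of code.

```python
import math

def cross_entropy(p_true, p_pred):
    # Standard ML loss: a single scalar signal derived from prediction error.
    return -sum(t * math.log(q) for t, q in zip(p_true, p_pred) if t > 0)

def composite_loss(p_true, p_pred, novelty, social_feedback, effort):
    # Hypothetical "lizard brain"-style loss: prediction error combined with
    # several innate, hand-specified reward terms. Terms and weights are
    # illustrative assumptions, not a real model of the brain.
    prediction_error = cross_entropy(p_true, p_pred)
    curiosity_bonus = -0.3 * novelty          # reward surprising inputs
    social_term = -0.5 * social_feedback      # reward approval signals
    effort_penalty = 0.1 * effort             # penalize metabolic cost
    return prediction_error + curiosity_bonus + social_term + effort_penalty
```

With all the extra signals set to zero, the composite loss reduces to plain cross-entropy; with a nonzero novelty or social signal, the same prediction is scored differently. A genome-sized specification of such terms is tiny compared to the weights they end up shaping.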







