
Andy
@AndyFJD
Love, live, life, proceed, progress. KGG 💕💍


Sure. We found out something crazy in 2017: the majority of what we thought was human intelligence doesn't involve human intelligence much at all. It's mostly fancy inner products and tokenized linear algebra on spaces of modest size. That was **AMAZING**. But human genius was not captured.

So we had a choice: should we scale what we learned, or find out whether more of human intelligence is also easy? Mostly we did the scaling thing, and the scaling worked very well in many ways.

But then a terrible thesis was born: "We can probably just scale away humans if we throw more energy, money, and compute at the problem." That investment thesis is at the core of this hype cycle: "This type of scaled AI is better at everything, bitches… or will be by 2027." And that likely isn't true. We are still better at a lot of stuff. And this truly, totally amazing product doesn't really work. It barely works at the highest level. But it is unbelievable nevertheless.
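The "fancy inner products" claim can be made concrete: the core computation of a transformer's attention step really is dot products between token vectors, rescaled and run through a softmax. A minimal NumPy sketch (the shapes, names, and random data here are purely illustrative, not any particular model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: inner products, softmax, weighted sum."""
    # Pairwise inner products between query and key vectors,
    # scaled by sqrt of the vector dimension for numerical stability.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over each row turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors.
    return weights @ V

# Illustrative toy data: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
```

Everything above is matrix multiplies, exponentials, and normalization, which is the sense in which this is "linear algebra on spaces of modest size" rather than anything resembling reasoning.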




You have too many opinions on them for a non-power user. You are not at the cutting edge of LLM usage. Your comments make sense for basic LLM usage (the most expensive models), but you're not building the powerful recursive harnesses and back pressure into them that get the AGI results they are capable of.
