

David Poole

@davpoole
Scientist, Educator, Artificial Intelligence Researcher

The only perspective on AI that matters

Yann LeCun tells PhD students there is no point working on LLMs because they are only an off-ramp on the highway to ultimate intelligence.



Spoke to @geoffreyhinton about OpenAI co-founder @ilyasut's intuition for scaling laws👇. "Ilya was always preaching that you just make it bigger and it'll work better. And I always thought that was a bit of a cop-out, that you're going to have to have new ideas too. It turns out Ilya was basically right." Link to full interview in 🧵.

📢Thanks to @karthikv792 and @kayastechly's tireless efforts, here is the paper analyzing the (in)effectiveness of Chain of Thought prompting. The good news is that everything I said here and in my talks about CoT delusions still holds. The better news is that Karthik and Kaya have done more extensive experiments with both GPT4 and Claude 3 Opus. 👉 arxiv.org/abs/2405.04776 tl;dr: LLMs may well be smarter than that dog in the Far Side cartoon (although I am sure @ylecun will push back vociferously😋), but there is little reason to believe that we can advise them the way we advise our friends--and expect them to operationalize that advice.

What’s a common misconception about machine learning that you wish more people understood?

We really haven't thought through the long-term negative externalities of LLMs.

We don't know how this is adding up with the massive genAI acceleration. All we know is that Microsoft is now funneling >$10B into infrastructure expansions every *quarter* to support this growth. @dylan522p calls this “the largest infrastructure buildout that humanity has ever seen.”



Our update to what happened here: openai.statuspage.io/incidents/ssg8…