Philip Bell
@PhilipfvBell
Making Learning Visual | previously @georgiatech @harvard
388 posts

jaw dropped at the sight of this picture in the excellent WaPo Anthropic story from the other day: " ... a book warehouse that was alleged to play a role in Anthropic’s Project Panama, its project to scan, digitize and destroy millions of books"

Anthropic Fellows Program

A 4-month program providing funding, compute & direct mentorship to work on real AI safety and security (London, San Francisco, or Remote).

Includes a weekly stipend of $3,850, $15k per month in compute funding & benefits.

Deadline: January 20

Safety Track: job-boards.greenhouse.io/anthropic/jobs…
Security Track: job-boards.greenhouse.io/anthropic/jobs…

"Before Transformers, RNNs were the thing. These were a big breakthrough. Suddenly, everyone started to work on improving RNNs. But the results were always these slight modifications on the same architecture, like putting the gate in a different spot, with improvements to 1.26, 1.25 bits per character on language modeling." "After the Transformer, when we applied very deep decoder-only Transformers to the same task, we immediately got 1.1 bits per character. So all that research on RNNs suddenly seemed a waste of time". "We're currently in the same situation where a lot of papers are taking the same architecture (Transformer) and making these endless tweaks, in a local minimum, and we might be wasting time in exactly the same way." - Llion Jones, co-author of the Transformer on @MLStreetTalk

Blaise Pascal on the lack of a fixed point in our lives: