Philip Bell

388 posts

@PhilipfvBell

Making Learning Visual | previously @georgiatech @harvard

London · Joined July 2012
1.4K Following · 260 Followers
Philip Bell@PhilipfvBell·
@adam_tooze this might interest you. I have taken your LRB article argument and extended it by looking at the technicalities of the GPU and the transformer architecture using @sarahookr's wonderful article 'The Hardware Lottery'
Philip Bell@PhilipfvBell·
The human brain is roughly a million times more energy-efficient per bit of sensory information processed than current frontier AI models. A child learns language from roughly 10⁷–10⁸ words of input by age 10. GPT-4 class models were trained on ~10¹³ tokens. This is a 100,000× gap in data efficiency. How did we end up with AI that is incredibly powerful but inefficient? Fracked gas, quantitative easing and the GPU all played a role. genfutures.substack.com/p/why-is-ai-sh…
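A quick sanity check of the order-of-magnitude arithmetic above. The figures are the tweet's own estimates, not measured values; a minimal sketch:

```python
# Back-of-envelope check of the claimed data-efficiency gap.
# Inputs are the tweet's order-of-magnitude estimates, not measurements.
child_words = 1e8    # upper estimate: words of language input by age 10
gpt4_tokens = 1e13   # reported order of magnitude for GPT-4-class training data

gap = gpt4_tokens / child_words
print(f"data-efficiency gap: {gap:,.0f}x")  # → data-efficiency gap: 100,000x
```

So the "100,000× gap" follows directly from dividing 10¹³ tokens by the upper child-input estimate of 10⁸ words.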
Philip Bell@PhilipfvBell·
It was a real pleasure to speak to @carlbfrey of the @oiioxford at the University of Oxford, author of How Progress Ends. He makes a really powerful case that technology doesn't guarantee progress: the same society can thrive in one technological era and stagnate in the next if its institutions don't adapt. Europe caught up in mass production but has struggled in digital. Are our current institutions suited to emerging technologies? genfutures.substack.com/p/can-europe-c…
Richie Allen@Richie_Allen·
Nick Timoney has done more in 10mins than the whole of Ireland's back row in the first half. A player who should have x2 or x3 more caps. Disgraceful how good he is, yet has never got a shot. #FRAvIRE #SixNationsRugby
Philip Bell@PhilipfvBell·
@birchlse Would you consider starting a humanities AI org?
Philip Bell@PhilipfvBell·
Given Mark Carney’s recent speech about the rupture in US-led globalization, this conversation with Hamish feels more relevant than ever. He has a really interesting perspective on how middle powers should be pursuing digital sovereignty through middleware. They should primarily pursue interdependence not independence. techfuturesproj.substack.com/p/compute-is-n…
Philip Bell@PhilipfvBell·
I don’t necessarily disagree with the argument of the article, and I think we should be super sceptical of edtech and anything we’re told works in schools. But the analysis in this article is quite flawed.

1. The study cited by Silverman doesn’t say edtech doesn’t work. It is a meta-analysis of digital literacy interventions for elementary students and finds mostly positive effects. I think The Economist has misunderstood the actual study, so we should be reading that and not The Economist's misrepresentation of the results.

2. Measuring educational impact is well known to be very hard. Look at the EEF’s studies: most interventions studied show no impact. Does that mean we should stop doing anything for which we’re unsure whether there’s an impact? In that case we wouldn’t have much left to do, since the evidence base is very bare.

3. How do we define edtech? Is writing an edtech? Projectors? Blackboards? Exercise books? The title is a bit misleading, since I think they’re talking about using digital devices for literacy interventions. Either way, we need to be clearer about what we’re talking about, otherwise it confuses the situation.

Again, there is a kernel of truth. But to push the conversation forward we need to recognise the unique context of educational interventions: they are extremely complicated, and RCTs struggle to capture impacts at the best of times. As I said before, this is why most EEF studies show little impact. But that doesn’t mean there is no impact, just that the studies haven’t found one, which is the key distinction. For more on that, look at the fade-out effects Chetty identified in the STAR experiment.
Karen Vaites@karenvaites·
‘Although ed-tech companies tout huge learning gains, independent research has made clear that technology rarely boosts learning in schools—and often impairs it. A 2024 meta-analysis of 119 studies of early-literacy tech interventions, led by Rebecca Silverman of Stanford University, found the studies described programmes that delivered at best only marginal gains on standardised tests. The majority had little effect, no effect or harmful ones. Jared Horvath, a neuroscientist and author of a book called “The Digital Delusion”, has reviewed meta-analyses covering tens of thousands of studies. His verdict: “In nearly every context, ed tech doesn’t come close to the minimum threshold for meaningful learning impact.”’
Philip Bell@PhilipfvBell·
I don’t necessarily disagree with the argument of the article, but I think your reasoning is quite flawed here.

1. The study doesn’t say edtech doesn’t work. It’s also specifically about literacy.

2. Measuring educational impact is well known to be very hard. Look at the EEF’s studies: most interventions studied show no impact. Does that mean we should stop doing anything that shows no impact? This would mean stopping many things that currently take place in schools.

3. How do you define edtech? Is writing an edtech? Projectors? Blackboards? Exercise books? Your title is a bit misleading, since I think you’re talking about using digital devices for literacy interventions. Either way, you need to be clearer about what you’re talking about, otherwise it confuses the situation.

Again, there is a kernel of truth. But to push the conversation forward you need to recognise the unique context of educational interventions: they are extremely complicated, and RCTs struggle to capture impacts at the best of times. As I said before, this is why most EEF studies show little impact.
The Economist@TheEconomist·
The rise of in-class devices could be responsible for an alarming decline in performance in reading and other subjects, long-term trends suggest economist.com/united-states/…
Monty Anderson@monty10x·
starting to feel london's velocity very deeply. not just because of the weekly coworking i run with @KindredSalway @_barneyhill, the essay written last night by @jamesrichards2/@_tobiaswagner, the two best language models being built in large part here (@demishassabis), or the most influential podcast in venture (@HarryStebbings) but because of the four weeks i spent in san francisco recently, the best single conversation was actually a video call back to london.
Imade @ImadeIyamu

Anthropic Fellows Program: a 4-month program providing funding, compute & direct mentorship to work on real AI safety and security (London, San Francisco or Remote). Includes a weekly stipend of $3,850, $15k per month in compute funding & benefits. Deadline: January 20. Safety Track: job-boards.greenhouse.io/anthropic/jobs… Security Track: job-boards.greenhouse.io/anthropic/jobs…

Philip Bell@PhilipfvBell·
‘Words, which are made for singing and seducing, rarely meet up with thought‘ Gaston Bachelard
Niels Rogge@NielsRogge·
"Before Transformers, RNNs were the thing. These were a big breakthrough. Suddenly, everyone started to work on improving RNNs. But the results were always these slight modifications on the same architecture, like putting the gate in a different spot, with improvements to 1.26, 1.25 bits per character on language modeling."

"After the Transformer, when we applied very deep decoder-only Transformers to the same task, we immediately got 1.1 bits per character. So all that research on RNNs suddenly seemed a waste of time."

"We're currently in the same situation where a lot of papers are taking the same architecture (Transformer) and making these endless tweaks, in a local minimum, and we might be wasting time in exactly the same way."

- Llion Jones, co-author of the Transformer, on @MLStreetTalk
Philip Bell@PhilipfvBell·
@JonathanPJWhite's book 'In The Long Run' really impacted me. He argues that we live in a time of 'temporal claustrophobia': while we face long-term challenges, we are pushed to react in the short term. I have personally felt that the challenges we face (climate breakdown, AI risk) are overwhelmingly important and long-term, while the window to make change is increasingly short, and the attention economy only makes us feel the contradiction more painfully. His work unpacks this combination of feelings and provides a useful way to consider how we can refocus on longer-term institutions. I really enjoyed talking to him about his work, and think it's very much worth reading and exploring. techfuturesproj.substack.com/p/the-age-of-e… He is Professor of Politics at the LSE (@LSEnews) and Deputy Director of @LSEEuroppblog.
David Lawrence@dc_lawrence·
What unites Reform, Your Party and the Greens? They all benefit from, and propagate, a zero-sum approach to politics. I'm in the @NewStatesman arguing that Labour must offer an alternative: abundance. Zero-sum politics assumes that for one person to be winning, someone else must be losing. Research from @Moreincommon_ finds that a zero-sum mindset underlies populism in France, Germany and the UK. It’s a common thread that runs through from the Greens’ demands for a wealth tax to Reform’s mass deportations. On the left, a zero-sum mindset says that if people are getting richer, others must be getting poorer. Billionaires, rather than creators of jobs and productivity, are extractors: their gain is someone else’s loss. Similarly, zero-summers on the right assume that migrants are a drain on resources, rather than productive participants who increase the amount of resources available.
Philip Bell@PhilipfvBell·
One of the inventors of the transformer (the basis of ChatGPT, aka Generative Pre-trained Transformer) says that it is now holding back progress. What comes next in 2026? Here are 3 architectures competing to replace the transformer:

Text Diffusion Models: current LLMs write left to right; diffusion models denoise whole sentences at once, allowing for better planning.

Continuous Thought Machines: models that "pause" to think. Instead of fixed steps, they use neural synchronisation, meaning the model can spend more time on complex queries.

Nested Learning: inspired by the brain, this architecture uses fast and slow learning loops to help models 'learn to learn' on the job.

techfuturesproj.substack.com/p/the-post-tra…
Wu Wei@WuWei304·
@PhilipfvBell @rajsinghchohan Yes, he watches games. And no, Bradley is not “consistently one of our best players”. He’s had a chance to cement his place and has proven to be not good enough. In attack, he doesn’t pose enough of a threat; in defence, he’s positionally suspect and often gives away silly fouls.
Raj Chohan@rajsinghchohan·
Without Salah, Conor Bradley has had a few games now to provide Liverpool’s wide attacking threat and he’s basically shown he’s not good enough to do it. Limited crossing bag and doesn’t run past his man much dynamically or in 1v1s. He isn’t good enough in deep build-up to make up for it either. Could be a solid backup but Frimpong is better for that as he also covers other positions. Think Bradley gets sold in the summer.
Shane Legg@ShaneLegg·
@AnnaLeptikon Seems like he's pointing to the Buddhist concept of impermanence.