tom cunningham

462 posts

@testingham

Economics & AI @ @METR_Evals (ex-openai) https://t.co/FZobuYjdOc

San Francisco, CA · Joined March 2009
2.9K Following · 9.1K Followers
Joel Becker
Joel Becker@joel_bkr·
this chart bringing to life the inner-workings of time horizon is so cool. from my super-talented colleague @CFGeek.
[image]
5 replies · 11 reposts · 118 likes · 20.3K views
J. Mark Hou
J. Mark Hou@JMarkHou·
Tom said this much more diplomatically than I would: economists calling this "AI exposure" is **clickbait**: we're just gonna take a term that regular people justifiably interpret as "I will have no job", and then give them grief when they don't understand the nuance?
[image]
tom cunningham@testingham

An observation on the "exposure" discourse: I think much of the problem is semantic. Many people are using that word without defining it, but it has many meanings; I propose a moratorium. A natural interpretation of "this job is highly exposed to AI" is "this job is close to being fully automated; experience is no longer necessary."

2 replies · 0 reposts · 5 likes · 425 views
tom cunningham reposted
tom cunningham
tom cunningham@testingham·
[image]
1 reply · 11 reposts · 95 likes
tom cunningham
tom cunningham@testingham·
@EconBerger Ha, thank you! I was very pleased when I figured out a way of drawing this feeling I'd had.
0 replies · 0 reposts · 1 like · 28 views
Guy Berger
Guy Berger@EconBerger·
@testingham This is an incredible diagram. I plan to borrow it (w/credit to you of course)
1 reply · 0 reposts · 0 likes · 28 views
tom cunningham
tom cunningham@testingham·
An observation on the "exposure" discourse: I think much of the problem is semantic. Many people are using that word without defining it, but it has many meanings; I propose a moratorium. A natural interpretation of "this job is highly exposed to AI" is "this job is close to being fully automated; experience is no longer necessary."
2 replies · 5 reposts · 30 likes · 3K views
tom cunningham
tom cunningham@testingham·
(Final observation: I think the nature of intellectual discourse is that it's *attracted* to ambiguity. Academics love to make claims that have two readings: one big-but-false, one small-but-true. Every literature is infected with these worms; you have to constantly pick them out.)
1 reply · 0 reposts · 8 likes · 382 views
Lukas Althoff
Lukas Althoff@AlthoffLukas·
🔥 Real-time update: AI's labor market effects. 1⃣ Work is increasingly getting simplified. 2⃣ Occupations predicted to gain from AI continue to rise in importance as of January 2026. 3⃣ We've made our data publicly available. @ReichardtHugo
3 replies · 31 reposts · 128 likes · 25.2K views
tom cunningham
tom cunningham@testingham·
@sayashk just to clarify -- you're not worried about the *incremental* productivity of PhD students? I.e. you think professors will be able to get significantly better results by sending work to a PhD who uses an agent, than to an agent alone?
0 replies · 0 reposts · 2 likes · 73 views
Sayash Kapoor
Sayash Kapoor@sayashk·
In the last few months, I've spoken to many CS professors who asked me if we even need CS PhD students anymore. Now that we have coding agents, can't professors work directly with agents? My view is that equipping PhD students with coding agents will allow them to do work that is orders of magnitude more impressive than they otherwise could. And they can be *accountable* for their outcomes in a way agents can't (yet). For example, who checks the agent's outputs are correct? Who is responsible for mistakes or errors?
58 replies · 39 reposts · 520 likes · 470.7K views
tom cunningham reposted
Jason Abaluck
Jason Abaluck@Jabaluck·
Let's separate how to tax and how to structure the benefits. RE: taxes -- higher land and property taxes, higher Pigouvian taxes (e.g. carbon), and a broad-based consumption tax, with the bulk of revenue coming from the combination of progressive income tax (for redistribution), a broad consumption tax, and property taxes, perhaps shifting more toward various taxes on rents (like business cash-flow taxes) if firm concentration dramatically increases. RE: benefits -- a combination of wage-loss insurance, UI, and cash welfare, titrating over time from wage-loss insurance to more generous cash welfare/UBI as the share of automated jobs increases.
0 replies · 1 repost · 7 likes · 1.4K views
tom cunningham
tom cunningham@testingham·
@Jabaluck Agreed. Jason, you should write more on this, and come to some of the AI econ workshops.
0 replies · 0 reposts · 2 likes · 291 views
Jason Abaluck
Jason Abaluck@Jabaluck·
I'm starting to wonder if any politician will come out in favor of the best platform: 1) It's good if AI replaces humans at work 2) We should have generous public insurance to compensate losers 3) Massively increased funding for safety (transparency, control & misuse prevention) 4) International Manhattan project as recursive self-improvement becomes closer, with frontier models developed under direct oversight of teams of scientists with no direct profit motive
ControlAI@ControlAI

At an AI policy roundtable, Florida Governor Ron DeSantis (@GovRonDeSantis) says we should not build tech that will supplant us as human beings. "I don't think you can say these machines are just gonna be doing things and we're gonna suffer harm and there's nothing anybody can do about it." "There have to be ways to make sure that this stuff is controllable."

9 replies · 6 reposts · 52 likes · 10.8K views
tom cunningham
tom cunningham@testingham·
@alexolegimas My best guess at the Pareto frontier: it's a very tight ellipse. The implication is that it really doesn't matter what you're optimizing for; you'll end up in roughly the same location. (Additionally, I think labs are optimizing heavily for augmentation.)
[image]
0 replies · 0 reposts · 1 like · 83 views
tom cunningham
tom cunningham@testingham·
@alexolegimas Your claim is that there’s a substantial tradeoff, that they could be reallocating investments so they have significantly higher augmentation ability at the cost of significantly lower automation ability?
1 reply · 0 reposts · 1 like · 73 views
Alex Imas
Alex Imas@alexolegimas·
This is a choice: so much of the US development race has been about making as smart a general system as possible (the race to AGI/ASI), instead of developing and perfecting tools that people can use to improve their own productivity and everyday lives. This choice has consequences both for how useful AI is perceived to be and for the potential opposition against it.
Ethan Mollick@emollick

As the AI labs continue to see acceleration, their design choices beyond just alignment become ever more important. Their products are one of the most powerful tools for shaping how AI is used, and I think a lot of recent focus has been on tools for automation, not augmentation.

6 replies · 14 reposts · 84 likes · 12.5K views
tom cunningham
tom cunningham@testingham·
@alexolegimas and an enormous amount of the post-training pipeline is on *augmentation* metrics (e.g. user preference). Outside observers I think over-concentrate on automation benchmarks.
1 reply · 0 reposts · 2 likes · 312 views
tom cunningham
tom cunningham@testingham·
@alexolegimas It's not clear this is true! I think labs are optimizing pretty hard against chatbot usage (augmentation) and individual software-engineer usage (augmentation). It's notable how small a share of their revenue comes from wholesale automation, e.g. replacing customer service.
1 reply · 0 reposts · 9 likes · 707 views
Razvan Ciuca
Razvan Ciuca@Raz_Ciuca·
@testingham Now, if this is an infinite straight line and runners begin uniformly distributed on the line, then you are at the 91st percentile of speeds among runners.
1 reply · 0 reposts · 1 like · 131 views
tom cunningham
tom cunningham@testingham·
Suppose you overtake 10 times as many runners as overtake you. What can you say about your speed relative to the other runners? (everyone is running around a circular track in the same direction forever, and each person started at a random point)
3 replies · 2 reposts · 33 likes · 7.4K views
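The overtaking puzzle above only has a numerical answer once you fix a speed distribution, which the tweet deliberately leaves open. A minimal simulation sketch, assuming (this is my assumption, not part of the puzzle) that speeds are i.i.d. Uniform(0, 1): on a circular track with random start points, the long-run rate at which you pass runner j is proportional to max(v − v_j, 0), and the rate at which j passes you is proportional to max(v_j − v, 0), so the 10:1 ratio pins down your speed quantile.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption (not stated in the puzzle): runner speeds are i.i.d. Uniform(0, 1).
speeds = rng.uniform(0.0, 1.0, size=200_000)

def overtake_ratio(v, speeds):
    """On a circular track with random start points, you pass runner j at a
    long-run rate proportional to max(v - v_j, 0) and are passed at a rate
    proportional to max(v_j - v, 0).  Return (your passes) / (passes of you)."""
    gain = np.maximum(v - speeds, 0.0).sum()
    loss = np.maximum(speeds - v, 0.0).sum()
    return gain / loss

# overtake_ratio is increasing in v, so bisect for the speed at which
# you overtake 10x as often as you are overtaken.
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if overtake_ratio(mid, speeds) < 10.0:
        lo = mid
    else:
        hi = mid
v = 0.5 * (lo + hi)

percentile = (speeds < v).mean()
print(f"speed v = {v:.3f}, percentile = {percentile:.3f}")

# Closed form under the Uniform(0,1) assumption: the two rates integrate to
# v^2/2 and (1-v)^2/2, so (v/(1-v))^2 = 10, i.e. v = sqrt(10)/(1+sqrt(10)),
# roughly the 76th percentile.
```

Note that on the circular track the answer differs from the 10/11 ≈ 91st percentile one would get by counting heads alone, because faster runners are passed more often per capita; changing the speed distribution changes the number again.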