Kevin A. Bryan
@Afinetheorem
11.9K posts
Assoc. Prof. of Strategy, U Toronto Rotman | Chief Economist, CDL Toronto | Co-Founder, AllDayTA | Ars longa, vita brevis, occasio praeceps (especially now)
Toronto, ON, Canada · Joined February 2015
18 Following · 19.5K Followers
Kevin A. Bryan@Afinetheorem·
@_LukasFreund_ @lukasfmann @Noahpinion @marthagimbel @DKThomp @binarybits @testingham @PeterMcCrory @sebkrier @Kantrowitz @joshgans @AndreyFradkin @leopoldasch @dwarkesh_sp @emollick @albrgr @ajeya_cotra For sure, "new bundles" very hard to estimate. Depends on complements which you handle really well. But I wonder, if we apply a task model to the rise of the computer, how well does it do? Calculation drops to zero cost yet people good at that get richer. Keep up the good work!
Lukas Freund@_LukasFreund_·
Thanks, @Afinetheorem & sorry for the slow response (seminar day...). A couple of points:
- First, you of course won't be surprised that I am likewise very curious about both a) new tasks, or b) "newly bundled jobs", which could involve either new tasks or a new mix of old tasks. Conceptually, we can already accommodate a) insofar as new "micro tasks" fall into an existing "task cluster" (e.g. "reviewing AI agent code output" is comparable to some existing tasks, just more important now). What's difficult to handle are "tasks for which we cannot reasonably estimate skills based on historical data."
- In terms of time horizon, I broadly agree (and we explicitly discuss this scope restriction in the paper), but would add two nuances. First, the model allows workers to switch across occupations (due to task content changes as well as GE effects). Second -- and this goes beyond the model, it's intuition only -- some of the results seem less sensitive to the horizon. For example, the predictions for rising social-skill returns seem fairly robust to, say, new tasks/jobs emerging. (But could be attenuated when allowing for human capital investment.)
- All this being said, @lukasfmann and I can hopefully give you a more satisfying answer in a next paper :)
Lukas Freund@_LukasFreund_·
⏭️ Summary. We develop a framework to quantify job transformation effects and demonstrate the central role of this mechanism in shaping the labor market consequences of genAI. Beyond LLMs, the framework can accommodate different automation shocks, from self-driving vehicles to humanoid robots. Lots more work to do.
Kevin A. Bryan reposted
Jon Steinsson@JonSteinsson·
Totally agree that the right way to think about climate policy is: 1) We have a huge runway of proven cheap tech, 2) It's important to keep spending on R&D for harder stuff (needed a few decades from now), but this should not distract us from deploying proven cheap tech.
John Bistline@JEBistline

The latest IPCC offers hope that we can limit warming, but time is short. In today's @nytimes, we make a case for which decarbonization issues should be prioritized and which debates distract us from our shared, near-term goals: nyti.ms/3uq3hy0

Kent Fellows@GK_Fellows·
@Afinetheorem There is significant potential that, heading down this road, targeted ads would offer less of a value proposition than detailed data collection for the purposes of algorithmic price discrimination, as grim as that prospect is.
Kevin A. Bryan@Afinetheorem·
On econ of LLMs and ads, Olivia is right on - folks don't get how much more you charge for a targeted ad, and why that gives FB/Google such an advantage over, say, a newspaper. Can hit exactly the right person. Would love a good IO paper on ad-supported vs. subscription for LLMs.
Olivia Moore@omooretweets

A big story that most people are missing in the AI race for the consumer (ChatGPT vs Claude) is ads.

Right now, most consumer AI revenue is coming from power users who are willing to pay high-cost subscriptions. This currently skews positive for products like Claude - but this will not be the end state.

Google makes ~$460/user/year in the U.S., mostly on ads. Meta makes around ~$250. I would argue ChatGPT's ad-based ARPUs will be even higher, as they will ultimately have deeper / more frequent user engagement.

Even at the $460 level - monetizing everyone in the U.S. via ads is $152 billion in annual revenue. By contrast, if you're able to monetize even 5% of the population on a $200/month subscription (which is a stretch!), that's only $40 billion 🤔

I suspect this will be even more drastic outside the U.S., where users are even less willing or able to pay directly for subscriptions. And the earliest data from a very small rollout shows ChatGPT ads are already outperforming Meta in effectiveness - this just gets better over time.

TL;DR - I would not count ChatGPT out on consumer AI revenue. Once ads start working, that can quickly become a massive machine.
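The post's ads-vs-subscriptions comparison is easy to sanity-check. A minimal back-of-envelope sketch, assuming a round ~330M U.S. population (my number; the post doesn't state one):

```python
# Back-of-envelope check of the ARPU arithmetic in the post.
# Assumption: ~330M U.S. population (my round number, not stated in the post).
us_pop = 330_000_000

# Ads scenario: Google-level $460/user/year across everyone.
ads_revenue = 460 * us_pop                  # ≈ $152B/year

# Subscription scenario: 5% of the population at $200/month.
subs_revenue = 0.05 * us_pop * 200 * 12     # ≈ $40B/year

print(f"ads:  ${ads_revenue / 1e9:.0f}B")
print(f"subs: ${subs_revenue / 1e9:.0f}B")
```

Both figures match the post's $152 billion and $40 billion to rounding, so the ads-dominate conclusion follows directly from the ARPU assumptions.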

Kevin A. Bryan reposted
Martin Chorzempa 马永哲@ChorzempaMartin·
This is the best piece I have seen about it, from @pstAsiatech open.substack.com/pub/pstaidecry… My takeaway is that big Chinese tech firms see it as the new OS for AI that could mediate people's interaction with other services and want to control it within their platform (like they did as super apps before). Plausibly some of the security concerns can be mitigated if it is, say, a Tencent version that cannot just go download malware-laden skills online.
Kevin A. Bryan@Afinetheorem·
I don't even know that the main result is about LLM structure vs others - I'm not sure there is enough entropy in a given amount of output to even learn a new syntax in principle given how these models would decide to cut off response/thinking. Would have to check, but...
Kevin A. Bryan@Afinetheorem·
On today's Gary-Marcus-Thinks-AI-Can't-Handle-New-Tasks: like, *of course* pure token prediction w/ a cap on token output can't learn a new syntax! But the authors note that the *same models* with a harness to learn the new task do fine. This should be your prior, of course...
Lossfunk@lossfunk

7/ After the paper was finalized, we ran agentic systems that mimic how humans would learn to solve problems in esoteric languages. We supplied our agents with a custom harness + tools on the same benchmark. They absolutely crushed the benchmark. Stay tuned 👀

Kevin A. Bryan reposted
Jacob Schaal@FutureEconJacob·
New @windfalltrust AI Economics Brief on AI R&D and automation is out! Here's a quick summary, followed by the link:

A new growth model by @TomDavidsonX, @BasilHalperin, @akorinek, and @tomwhoulden predicts that AI R&D automation could accelerate AI's economic impact, driven by economic and technological feedback loops. They model an AI research sector that spans software and hardware and calibrate it using estimates of AI software and hardware progress, such as Moore's Law.

But current measures of AI R&D automation are limited, and existing benchmarks saturate quickly. A team at the think tank @GovAIOrg (@_achan96_, @ranayssance, Joe Kwon, Hilary Greaves + @Manderljung) proposes a framework that combines experiments, surveys, operational tracking, and organisational metrics to address this gap.

AI automation might also accelerate as task chains are automated contiguously, according to a model from @Peyman_Shahidi, @demirermert, @johnjhorton, @immorlica, and Brendan Lucier. This might overturn comparative advantage in some cases, as firms prefer to automate a task when AI is sufficiently good at it to save labor costs, while end-stage verification costs remain fixed.

Plus: Anthropic's "observed exposure" metric, @ajeya_cotra on AI R&D timelines, and @davideoks on ATMs vs. iPhones.
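The contiguous-automation point can be illustrated with a toy cost comparison. Everything below is my own construction for intuition only: the `chain_cost` helper, the cost numbers, and the single handoff-friction parameter are assumptions, not the actual model from the paper.

```python
# Toy illustration (my construction, not the paper's model): a job is a
# chain of tasks, each done by a human (cost w) or by AI (per-task cost c),
# and every human<->AI handoff in the chain costs h. With handoff
# frictions, automating a contiguous block can beat automating only the
# tasks where AI is individually cheaper.

def chain_cost(assignment, human_cost, ai_costs, handoff):
    """Total cost of a task chain. assignment[i] is 'H' (human) or 'A' (AI)."""
    doing = sum(human_cost if a == 'H' else c
                for a, c in zip(assignment, ai_costs))
    switches = sum(1 for a, b in zip(assignment, assignment[1:]) if a != b)
    return doing + handoff * switches

w = 10.0                        # human cost per task
c = [6.0, 11.0, 6.0, 6.0]       # AI cost per task (task 2 is hard for AI)
h = 4.0                         # cost per human<->AI handoff

piecemeal  = ['A', 'H', 'A', 'A']   # automate only where AI is cheaper
contiguous = ['A', 'A', 'A', 'A']   # automate the whole block

print(chain_cost(piecemeal, w, c, h))   # 28 + 2 handoffs = 36.0
print(chain_cost(contiguous, w, c, h))  # 29 + 0 handoffs = 29.0
```

Here full automation wins even though AI is worse than the human at task 2, which is the flavor of how contiguity considerations can overturn task-by-task comparative advantage.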
Kevin A. Bryan@Afinetheorem·
Don't use AI academically in a way you wouldn't use a student RA. Your name is on the paper, that means you are responsible both for the contents and for the rigor and thought that is *summarized* by the words in the paper.
Alexander Kustov@akoustov

Enough is enough. Just because you can generate an academic paper in minutes doesn't mean you should. When your name is on something, you should check every reference and claim before submitting. If you can't be bothered to do that, you should be banned from submitting.

Kevin A. Bryan reposted
Ethan Mollick@emollick·
We are back to the phase of the AI news cycle where people are underestimating how jagged the AI ability frontier is, as well as how much they still depend on expert human decision-making or guidance at key points in order to function well. Still far from "doing all jobs," today.
Kevin A. Bryan reposted
Greg Ip@greg_ip·
My boring take, as a so-far failed vibe coder: AI is more likely to keep lots of software developers employed making way more, and better, software than ever, than to put them out of work because we consume the same amount of software as before, delivered by fewer people.
Guy Berger@EconBerger

4/ My boring take as a non-software-developer is these jobs will exist in the future and that they will be very different from the past. I don't know how many there will be. This incipient recovery *may* be the early innings of that transformation.

Kevin A. Bryan@Afinetheorem·
@Jabaluck Yup. Ben Jones (2025, NBER TAI) paper is my mental model here. I want to estimate this empirically in this setting.
Jason Abaluck@Jabaluck·
@Afinetheorem Yes, good point. I suppose the question is whether time is a bottleneck in the production function once you get vast amounts of superhumanly capable labor.
Jason Abaluck@Jabaluck·
In my view, it's a completely open question whether ASIs could make rapid progress in biology. The fundamental question is whether sufficiently good computational models and high-resolution data can substitute for time.

While current generation models require vast amounts of data to achieve superhuman performance at some tasks, ASIs will also be able to use superhuman modeling abilities to draw better inferences from a given amount of data. An ASI could also build new data-gathering devices and collect short-run biological data with great efficiency. This would likely enable much higher resolution biological imaging of various kinds.

What an ASI cannot do is collect empirical data that can *only* be generated over time. It cannot, for example, run a randomized experiment to see the impact of caloric restriction in humans over 30 years. But does it need to? Waiting and observing how biological systems evolve over time is clearly necessary for humans to learn about biology with our current scientific understanding -- we don't know enough to observe biological systems for a day and then model how they will develop over 20 years. It is an open question whether this is true of an ASI with vastly superior data collection and modeling abilities.

There may be fundamental barriers introduced by computational complexity that cannot be skirted by any modeling techniques. But we are very far from knowing whether this is the case for the biological quantities we care about, including aging and death. Workable cryonic technologies in particular seem like low-hanging fruit for an ASI compared to solving aging entirely.
Geoffrey Miller@gmiller

A mini-rant about AI and longevity. They say "Artificial Superintelligence would take only a few years to cure cancer, solve longevity, and defeat death itself." This is a common claim by pro-AI lobbyists, accelerationists, and naive tech-fetishists. But the claim makes no sense. The recent success of LLMs does NOT suggest that ASIs could easily cure diseases or solve longevity, for at least two reasons.

1) The data problem. Generative AI for art, music, and language succeeded mostly because AI companies could steal billions of examples of art, music, and language from the internet, to build their base models. They weren't just trained on academic papers _about_ art, music, and language. They were trained on real _examples_ of art, music, and language. There are no analogous biomedical data sets with billions of data points that would allow accurate modelling of every biochemical detail of human physiology, disease, and aging. ASIs can't just read academic papers about human biology to solve longevity. They'd need direct access to vast quantities of biomedical data that simply don't exist in any easy-to-access forms. And they'd need very detailed, reliable, validated data about a wide range of people across different ages, sexes, ethnicities, genotypes, and medical conditions. Moreover, medical privacy laws would make it extremely difficult and wildly unethical to collect such a vast data set from real humans about every molecular-level detail of their bodies.

2) The feedback problem. LLMs also work well because the AI companies could refine their output with additional feedback from human brains (through Reinforcement Learning from Human Feedback, RLHF). But there is nothing analogous to that for modeling human bodies, biochemistry, and disease processes. There are no known methods of Reinforcement Learning from Physiological Feedback. And the physiological feedback would have to be long-term, over spans of years to decades, taking into account thousands of possible side-effects for any given intervention. There's no way to rush animal and human clinical trials -- however clever ASI might become at 'drug discovery'.

More generally, there would be no fast feedback loops from users about model performance. GenAI and LLMs succeeded partly because developers within companies, and customers outside companies, could give very fast feedback about how well the models were functioning. They could just look at the output (images, songs, text), and then tweak, refine, test, and interpret models very quickly, based on how good they were at generating art, music, and language. In biomedical research, there would be no fast feedback loops from human bodies about how well ASI-suggested interventions are actually affecting human bodies, over the long term, across different lifestyles, including all the tradeoffs and side-effects.

It's interesting that most of the people arguing that 'ASI would cure all diseases and aging' are young tech bros who know a lot about computers, but almost nothing about organic chemistry, human genomics, biomedical research, drug discovery, clinical trials, the evolutionary biology of senescence, evolutionary medicine, medical ethics, or the decades of frustrations and failures in longevity research. They think that 'fixing the human body' would be as simple as debugging a few thousand lines of code.

Look, I'm all for curing diseases and promoting longevity. If we took the hundreds of billions of dollars per year that are currently spent on trying to build ASI, and we devoted that money instead to longevity research, that would increase the amount of funding in the longevity space by at least 100-fold. And we'd probably solve longevity much faster by targeting it directly than by trying to summon ASI as a magical cure-all.

ASI has some potential benefits (and many grievous risks and downsides). But it's totally irresponsible of pro-AI lobbyists to argue that ASIs could magically & quickly cure all human diseases, or solve longevity, or end death. And it's totally irresponsible of them to claim that anyone opposed to ASI development is 'pro-death'.

Kevin A. Bryan reposted
Jesús Fernández-Villaverde@JesusFerna7026·
I am very happy that my survey paper, "Deep Learning for Solving Economic Models," is forthcoming in the Journal of Economic Literature (pending final replication checks, which should be quick). The paper benefited greatly from the editor, David Romer, five referees, and many friends who read earlier versions.

I believe the result is a solid introduction to the field, though in 48 pages, there is only so much one can do. So, I created a companion webpage: sas.upenn.edu/%7Ejesusfv/dee… where you can find the paper, the code, and some slide decks with my teaching material. My plan is to expand the slides over time, adding new material and updating them as new results appear. I will probably do a thorough revision once the spring semester is over.

Those who follow my feed know that I think deep learning is the most fundamental change to computational economics in the last 40 years. I am by now convinced it is more important than the development of Markov chain Monte Carlo methods in the early 1990s or the introduction of projection and perturbation methods in the 1980s. To find a comparable shift, one would probably need to go back to Richard Bellman's invention of value function iteration in 1957.

More pointedly, we need to redesign the Ph.D. in economics. Not at the margin. From the ground up. Economists can either fully embrace the deep learning revolution or become irrelevant, as has already happened, I would dare say, to some fields in academia that refused to accept reality.

Finally, let me apologize to everyone working in this area whom I could not cite. Space was a binding constraint. And yes, this post was written with the considerable help of AI. There is nothing I am prouder of than the fact that AI is now an integral part of every step I take in my professional life.
Kevin A. Bryan@Afinetheorem·
It is quite intriguing to see OpenClaw being used so heavily in China, despite the (correct!) perception that it is an insane security risk. Any insight from China folks about what we are missing? Or is it just "the scaffolding of the big US models isn't easily available"?
Kevin A. Bryan@Afinetheorem·
@sebkrier The real decision theory heads want to see Savage, though!
Séb Krier@sebkrier·
Since we now have agents, why doesn't anyone design evals that test whether they fit the four Von Neumann-Morgenstern axioms, since that's such a fundamental assumption behind alleged AI drives?
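One small piece of such an eval could be a transitivity check over an agent's pairwise choices (transitivity being one of the VNM axioms). A minimal sketch: the `choose` callback and `transitivity_violations` helper are hypothetical stand-ins for querying an agent, not an existing eval harness.

```python
from itertools import combinations, permutations

def transitivity_violations(options, choose):
    """Count triples where an agent's pairwise choices are intransitive.

    `choose(a, b)` is a hypothetical stand-in for asking the agent to pick
    between a and b; transitivity requires that a ≻ b and b ≻ c imply a ≻ c.
    """
    # Record the agent's answer for each unordered pair once.
    prefers = {(a, b): choose(a, b) == a for a, b in combinations(options, 2)}

    def pref(x, y):
        return prefers[(x, y)] if (x, y) in prefers else not prefers[(y, x)]

    violations = 0
    for triple in combinations(options, 3):
        # A triple is intransitive if any ordering of it forms a cycle.
        if any(pref(x, y) and pref(y, z) and not pref(x, z)
               for x, y, z in permutations(triple)):
            violations += 1
    return violations

# Demo with a deliberately cyclic chooser: A ≻ B, B ≻ C, C ≻ A.
cycle = {("A", "B"): "A", ("B", "C"): "B", ("A", "C"): "C"}
print(transitivity_violations(["A", "B", "C"],
                              lambda a, b: cycle[tuple(sorted((a, b)))]))  # 1
```

A real eval would pose the pairwise choices as lotteries over outcomes and would also need analogous checks for completeness, continuity, and independence.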