Robert Metcalfe

5.2K posts

@RDMetcalfe

Economist, professor @ColumbiaSIPA @nberpubs | Co-Editor @jpubecon | Chief Economist @CentreNetZero | Co-founder @tbehaviouralist & @signol_io | 🏴󠁧󠁢󠁷󠁬󠁳󠁿

Manhattan, NY · Joined November 2011
1.7K Following · 7.9K Followers
Pinned Tweet
Robert Metcalfe@RDMetcalfe·
First-gen problems
Robert Metcalfe retweeted
John A. List@Econ_4_Everyone·
My copy from the @UChicagoPress just arrived! At nearly 900 pages, if you don’t like it I bet it can stop any door from shutting abruptly! Mine will rest next to a cherished treasure that brings me back to where the roots of this book started, in the early 1990s!
Robert Metcalfe retweeted
John A. List@Econ_4_Everyone·
PhD decision season. My inbox is more flooded than ever. Prospective PhD students are asking some version of the question: "Should I keep going? Is an Econ PhD still worth it in the age of AI?"

I get it. The uncertainty is real. And honestly, no one knows the answer. My response begins with the caveat that I have no real certainty around my thoughts; I merely have a hunch. That intuition comes from combining my experiences in the academy with my recent field work alongside charities, governments, Walmart, and Anthropic itself.

My hunch is that AI will reveal expertise, not replace it, at least for the foreseeable future. Indeed, I wrote about this earlier with my justification. As such, I do not view a PhD in economics as a credential. It's a forcing function for building that kind of deep, durable expertise. The expertise that AI amplifies rather than erodes.

So my advice? The uncertainty about AI may be the best reason yet to double down and go for an econ PhD. Why? Because the future belongs to people who know things deeply enough that AI becomes a multiplier, not a replacement.
Robert Metcalfe retweeted
John A. List@Econ_4_Everyone·
Over my career I have tried to use science in naturally-occurring settings to learn something of social import. Whether baseball card and collector shows, charitable organizations, schools, elections, government agencies, or more recently businesses, nothing has been off limits.

One of the first hurdles I always face when I talk to firms is that "field experiments are too expensive." It is high time we begin to think like economists and retire that myth! The real cost isn't the field experiment. It's the opportunity cost of not knowing something. Ignorance is rarely free. Every failed new product, bad public policy, incentive backfire, or mispricing that a well-designed test could have caught before scaling is the true price tag.

The key that nearly everyone misses is that every organization rolls out new ideas all the time. New pricing schemes. New onboarding flows. New incentive structures. New web design, new choice architecture, new public policies. They go live, usually untested. What you need to ask is simple: what is the marginal cost of embedding a field experiment into a rollout that you are already doing? Most of the time it is trivial, especially compared to the knowledge gains. You are already paying for the infrastructure. The experiment is what converts that action into knowledge.

In my new book, Experimental Economics: Theory and Practice (amazon.com/Experimental-E…), the economics of this argument become crystal clear, especially in the chapters on experiments with organizations and the overarching model of optimal design. The question was never "can we afford to run this experiment?" The real question: "can we afford not to?"
Robert Metcalfe retweeted
John A. List@Econ_4_Everyone·
Hi Tom. That feels too narrow. This really has nothing to do with preserving rents; I am too old to care about that!

Part of my worry was that people would look at AI and decide that learning economics was unnecessary. Why invest years in mastering causal inference or market design when you can just ask a machine, and I won't get paid for it anyway?

As my twitter handle suggests, learning economics was never just about producing economic analysis. Our classes train critical thinking skills: what thinking on the margin means, how to reason about incentives, how to use demand identification before making causal claims, and how to spot what is actionable and what is not. As I always say, economics is life and life is economics.

Such thinking habits spill over into everything. How you evaluate a policy proposal. How you run a business. How you make decisions in your own life. How you vote. If AI convinces a generation of youngsters that they can skip that training because the machine will handle it, we lose something that goes well beyond the economics profession.
Robert Metcalfe retweeted
John A. List@Econ_4_Everyone·
When AI first arrived on the scene, I worried it would make economists, or even critical thinkers more broadly, less valuable. In my travels over the past 6 months to work with non-profits, for-profits, and government agencies, I have observed how people are actually using AI. I have watched them fumble around with insights they clearly did not create themselves. My fears are now assuaged.

One observation is that AI can produce something that in some cases is very wrong and in others looks nearly right, but is not quite there. Even if in time AI improves to "nearly right" or "exactly right" every time, a second issue still arises: explaining the materials. Explaining why an answer is almost correct but subtly off requires exactly the critical thinking skills that created the knowledge in the first place. Even explaining "exactly right" material takes critical thinking.

I've watched smart people confidently present AI-generated material they clearly don't fully understand. The words sound right. But when someone pushes back just a little bit, the sand castle crumbles. It is quite difficult to defend what you didn't build.

This leads me to now make the optimistic case for human expertise. The value of deeply understanding something, of having built the knowledge yourself, hasn't diminished with AI. If anything, it's increased. The people who can tell the difference between "nearly right" and "right" are more valuable than ever. The people who can explain the subtle details about something that is exactly right are invaluable. Creating knowledge still matters. Maybe now more than ever.
Robert Metcalfe retweeted
Eva Vivalt@evavivalt·
I'm hiring pre-docs interested in applied microeconomics, especially with AI. Check out the link and apply! Deadline for the first round of review is tomorrow, Feb. 24. evavivalt.com/2026/02/pre-do…
Robert Metcalfe retweeted
David Schönholzer@davidfromterra·
Hi everyone, Paulina Oliva and I are hiring Pre-doctoral Research Fellows to work with us here at USC Economics in Los Angeles. If you know of suitable candidates, please encourage them to apply! dornsife.usc.edu/paulina-oliva/…
Robert Metcalfe retweeted
John A. List@Econ_4_Everyone·
***Big Announcement!*** I'm thrilled to let everyone know that the Chicago School in Experimental Economics is heading to Buenos Aires! This edition of CSEE will take place November 4-8, 2026 at the Universidad del CEMA.

This intensive one-week summer school is designed to deepen scholars' understanding of frontier experimental methods. Lessons will range from designing and conducting experiments to analyzing and interpreting data to writing up your findings. The curriculum draws from my forthcoming textbook, Experimental Economics: Theory and Practice (amazon.com/Experimental-E…), and participants will also have the opportunity to present and discuss their own research.

I'm honored to be joined by an outstanding group of lecturers:
Gwen-Jirō Clochard (The University of Osaka)
Jared Gars (University of Florida)
Luca Henkel (Erasmus University Rotterdam)
Justin Holz (University of Michigan)
Sally Sadoff (UC San Diego)
Julia Seither (University of Chicago)
Karen Ye (Queen's University)

Applications are now open to researchers across disciplines who work with experimental methods, want to apply them in the future, or teach experimental approaches. Priority goes to junior faculty, but doctoral and postdoctoral researchers are also welcome to apply. There is no program fee, and financial assistance for accommodations is available.

Application deadline: April 30, 2026
Please apply here: voices.uchicago.edu/jlist/the-chic…
Questions? Contact Melissa De Vries at melissade4@uchicago.edu

I look forward to seeing you in Buenos Aires!
Robert Metcalfe retweeted
AEA Journals@AEAjournals·
Forthcoming in the AER: "A Welfare Analysis of Policies Impacting Climate Change" by Robert W. Hahn, Nathaniel Hendren, Robert D. Metcalfe, and Ben Sprung-Keyser. aeaweb.org/articles?id=10…
Robert Metcalfe retweeted
John A. List@Econ_4_Everyone·
Today I presented some of our recent first-gen research to a group of educators. The finding that surprised the audience the most? Family income and school quality together explain only about one-third of the gap. The remaining two-thirds point to something deeper: parental human capital channels that operate beyond income, likely through information, guidance, and the navigational knowledge that college-educated parents transmit to their children.

We also find that some teachers have a genuine comparative advantage in helping first-gen students reach excellence, which has real implications for how we think about teacher assignment.

The bottom line for policymakers: if we wait until college access interventions to address first-gen disadvantage, we've already lost most of the game. These gaps emerge early, compound over time, and demand comprehensive support starting in elementary school. The study is available for free download here: ideas.repec.org/p/feb/artefa/0…
Alex Imas@alexolegimas·
I completely agree with everything in this post: decouple the functions of academic publishing.
Seth Lazar@sethlazar

I think this is a bit hysterical. Academic publishing has at least two pragmatic functions. Much of the difficulty derives from trying to perform both of those functions with the same venue/process. Decouple them, and things can work just fine. Academic publishing does two things: it allocates *credentials*, and it attempts to allocate *attention*.

The credentialing function of academic publishing is actually really important. Universities are our largest and most enduring organisations for fundamental research in the public interest. For that research to make genuine progress towards knowledge, we have to apply standards of rigour. To know who to hire into tenure-track positions, we need to be able to assess the quality of their work, and see how it is judged by their peers. And to decide who gets the ultimate liberty of being able to follow their intellectual interests wherever they lead (i.e. tenure), we need the same thing. Peer review has all sorts of flaws, it's broken in many different ways, but we need to have *some* mechanisms for this kind of quality control in order for the whole endeavour of the collective pursuit of wisdom in the public interest to advance.

But the problem is, peer review takes *time*. And by the time it's complete, it's generally too late to contribute in a meaningful and effective way to the public conversation on a topic as fast-moving as AI progress. In addition, while the people running journals (etc.) are often good judges of methodological rigour and other epistemic criteria, they are very *fallible* judges of what work is actually the most important, or otherwise in the broader interest of society (this is especially true for humanities fields like mine). But this just means that while academic publishing is the right vehicle for quality control and credentialing, it's the wrong one for the allocation of public attention.

The solution seems pretty clear, especially since this is how things are basically already done in CS. Decouple the two things. Preprints, substacks, and the other changing forms of public communication for the allocation of attention; journals and peer-reviewed conferences for the quality control/credentialing function.

Obviously this exposes us to epistemic risk: if you don't wait for peer review, then some of the work that attracts a tonne of public attention will prove to be bogus. But that's a self-correcting problem, since if it attracts a lot of attention then the errors are likely to be found out. And obviously peer review faces MANY challenges right now. But that's something that *can* be salvaged (and an area where AI is likely to help).
Robert Metcalfe retweeted
SIPA SusDev@SipaSusdev·
Can artificial intelligence weather prediction models help large vulnerable populations adapt to weather shocks in the face of climate variability and change? Research by @ColumbiaSIPA PhD alum, @amirjina , et al, looks at monsoon onset forecasts in India to answer this question. arxiv.org/pdf/2602.03767