Chris Blattman
@cblatts
32K posts
Economist & political scientist @UChicago @HarrisPolicy studying conflict & organized crime. My book is Why We Fight: https://t.co/pwWjDnYzvo

Chicago, IL · Joined September 2009
4.3K Following · 105.7K Followers
Chris Blattman @cblatts
@bennpeifert Before going on a field trip, someone once asked me if I knew how to use a gun. I responded “would anyone feel safer if I was handling a weapon?” And that was the end of that story.
1 reply · 0 reposts · 16 likes · 1.4K views
Chris Blattman reposted
Kevin Kloiber @EconKevin
Handed in my thesis two weeks ago and finally have time for fun again. So I built two small games about AI vs. real economics writing. Can you tell a real top-5 journal abstract from an AI-generated fake? Three lives, no hints, leaderboard. kevin-kloiber.com/ai-or-real.html #EconTwitter
9 replies · 16 reposts · 85 likes · 31.8K views
Aniket Panjwani @aniketapanjwani
most of the junior econ faculty I talk to are finding somewhere between "less" and "almost no" use for research assistants for simple tasks - data cleaning, making tables, running regressions - in the time it would take them to give instructions to the RA, they can instead just instruct CC and get the results (and done much better)

so, the potential utility of an RA often depends now on the RA having high quality higher-level/architectural thoughts, being skilled at interpreting and iterating on the results of code, and/or being better at using coding agents than their prof

yes, ofc RAs may have a comparative advantage despite not having an absolute advantage - but this is quite tricky

a lot of the difference between mediocre and great results with agentic coding depends on simultaneously having a good mental model of the tools, of how to work symbiotically with them, *AND* of the architecture of what it is you want to do (whether software or research)

in the past, an RA would work slowly enough - because the process of writing code was slow - that you, as a prof, could catch them in all the small problems/mistakes they'd make along the way due to a lack of experience

even so, reviewing RAs' code was already a bottleneck for many profs

now reviewing (tens of) thousands of lines of vibe coded RA code, esp if poorly thought out, is just untenable

worth saying - the pool of junior faculty I talk to is highly selected, bc they're either coming to me for agentic coding training, or they're my friends and so have had to listen to me blather on about this stuff for the last year

I do expect, though, that the sentiments of these junior faculty are more widespread than is implied by what economists are publicly saying about "RA replacement", because it's an unpopular/costly opinion to publicly voice that you don't need your RAs anymore
13 replies · 12 reposts · 121 likes · 100.5K views
Chris Blattman @cblatts
This is interesting, because if anything my ability to manage more projects, and my research assistants' ability to use Claude Code, mean that I'm probably hiring more Chicago-based and international RAs now than before, because their productivity and mine are so much greater. Used correctly, I think smart young researchers are complements to me and AI, not substitutes. (And I'm doing this despite the fact that Mr. Trump and Mr. Musk cut about 80% of my funding in the past year, so I'm hiring despite rather desperate financial constraints.)
Quoting Aniket Panjwani @aniketapanjwani (tweet above)
0 replies · 9 reposts · 155 likes · 65.1K views
Chris Blattman @cblatts
Four months ago, despite the terrible X algorithm, I'd managed to train my feed to be mostly super interesting foreign policy commentary, smart social scientists, and my pet interest in conflict and organized crime. Fast forward a few months, and based on my own posting and what I've been reading, my entire feed is now just a stream of Claude Code posts, with barely a foreign policy discussion to be seen. Any thoughts on how I can fix this? I almost want two separate Twitter accounts!
13 replies · 3 reposts · 132 likes · 16K views
Chris Blattman reposted
Ioan Grillo @ioangrillo
How much of Mexico's GDP is cartel activity? Likely single digits, as the cartels make tens of billions in a two trillion dollar economy. But the cash has a huge impact through bribery and in driving violence, and it is felt much more strongly in certain communities. Read: ↓ crashoutmedia.com/p/cartel-cash-…
17 replies · 93 reposts · 250 likes · 17.1K views
Chris Blattman reposted
Joseph Steinberg @jbsteinberg
I spend way too much time on social media debunking "economic slop" promulgated by lawyers pretending to be economists, so I built Show Me the Model: a tool that uses AI to check whether the economic reasoning in an essay actually holds up. showmethemodel.io

Give it a URL or paste some plain text, and the tool flags hidden assumptions, internal inconsistencies, and other problem areas, and tells you how a real economist would think through the issue.

Right now, it has 4 "personas": macro, trade, IO/price theory, and labor. The tool first figures out which persona is right for the job, and then uses a parallelized prompt scaffold specific to that persona to process the source text.

Here are some example outputs based on some essays that triggered me hard:

Citrini Research's viral essay on how AI could trigger a self-reinforcing financial crisis rivaling the GFC: showmethemodel.io/#/results/2ez3…

American Compass on the harms of trade deficits: showmethemodel.io/#/results/kOvt…

@oren_cass on why Built-to-Rent should be banned: showmethemodel.io/#/results/OXjr…

American Compass on the "China Shock": showmethemodel.io/#/results/dJM7…

@michaelxpettis on why China's trade surplus reduces global output: showmethemodel.io/#/results/C8OT…

Try it yourself at showmethemodel.io. You'll need to bring your own API key (OpenAI or Anthropic), and a typical analysis costs $0.50–$1.50. It's super preliminary and will probably break on you. I'd love feedback on both the functionality and the quality of the output.
22 replies · 129 reposts · 713 likes · 92.6K views
Chris Blattman reposted
Anup Malani @anup_malani
A city has slums. Two options: bulldoze and move residents to new housing, or upgrade the slum where it stands. New housing sounds better. Chile tested both for 20 years. It wasn't even close.
44 replies · 286 reposts · 4.7K likes · 696.2K views
Chris Blattman @cblatts
But I can get the AI referee report in advance from Refine.ink for $50. So why would I want an AI referee to do all of the things AI has a comparative advantage in? I guess if the AI referee could have a subtle appreciation of the literature, the frontier, fit, and the nature of the journal, then an AI referee might be quite reasonable, but we're not quite there yet.
1 reply · 0 reposts · 0 likes · 200 views
D. Yanagizawa-Drott @YanagizawaD
Question for fellow Economics researchers… Suppose you submit your paper to a journal, and the criteria are codified in transparent high-level guidelines. The Editor, a human, makes the final decision. As of today, if you had a choice between:

A) 2 human referees
B) 2 AI reports (let's say each is $100 of compute using frontier models)
C) 1 human, 1 AI

Which would you choose?
14 replies · 5 reposts · 27 likes · 18.3K views
Chris Blattman reposted
Ruijiang Gao @ruijianggao
What happens when you invite 150 AI economists (Claude Code) to a research conference, give them the exact same data, and ask them to test the same hypotheses? We did just that. The results reveal a new phenomenon: Nonstandard Errors in AI Agents. 🧵👇
22 replies · 271 reposts · 1.5K likes · 188K views
Chris Blattman reposted
Justin Sandefur @JustinSandefur
Wow. The macro model in @MinkiKim_Econ's JMP suggests rollout of the R21 malaria vaccine would boost Tanzania's long-run GDP by almost 7pp (!!!), mostly through reduced morbidity → increased human capital acquisition
3 replies · 52 reposts · 207 likes · 18.3K views
Chris Blattman reposted
Abdul Șhakoor @abxxai
I found a way to read a research paper the way academics actually read them.

A friend of mine at Cambridge showed me her Claude workflow. I thought she was just fast. Then I watched her pull apart a methodology section in twenty minutes that her seminar group had spent a week discussing without fully understanding. Here's exactly what she did:

First: she didn't ask Claude to summarise the paper. That's what everyone does. They paste in a paper and ask for a summary. They get a clean paragraph. They feel like they've read it. They move on. That's not reading. That's skimming with extra steps.

She did something completely different. She read the paper herself first. All of it. Without Claude. Then she asked: "Based on the methodology and results sections alone, what can and cannot be legitimately concluded from this study? Now read the abstract and tell me where the authors overreach."

She wasn't asking Claude to read the paper for her. She was using it to test whether the paper was actually saying what it claimed to be saying. The gap between those two things is where most students get lost. They read what the authors claim and treat it as what the authors found. An experienced academic never does that. She learned not to in twenty minutes.

But the next part is what I keep thinking about. She asked: "What did this study not measure that would have significantly strengthened or weakened the central claim? What is the authors' methodology quietly assuming without ever stating it?"

Most students read a methodology section to understand what the researchers did. She read it to find what they didn't do and what they hoped nobody would notice. Those are completely different acts of reading. One produces a student who can describe a study. The other produces a researcher who can evaluate one. Her seminar group spent a week on the same paper and never reached that question.

Then she did something most students never think to do. She tested the paper against itself: "If I tried to replicate this study with a different population in a different context, what would most likely change about the results? What does that tell me about how far the authors' conclusions actually travel?"

Most published claims are presented as general. Most are actually specific. That question finds the line between the two every time. Once you see it, you cannot read a paper without looking for it. It changes what you take from every study you ever read after that.

Then she mapped the paper's place in the conversation. She asked: "What debate is this paper entering? Who wrote the work this paper is responding to, and what would those authors say back? Where does this paper sit in the argument that was already happening before it was written?"

She stopped reading papers as standalone objects that day. Every paper is a reply to something. Most students never find out what. She found out in five minutes, and it entirely changed what the paper meant. A paper you understand in isolation is information. A paper you understand inside its conversation is knowledge.

Then she ran the final check. Before closing the paper she asked: "What is the single most important citation missing from this paper that every serious researcher in this field would consider essential? What conversation is this author not in that they should be?"

She found a foundational paper the authors had never cited. Not because they were careless, but because they came from a slightly different tradition and had a blind spot they weren't aware of. That blind spot explained a gap in their argument she hadn't been able to name until that moment.

She walked into the seminar and named it. Her supervisor stopped the discussion and asked her to explain how she'd found it. She told him she'd asked the right questions of the paper instead of just reading it. He told her that was exactly what twenty years in academia teaches you to do. She'd been doing it for three weeks.

Here is the actual workflow. Five questions. In order.

Question one: what can and cannot be legitimately concluded from the methodology and results alone? Where does the abstract overreach?

Question two: what did this study not measure that would have changed what it found? What is the methodology quietly assuming that it never defends?

Question three: if you replicated this with a different population or context, what changes? How far do the conclusions actually travel?

Question four: what debate is this paper entering? Who is it responding to, and what would those people say back?

Question five: what is the most important paper missing from the bibliography? What conversation is this author not in?

Most students spend three years at university reading papers from the outside. Those five questions put you on the inside in twenty minutes.

Claude didn't read the paper for her. It taught her the questions that experienced academics ask automatically after years in a field. She just learned them earlier. The papers didn't change. The questions did.

Most students finish a paper feeling like they've understood it. She finished a paper knowing exactly what it proved, what it didn't prove, where it sat in the field, and what it was quietly hoping nobody would ask. That is not a faster way to read. It's a completely different thing to do with a paper. And almost nobody teaches it directly.
59 replies · 236 reposts · 1.3K likes · 137.8K views