Gustavo de Souza
567 posts

Gustavo de Souza
@GusMicrotoMacro
PhD from @UChicago. @ChicagoFed economist. Opinions are mine. Email: [email protected] Leave me anonymous feedback: https://t.co/mY5RfIf52C
Joined November 2021
840 Following · 2.7K Followers
Pinned Tweet


I am happy that the paper was finally published. It was a long and arduous journey since I first started working on this idea.
Economics is about humans making choices. I want to tell you about the humans that inspired this paper and the humans that wrote it.
The idea came from an odd couple: my friend John and the Brazilian president Dilma Rousseff.
John didn't really like his job. He devised an audacious plan: do poor enough work to get himself fired, collect unemployment insurance, and chill at home playing FIFA.
Sadly for John, he didn't factor in the whims of President Dilma. Elected under the promise of not cutting labor protections, she implemented a plan that reduced unemployment benefits and tightened eligibility requirements. When John finally got fired, he didn't qualify. In our circle of friends, that story never gets old.
That kept me thinking: how should a government choose its unemployment insurance requirements? I reviewed the literature and studied the practice in other countries.
I found two very interesting facts. The US not only has a tenure requirement, like the one that excluded John, but also a monetary requirement that removes eligibility from workers who make too little money.
At first, excluding those who need benefits the most sounded outrageous. I couldn't find any research documenting the harm of such an anti-poor policy. So @avdluduvice and I took it upon ourselves to show it.
We wrote down a model of — well — John's behavior. Agents choose whether to work while facing random income shocks. If someone stops working, the government can't tell whether it was a quit or a layoff, which is exactly what John was counting on. The government's problem is to design unemployment insurance that maximizes agents' welfare. Strict requirements give less protection but make it harder for people to engineer their own firing just to collect benefits and play FIFA. So the government faces a clear trade-off, and which force dominates is an empirical question.
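The trade-off in that paragraph can be sketched in a few lines. This is a toy simulation with made-up probabilities and a made-up eligibility rule, not the paper's model: it only illustrates why a stricter tenure requirement cuts both the gaming and the genuine coverage.

```python
import random

random.seed(0)

# Toy sketch (illustrative numbers, not the paper's model): some agents
# suffer genuine layoffs, others would like to engineer a firing to
# collect UI. A tenure requirement screens out the gamers but also
# denies coverage to real layoffs.
def simulate(tenure_requirement, n=10_000):
    covered = gamed = 0
    for _ in range(n):
        tenure = random.randint(1, 24)             # months on the job
        games_the_system = random.random() < 0.10  # a John-type agent
        laid_off = random.random() < 0.05          # genuine layoff shock
        eligible = tenure >= tenure_requirement
        if laid_off and eligible:
            covered += 1   # insurance reaches a genuinely laid-off worker
        if games_the_system and eligible:
            gamed += 1     # moral-hazard cost of generous eligibility
    return covered / n, gamed / n

loose_coverage, loose_gaming = simulate(tenure_requirement=1)
strict_coverage, strict_gaming = simulate(tenure_requirement=18)
# A stricter requirement cuts the gaming but also cuts genuine coverage,
# so the optimal requirement depends on which force dominates.
```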
So I went to the data. I hand-collected historical unemployment requirement data for the US. There was no AI at the time, so all of it was done by human ctrl+c ctrl+v intelligence buried in old government reports. At @UChicago, that earned me the title of Ghost of the Basement 👻: youtube.com/watch?v=MS-GPz…
Here's what we found. When a state introduces a tenure requirement, workers hop between employers more and are more likely to become part-time, which is consistent with people staying in the job market just long enough to become eligible for UI. The monetary requirement had the opposite effect: workers became less likely to switch jobs or go part-time, consistent with those jobs being less attractive since they no longer come with UI coverage. The data showed that UI requirements matter not only to John.
Using those elasticities to validate the model, we found that the monetary requirement plays an important role, contrary to my initial beliefs. UI increases workers' incentives to accept any job, including ones they'd otherwise turn down. To correct this distortion while still providing insurance, the optimal policy is to exclude low-paying jobs from UI eligibility. This force dominates for two reasons: it reduces the cost of UI, and it creates incentives for workers to search longer for better jobs.

Dirk Krueger@IERJournal
The International Economic Review has just published a new exciting paper by Gustavo de Souza and André Victor Luduvice on Optimal Unemployment Insurance Requirements. It is available through @WileyEconomics here: onlinelibrary.wiley.com/doi/full/10.11…

@GusMicrotoMacro Beautiful story and great paper! Well done!

@joshgans If these tools replace routine tasks and complement decision making, isn’t it natural that people with fewer decision-making skills and tasks, like PhD students, will use them less?

I have a history of science anecdote to answer that.
In the 80s, most AI research was concentrated in symbolic AI and expert systems. Only a handful of researchers were working on neural networks. Geoffrey Hinton (@geoffreyhinton), now a Nobel Prize winner, was one of them.
When expert systems failed to deliver tangible results, funding for AI research dried up in the US. This became known as the AI Winter.
Geoffrey moved from the US to the University of Toronto because of this funding cut and, according to @CadeMetz in the brilliant book Genius Makers, his wife's desire to leave the US.
In 2012, he co-authored the AlexNet paper, showing that neural networks vastly outperformed every other method in image classification. Shortly after, he sold his startup to Google for $44 million.
AlexNet, together with breakthroughs happening elsewhere, led to the AI boom, creating strong demand for AI scientists and people with hands-on experience in neural networks.
With so much of this happening in Toronto, the city developed a startup and research culture around neural networks. Keep in mind: for many years, only a select few universities were doing serious AI research.
I imagine all of this got "in the air" (@joshgans) and spilled over into economics.
I don't know how much of this anecdote is true. But it is nice to think that ideas diffuse beautifully like that. It is also interesting to think that a flap of a butterfly's wings, or someone's wife wanting to leave the US, can spark such a cascade of events.
Lukas Freund@_LukasFreund_
What's the history of UT having such a strong focus on/expertise in AI? @avicgoldfarb @professor_ajay @Afinetheorem @joshgans

This is my overall take on research: follow your calling. Write a paper that only you could have written.
Each of us is a unique, complex individual. We have our own experiences, knowledge, curiosity, interests, fears, loves, and traumas. These rich personal idiosyncrasies, if well channeled, can translate into a new angle on any topic.
If it’s a question someone else could have answered, in a way someone else could have done it, written as someone else would have written it, LET SOMEONE ELSE DO IT. That is not your calling.
There is something out there, in the space of all knowledge to be gained, that only YOU could have taught the world. Your mission is to find that and teach us. And I, honestly, look forward to that.
Eduard Talamàs@EduardTalamas
what's your advice for PhD students thinking about doing research on AI's economic impacts? Where are the blue oceans (opportunities)? Any red oceans (bloodbaths) to avoid?

@mgaldino @lucasgoncalima Brazilian, thank God.

@lucasgoncalima Ah, I didn't know. I saw Fed and assumed it was a gringo thinking about gringo things. I'll take a careful look, thanks.

I am presenting at this great session at @MEAGrinnell on AI, with @BharatKChandar and Seyed. Come chat if you are around!

Gustavo de Souza reposted

Just published: Cleveland Fed economist paper “Optimal Unemployment Insurance Requirements” is out in the International Economic Review. doi.org/10.1111/iere.7…
Gustavo de Souza reposted

💻 Free online lecture: Present and future of industrialisation
On Tuesday March 10th (15:00 GMT), Tristan Reed, @RodimiroRodrigo, @GusMicrotoMacro, and @mposchke will cover the present and future of industrialisation.
Register here: cepr-org.zoom.us/webinar/regist…


@vincentgregoire @MattSDawson You should totally add Claude as a coauthor. You should also add as coauthor Stata, Microsoft, Overleaf, your electricity company, the manufacturer of your chair, and your dog (for moral support).

@MattSDawson Why would I list the software I used as an author? I have an AI use disclosure on the title page, which seems more appropriate.

I wrote an academic paper in 4 days using AI.
Claude Code wrote the first draft overnight. I iterated with multiple AI agents as reviewers. AI made me faster, not smarter.
The paper: "Investing in Artificial General Intelligence" The full story: vincent.codes.finance/posts/vibe-res…

@econcallum How do you know what this chart would look like without AI?
Gustavo de Souza reposted

📢 Our new VoxDevLit on Industrial Development is out now!
Senior Editors Francesco Amodio & @mposchke summarise everything you need to know about industrialisation.
Read & download here voxdev.org/voxdevlit/indu…


Can AI forecast trade?
Today, we awarded the AI for Trade Challenge.
Here are the answers we got.
Forecasting trade is hard for structural reasons: sparse time series, extreme skewness, regime changes, and strong political and macro shocks.
In the Challenge, we asked teams from all over the world a simple but demanding question: can modern AI and machine-learning methods reliably beat naïve trade forecasts in a realistic, policy-relevant setting?
Here are the three best answers we got:
🥉 3rd place, Insaf Guedidi (University Jaume I)
Used tree-based segmented boosting with economic weighting.
Insaf estimated separate models by country and trade flow, modeled trade values in log-space, and gave larger trade flows higher weights in the loss function.
Rather than optimizing purely statistical accuracy, the model was explicitly tuned to prioritize economically meaningful trade relationships.
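The two ingredients of this approach can be written out in a few lines. This is an illustrative sketch with made-up numbers, not Insaf's actual pipeline: growth is modeled in log-space, and each flow is weighted by its size so economically meaningful relationships dominate the fit.

```python
import math

# Illustrative data (made up): (last observed value, next observed value)
# for three bilateral trade flows.
flows = [
    (120.0, 132.0),   # large flow, +10%
    (80.0, 92.0),     # mid-sized flow, +15%
    (2.0, 1.0),       # tiny, noisy flow, -50%
]

log_growth = [math.log(v1) - math.log(v0) for v0, v1 in flows]
weights = [v0 for v0, _ in flows]  # weight each flow by its size

# Size-weighted average log-growth: the large flows dominate the estimate.
weighted_g = sum(w * g for w, g in zip(weights, log_growth)) / sum(weights)
# The unweighted average is dragged down by the tiny, volatile flow.
unweighted_g = sum(log_growth) / len(log_growth)

def forecast(last_value, g=weighted_g):
    """One-step-ahead forecast applying the weighted log-growth rate."""
    return last_value * math.exp(g)
```

With these numbers the unweighted growth estimate is negative while the size-weighted one is positive: the economic weighting keeps one small erratic series from distorting forecasts for the flows that matter.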
🥈 2nd place, Tirumala Venkatesh (Indian Trade Service)
Developed a model based on horizon-aware, feature-rich ensembles.
This was a more traditional but extremely disciplined approach. Tirumala trained separate models for U.S. and China imports and exports, using rich lag structures, volatility measures, cross-flow features, and specific macro indicators (including global supply-chain pressure and leading indicators).
Predictions came from an ensemble of gradient-boosted models, with weights determined by rolling, horizon-consistent validation.
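The rolling, horizon-consistent validation idea can be sketched as follows. This is an assumed setup, not the entrant's actual code: a model is scored only on the horizon it will be used for, with the training cutoff moving forward through time, shown here with a trivial "carry the last value forward" model.

```python
# Rolling, horizon-consistent validation (assumed setup, not the
# entrant's exact code): always evaluate at the same forecast horizon
# the model will be used for, advancing the training cutoff in time.
def rolling_validation(series, fit, predict, horizon, min_train=24):
    errors = []
    for cutoff in range(min_train, len(series) - horizon):
        train = series[:cutoff]                # data available at cutoff
        actual = series[cutoff + horizon - 1]  # realized value h steps ahead
        params = fit(train)
        errors.append(abs(actual - predict(params, horizon)))
    return sum(errors) / len(errors)           # mean absolute error

# Example with a trivial "carry the last value forward" model:
fit = lambda train: train[-1]
predict = lambda last_value, horizon: last_value
series = [float(t) for t in range(40)]
mae = rolling_validation(series, fit, predict, horizon=3)
```

In an ensemble, running this once per candidate model at each horizon yields the horizon-specific errors from which ensemble weights can be derived.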
🥇 1st place, Michael Nawar (University of Cairo)
Zero-shot forecasting with foundation models.
The winning solution used TimesFM, a large pre-trained time-series foundation model, applied out of the box to trade data. Instead of fitting a trade-specific model, the approach relied on transfer learning: generic temporal patterns learned from massive datasets were mapped directly onto bilateral trade series.
The pipeline was deliberately simple, but required careful data aggregation and cleaning. This suggests that, as foundation models mature, performance may depend increasingly on data coverage and problem framing rather than extensive model tuning. For institutions with limited modeling capacity, this is a powerful result.
What do these approaches have in common?
Despite their differences, all three solutions converged on a few core insights:
Recent data matter enormously: every team extended the dataset to get as close as possible to October 2025.
Segmentation beats one-size-fits-all models, whether via horizons, flows, or country roles.
Beating naïve baselines is non-trivial and requires discipline in validation, not just clever algorithms.
There is no single “best” model. Foundation models, ensembles, and boosted trees all performed well when aligned with the structure of the problem.
For us, this is exactly what the AI for Trade Challenge was about: not just prediction accuracy, but learning how different modeling philosophies perform when confronted with real, policy-relevant trade data.
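The naïve baseline every entry had to beat fits in a few lines: a "no-change" forecast scored with mean absolute percentage error. The numbers below are made up for illustration and the scoring metric is an assumption, not necessarily the Challenge's exact rule.

```python
# A "no-change" naive baseline with MAPE scoring (illustrative numbers;
# the Challenge's exact scoring rule may differ).
def naive_forecast(history, horizon):
    """Forecast that the last observed value simply repeats."""
    return [history[-1]] * horizon

def mape(actual, predicted):
    """Mean absolute percentage error."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

history = [100.0, 104.0, 99.0, 103.0]
actual_future = [105.0, 107.0]
baseline_error = mape(actual_future, naive_forecast(history, 2))
```

Because trade series are sparse, skewed, and regime-prone, this simple benchmark is surprisingly hard to beat, which is why disciplined validation mattered as much as clever algorithms.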
Finally, we would like to thank the sponsors who made this challenge possible.
The OEC (@OECtoday) for providing the data and the prizes. And all of our international partners for helping us push the project within their networks:
🌎 Trade Practice, The World Bank
🌎 Asian Development Bank
🇪🇺 European Lighthouse of AI for Sustainability (ELIAS)
🇬🇧 The Supply Chain AI Lab at the University of Cambridge
🇬🇧 Complexity Economics Group Institute of New Economic Thinking (INET) Oxford University
🇬🇧 MIOIR, AMBS, University of Manchester
🇭🇺 Corvinus Institute of Advanced Studies (CIAS) at Corvinus University of Budapest
🇫🇷 Institute for Advanced Study in Toulouse (IAST) at the Toulouse School of Economics
🇪🇸 Fundación Cotec, Spain
🇺🇸 Global Opportunity Lab at UC Berkeley


You can find the paper with this figure here:
chicagofed.org/publications/w…

This figure shows that the effect of AI on employment depends on how an occupation uses prediction. Let me explain.
AI is a prediction technology: it predicts the X/Twitter post you’re most likely to like, flags whether a bank transfer is fraudulent, finds a human face in a video, among many others. Our beloved ChatGPT/Claude/Gemini is AI because it is a next-token predictor.
Building on this idea, @professor_ajay, @joshgans, and @avicgoldfarb argue that AI’s impact on employment depends on an occupation’s relationship to prediction.
If an occupation mainly produces predictions, like accountants forecasting profits or secretaries predicting the best time for the office meeting, it is likely to be replaced by AI. After all, AI can do prediction better and more cheaply.
If an occupation instead uses predictions to make decisions, like a manager deciding which product to launch or a production worker choosing the best settings for a machine, it is likely to be in higher demand with AI. After all, if prediction complements their work, then when AI makes prediction cheaper, demand for them increases.
Now to the figure: the y-axis plots the estimated effect of AI on employment for each occupational group. The x-axis plots “information-use intensity”: the ratio of tasks in that group that use predictions to make decisions to tasks that generate predictions.
The figure shows the point made by @professor_ajay, @joshgans, and @avicgoldfarb visually: AI increases employment in occupations that use prediction to make decisions and decreases employment in those that mostly generate predictions.
These results are important because they help us understand why so many papers have found so many different responses to AI: the effect of AI depends on the set of occupations in the sample.
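The x-axis ratio is simple enough to write out explicitly. The task counts below are hypothetical, chosen only to show how the measure separates prediction-using from prediction-generating occupations; they are not the paper's data.

```python
# Hypothetical task counts (illustrative only, not the paper's data),
# showing how the "information-use intensity" x-axis is constructed.
def information_use_intensity(tasks_using_predictions, tasks_generating_predictions):
    """Ratio of prediction-using tasks to prediction-generating tasks."""
    return tasks_using_predictions / tasks_generating_predictions

managers = information_use_intensity(12, 4)    # mostly decide with predictions
forecasters = information_use_intensity(3, 9)  # mostly produce predictions
# By the argument above, AI should raise employment where this ratio is
# high and lower it where the ratio is low.
```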






