Gina Pieters, PhD
@ProfPieters

9.9K posts
I was an academic economist for over 10 yrs (UMinn, TrinityU, UChicago); now I'm a researcher specializing in the macro-impact of currency+asset digitization.

Chicago, IL · Joined January 2016
1.7K Following · 4.8K Followers

Pinned Tweet
Gina Pieters, PhD @ProfPieters ·
1/Many/ This is a thread for those of you who are applying to Economics graduate degrees, MA or PhD. I just finished my first year on an MA applications committee and it’s clear there is a *lot* of hidden information harming people’s applications.
14 replies · 112 reposts · 463 likes · 0 views
Gina Pieters, PhD retweeted
Jessica Leight @leightjessica ·
Did a head-to-head comparison of refine + coarse.ink today #econtwitter (I had used refine last week for one of my recent WPs, so I fed the same version into coarse.) This is a paper reporting on an RCT, fairly standard development / applied micro design
3 replies · 10 reposts · 100 likes · 14K views
Gina Pieters, PhD retweeted
Joseph Steinberg @jbsteinberg ·
The more I think about this paper the more annoyed I get. The framing that macro is "lagging" because we don't use credibility-revolution tools is bullshit. We don't use them as much (and when we do, we use them differently) because they simply don't work for macro questions!
6 replies · 10 reposts · 143 likes · 13K views
Gina Pieters, PhD retweeted
John Ruf @JohnRuf6 ·
I don't like the "lagging" framework. A big reason why macro and finance don't use DiD and other CR methods is that the assumptions required are absolutely heroic when you get to finance and country-wide variables. In finance every market affects every other market.
NBER @nberpubs

A study of around 44,000 papers finds that the credibility revolution has spread unevenly beyond applied micro, driven mainly by difference-in-differences, with finance and macro lagging by roughly 15 years, from @paulgp nber.org/papers/w35051

1 reply · 4 reposts · 43 likes · 3.8K views
Gina Pieters, PhD retweeted
Saki Bigio @SakiBigio ·
I tell this to my students when I teach them HJB equations: Economics is harder than physics because our particles are forward looking, but at least we can talk to our particles.
Jesús Fernández-Villaverde @JesusFerna7026

A point that is sometimes overlooked is that PDEs in physics and economics have a subtle but important difference. When a physicist solves the Schrödinger equation (see my slide below), the potential is given. The coefficients of the equation are part of the problem statement. You pick your grid, refine your mesh, and the equation never changes on you. Better numerics give a better approximation to a fixed target.

In economics, this is not the case. Look at the Hamilton-Jacobi-Bellman equation for the neoclassical growth model (also slide below). The drift of capital depends on a derivative of the value function, the very object you are trying to solve for. The “coefficients” of the PDE are endogenous to the optimal choices of the agents. This is what @UncertainLars and Sargent referred to as the cross-equation restrictions implied by optimizing behavior. This is what @MahdiKahou and I call the “equilibrium loop”: improving your approximation changes the policy, which changes the dynamics, which changes where in the state space the economy spends its time, which changes where your approximation needs to be accurate. You are not chasing a fixed target with a better net. Moving the net moves the target.

This has serious consequences for computation. You cannot just borrow neural network architectures from deep learning in the natural sciences. The loss function comes from equilibrium conditions, not from labeled data. The evaluation points are not given. Instead, they are regenerated each epoch from the current approximation. Ignoring this is why you often get solutions that look good on a training set but fall apart in simulation.
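The contrast being described can be written out explicitly; this is the standard textbook deterministic formulation of the growth-model HJB (a sketch, not necessarily the exact equations on the slides):

```latex
% Physics: in the Schrodinger equation the potential V(x) is given data,
% so the differential operator is fixed before you start solving.
% Economics: in the neoclassical growth model's HJB,
\rho V(k) = \max_{c}\Big\{\, u(c) + V'(k)\big[f(k) - \delta k - c\big] \,\Big\}
% the maximization delivers the policy c^*(k) = (u')^{-1}\!\big(V'(k)\big),
% so the drift of capital,
%   \dot{k} = f(k) - \delta k - c^*(k),
% depends on V'(k), the derivative of the unknown itself. The PDE's
% "coefficients" therefore move every time the approximation to V improves,
% which is exactly the "equilibrium loop" in the post.
```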

10 replies · 16 reposts · 199 likes · 22.5K views
Gina Pieters, PhD @ProfPieters ·
This is the kind of thing that's "obvious" to practitioners after years of training, but completely hidden to outsiders without that training.
Jesús Fernández-Villaverde @JesusFerna7026

[Quoted post: Jesús Fernández-Villaverde's thread on PDEs in physics vs. economics, quoted in full in the Saki Bigio retweet above.]

4 replies · 0 reposts · 6 likes · 586 views
Gina Pieters, PhD retweeted
Patrick McKenzie @patio11 ·
@TheStalwart "There is substantial overlap between the smartest bears and the least-equipped humans" was once an observation on how it is difficult to engineer a bear-proof garbage receptacle for national parks. I think a spiritually similar thing happens with LLMs.
4 replies · 17 reposts · 285 likes · 13.7K views
Matt Darling 🌐🏗️ @besttrousers ·
@ProfPieters Haha, this was the most common point I was making on twitter around 2017 or so? (Stats conversations somewhat regressed over time...)
1 reply · 0 reposts · 2 likes · 39 views
Gina Pieters, PhD retweeted
Alex Volkov @altryne ·
If you, like me, just woke up, let me catch you up on the Claude Code Leak (I know nothing, all conjecture):
> Someone inside Anthropic got switched to Adaptive reasoning mode
> Their Claude Code switched to Sonnet
> Committed the .map file of Claude Code
> Effectively leaking the ENTIRE CC source code
> @realsigridjin was tired after running 2 South Korean hackathons in SF, saw the leak
> Rules in Korea are different; he cloned the repo, went to sleep
> Wakes up to 25K stars, and his GF begging him to take it down (she's a copyright lawyer)
> Their team decided: how about we have agents rewrite this in Python!? Surely... this is more legal
> Rewrite in Py
> Board a plane to SK🇰🇷
> One of the guys decides Python is slow, is now rewriting ALL OF CLAUDE CODE into Rust
> Anthropic cannot take down, cannot sue
> Is this "fair use?"
> TL;DR: we're about to have open source Claude Code in Rust
353 replies · 1.1K reposts · 11.8K likes · 2M views
Gina Pieters, PhD retweeted
Gregor Schubert @gregorschub ·
The WSJ article has some great anecdotes that align with our findings in the paper: while research has so far mostly focused on the labor market disruptions and on firms, GenAI use is common for private purposes - and can have equally large effects there! (2/n)
1 reply · 1 repost · 10 likes · 4.4K views
Gina Pieters, PhD retweeted
Alex Volkov @altryne ·
PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.

Some insane Redditor reverse engineered the Claude binaries with MITM to find 2 bugs that could have caused cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota. If you're using your API keys with Claude this is even worse. This is also likely why this isn't uniform: while over 500 folks replied to me and said "me too", many (including me) didn't see this issue.

There are 2 issues that are compounded here (per the Redditor; I haven't independently confirmed this):

1st bug he found is a string-replacement bug in bun that invalidates cache. Apparently this has to do with the custom @bunjavascript binary that ships with standalone Claude CLI. The workaround there is to use Claude with `npx @anthropic-ai/claude-code`.

2nd bug is worse: he claims that --resume always breaks cache. And there doesn't seem to be a workaround there, except pinning to a very old version (that will miss out on tons of features). This bug is also documented on GitHub and confirmed by other folks.

I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$; they actively benefit from everyone hitting as many cached tokens as possible, so this is absolutely a great find, and it does align with my thoughts earlier. The very sudden spike in reporting for this, and the non-uniform nature (some folks are completely fine, some folks are hitting quotas after saying "hey"), definitely point to a bug.

cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.
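The quota arithmetic behind the post can be sketched in a few lines. The token counts, turn count, and hit rate below are made-up illustrative assumptions; only the 10x uncached-vs-cached cost multiplier comes from the post (which cites 10x-20x):

```python
# Toy sketch: why a cache-invalidation bug inflates quota burn.
# The 10x multiplier is the lower bound cited in the post; everything
# else (context size, turns, hit rate) is an illustrative assumption.

CACHED_COST = 1.0      # relative cost per cached input token
UNCACHED_COST = 10.0   # relative cost per uncached input token

def session_cost(context_tokens, turns, cache_hit_rate):
    """Relative cost of re-sending the same context on every turn."""
    cached = context_tokens * cache_hit_rate * CACHED_COST
    uncached = context_tokens * (1 - cache_hit_rate) * UNCACHED_COST
    return turns * (cached + uncached)

healthy = session_cost(context_tokens=50_000, turns=20, cache_hit_rate=0.95)
broken = session_cost(context_tokens=50_000, turns=20, cache_hit_rate=0.0)
print(f"cache working: {healthy:,.0f}")
print(f"cache broken:  {broken:,.0f}")
print(f"quota burns {broken / healthy:.1f}x faster")
```

Even with these mild assumptions, a session whose cache silently never hits burns quota several times faster, which is consistent with some users hitting limits "after saying hey" while others see nothing.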
Alex Volkov @altryne

My feed is showing me a bunch of folks who tapped out their whole usage limits on Mon/Tue. Is this your experience? Please comment, I want to understand how widespread this is

225 replies · 421 reposts · 5K likes · 1.6M views
Gina Pieters, PhD retweeted
Jesús Fernández-Villaverde @JesusFerna7026 ·
Which of the rationales I outlined last Tuesday for traditional higher education still hold up against AI? As I noted in a later post, the answer depends on the college-major pair. A finance degree from Wharton and a psychology degree from a commuter college are different products, so AI will affect them in very different ways, and we need to always think at the margin.

The twelve rationales fall into three categories:
🅐 Mostly resilient to AI: signaling, networking, and cultural capital at highly selective institutions, commitment for time-inconsistent students, the hold-out period for traditional-age students, proximity to the research frontier at research universities, and physical infrastructure.
🅑 Highly vulnerable: skill acquisition, topic curation, and assessment.
🅒 Entirely dependent on the college-major pair: credentialing (robust where statutory, fragile where normative), peer effects, and cultural capital (robust at selective institutions, negligible elsewhere).

Let me go through them one by one.

① Signaling. Robust at the top, irrelevant at the bottom. The signaling value of a STEM degree from an Ivy is untouched by AI, because the signal comes from admission and from the ability to complete the degree. By contrast, a degree in humanities from a non-selective institution is a weak signal, because both admission and completion are easy. AI opens the door to alternative ways of assessing competence (for example, personalized evaluations) that may soon compete with degrees from weaker institutions. So the student who was already indifferent between attending a mid-ranked institution and entering the labor market now has a third option.

② Credentialing. The most durable rationale. You cannot practice medicine, law, or nursing without a credential, and AI does not change that. For college-major pairs where credentialing is statutory, the university’s position is secure unless AI itself creates political pressure against credentialing. Where credentialing is a social norm rather than a legal requirement (many firms ask for a B.A. for jobs that do not need one), the norm is more fragile. AI weakens it by giving employers additional ways to assess competence.

③ Networking. Robust at residential colleges, nearly absent at commuter institutions. The networking value of a college depends almost entirely on the residential experience. At a commuter campus, where students arrive, sit in a lecture, and leave, the networking value is already minimal. AI changes nothing.

④ Peer effects in learning. The value of being challenged by smart classmates depends on the quality of the peers and on the structure of the interaction. At selective institutions with small classes, this is valuable. In large lecture courses with 400 students, the peer effect was already negligible.

⑤ Commitment. Most people are time-inconsistent. They start things and do not finish them. Coursera completion rates are below 10 percent. A university provides structure: deadlines, exams, the sunk cost of tuition, and social pressure. AI does not solve this. If anything, it makes it worse because easier access lowers the cost of quitting. You can always come back to Claude tomorrow. And tomorrow never comes. The student who needs someone to force her through econometrics at 8 a.m. still needs a college. This rationale is strongest for students who are least self-directed, which is a large share of the marginal students I discussed last Thursday.

⑥ Curation of topics. Highly vulnerable. Deciding what a well-educated economist or biologist needs to know was once one of the university’s core functions. Now AI does it very well. My Goffman study plan is a good example. Claude curated a sequence of readings, identified the key themes, and structured the material for my background. A good prompt can now produce a syllabus at least as good as what most departments offer, and better tailored to the individual. The only place curation still has real value is at the research frontier, where the knowledge is new enough that no training data fully captures it. But that is really an argument about proximity to research (see rationale ⑩ below).

⑦ Skill acquisition. The most vulnerable rationale. A student who wants to learn Python or financial modeling can now do so with Claude at a fraction of the cost and at her own pace. This was possible in the past (that is how I learned assembly language in high school), but the cost was much higher (oh, yes; learning assembly language on my own on a Spectrum was not fun, believe me) and the approach worked only in some fields. AI lowers those costs for almost everyone. For college-major pairs where the main value proposition is skill acquisition, the pressure is clear. A mid-tier business school charging $40,000 per year to teach Excel and financial statements will quickly lose students.

⑧ Cultural capital. Robust at elite residential institutions, weak elsewhere. This is about learning how to present yourself, read a room, navigate social hierarchies you were not born into, and hold the “right” values. Four years at Yale transmit cultural capital that no AI can replicate. At a commuter campus, that transmission was already close to zero.

⑨ Hold-out period. Surprisingly robust. Most 18-year-olds are not ready to work. They do not know what they want to do, and they are not yet good at figuring it out. The university gives them a place to be while they mature. The argument is real, but it applies to residential programs serving traditional-age students. A 32-year-old commuter student does not need a holding period. But even among traditional-age students, the marginal ones are those whose families feel the cost most. If the hold-out period is the main thing a university provides, $200,000 is an expensive kindergarten for late teenagers.

⑩ Proximity to the research frontier. Robust at research universities, absent elsewhere. Learning asset pricing from John Cochrane is qualitatively different from learning it from me because he created the knowledge, and I am only transmitting it. AI is extraordinarily good at transmitting existing knowledge, but it is not yet producing new knowledge. At institutions where faculty do not publish (or publish forgettable research), AI’s advantage in content delivery becomes decisive.

⑪ Assessment and feedback. Moderately vulnerable. AI is already good at grading standardized work and giving feedback on writing. For those tasks, it is arguably better than the average overworked TA. But there is a more nuanced form of assessment that AI still does not do well: the Socratic method in a small tutorial or seminar. On the other hand, nobody is receiving Socratic feedback in a 400-student Economics 101 section.

⑫ Physical infrastructure. Completely robust. If you need a chemistry lab, a wind tunnel, or a particle accelerator, you need a university. For a STEM college-major pair, the infrastructure argument alone justifies the institution. Not coincidentally, it is also the STEM college-major pairs where the ROI data tend to be the highest.

In any case, I expect a great deal of reshuffling within the higher-ed sector. Some top universities will adapt well, while others will not, often for reasons that are hard to predict in advance: leadership, governance, institutional culture. Among less selective institutions, some will move toward value propositions AI does not threaten (adult education, community, credentialing in regulated fields), while others will simply disappear.

In summary, higher education may look very different in a few decades. Not because universities are going away, but because the marginal student, the marginal program, and the marginal institution will all face a different set of relative prices. And when prices change, behavior changes. The adjustment will begin at the margins and move inward from there. Markets eventually do their work, even in the department of anthropology.

For reference, my post on the 12 rationales: x.com/JesusFerna7026…
A framework for evaluation: x.com/JesusFerna7026…
Setting up your own personalized course: x.com/JesusFerna7026…
Some suggestions: x.com/JesusFerna7026…
7 replies · 29 reposts · 115 likes · 15.4K views
Gina Pieters, PhD retweeted
euan ashley @euanashley ·
He was working on our recently released multi-modal cardiologist MARCUS (arxiv.org/abs/2603.22179) and had neglected to uncomment a key line of code that gave the model access to the images. Despite that, the model answered all the questions and scored highly on the benchmark.
3 replies · 11 reposts · 206 likes · 31.6K views
Gina Pieters, PhD retweeted
Jessica Riedl 🧀 🇺🇦 @JessicaBRiedl ·
Economics: The optimal level of tax evasion, govt waste, crime, and even voter fraud is often *above* zero. (Because getting to literal zero usually requires imposing dramatically higher costs & burdens on everyone else to prevent those last few, toughest cases.)
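The logic can be shown with a toy model (all numbers are made-up illustrations, not estimates): suppose each remaining unit of evasion imposes a social harm of 1, while the enforcement cost of holding evasion down to level e grows without bound as e approaches zero.

```python
# Toy illustration: total social cost = harm from remaining evasion
#                                     + cost of enforcing it down to that level.
# Enforcement cost c/e blows up as evasion e -> 0, so the cost-minimizing
# evasion level is interior, i.e. strictly above zero. Numbers are invented.

def total_cost(evasion, harm_per_unit=1.0, enforcement_scale=100.0):
    return harm_per_unit * evasion + enforcement_scale / evasion

# Grid-search for the cost-minimizing evasion level over 0.1 .. 50.0.
levels = [e / 10 for e in range(1, 501)]
optimum = min(levels, key=total_cost)
print(optimum)  # 10.0 — the optimum is interior, well above zero
```

With these functional forms the optimum is at sqrt(enforcement_scale / harm_per_unit) = 10: pushing evasion below that point costs society more in enforcement than the evasion itself was costing.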
0 replies · 253 reposts · 2.8K likes · 94K views
Gina Pieters, PhD retweeted
Aniket Panjwani @aniketapanjwani ·
I've taught all the 50+ economists I've trained on agentic coding to use @every's Compound Engineering (CE) plugin. Recently, @danshipper, CEO of Every, has integrated a new update to CE which is particularly useful to economists: his tool "Proof".

Whenever Claude Code via CE makes a plan or a brainstorm document, an option will come up to "Share to Proof". This will generate in your browser an interface in which you can interact with that plan or brainstorm document and give very specific feedback to Claude Code.

A big problem for economists is that they have a much greater need for certain kinds of correctness than software engineers do. A big part of getting the most correct or best results downstream is getting your plan right from the beginning. I think Proof, especially with its tight integration with Claude Code, is pushing practically in a very intelligent direction for human/AI collaboration, and I'd recommend that any Claude Code-using economist try out this feature in Compound Engineering.

@every is rapidly iterating on the Compound Engineering plugin, but I did a video a couple months ago on Compound Engineering which I think is still worth watching: youtu.be/IQ1_5jPiQoE?si…
3 replies · 9 reposts · 85 likes · 10.2K views
Gina Pieters, PhD retweeted
Jesús Fernández-Villaverde @JesusFerna7026 ·
By now, I have published a fair number of papers, and one more acceptance would have close to zero marginal impact on anything that matters professionally. But getting my survey on “Deep Learning for Solving Models” accepted into the Journal of Economic Literature made me genuinely happy, for reasons that have nothing to do with my CV.

I had the misfortune of studying for my undergraduate degree in economics at a quite awful institution. Two professors, David Taguas and Alfredo Arahuetes, were outstanding, and I owe them a great deal. The rest were well below any reasonable professional level, and some violated the basic standards of ethical conduct. They had no business teaching economics at any level, let alone at a university that charged tuition and claimed to prepare students for professional life.

I had to work out most of my education on my own. The surveys published in the Journal of Economic Literature were how I did it. I spent hours in the library’s reading room going through one survey after another on topics I had never been properly taught. Some helped more than others, but collectively they gave me a solid enough foundation that, when I arrived at Minnesota for my PhD, I discovered, to my considerable surprise, that I was ahead of nearly all the other first-year students, including some who held master’s degrees, despite the fact that I had finished my undergraduate degree just six weeks before.

I owe the Journal of Economic Literature a debt I will never be able to repay. Publishing a survey there is the closest I can come to trying. So the thought that some student somewhere, working on her own in a library or on a laptop, might find my survey useful gives me tremendous satisfaction.

But there is a broader point worth making. Even in the world of AI, the profession has an important mission in making educational material widely available. Textbooks, surveys, teaching slides: these are public goods in the economist’s sense, with high social value and insufficient private incentive to produce. This is also why I post all my slides and teaching material online: sas.upenn.edu/~jesusfv/deepl…

We do not reward these activities nearly enough, and their supply is well below what any reasonable social planner would choose. I do not have a good proposal for changing this, and I would welcome suggestions.

What I do find heartbreaking is that many of the great economists of the past couple of generations never wrote textbooks on their areas of expertise. I do not mean this as criticism. All of them maximize, and perhaps they all suffer from the same bias I suffer from: the belief that one can always do it next year. But I often think about the hours of pure intellectual pleasure I would have had reading “Time Series Econometrics: An Advanced Textbook” by Chris Sims or “Methods in Structural Estimation” by Pat Bajari. Those books do not exist. They should.
34 replies · 200 reposts · 1.3K likes · 139.5K views