Igor Letina

585 posts

@IgorLetina

Associate professor, Uni Bern | Vice President, Swiss Competition Commission | Competition policy/IO, innovation economics, contest design.

Joined October 2014
1.8K Following · 3.7K Followers
Pinned Tweet
Igor Letina @IgorLetina
In *Startup Acquisitions: Acquihires and Talent Hoarding,* Jean-Michel Benkert (@jm_benkert), Shuo Liu, and I study acquihires, a type of startup acquisition where the acquirer is mainly (or only) interested in people working for the startup. PDF: igorletina.com/files/Talent_H…
[image]
Igor Letina retweeted
Luis Garicano 🇪🇺🇺🇦
A day like today, exactly 25 years ago, my admired advisor Sherwin Rosen died, way too young, at 62 years of age. He was then the president of the AEA. We owe him some crucial ideas. I highlight 7.

1. Hedonic Prices and Implicit Markets (JPE, 1974). How does the market price something as complex as a car, a house, or a job? Goods are bundles of characteristics. In equilibrium, the price schedule is the envelope of heterogeneous buyers' bids and heterogeneous sellers' offers, so market prices reveal the implicit value of each characteristic. Key to environmental valuation, the value of life, and urban quality-of-life indexes.

2. Monopoly and Product Quality (JET, 1978, with Mussa). Did you ever wonder if your tourist-class seat is too narrow? A monopolist who faces buyers differing in taste for quality degrades what she sells to low types so that high types cannot mimic them and capture the surplus. The foundational screening model.

3. Education and Self-Selection (JPE, 1979, with Willis). Do graduates from better colleges earn more because of how much they know? People sort into college by comparative advantage: those who go are better at college-type work, and those who don't are better at non-college work. Estimated returns need to be corrected for this selection. A crucial idea for an entire literature.

4. The Economics of Superstars (AER, 1981). Why do rewards concentrate at the top in music, movies, sports, etc.? When output can be replicated at zero marginal cost and there is little substitutability in production, small talent differences produce enormous earnings gaps. The economics of the internet, twenty years early.

5. Rank-Order Tournaments (JPE, 1981, with Lazear). When individual output is noisy, firms pay on rank; the spread between winner and loser is the instrument that elicits effort.

6. Authority, Control, and the Distribution of Earnings (Bell Journal, 1982). In a hierarchy, each manager's talent is multiplied across everyone below her. A slightly better person at the top is worth disproportionately more: a better general decides which war we fight, and hence affects all of our marginal products. That is why we see convexity of pay at the top of organizations.

7. Prizes in Elimination Tournaments (AER, 1986). In a multi-round promotion ladder, the biggest jump must come in the final round: the option value of future rounds has vanished, so only the current prize can motivate.

Professor Rosen would look distracted in seminars. He would look confused. Then he would say something that changed the entire analysis and discussion. He never tried to look good at the speaker's expense. He just saw the problem more deeply than anyone in the room, no exceptions.

One personal anecdote: during my PhD studies, I was totally depressed. I could not advance; all my ideas were awful. I could not bear going to his office. As I was coming upstairs toward the 4th floor of the Social Science building, I met him on the stairs. He said, "I have not seen you, Luis, for a while." I said, "Sorry, Prof. Rosen, I had nothing to show you." He said, "Well, you have to come. Come every week, whether you have something or not." Incredibly, that short exchange was probably the most important one of my life. The duty to go to his room got me out of the hole.
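The tournament logic in item 5 can be made concrete with a standard textbook sketch of the Lazear–Rosen model (notation is mine, not from the thread):

```latex
% Lazear-Rosen rank-order tournament: a minimal sketch.
% Two identical risk-neutral workers i, j; output q_i = e_i + eps_i,
% where eps_i is noise. The winner (higher q) is paid W_1, the loser W_2;
% effort cost C(e_i) is increasing and convex.
\[
  \max_{e_i} \; P(e_i, e_j)\, W_1 + \bigl(1 - P(e_i, e_j)\bigr)\, W_2 - C(e_i),
\]
% where P(e_i, e_j) is the probability that q_i > q_j.
% The first-order condition is
\[
  (W_1 - W_2)\, \frac{\partial P}{\partial e_i} = C'(e_i),
\]
% so only the prize spread W_1 - W_2 -- not the level of pay --
% is the instrument that elicits effort, exactly as stated above.
```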
[image]
Igor Letina @IgorLetina
Very happy to be part of this OECD workshop on start-ups’ ability to scale. If this topic interests you, take a look at the website to see the full speaker lineup and consider joining us online or in person. oecd.org/en/events/2026…
Ashwin Varma, MD @varma_ashwin97
Reading “Apple in China” (@PatrickMcGee_ ) will count as one of the most politically important and radicalizing moments of my life.
Igor Letina @IgorLetina
@Afinetheorem It is possible that I am at fault. But I am certainly trying to use an agents.md file.
Igor Letina @IgorLetina
1/7 I have spent a month testing ChatGPT, Claude, and Gemini ($20 subscriptions) in parallel. Here is my experience, in case someone finds it useful. I am not an expert user, so some of this may reflect my (suboptimal) usage. But maybe that will reflect your experience as well.
Igor Letina @IgorLetina
@Afinetheorem I use that, and I still find that Claude Code regularly ignores the instructions. This is what happened to me literally 10 minutes ago: "You're right, I violated the AGENTS.md rule. The datasets/ folder is read-only. Let me fix this."
Kevin A. Bryan @Afinetheorem
@IgorLetina With Claude Code (and any of these agentic models), there is a HUGE benefit from a CLAUDE.md or AGENTS.md that points to other files with your coding, style, and writing preferences.
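A minimal sketch of what such a file might look like. The file names, paths, and section headings here are illustrative assumptions, not something from the thread; only the read-only `datasets/` rule comes from the exchange above.

```markdown
# AGENTS.md — instructions for coding agents (illustrative sketch)

## Pointers to preference files (hypothetical paths)
- Coding conventions: see `docs/style.md`
- Writing and tone preferences: see `docs/writing.md`

## Hard rules
- The `datasets/` folder is read-only; never create, modify, or delete files there.
- Run the test suite before proposing any change.
- Ask before installing new dependencies.
```

The idea, per the tweet, is to keep the top-level file short and let it point at the detailed preference files, rather than inlining everything.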
Igor Letina @IgorLetina
@s_lauermann Thanks! Your experience with GPT 5.2 pro matches mine. However, I have gotten used to switching between thinking and pro, as the pro usage is quite expensive on my plan ($25/seat team plan). I was surprised by the low quality of literature reviews all three models produce.
Stephan Lauermann @s_lauermann
Great post. There is too little information like this out there. Generally, my experience is that the difference between the top models and the $20 versions on math problems is quite large. Was that yours, too? GPT 5.2 pro would solve all exercises of intermediate micro without any errors, e.g., and write beautiful solutions, with figures of game trees and everything on top.
Regi Kusumaatmadja @abdurrahmanregi
@IgorLetina I'd suggest @OpenRouter; there you can also try GPT 5.2 pro via the API, which (iirc) is not available in the $20 subscription. It would be very interesting to know the opinion of another IO theorist on GPT 5.2 pro. However, Opus via the API is super expensive for daily usage.
Igor Letina @IgorLetina
7/7 For example, when asked to recommend local LLM models, ChatGPT would regularly recommend models from 2024, whereas Gemini would recommend the most recent variants. In the end, I will be keeping ChatGPT and Claude and I will be upgrading Claude if my workload requires it.
Igor Letina @IgorLetina
6/7 Gemini: I ended up using it the least. Gemini CLI is OK, but I did not find any real advantages over Codex or Claude Code. One big advantage of Gemini over ChatGPT is that its answers are based on more recent data.
Florin Bilbiie 🇪🇺 🇺🇦 @FlorinBilbiie
I used to be so proud of my em dashes—and so sad to have to let them go now.
Luis Garicano 🇪🇺🇺🇦@lugaricano

Since relearning to write at @uchicago, I aimed to prune: only short, declarative, information packed, sentences, few adjectives, no adverbs, no subordinate clauses, but this is now LLM's preferred style, so humanness necessitates a defiant return to the ornate, labyrinthine, and gloriously textured baroque writing I first mastered amidst the golden echoes of my youth in Spain.

Igor Letina @IgorLetina
@SanchezCartas Oh, I'm sure it's by design, but I don't think their objective is to waste tokens. They built Claude with a "move fast and break things" mindset. Codex, in my experience, has been much more restrained.
J. Manuel Sánchez-Cartas @SanchezCartas
@IgorLetina Maybe it's by design. A simple instruction to raise "temperature" above users' configuration. The worst case scenario is you wasting tokens, the best case scenario for them is you wasting tokens.
Igor Letina @IgorLetina
Something tells me Claude is not actually sorry.
[image]