PDS

3.5K posts

PDS banner
@TopadoDuran

Modern Global Macro Investor. Macro AI Systems Engineer. Macroeconomics. Markets. Fed Watcher. China Macro. AI. Computer Science. Opinions are my own.

Manhattan, NY · Joined October 2009
1.8K Following · 13.9K Followers
Pinned Tweet
PDS
PDS@TopadoDuran·
Here is a Citrini critique from a Macro-AI lens. Citrini's "2028 Intelligence Crisis" gets the direction of the labor-market impact right. The velocity is wrong. His scenario timeline is off by roughly 5+ years because his AI capability assumptions are anecdotal, not grounded in the math of how models actually improve.

His core assumptions:
1) The AI feedback loop is sustained by "AI Investment Increases & AI Capabilities Improve" — implying that as long as you throw money at compute, you get better models that unlock capabilities replacing humans at a given task.
2) He extrapolates from what he's observed anecdotally about AI agents to "smooth sailing" toward autonomous multi-week agents by 2027. This is a guess, not an informed view.

Let's rigorize his feedback loop with some Macro-AI: scaling laws → capabilities → task automation → displacement.

The relationship between scaling laws and capabilities is determined by a "Capability Transfer" function. This relationship is non-linear and task-dependent. Capabilities don't emerge linearly from loss improvements — they phase-transition at specific thresholds (I model this with a sigmoid in the pic below). Du et al. (2024) validates pre-training loss as a sufficient proxy for capability emergence below task-specific thresholds. The hard tasks Citrini envisions — autonomous multi-week work by 2028 — likely have thresholds near the irreducible loss floor (E ≈ 1.82 nats).

But here's the deeper issue: next-token prediction is a proxy objective. A model can approach E without acquiring the causal reasoning, goal persistence, and calibrated uncertainty that autonomous work demands. The frontier labs' pivot to post-training (RLHF, inference-time compute, tool use) is an implicit admission of this gap. All this to say, some hard tasks may not be solvable by scaling compute alone — which is exactly what Citrini assumes.

If we know the capability transfer thresholds and map them to loss levels (and thus compute), we can estimate when complex tasks become feasible — yes, we can actually model this. Using 0.40 OOM/year in compute growth and 0.30 OOM/year in algorithmic efficiency gains, I model that professional replacement (defined workflows: customer service, routine legal, basic analysis) starts mid-2026 — but cognitive displacement (multi-week autonomy, novel problem solving) only arrives ~2031. Citrini's crisis requires cognitive displacement, so his timeline starts at 2031 at the earliest.

AND we haven't even addressed two more constraints that push it further: agent reliability decays exponentially with task complexity, and the training data wall. If I can vibe-code myself into a web dev, I'll upload the interactive model at pdsmacro.com for you to stress-test the assumptions yourself.
PDS tweet media
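The thread's loss-to-timeline logic can be sketched numerically. Below is a toy version: only E ≈ 1.82 nats and the 0.40 + 0.30 OOM/year growth rates come from the post; the current reducible loss D0, the scaling exponent ALPHA, the sigmoid steepness k, and the two task thresholds are illustrative assumptions chosen so the toy model roughly reproduces the post's mid-2026 / ~2031 dates.

```python
import math

# Toy capability-transfer model. Only E and GROWTH come from the post;
# D0, ALPHA, k, and the task thresholds are illustrative assumptions.

E = 1.82       # irreducible loss floor (nats), per the post
D0 = 0.42      # assumed current reducible loss above the floor (nats)
ALPHA = 0.17   # assumed scaling exponent: reducible loss shrinks as compute^-ALPHA
GROWTH = 0.40 + 0.30   # effective OOM/year: hardware compute + algorithmic gains

def loss(years_from_now: float) -> float:
    """Pre-training loss after growing effective compute at GROWTH OOM/year."""
    return E + D0 * 10 ** (-ALPHA * GROWTH * years_from_now)

def capability(loss_val: float, threshold: float, k: float = 40.0) -> float:
    """Sigmoid capability transfer: ~0 above a task's loss threshold, ~1 below."""
    return 1.0 / (1.0 + math.exp(k * (loss_val - threshold)))

def year_feasible(threshold: float, base_year: float = 2025.0) -> float:
    """First year the sigmoid crosses 50% for a task with this loss threshold."""
    t = 0.0
    while capability(loss(t), threshold) < 0.5 and t < 40:
        t += 0.1
    return round(base_year + t, 1)

# A defined-workflow task (threshold well above E) unlocks years before a
# cognitive-displacement task whose threshold sits near the loss floor.
print(year_feasible(threshold=2.10))  # defined workflows: around 2026-27
print(year_feasible(threshold=1.90))  # near-floor tasks: around 2031
```

The qualitative point survives any reasonable parameter choice: the closer a task's threshold sits to E, the more OOMs of compute each marginal loss reduction costs, so near-floor capabilities arrive disproportionately later than defined-workflow ones.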
5
7
76
172.8K
PDS
PDS@TopadoDuran·
@emollick @alexolegimas @danielrock @joshgans @robseamans Related to the topic Yann LeCun is directionally right in my view
PDS@TopadoDuran

(Quoted tweet: the pinned Citrini critique above.)

0
0
3
845
PDS
PDS@TopadoDuran·
@ylecun @Ph_Aghion @erikbryn 💯 Most macroeconomists, though, are not close enough to the algorithmic capabilities of the models, so they struggle to understand the capability transfer function @ylecun
PDS@TopadoDuran

(Quoted tweet: the pinned Citrini critique above.)

0
0
4
2K
Yann LeCun
Yann LeCun@ylecun·
Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market. Don't listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this, like @Ph_Aghion , @erikbryn , @DAcemogluMIT , @amcafee , @davidautor
TFTC@TFTC21

Anthropic CEO Dario Amodei: “50% of all tech jobs, entry-level lawyers, consultants, and finance professionals will be completely wiped out within 1–5 years.”

1.1K
2.6K
19.8K
3.4M
PDS
PDS@TopadoDuran·
@Geiger_Capital These weekend drops are a wall street staple 🤝
0
0
1
917
Geiger Capital
Geiger Capital@Geiger_Capital·
Have a good weekend. 🤝
10
9
311
26.5K
PDS
PDS@TopadoDuran·
There's a dimension of the AI frontier that gets zero airtime next to compute scaling and Blackwell-trained models — and it's where the actual alpha lives.

Hermeneutics is the study of interpretation — not what a text says, but what it means. For 200 years it's been confined to philosophy departments. It now has a direct application in markets that nobody is exploiting.

The entire financial NLP ecosystem — $9B deployed — operates at what hermeneutics would call the 'surface' and 'concept' layers: what was literally said, what category it maps to. This is textual processing, not interpretation. Interpretation begins at the mechanism layer: what causal model does this framing imply? 'Softening' through demand destruction and 'softening' through supply normalization are the same word pointing at completely different forward worlds. The concept layer can't distinguish them. The hermeneutic layer must.

Below mechanism: principle (what foundational framework is active) and axiom (what's assumed without being stated). The market's coverage at these layers is sub-10% — and the participants processing there are doing it in their heads, unscalably.

The processing cliff between concept and mechanism is a measured 44 points of F1 accuracy. That cliff is the boundary between text processing and interpretation. Most alpha theories rely on attention asymmetry. This relies on interpretive asymmetry — a gap that requires architectural change, not redirected attention.

The LLM paradigm makes it possible to perform hermeneutic operations computationally — holding multiple interpretive horizons open, tracking epistemic modality, detecting framework shifts across time. But only when architected as structured extraction systems, not conversational tools. The gap between those two architectures is the gap between reading and interpreting. Markets have always rewarded the latter. They've just never been able to scale it.
PDS tweet media
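The five interpretive layers named in the tweet can be sketched as a structured-extraction target. The layer names (surface, concept, mechanism, principle, axiom) come from the post; the schema, field contents, and the two example readings of "softening" are illustrative assumptions, not a real system.

```python
from dataclasses import dataclass

# Hypothetical extraction schema for the five layers described above.

@dataclass
class Interpretation:
    surface: str    # what was literally said
    concept: str    # what category it maps to
    mechanism: str  # what causal model the framing implies
    principle: str  # what foundational framework is active
    axiom: str      # what is assumed without being stated

# Same surface and concept, different mechanisms -> different forward worlds.
demand_read = Interpretation(
    surface="inflation is softening",
    concept="disinflation",
    mechanism="demand destruction",
    principle="Phillips-curve framework",
    axiom="policy stays restrictive",
)
supply_read = Interpretation(
    surface="inflation is softening",
    concept="disinflation",
    mechanism="supply normalization",
    principle="supply-shock unwind framework",
    axiom="demand holds up",
)

def concept_layer_distinguishes(a: Interpretation, b: Interpretation) -> bool:
    return a.concept != b.concept

def mechanism_layer_distinguishes(a: Interpretation, b: Interpretation) -> bool:
    return a.mechanism != b.mechanism

print(concept_layer_distinguishes(demand_read, supply_read))    # False
print(mechanism_layer_distinguishes(demand_read, supply_read))  # True
```

The design point: a conversational tool returns prose, while a structured extractor is forced to commit to a value at every layer, which is what makes the mechanism-level distinction machine-comparable at scale.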
2
4
107
221.5K
Geiger Capital
Geiger Capital@Geiger_Capital·
Posting this for November… You are all now third-party witnesses.
Geiger Capital tweet media
217
129
5K
317.3K
PDS
PDS@TopadoDuran·
@WhiteHouse Countries will just give Iran a call.. basically we gave them agency over the strait and wasted a ton of taxpayer money.. Anyway, let's get back to work, Mr. President
2
0
15
2.5K
The White House
The White House@WhiteHouse·
“All of those countries that can’t get jet fuel because of the Strait of Hormuz, like the United Kingdom, which refused to get involved in the decapitation of Iran, I have a suggestion for you…” - President Donald J. Trump
The White House tweet media
22.8K
38.6K
173.2K
11.4M
PDS
PDS@TopadoDuran·
@pmarca Some macro ai math
PDS@TopadoDuran

(Quoted tweet: the pinned Citrini critique above.)

0
0
3
1.3K
Marc Andreessen 🇺🇸
Claude knows! —> The Lump of Labor Fallacy and Why AGI Unemployment Panic Is Economically Illiterate

Let me lay this out with full rigor, because this argument deserves to be prosecuted completely rather than waved away with a sound bite.

I. What the Lump of Labor Fallacy Actually Is

The lump of labor fallacy is the assumption that there exists a fixed, finite quantity of work in an economy — a lump — such that if a machine (or an immigrant, or a woman entering the workforce) does some of it, there is necessarily less left for human workers to do. It treats employment as a zero-sum pie.

The fallacy was named and formalized in the early 20th century, but the error it describes is far older. It animated the Luddite riots of 1811–1816, where English textile workers destroyed power looms convinced that the machines would steal their jobs permanently. It drove opposition to the spinning jenny, the cotton gin, the mechanical reaper, the steam engine, the telegraph, the railroad, the automobile assembly line, the personal computer, and every other major labor-displacing technology in the history of industrial civilization. Every single time, the catastrophists were wrong. Not partially wrong. Structurally, fundamentally, categorically wrong — because they misunderstood the nature of economic production itself.

The reason the fixed-pie assumption fails is this: demand is not fixed. Work generates income. Income generates demand for goods and services. Demand for goods and services generates new categories of work. This is an engine, not a reservoir. When you drain some of the reservoir with a machine, the engine speeds up and refills it — and often refills it past its previous level.

II. The Classical Economic Mechanism That Destroys the Fallacy

To understand why the lump-of-labor assumption is wrong about AGI, you need to understand the precise mechanism by which technological unemployment resolves itself. There are four distinct channels, all operating simultaneously:

Channel 1: The Productivity-Demand Feedback Loop (Say's Law, Modified)

When a technology increases the productivity of labor or replaces labor entirely in a given task, it lowers the cost of producing whatever that task was part of. Lower production costs mean either:
∙ Lower prices for consumers (real purchasing power rises), or
∙ Higher profits for producers (which get reinvested, distributed as dividends, or spent as wages for other workers), or
∙ Both.

Either way, aggregate real income in the economy rises. That additional real income does not evaporate. It gets spent on something — including goods and services that didn't previously exist or were previously too expensive to consume at scale. That spending creates demand. That demand creates jobs.

This is not a theoretical conjecture. The average American in 1900 spent roughly 43% of their income on food. Today it's around 10%. Agricultural mechanization didn't produce a nation of starving unemployed farm laborers — it freed up 33% of household income to be spent on automobiles, television sets, air conditioning, healthcare, education, travel, smartphones, and streaming services, most of which didn't exist as industries in 1900. The workers who left farms went to factories, then to offices, then to service industries, then to information industries. The economy didn't run out of work. It metamorphosed.
Marc Andreessen 🇺🇸@pmarca

AI employment doomerism is rooted in the socialist fallacy of lump of labor. It is wrong now for the same reason it’s always been wrong. More people really should try to learn about this. The AI will teach you about it if you ask! (Hinton is a socialist. youtube.com/shorts/R-b8RR6…)

326
485
3K
548.7K
PDS
PDS@TopadoDuran·
@EpsilonTheory Finally realizing the emperor had no clothes? 👀
0
0
27
12.5K
Ben Hunt
Ben Hunt@EpsilonTheory·
This Bessent performance on Meet the Press is a disaster. My god.
116
241
4.6K
419.8K
PDS
PDS@TopadoDuran·
@rohanpaul_ai 40% Gen AI adoption? Way off… One thing is to chat with it once in a while; another is to adopt it in your workflow to the extent that it is doing work for you (not just chitchat)…
0
0
3
638
Rohan Paul
Rohan Paul@rohanpaul_ai·
Citadel Securities: Generative AI adoption will follow a historical S-curve, eventually plateauing, rather than growing exponentially. Because economic and physical boundaries will halt exponential growth. Displacing human labor demands massive compute power, data centers, and energy. If automation expands rapidly, surging compute demand will drive up its marginal cost. Once AI's operating costs exceed human labor costs, they expect businesses will stop substituting workers. Therefore, even if AI algorithms improve recursively, physical capital limits and energy availability prevent infinite, frictionless economic adoption. --- Chart from citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/
Rohan Paul tweet media
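The S-curve-versus-exponential distinction in the quoted post is easy to see numerically: a logistic curve tracks an exponential almost exactly early on, then saturates at a ceiling. All parameter values below (ceiling K, growth rate R, starting level A0) are illustrative assumptions, not Citadel's figures.

```python
import math

# Toy comparison of the two growth shapes: early on they are nearly
# indistinguishable, but the logistic saturates at the ceiling K while
# the exponential blows straight past it.

K = 1.0    # saturation ceiling (100% of addressable adoption), assumed
R = 0.8    # assumed growth rate per year
A0 = 0.05  # assumed adoption level at t = 0

def exponential(t: float) -> float:
    return A0 * math.exp(R * t)

def logistic(t: float) -> float:
    return K / (1.0 + ((K - A0) / A0) * math.exp(-R * t))

for t in (0, 2, 5, 10):
    print(t, round(exponential(t), 2), round(logistic(t), 2))
```

The mechanism in the post (rising marginal compute cost halting substitution) is what supplies the ceiling K; without such a constraint, the two curves are indistinguishable from the data available in the early phase.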
17
25
187
20.6K
PDS
PDS@TopadoDuran·
Had dinner with a senior marketing exec friend last night. He told an AI adoption story that resonates.

His team completed a large client RFP by dumping the proposal context and industry background into ChatGPT, instructing it to generate an implementation prompt, then feeding that into Gamma to build the presentation. The AI got them 80% of the way. A group of senior colleagues then refined and finalized it — effectively applying RL. They finished in 2 days versus 7 days on average. They won it.

The RFP anecdote is more important than it sounds. What my friend described isn't just "using ChatGPT." It's a multi-agent workflow: context dump → meta-prompt generation → application-layer execution → human RLHF refinement. That's a production pipeline that compressed a 7-day knowledge-work cycle into 2 days with the best outcome.

This is what first-mover adoption looks like: production workflows that are changing the competitive landscape for business in real time. We are compute-constrained, but not creativity-constrained. We are compute-constrained, but not algorithmically constrained. The exponential is happening — and it's transforming the economy right now.

The macro lesson: traditional business creation has massive friction — capital requirements, hiring cycles, regulatory overhead, distribution costs. AI-native businesses compress all of these. The implication is that Schumpeterian creative destruction accelerates dramatically. New entrants can now compete with incumbents at a fraction of the cost basis. That's simultaneously deflationary (unit cost compression) and growth-positive (output expansion, new market creation).

Imagine 50 Anthropic-scale cases across 50 sub-industries. Now imagine 1,000 of these popping up every quarter. Fast forward 5 years — we are probably looking at 3 million. The deeper you get into the weeds of AI, the more you realize the possibilities of what you can build are expanding faster than the market appreciates.

This is where the exponential lives now — in business creation and adoption — something the market was obsessed with finding only in the compute space not long ago. So Mr. President, let's wrap up the Middle East and get back to work. 🚀🚀
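The four-stage workflow in the anecdote can be sketched as a pipeline. This is a hypothetical illustration only: the stage functions below are local stand-ins for the real tools (ChatGPT, Gamma), and all names and example strings are invented; no API calls are made.

```python
# Hypothetical sketch of the RFP pipeline: context dump -> meta-prompt
# generation -> application-layer execution -> human refinement.

def dump_context(proposal: str, industry_background: str) -> str:
    """Stage 1: assemble everything the model needs into one context blob."""
    return f"{proposal}\n\n{industry_background}"

def generate_meta_prompt(context: str) -> str:
    """Stage 2: an LLM turns raw context into an implementation prompt."""
    return f"Build a client presentation using: {context[:60]}..."

def build_presentation(meta_prompt: str) -> str:
    """Stage 3: an application-layer tool executes the prompt into a draft."""
    return f"[draft deck generated from prompt: {meta_prompt[:40]}...]"

def human_refine(draft: str, reviewers: list[str]) -> str:
    """Stage 4: senior colleagues correct and finalize the AI's 80% draft."""
    return draft.replace("[draft", "[final") + f" (reviewed by {len(reviewers)})"

context = dump_context("RFP: client retail analytics", "Retail industry overview")
deck = human_refine(build_presentation(generate_meta_prompt(context)),
                    reviewers=["exec A", "exec B", "exec C"])
print(deck)
```

The structural point is that each stage's output is the next stage's input, so the human effort concentrates entirely at the final refinement stage — which is what compresses the cycle time.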
5
1
63
89.8K
PDS reposted
Bob Elliott
Bob Elliott@BobEUnlimited·
While folks fall all over each other to claim private credit losses will be a systemic crisis, the losses math just doesn't work out. And importantly, the vast majority of the risk is held by unlevered investors vs 30:1 levered banks back in '08. x.com/BobEUnlimited/…
Bob Elliott@BobEUnlimited

@DumbInvestorGuy In the most extreme scenario (say 25% losses on the whole 1.3tln industry in the US), the losses on private credit will be less than a 1 day standard deviation in the US stock market.

37
25
272
55.2K
PDS
PDS@TopadoDuran·
@Hwy41 @Sen_JoeManchin My party is called common sense, buddy — and the two-party system they spoon-feed you is the reason it doesn't exist… Although, to be accepted in the common sense party, you have to start by stopping the finger-pointing used to justify a race to the bottom..
0
0
0
64
PDS reposted
Senator Joe Manchin
Senator Joe Manchin@Sen_JoeManchin·
When Democrats wanted to eliminate the filibuster in 2022, I stood my ground because I understood the consequences of turning the Senate into a glorified House with simple majority rule. Senator John Cornyn said of Democrats at the time: “They'll soon find themselves rueing the day their party broke the Senate.”

The filibuster exists to make both sides work together and produce good legislation that can withstand the test of time. Eliminating the filibuster would consolidate even more power into the hands of the majority party’s leadership — and take power away from the minority and everyday Americans.

When I was a U.S. Senator, there was not another person more committed to keeping the filibuster than Senator John Cornyn. He understood the incredible political pressure I faced from my former party to get rid of the filibuster and give Democrats complete power — and at the time, he understood why neither party should take our country past this point of no return.

The filibuster — the soul of the Senate — has preserved the Senate’s role for nearly 250 years as the institution that cools passions, protects minority voices, and demands consensus. America was built on institutions designed to resist political convenience, not surrender to it.

It’s deeply disappointing to see that Senator Cornyn is now willing to scrap the very rule he once praised and personally thanked me for defending. These extreme election-year politics that put party power over everything else are why Americans are sick and tired of the duopoly of the two-party system of Democrats and Republicans.
848
406
2.7K
602K
Joe Weisenthal
Joe Weisenthal@TheStalwart·
I still don’t get extreme multidimensional space. How could there be more than three dimensions. Ok time. Fine that’s four. But more than that? Come on.
256
16
1K
159.1K
PDS
PDS@TopadoDuran·
@kevg1412 She is a superstar
0
0
2
1.4K
Kevin Gee
Kevin Gee@kevg1412·
Divya Nettimi went from intern to portfolio manager in 7 years at Viking Global (where she led the firm's TMT investment team) before starting Avala Global. She's kept a very low public profile, but last week sat down with Lone Pine's Co-CIO Kelly Granat to discuss how to navigate market volatility and all things AI--investment opportunities, risks, bubbles, impact on trading strategies, role in hedge fund operations, human vs machine investing, hiring, and more. Letter #321: Divya Nettimi and Kelly Granat (2026) Divya Nettimi is the Founder and CIO of Avala Global. Previously, she was a PM at Viking Global, where she led the firm's TMT investment team. Kelly Granat is the Co-CIO of Lone Pine Capital. [Full interview in link in bio]
Kevin Gee tweet mediaKevin Gee tweet media
2
11
196
28.4K
PDS
PDS@TopadoDuran·
@WarrenPies By the end of this week there won't be any cyclical asset left alive.. markets don't have a week
1
1
27
6.8K
Warren Pies
Warren Pies@WarrenPies·
Once bonds start rallying with oil, you’ll know it is turning from an inflation trade to a recession trade…probably starts by the end of this week imo
107
280
2.6K
302.2K
PDS reposted
Caitlin Kalinowski
Caitlin Kalinowski@kalinowski007·
I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
1.9K
13K
58.8K
7.7M
PDS reposted
Jason Furman
Jason Furman@jasonfurman·
1. Some counter AI increasing productivity by arguing "but AI is not costing jobs." This is a non-sequitur. You can have productivity growth without job loss. In fact, that is the normal pattern for structural improvement (both due to Jevons & economy-wide growth increase).
10
9
107
99.4K
PDS
PDS@TopadoDuran·
@stevehou You lost me at long form research..
0
0
1
496