David A. Price

574 posts

@PriceIndex

Author of Geniuses at War (digital pioneers @ Bletchley Park), The Pixar Touch, Love and Hate in Jamestown, all from Knopf. New comic fiction: The Underachiever

Joined October 2010
331 Following · 859 Followers
David A. Price retweeted
Javi Lopez ⛩️@javilopen·
Yesterday I showed you NASA's Artemis II official app, but they don't have a 3D trajectory simulator 🤔 So I vibe coded one with Claude in an afternoon. Real JPL Horizons data (it's public). Real scale. Full mission simulation. Link 🔗👇
David A. Price retweeted
Andy Masley@AndyMasley·
Suno just released a new model, and I'm finding it a big, shocking improvement: it's becoming very hard to detect the hints that it's AI music. Here's "I am actually scared of linear algebra"
David A. Price@PriceIndex·
@_KarenHao Good luck on what sounds like an interesting story! No doubt you know Feeding the Machine.
Karen Hao@_KarenHao·
If you live in the US and have worked or are working in data annotation for platforms like Outlier, Handshake, Mercor, or others, I'd love to hear about your experience. Please drop me a line: karendhao.com/contact.
David A. Price retweeted
Charles Curran@charliebcurran·
Seedance 2.0 Prompt: Punch the Monkey punches back. Make the girls proud.
gil duran@gilduran76·
We should be talking a lot more about how the San Francisco tech scene is populated by a strange cult of doom acceleration nerds who fantasize about using computers to destroy humanity.
David A. Price retweeted
Nostalgia@nostalgiaa·
The Harry Potter director warning the child actors not to eat the fake food on the table.
David A. Price@PriceIndex·
Are we spending anywhere near enough on preventing AI disasters? Stanford economist Chad Jones did the back-of-the-envelope math: assuming policymakers' standard $ values for avoiding human deaths, and with conservative assumptions about the risks, the U.S. should be spending ~1% or more of GDP per year to mitigate catastrophic AI risk (from bad actors or misaligned models).

With 2025 GDP ≈ $30.5 trillion, that's $305 billion annually. If he's right, we're way off. No one knows current AI safety spending, but it's hard to believe it's even 1% of that 1%. 😬

Oh, and his 10-million-run Monte Carlo analysis yielded, on average, a far higher 8.1% share of GDP as the optimum. And one last thing... none of these calculations assign any value to the lives of future generations.

The big picture: while there's huge uncertainty about AI risks and the effectiveness of mitigation, even low-end estimates of risk justify a large effort (or direct regulation, which the paper doesn't address). web.stanford.edu/~chadj/reduce_…
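The arithmetic in the post checks out; here is a quick sketch (my own calculation, using only the GDP estimate and percentages quoted above, not Jones's paper directly):

```python
# Back-of-the-envelope check of the figures quoted in the post.
gdp_2025 = 30.5e12          # U.S. 2025 GDP, ~$30.5 trillion (as quoted)

low_share = 0.01            # ~1% of GDP, the conservative case
monte_carlo_share = 0.081   # 8.1% of GDP, the Monte Carlo average

low_spend = gdp_2025 * low_share
mc_spend = gdp_2025 * monte_carlo_share

print(f"1% of GDP:   ${low_spend / 1e9:,.0f} billion/year")   # $305 billion
print(f"8.1% of GDP: ${mc_spend / 1e12:,.2f} trillion/year")  # $2.47 trillion
```

The "1% of that 1%" remark implies current spending would need to be under roughly $3 billion per year for the gap to be as stark as described.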
Owain Evans@OwainEvans_UK·
Is there a novel that has an AI as a proper character (with some interiority) and is science fiction? If not, I expect one soon.
David A. Price@PriceIndex·
@robertwiblin Nor is it clear to me how they'd assess your scenario where the value of human labor goes to $0.00.
David A. Price@PriceIndex·
For someone who has looked specifically at optimal taxation in the context of a growing role for AI (not nec. AGI), you could consider Ryota Nakatani or Spencer Bastani. For high-profile economists who've worked on optimal taxation and whose analysis could be an *input* into a discussion of post-AGI taxation, maybe Emmanuel Saez or Stefanie Stantcheva. (Stantcheva won the John Bates Clark Medal last year, a big deal, and Saez won it some years back.) mpra.ub.uni-muenchen.de/121347/1/MPRA_… docs.iza.org/pp212.pdf I can't vouch for how any of them would fare as podcast guests, tho.
Rob Wiblin@robertwiblin·
Who's the best person to interview on optimal taxation post-AGI? x.com/dwarkesh_sp/st…
Dwarkesh Patel@dwarkesh_sp

New blog post w @pawtrammell: Capital in the 22nd Century

Where we argue that while Piketty was wrong about the past, he's probably right about the future.

Piketty argued that without strong redistribution of wealth, inequality will indefinitely increase. Historically, however, income inequality from capital accumulation has actually been self-correcting. Labor and capital are complements, so if you build up lots of capital, you'll lower its returns and raise wages (since labor now becomes the bottleneck). But once AI/robotics fully substitute for labor, this correction mechanism breaks.

For centuries, the share of GDP that goes to paying wages has been 2/3, and the share of GDP that's been income from owning stuff has been 1/3. With full automation, capital's share of GDP goes to 100% (since datacenters and solar panels and the robot factories that build all the above plus more robot factories are all "capital").

And inequality among capital holders will also skyrocket - in favor of larger and more sophisticated investors. A lot of AI wealth is being generated in private markets. You can't get direct exposure to xAI from your 401k, but the Sultan of Oman can. A cheap house (the main form of wealth for many Americans) is a form of capital almost uniquely ill-suited to taking advantage of a leap in automation: it plays no part in the production, operation, or transportation of computers, robots, data, or energy.

Also, international catch-up growth may end. Poor countries historically grew faster by combining their cheap labor with imported capital/know-how. Without labor as a bottleneck, their main value-add disappears.

Inequality seems especially hard to justify in this world. So if we don't want inequality to just keep increasing forever - with the descendants of the most patient and sophisticated of today's AI investors controlling all the galaxies - what can we do?

The obvious place to start is with Piketty's headline recommendation: highly and progressively tax wealth. This might discourage saving, but it would no longer penalize those who have earned a lot by their hard work and creativity. The wealth - even the investment decisions - will be made by the robots, and they will work just as hard and smart however much we tax their owners.

But taxing capital is pointless if people can just shift their future investment to lower-tax countries. And since capital stocks could grow really fast (robots building robots and all that), pretty soon tax havens go from marginal outposts to the majority of global GDP. But how do you get global coordination on taxing capital, when the benefits to defecting are so high and so accessible?

Full automation will probably lead to ever-increasing inequality. We don't see an obvious solution to this problem. And we think it's weird how little thought has gone into what to do about it. Many more thoughts from re-reading Piketty with our AGI hats on at the post in the link below.
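The "self-correcting" mechanism the thread describes can be illustrated with a toy model (my own sketch, not from the post; the Cobb-Douglas production function and the parameter values are standard textbook assumptions, not claims from the authors):

```python
# Toy illustration: why capital accumulation depresses its own return when
# labor is a complement, but not when AI/robots fully substitute for labor.

def cobb_douglas_return(K, L=1.0, A=1.0, alpha=1/3):
    """Marginal product of capital for Y = A * K^alpha * L^(1-alpha)."""
    return alpha * A * K ** (alpha - 1) * L ** (1 - alpha)

def full_automation_return(K, A=0.05):
    """Marginal product of capital for fully automated Y = A * K."""
    return A  # constant: more capital never lowers its own return

for K in [1, 10, 100]:
    print(f"K={K:>3}: complementary labor r={cobb_douglas_return(K):.4f}, "
          f"full automation r={full_automation_return(K):.4f}")
```

In the complementary case the return falls as K grows (so accumulation corrects inequality via rising wages and falling r); in the fully automated case it stays flat, which is the broken-correction scenario the thread worries about.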

ControlAI@ControlAI·
Tristan Harris says some people at the top of AI companies think it's inevitable that biological life is replaced with AI, and that it's a good thing.
David A. Price@PriceIndex·
Thank you Angie Miale for including my comic novel The Underachiever in your list of Top Speculative Fiction for 2025! (A great list -- wonderful company to be in.) Link in reply ⬇️
David A. Price retweeted
Joachim Voth@joachim_voth·
How did people in 1913 see the world? How did they think about the future? We trained LLMs exclusively on pre-1913 texts—no Wikipedia, no 20/20 hindsight. The model literally doesn't know WWI happened. Announcing the Ranke-4B family of models. Coming soon: github.com/DGoettlich/his…