Jason Abaluck

13.1K posts


@Jabaluck

Professor of Economics at Yale SOM

New Haven, CT · Joined April 2009
463 Following · 15K Followers
Jason Abaluck
Jason Abaluck@Jabaluck·
@Afinetheorem Yes, good point. I suppose the question is whether time is a bottleneck in the production function once you get vast amounts of superhumanly capable labor.
Kevin A. Bryan
Kevin A. Bryan@Afinetheorem·
@Jabaluck These types of bottleneck are empirical questions in part, no? E.g., the rise of China biotech is a shock to labor, not to time. And it sure as heck seems like it is increasing the pace of drug/treatment development!
Jason Abaluck
Jason Abaluck@Jabaluck·
In my view, it's a completely open question whether ASIs could make rapid progress in biology. The fundamental question is whether sufficiently good computational models and high-resolution data can substitute for time.

While current-generation models require vast amounts of data to achieve superhuman performance at some tasks, ASIs will also be able to use superhuman modeling abilities to draw better inferences from a given amount of data. An ASI could also build new data-gathering devices and collect short-run biological data with great efficiency. This would likely enable much higher-resolution biological imaging of various kinds.

What an ASI cannot do is collect empirical data that can *only* be generated over time. It cannot, for example, run a randomized experiment to see the impact of caloric restriction in humans over 30 years. But does it need to? Waiting and observing how biological systems evolve over time is clearly necessary for humans to learn about biology with our current scientific understanding -- we don't know enough to observe biological systems for a day and then model how they will develop over 20 years. It is an open question whether this is true of an ASI with vastly superior data collection and modeling abilities.

There may be fundamental barriers introduced by computational complexity that cannot be skirted by any modeling techniques. But we are very far from knowing whether this is the case for the biological quantities we care about, including aging and death. Workable cryonic technologies in particular seem like low-hanging fruit for an ASI compared to solving aging entirely.
Geoffrey Miller@gmiller

A mini-rant about AI and longevity. They say "Artificial Superintelligence would take only a few years to cure cancer, solve longevity, and defeat death itself." This is a common claim by pro-AI lobbyists, accelerationists, and naive tech-fetishists. But the claim makes no sense. The recent success of LLMs does NOT suggest that ASIs could easily cure diseases or solve longevity, for at least two reasons.

1) The data problem. Generative AI for art, music, and language succeeded mostly because AI companies could steal billions of examples of art, music, and language from the internet to build their base models. They weren't just trained on academic papers _about_ art, music, and language. They were trained on real _examples_ of art, music, and language. There are no analogous biomedical data sets with billions of data points that would allow accurate modelling of every biochemical detail of human physiology, disease, and aging. ASIs can't just read academic papers about human biology to solve longevity. They'd need direct access to vast quantities of biomedical data that simply don't exist in any easy-to-access form. And they'd need very detailed, reliable, validated data about a wide range of people across different ages, sexes, ethnicities, genotypes, and medical conditions. Moreover, medical privacy laws would make it extremely difficult and wildly unethical to collect such a vast data set from real humans about every molecular-level detail of their bodies.

2) The feedback problem. LLMs also work well because the AI companies could refine their output with additional feedback from human brains (through Reinforcement Learning from Human Feedback, RLHF). But there is nothing analogous to that for modeling human bodies, biochemistry, and disease processes. There are no known methods of Reinforcement Learning from Physiological Feedback.
And the physiological feedback would have to be long-term, over spans of years to decades, taking into account thousands of possible side-effects for any given intervention. There's no way to rush animal and human clinical trials -- however clever ASI might become at 'drug discovery'.

More generally, there would be no fast feedback loops from users about model performance. GenAI and LLMs succeeded partly because developers within companies, and customers outside companies, could give very fast feedback about how well the models were functioning. They could just look at the output (images, songs, text), and then tweak, refine, test, and iterate on models very quickly, based on how good they were at generating art, music, and language. In biomedical research, there would be no fast feedback loops from human bodies about how well ASI-suggested interventions are actually affecting human bodies over the long term, across different lifestyles, including all the tradeoffs and side-effects.

It's interesting that most of the people arguing that 'ASI would cure all diseases and aging' are young tech bros who know a lot about computers, but almost nothing about organic chemistry, human genomics, biomedical research, drug discovery, clinical trials, the evolutionary biology of senescence, evolutionary medicine, medical ethics, or the decades of frustrations and failures in longevity research. They think that 'fixing the human body' would be as simple as debugging a few thousand lines of code.

Look, I'm all for curing diseases and promoting longevity. If we took the hundreds of billions of dollars per year that are currently spent on trying to build ASI, and we devoted that money instead to longevity research, that would increase the amount of funding in the longevity space by at least 100-fold. And we'd probably solve longevity much faster by targeting it directly than by trying to summon ASI as a magical cure-all.
ASI has some potential benefits (and many grievous risks and downsides). But it's totally irresponsible of pro-AI lobbyists to argue that ASIs could magically & quickly cure all human diseases, or solve longevity, or end death. And it's totally irresponsible of them to claim that anyone opposed to ASI development is 'pro-death'.

Geoffrey Miller
Geoffrey Miller@gmiller·
@Jabaluck Fair points. ASI might help. I'm just arguing that it's far from obvious that ASI would definitely solve longevity within a few years -- which is the standard claim by pro-AI lobbyists.
Jason Abaluck
Jason Abaluck@Jabaluck·
@Noahpinion I’m not an accelerationist (I think risk is high too!) and I wasn’t making an argument for why this is good. I was describing how I think attitudes towards work will change as the necessity for work decreases.
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
AI accelerationists are doing a terrible job of selling the idea of a world of complete human disempowerment. Why would we want to live in a world where we're merely slaves or pets of a machine? Why wouldn't we just ban AI instead? This is a bad sales pitch.
Jason Abaluck@Jabaluck

I think it's worth separating two worlds:

a) Partial automation -- AI automates some stuff but not others (like a supercharged version of past technology), with wages falling for some and increasing for others. In this world, I agree completely about disempowerment. Promises to prevent job loss will be hugely politically popular, and sometimes defensible for political economy reasons.

b) Full automation -- the world in a) lasts as long as most people can delude themselves into thinking they are adding value relative to a machine, which I think will last for some time: threadreaderapp.com/thread/1870884…. But once it becomes unmistakable that few people can, norms about work will shift quickly. If humans still have a say in the matter, norms about governance will shift as well -- if aligned, the machines will also be better than humans at normative reasoning and policy-making (a subset of automating everything!), so politician will be one of the jobs that gets automated.

Jason Abaluck
Jason Abaluck@Jabaluck·
@Noahpinion Okay then you are right I misunderstood and we disagree about the speed with which politics will change.
Jason Abaluck
Jason Abaluck@Jabaluck·
In the status quo, jobs are much better than checks. Welfare gives low $ amounts and is stigmatized. Things would be quite different in a world where "direct-income support" meant a lot of $ and where most people were beneficiaries (not status quo social insurance!)
David Shor@davidshor

Voters want jobs, not checks - "creating good-paying jobs" beats "direct income support" 54-17 as the preferred approach to AI displacement. And voters want a tax specifically on companies that profit from AI to pay for it (over a wealth tax) 49-27.

Jason Abaluck
Jason Abaluck@Jabaluck·
@unit_accord I think you are misunderstanding the word "consume" and its meaning to economists, among other issues.
Unit Accord
Unit Accord@unit_accord·
@Jabaluck What about an ivory palace with a fountain of wine & 72 virgins? You're a professor of economics at Yale? What you're describing doesn't survive even the most basic, cursory economics analysis. And it's INSULTING. "We'll consume more than now!" That's your paradise? WTF 😳
Jason Abaluck
Jason Abaluck@Jabaluck·
@jamesharrigan That is the positive case if all labor is automated. You will likely get very rapid medical progress.
James Harrigan
James Harrigan@jamesharrigan·
@Jabaluck "without having to worry about disease and death" you can't be serious.
Jason Abaluck
Jason Abaluck@Jabaluck·
@foomagemindset I should have added that AI can give you designer drugs that will create the same feeling of superiority you get in the status quo while removing your awareness that it is artificial and undeserved ;)
Jason Abaluck
Jason Abaluck@Jabaluck·
Also, I neglected part of the positive case -- AI will better understand the relationship between brain and consciousness and can likely create new experiences better than any that humans have experienced to date.
Jason Abaluck
Jason Abaluck@Jabaluck·
The negative case for AI is that:
a) It might lead to human extinction
b) It might lead to political instability and war (and thus mass death)
c) It might lead to great concentration of wealth and power
Jason Abaluck
Jason Abaluck@Jabaluck·
@Chris_Said It’s an equilibrium phenomenon. Unemployment is very different if you feel other people are out there doing something worthwhile but you are not.
Jason Abaluck
Jason Abaluck@Jabaluck·
I'm starting to wonder if any politician will come out in favor of the best platform:
1) It's good if AI replaces humans at work
2) We should have generous public insurance to compensate losers
3) Massively increased funding for safety (transparency, control & misuse prevention)
4) An international Manhattan Project as recursive self-improvement becomes closer, with frontier models developed under direct oversight of teams of scientists with no direct profit motive
ControlAI@ControlAI

At an AI policy roundtable, Florida Governor Ron DeSantis (@GovRonDeSantis) says we should not build tech that will supplant us as human beings. "I don't think you can say these machines are just gonna be doing things and we're gonna suffer harm and there's nothing anybody can do about it." "There have to be ways to make sure that this stuff is controllable."

Jason Abaluck
Jason Abaluck@Jabaluck·
@arindube Do you mean like wage-loss insurance instead of UI, or something like retraining? At some point, people will receive welfare indefinitely, but it won't have the negative stigma attached to welfare today.
Arin Dube
Arin Dube@arindube·
Here's a suggestion to folks who think AI will take over most jobs in the not-too-distant future. Don't talk about compensating the losers using existing frameworks of social insurance. We don't have anything remotely necessary to do that. The UI system - even if made much better - is not designed for that. If you want voters to feel good about such a scenario, so that they don't vote to shut it all down, think of solutions that don't involve telling most people they'll receive unemployment benefits or welfare payments indefinitely. [Again, I have very serious doubts that this is where we'll be anytime soon, but if you do, my advice is for you. Also, given more realistic disruptions to employment from AI, the UI system can work quite well, especially if we implement long-overdue reforms.]
Jason Abaluck
Jason Abaluck@Jabaluck·
@arindube People would vastly prefer a life where they could spend time doing what they wanted with friends and loved ones and were less vulnerable to diseases. You are correct that they would miss some things about the status quo.
Arin Dube
Arin Dube@arindube·
@Jabaluck Most people would be pretty unhappy to be indefinitely on public assistance as a price of progress
Arin Dube
Arin Dube@arindube·
@Jabaluck How do we compensate if most of us really were to be losers? Maybe we need something broader than insurance as a metaphor.