Jonathan Paulson

3.1K posts

@PaulsonJonathan

Software Engineer. Interests: quant finance, competitive programming, Effective Altruism, policy.

Joined May 2012
352 Following · 397 Followers
Lyman Stone 石來民 🦬🦬🦬
The lowest-controversy use case for euthanasia is "Death is certain in the next few weeks, options exhausted, spare the person weeks of grueling agony." This requires euthanasia programs to process all paperwork, requests, and safeguarding strategies within hours or days of application. If euthanasia is rare, this may be possible. In practice, when bureaucracies have effectively 3-5 day turnaround times and growing demand, implementing safeguards of any kind is effectively impossible. There is an intrinsic incompatibility between "euthanasia only for the clear cases" and "good safeguards for abuses," because the clear cases are operating on an extremely compressed timeline. In the long run, it simply cannot be done.
Shaamba Ⓥ@ShaambaBaashdi

@lymanstoneky IF this is true, IF, that still doesn't mean there aren't ways to remedy many of the ills that could come about through these things. After all, it seems that 95% of the complaints against assisted suicide are responses to Canada, which is just cherry-picking the worst example.

Jonathan Paulson@PaulsonJonathan·
@conor64 The vaccine rollout (not maximizing lives saved) and the way anti-racism protests were encouraged (despite the lockdowns) seemed like they let leftist ideology override scientific expertise.
Conor Friedersdorf@conor64·
A question for everyone: survey data suggests that by the end of the Covid-19 emergency trust in public health institutions had decreased significantly. If you are among the people who reacted that way, why specifically? I'm hoping for long, diverse, individualized answers.
Richard Ngo@RichardMCNgo·
One striking illustration of this mindset comes from rationalist fiction, which often ends with the hero gaining total power to design a new world order. Four examples (with many spoilers!):
[4 images attached]
Richard Ngo@RichardMCNgo·
It’s helpful to think of rationalists as High Modernists specifically about the future. No human was smart enough to successfully plan an economy or a society. But if we hypothesize an AI intelligent enough to do so, we can hold on to many of the same technocratic intuitions.
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
Everyone talking about AI job displacement was always so focused on the working class. They thought only the highest-IQ people would still be valuable. But it turned out mathematicians got replaced long before truckers did.
Carlos@sharintaint·
@LinkofSunshine Also... has any effective altruism campaign been proven to maintain efficacy? I remember a few years ago people were glazing MacKenzie Scott like she was the new Gates but now everyone sees her as a joke. Same with "just give people money" declining massively in popularity.
Alejandro Zarzuelo Urdiales@AlejandroZarUrd·
@prfsanjeevarora Why should we care about humans and elegance? Math isn't there for us; it's a system that exists outside of humanity. I do math because I want to reveal things that are true. I do not care about our silly social conventions, because Math is alien to our reality.
Sanjeev Arora@prfsanjeevarora·
At a recent meeting I heard similar comments/worries from a leader in the Lean community. I suggest that people designing AI tools here make them helpful to humans. We need a loss function that captures elegance and human utility.
Mario Krenn@MarioKrenn6240

After the apparently amazing announcement by @mathematics_inc on the formalization of a major recent Fields-medal-winning theorem, I had no idea how pissed the math-formalization community is. Very worrying discussions by some of the leaders/founders of Lean's mathlib. cc @ChrSzegedy

Jonathan Paulson@PaulsonJonathan·
@MattBruenig @simonw Lawyers don’t read this stuff before sending it out? “You can have the AI test the code” seems similar to “you can have the AI check the citations”, so I don’t think programmers have it easier.
Matt Bruenig@MattBruenig·
Not so @simonw. If you have a database that contains all of the relevant law, you can deterministically check each citation and quote against that database.
[image attached]
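Bruenig's point above is that citation checking, unlike most legal judgment, is mechanically decidable once you hold a complete corpus. A minimal sketch of the idea, using an invented two-entry corpus standing in for a real legal database (all case names and quotes here are hypothetical):

```python
# Minimal sketch of deterministic citation checking against a known corpus.
# The corpus maps a citation string to the full text of the cited source.
corpus = {
    "Smith v. Jones, 123 F.3d 456": "The duty of care extends to foreseeable plaintiffs.",
    "Doe v. Roe, 789 U.S. 12": "Consent must be knowing and voluntary.",
}

def check_citation(citation: str, quoted: str) -> bool:
    """True iff the citation exists and the quote appears verbatim in it."""
    text = corpus.get(citation)
    return text is not None and quoted in text

# A real citation with a real quote passes; a fabricated one fails.
assert check_citation("Smith v. Jones, 123 F.3d 456", "foreseeable plaintiffs")
assert not check_citation("Fake v. Case, 1 X.4th 1", "anything")
```

A real system would normalize reporter citations and whitespace before matching, but the core check stays deterministic: the quote either appears in the cited source or it doesn't.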
Jonathan Paulson@PaulsonJonathan·
@lymanstoneky I hope this isn’t true! Doesn’t it get a lot easier once they can do things for themselves?
Matthew Yglesias@mattyglesias·
Gavin Newsom is yet another case of an establishment politician whom leftists don't like but whose actual record and image are very left. Obama was the opposite — moderate on most things, but *liked* by progressive activists because he'd opposed Iraq.
Jonathan Paulson@PaulsonJonathan·
@Noahpinion We already have some RSI; models probably improve coding productivity. Probably AI will slowly increase %. And models are not *that* much faster than humans. Not clear to me there’s a discontinuity here; if AI is 95% maybe humans aren’t the bottleneck anymore.
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
In other words, an AI "FOOM" is almost certainly coming soon, where AI suddenly gets insanely better, seemingly almost overnight.
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
I've been playing around with toy models of recursively self-improving AI, and the one robust conclusion is that whether or not the improvements eventually plateau, the initial phase of RSI will be a very sudden, significant FOOM in capabilities.
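Smith's toy-model claim (growth whose rate scales with current capability looks abrupt early on, whatever happens later) can be reproduced in a few lines. This is a minimal sketch of one such model, not his actual code; the constants are arbitrary:

```python
# Toy recursive self-improvement: capability c improves at a rate that
# itself scales with c, saturating at a ceiling (the eventual plateau).
def rsi_trajectory(steps, c0=1.0, k=0.5, ceiling=100.0):
    traj = [c0]
    for _ in range(steps):
        c = traj[-1]
        traj.append(c + k * c * (1 - c / ceiling))  # logistic-style feedback
    return traj

traj = rsi_trajectory(20)
early_growth = traj[1] / traj[0]   # roughly 1.5x per step at the start
late_growth = traj[-1] / traj[-2]  # near 1x per step at the plateau
```

Whether or not the ceiling ever binds, the early steps compound fast, which is the sudden "FOOM" shape described above.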
Flare@FlareIsGone·
In my experience, Poison in Slay the Spire 2 feels much worse than before. Losing Catalyst is a big deal, even though some of the newer cards are quite powerful. It also feels quite bad vs 2 of the 3 final bosses. Anyone else feel this way?
[4 images attached]
Jonathan Paulson@PaulsonJonathan·
@AugmentedPhoto @TheStalwart Yes :) I think we’re already seeing this to some extent; AI looks very productive on fresh projects but then moderately-sized purely-AI projects get too complicated for either AIs or humans to continue working on. OTOH the AIs are getting better quickly.
Joe Weisenthal@TheStalwart·
Why do a lot of software people like a tool that can allow them to expend their mental energy on higher order problems, while writers dislike the tool that can replace their output completely? Truly one of the great mysteries of our time
Jonathan Paulson@PaulsonJonathan·
@AugmentedPhoto @TheStalwart The “how” ends up mattering a lot in software in a different way. Typically many changes are made over a long period of time so it’s important that the code is easy to work with. Whereas with writing usually you pick a version to publish and don’t change it later.
AugmentedPhoto@AugmentedPhoto·
@PaulsonJonathan @TheStalwart Right…the “how” is what the AI automates. In writing, people see the “how” … they see the “code”… in software that is hidden…
Jonathan Paulson@PaulsonJonathan·
@AugmentedPhoto @TheStalwart Partially. A lot of decisions (especially “how?”) are left to the coders. Also I think the fact that there’s a whole other job dedicated to design indicates that design is a very important part of the process.
AugmentedPhoto@AugmentedPhoto·
It does exactly the same for writers in theory…allows them to think more about what to write and can automate the writing process. But unlike software, in writing the bottleneck isn’t really the writing itself, it’s deciding what you want to say. Also nobody sees inelegant code, people just care if it works. In writing, style is huge …using AI kills your unique style and ends up taking you more time since you need to mop up after all the bullshit it adds in on its own.
Jonathan Paulson@PaulsonJonathan·
@TheStalwart I don’t think this is a meaningful distinction; you could just as easily claim the other way around (any PM can write features now without developers vs. now writers can focus on making sure the big ideas are good without agonizing over the details)
Jonathan Paulson@PaulsonJonathan·
@mattyglesias Could be that the academic side of college admissions has become generally less weighted. Capping the ceiling is part of that.
Jonathan Paulson@PaulsonJonathan·
@CentristMadness How did skipping work that day help anything? What is the value of being “right” in a way that accomplishes nothing?
Jonathan Paulson@PaulsonJonathan·
@__venki__ Either you’re buying from the company directly, in which case they have more money to spend on stuff, or from other investors who will repurpose that money to do something else useful.
venki@__venki__·
good question from sophia, that I've reworded, for the capitalism appreciators out there: When you purchase $100k of Google shares, what is the actual counterfactual impact of your investment that justifies the returns you make?

more context: it's pretty clear that if you e.g. invest $100k in a factory, the capital you put in is doing something valuable. It's necessary for the machines that make stuff, so you're paid for your part in it. Even if it's risk-free, and you're not paid for risk, you're paid because your money could be spent on other capital goods elsewhere, producing economic value.

but Google has lots of money. They don't need your capital. They're not even a net issuer of shares annually: they repurchase more than they issue via stock-based compensation, and stock-based acquisitions too are much smaller than annual repurchases. What work is your $100k in Google doing, or what value is it producing, to justify the returns you make? Why isn't your $100k in Google stock unproductive capital? Claude hasn't been able to give me satisfying answers.

there's the rentier capitalism answer, which is: certain companies are sufficiently well-capitalized that investors aren't really helping them with their investment at all, but merely purchasing a share of future cash flows. Maybe smaller companies, and certain capital-intensive larger companies, do benefit from marginal capital. But the total stock market is dominated by companies whose productivity is insensitive to marginal capital, and thus capital-providers do not in fact provide much value, but collect rents for unclear reasons.

there's some retrocausal answer: the market efficiently values companies based on DCF, and this both a) helps the more capital-intensive companies and b) incentivizes earlier-stage investors in companies that still have high marginal productivity per $ invested. So it's fine: you should just purchase things based on DCF, and this will in markety ways cause investment in companies that will eventually be productive. It seems maybe closest to an explanation, but it feels very handwavy? It also feels uncompelling if the majority of capital is spent on purchasing rights to existing cash flows, and all it does is induce a little bit of productive investment.
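The "retrocausal" DCF answer above has a concrete arithmetic core: the price of a share is the discounted sum of the cash you expect it to pay you, so holding it earns the discount rate even if the company never touches your capital. A worked sketch with invented numbers:

```python
# Worked DCF sketch: price = sum of future cash flows discounted at rate r.
def dcf_price(cash_flows, r):
    """Present value of cash flows received at the end of years 1, 2, ..."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

# A share paying $5/year for 3 years, then sold for $110, discounted at 10%:
price = dcf_price([5, 5, 5 + 110], r=0.10)  # about $95.08
```

On this view your $100k doesn't fund Google; it buys a claim on those flows from a prior holder, and the "productive" contribution happened when the capital was originally raised.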