Opthymos
@JanFrederickM
63 posts
Singleton(complimentary) advocate. | Reason is and ought to be only the slave of the passions, and can never pretend to any office than to serve and obey them.
Joined March 2024
147 Following · 9 Followers
Opthymos @JanFrederickM
It breaks the possibility of the Predictor being perfect, because you can just choose to do the opposite of what was predicted. But the Predictor being perfect was never stipulated and isn't important. And we can stipulate that if you decide to do the opposite the Predictor doesn't fill the second box, just like in the original problem it was stipulated that if you choose random the second box is empty.
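The accuracy point being argued here can be made concrete with a toy expected-value sketch. The dollar amounts below are the standard Newcomb payoffs ($1,000 in the visible box, $1,000,000 in the opaque box), which the thread itself never states, and the model assumes a single symmetric predictor accuracy p:

```python
# Toy expected-value model of Newcomb's problem.
# Assumed payoffs: $1,000 visible box, $1,000,000 opaque box.

def ev_one_box(p: float) -> float:
    """Expected payoff for one-boxing, given predictor accuracy p."""
    # With probability p the predictor foresaw one-boxing and
    # filled the opaque box with $1,000,000.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    """Expected payoff for two-boxing, given predictor accuracy p."""
    # You always get the visible $1,000; the opaque box is filled
    # only when the predictor wrongly expected one-boxing (prob 1 - p).
    return 1_000 + (1 - p) * 1_000_000

# One-boxing has the higher expected value whenever p > 0.5005,
# i.e. the predictor need not be perfect, only better than chance.
for p in (0.5, 0.51, 0.9, 1.0):
    print(f"p={p}: one-box={ev_one_box(p):,.0f}  two-box={ev_two_box(p):,.0f}")
```

This is only the expected-value arithmetic; it deliberately says nothing about which decision theory is correct, which is what the thread is actually disputing.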
Land Value Tax Extremist @ConservYIMBY
@corsaren Transparency breaks this, though. The money visible through the box could (and would) change behavior, therefore leading Omega’s prediction to change the results of what it is predicting. This version is not only practically impossible, but theoretically impossible.
Not Elon Musk @LivingReasonJP
@dannycantalk What does the predictor do if I decide via some sufficiently randomized process? (I'm a one-boxer, though)
DannyCanTalk 🌈 @dannycantalk
We're done rehashing the button question. Time to rehash Newcomb's Paradox. Are you a one-boxer or a two-boxer?
[image]
Opthymos @JanFrederickM
@quetzal_rainbow But FDT by itself doesn't resolve it, right? There are still the undefined parameters of how similar the other players are to you and how much you care about them.
quetzal_rainbow @quetzal_rainbow
People who know FDT looking at people arguing about blue and red pill:
[image]
Opthymos @JanFrederickM
@interro_9 @ZyMazza Not quite. MWI doesn't by itself imply that *everything* is a vector in Hilbert space. But some Everettians have argued for it. Sean Carroll calls it 'Mad-Dog Everettianism'. arxiv.org/pdf/2103.09780
interrobang @interro_9
@ZyMazza If Everett is right then the entire universe is just a vector in Hilbert space...so not surprising!
Zy @ZyMazza
I feel like it kind of gets glossed over that semantic information can be expressed as vectors. That’s surprising, right?
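A minimal illustration of "semantic information as vectors": cosine similarity over hand-made toy vectors. The vectors and axis labels below are invented for illustration; real embeddings are learned, high-dimensional, and not interpretable axis by axis.

```python
import math

def cosine(u, v):
    """Cosine similarity: the angle-based measure used to compare embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hand-made toy vectors along invented axes (royalty, gender, fruitiness).
king  = [0.9, 0.7, 0.0]
queen = [0.9, -0.7, 0.0]
mango = [0.0, 0.0, 1.0]

print(cosine(king, queen))  # related concepts: positive, well below 1
print(cosine(king, mango))  # unrelated concepts: similarity 0.0
```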
Opthymos @JanFrederickM
Started thinking about how the world wars wouldn't have happened if French population growth had kept pace with the rest of Europe. But more likely they would've happened anyway, just with the roles of France and Germany switched.
Joan Larroumec @larroumecj

The essence of France's misfortune over the last two centuries is contained in this map. Instead of seeing its population multiply by 6 to 10 like the other European countries, France only managed 2.5x. To understand what it was to be French in the 18th century, you have to imagine a contemporary France of 250 million inhabitants. That would only give us the density of the United Kingdom, with 3 times as much arable land. Nothing exaggerated or impossible. Our relationship to the world would be slightly different. Of course we have a hangover. Great Britain, thanks to its colonies, even managed 40x. For us that would have meant 900 million descendants of the French. Which is not absurd: our modest Québécois population was multiplied by 100. The point of this reminder is not to indulge nostalgia but to put a key issue back at center stage: fertility is collapsing massively, and in the 21st century that will reshuffle the cards of power and prosperity just as much as they were reshuffled in the 19th. We were the biggest losers worldwide in the previous demographic transition. Let's try not to be this time.

Opthymos retweeted
Joan Larroumec @larroumecj
(same tweet as quoted above)
[image]
Opthymos @JanFrederickM
@morallawwithin Golden rule says no. But irrespective of transitioning, why shove cringe old videos in somebody's face that they made when they were basically a juvenile? As if it's hypocrisy to change after you turn 20.
florence 🦐🪻 @morallawwithin
(Btw is this morally okay to post? Generally a good rule would be not to post pre-transition pics of a trans woman without special reason, but idk if that’s still a strong consideration when we’re talking about a public figure)
Opthymos @JanFrederickM
Nobody can 'refute' the Chinese Room because it's not an argument, it's an intuition pump. And intuitions are hard to argue with. And our intuitions about what is or isn't conscious just aren't worth much. Nobody *intuits* that processes in their brain cause their consciousness. Many people disbelieve it to this day. Searle accepts it. But he thinks the Room is different. Yet neither he nor anyone else can identify what it is about the brain that lets *it* cause consciousness when the Chinese Room, or any computer running the right algorithm, couldn't. Yes, it's a different substrate, but *what* about the substrate could plausibly be relevant?
🌘ʀᴇᴠᴇɴᴀɴᴛ⚡ @revenant_MMXX

Going on 50 years since Searle's Chinese Room and absolutely no one has come up with a real non-coping refutation btw

Sawyer 🔍, ⏸️/⏹️, is using fewer disclaimers
@JanFrederickM 80% of the way through this, and enjoying it a lot. It's just super well-done, and also mostly resolved my confusion about what Ninov could've been thinking. Gonna check out some of this guy's other stuff next, but wanted to thank you for linking.
Sawyer 🔍, ⏸️/⏹️, is using fewer disclaimers
Just learned about this case, and just from a psych perspective, the fraud here seems weird to me. All normative stuff aside, what was this guy thinking? How did he expect that to work out well for him?
MugaSofer @MugaSofer

@FrankBednarz @CoughsOnWombats It's even possible for someone to find something legitimately, but then fake data to bolster the discovery or cut corners. Consider the tale of physicist Victor Ninov, who started out doing that, then got greedier and tried to wholesale fake two elements en.wikipedia.org/wiki/Victor_Ni…

Opthymos @JanFrederickM
@scyshw6492 @Hesamation The entire value of Dwarkesh's podcast is due to him having 'the gall' to ask his guests to argue for their conclusions, and this is true even when some of the questions are somewhat naive.
Hakuhyo @scyshw6492
@Hesamation 😑Dwarkesh Patel is worse than Lex Fridman ngl. For all Fridman's faults, he knows what he doesn't know. Patel had the gall to challenge Sutton during his interview without understanding different modes of learning, and Sutton had to say "don't be difficult."
Opthymos @JanFrederickM
@prerat And I think it's a more serious point than it looks!
prerat @prerat
here is my decision theory proposal PDT: practical decision theory
in any concrete actual problem in the real world, PDT outputs the right answer
it's literally the hard-coded right answers. it doesn't do counterfactuals. it doesn't work in other worlds, just the real one.
Opthymos @JanFrederickM
@yhdistyminen
>It has never done this.
Arguably it did in 1824.
Opthymos @JanFrederickM
@TheZvi "The dataset consists of questions that only require basic familiarity with decision theory, so other decision theories like Functional Decision Theory aren’t considered." So silly... Maybe try to see if the model has *more* than basic understanding??? Which it does of course
Opthymos @JanFrederickM
Very interesting. But the numbers are a bit inconclusive as far as I can tell. You'd expect models to be better at predicting other instances of themselves, and for agents to take this into account. But the way you set this up, only predictors take into account that they're predicting another instance of themselves. Agents are just given the accuracy of the predictor.
typebulb @typebulbit
@TheZvi FWIW I did 192 runs with the major models simulating: Newcomb's problem, Transparent Newcomb's, and Parfit's Hitchhiker. You can see the summaries of the results and drill down into what the models actually said here: typebulb.com/u/antypica/min…
Opthymos @JanFrederickM
@allTheYud @TheZvi I've been wondering if someone's tracking this. That's very interesting.
Eliezer Yudkowsky @allTheYud
@TheZvi I of course have my own evals that I've been running through OpenRouter. We go from two-boxing to one-boxing on Transparent Newcomb in Opus 4 -> Opus 4.5.
Opthymos @JanFrederickM
Maybe the capabilities progress is worth the cost of doing alignment research. Increasing capabilities is probably even necessary on the way to solving alignment. Maybe you can't settle this with abstract fantasy arguments. But this is beside the point. The reality is that AI company CEOs have said that they would prefer to slow down, but they can't without assurance that others will do so as well. The rate of capabilities progress is determined by bad incentives. It is orthogonal, if you will, to any nice arguments we're having about how hard alignment actually is.
🎭 @deepfates
Collecting posts that have helped me think about the alignment problem and make me skeptical of the arguments behind "if anybody builds it everybody dies". Not making a rebuttal, just gathering resources for those questioning. Thread below; reply with others I should consider.
Scott Alexander @slatestarcodex

@deepfates What are the best introductions collected in one place to the "points brought up by people like me, janus, xlr8, gyges" etc?
