@greatBigDot @ESYudkowsky @YosarianTwo Why debate this as a yes/no question? For the non-hypothetical case it depends on what amounts you pick (~$3,000 for me for the standard version). For the hypothetical case, there's no simple formula you'd want to encode, since any such formula fails once the amounts are sufficiently extreme.
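For concreteness, a minimal sketch of the kind of simple formula in question (my own illustration, not anyone's endorsed policy; p, A, and B are hypothetical labels for predictor accuracy, the transparent amount, and the opaque amount). It is risk-neutral and linear in dollars, which is exactly the sort of assumption that stops tracking what you actually want once the stakes get extreme:

```python
# Naive risk-neutral Newcomb rule (illustrative sketch): with predictor accuracy p,
# transparent amount A, and opaque amount B, one-box iff EV(one-box) > EV(two-box).

def naive_one_box(p: float, A: float, B: float) -> bool:
    ev_one = p * B            # opaque box is full iff the predictor was right
    ev_two = A + (1 - p) * B  # take A for sure; box is full only if the predictor erred
    return ev_one > ev_two    # equivalent to (2*p - 1) * B > A

print(naive_one_box(0.9, 1_000, 1_000_000))   # True: one-box at standard amounts
print(naive_one_box(0.9, 10**12, 1_000_000))  # False: the guaranteed trillion dominates
```

A fixed rule like this ignores diminishing utility, your credence that the setup is real, and so on, which is why you wouldn't want to encode it unconditionally.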
@ESYudkowsky @YosarianTwo (... inb4 "the whole idea of 'precommitment' is a complete red herring, and is nothing but a crutch used by broken decision theories incapable of simply *predictably doing the profit-maximizing thing* in one pass")
Like all good rationalists, you think you're a one-boxer on Newcomb's paradox. Then it happens: they give you two boxes. They're transparent, and you can see each holds only $10,000. You look up, and they shrug. "The Oracle said you would open both. Can't change it now."
@conitzer @C_Oesterheld congrats! how does this compare to the evidentialist's wager (in how much you endorse each, what they recommend, and how they relate to each other)?
New paper with Emery Cooper and @C_Oesterheld on an approach to Newcomb scenarios based on causal decision theory and self-locating beliefs (e.g., you may currently be in a simulation run by the predictor).
arxiv.org/abs/2411.04462
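A back-of-the-envelope sketch of why simulation credence can flip the CDT verdict (my own toy illustration, not the paper's formal model): assume the predictor runs one exact simulation of you, you assign credence q to currently being that simulation, you value the real-world payoff, and the other copy of you one-boxes.

```python
SMALL, BIG = 1_000, 1_000_000  # transparent box; opaque box if one-boxing was predicted

def cdt_ev(action: str, q: float) -> float:
    """Causal EV of `action` given credence q of being the predictor's simulation,
    holding the other copy's action fixed at one-boxing."""
    if action == "one-box":
        # Sim (prob q): my choice causes the opaque box to be filled; the real copy
        # one-boxes and collects BIG.
        # Real (prob 1-q): the sim one-boxed, so the box is already full; I collect BIG.
        return q * BIG + (1 - q) * BIG
    # Two-boxing:
    # Sim (prob q): my choice causes an empty box; the real copy one-boxes -> 0.
    # Real (prob 1-q): the box is full regardless of my choice; I take both.
    return q * 0 + (1 - q) * (BIG + SMALL)

for q in (0.0, 0.5):
    print(q, cdt_ev("one-box", q), cdt_ev("two-box", q))
# q=0.0 (plain CDT): two-boxing wins, 1,001,000 > 1,000,000.
# q=0.5: one-boxing wins, 1,000,000 > 500,500. The crossover sits at
# q = SMALL / (SMALL + BIG), i.e., under one in a thousand here.
```

Under these assumptions, even a small credence in being the simulation makes one-boxing the causally better choice.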
This is a neat result on the ability of AI to answer medical questions.
A well-prompted frontier generalist model (GPT-4) beats a highly fine-tuned specialist LLM (Med-PaLM 2) in the specialist's own domain. Models are improving fast, though the usual testing caveats apply. arxiv.org/pdf/2311.16452…