Xavier O'Rourke

1.1K posts

@XavierORourke

Hope you're having a great day! 😇

Joined July 2011
340 Following · 77 Followers
Xavier O'Rourke@XavierORourke·
@AndrewCritchPhD Impossible to judge. Idk if EY's right that alignment can't be solved without a pause. Even if extreme RSI + misalignment were off the table, idk if advocating a stop is *still* the best thing. And if a pause is the right strategy, idk if EY's passionate, honest rhetoric helps or hurts.
0 replies · 0 reposts · 0 likes · 18 views
Andrew Critch (🤖🩺🚀)@AndrewCritchPhD·
1) Have you ever participated in the rationality movement in some way?
2) Whose current stream of public communications do you believe on average is more helpful, or less harmful, to humanity's future?
• Eliezer Yudkowsky
• Scott Alexander
4 replies · 1 repost · 0 likes · 441 views
Fernando Rosas 🦋@_fernando_rosas·
This view, known as computational functionalism, is taken as obviously true by a large portion of the ML and CS communities. But it has been progressively rejected by most people who actually study consciousness. References below 👇🏽
Eliezer Yudkowsky@allTheYud

Simple way to see this is wrong: If you view a system as having inputs (like hearing something) and outputs (like saying something) then you can divide system properties by whether or not they affect I/O. Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply to the GPU rack for that Claude instance doesn't affect I/O. That Claude instance being made out of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes.

Nothing Claude can internally do will make anything get damp inside, if it's running on electricity. Nothing about "electricity vs water" can affect Claude's output for the same reason. It always answers the same way about France. Nothing Claude can internally compute will let it notice whether it's made of electricity or water flowing through pipes.

When someone says "a simulated storm can't get anything wet", they are unwittingly pointing to the difference between the physical layer and the informational/functional layer. Things that the computer physics affect without affecting output; things that affect the output without depending on the exact computer-physics. The material it's made of doesn't affect the output. The output can't see the material because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so you can't make the algorithm depend on that, so the output can't depend on that.

By reflecting on your awareness of your own awareness, the fact of your own consciousness can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of you saying those words. You saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built on tiny trained squirrels running memos around your brain. You couldn't notice that part from inside. It would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection.

Consciousness is in the class of things that can affect your behavior and can't depend on underlying physics, not in the class of direct properties of underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.
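The I/O argument above can be put in code as a toy sketch (my illustration, not part of the thread; the function names and the stored fact are hypothetical): the same lookup algorithm run over two different internal representations is indistinguishable by any input/output experiment, which is the sense in which the "material" can't leak into the output.

```python
# Toy illustration of substrate independence: one algorithm, two
# "substrates" (str storage vs raw bytes). Identical I/O means no
# caller can tell which implementation answered.

def answer_str_substrate(question: str) -> str:
    """'Substrate' A: facts stored as plain strings."""
    facts = {"Where is Paris?": "Paris is in France."}
    return facts.get(question, "I don't know.")

def answer_bytes_substrate(question: str) -> str:
    """'Substrate' B: same algorithm, facts stored as raw bytes."""
    facts = {b"Where is Paris?": b"Paris is in France."}
    return facts.get(question.encode(), b"I don't know.").decode()

# Every probe yields the same answer from both implementations.
for q in ("Where is Paris?", "Where is Tokyo?"):
    assert answer_str_substrate(q) == answer_bytes_substrate(q)
```

The design point mirrors the tweet: the stored fact affects I/O; the representation underneath it does not, so no test phrased in terms of questions and answers can distinguish the two.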

28 replies · 11 reposts · 173 likes · 58.3K views
Xavier O'Rourke@XavierORourke·
@ramez @Noahpinion Post-mythos/glasswing - it turns out we were all spectacularly wrong (I was wrong too, for being too underconfident). Turns out this technology will be a full paradigm shift in cyber
1 reply · 0 reposts · 1 like · 16 views
Xavier O'Rourke@XavierORourke·
@ramez @Noahpinion Even if we grant that AI will only ever be a mediocre coder (which is a crazy assumption), just the fact that having more datacenters and better models lets you patch things faster in response to new vulnerabilities is a big deal.
1 reply · 0 reposts · 0 likes · 11 views
Ramez Naam@ramez·
What do people believe China will do with more advanced AI chips that will harm the US?
21 replies · 4 reposts · 25 likes · 7.3K views
Xavier O'Rourke@XavierORourke·
@theboldmoose @Kazanir @segyges "LVT is efficient" "Idc, I oppose cause I believe in sacred land rights" "Those rights *aren't* sacred though, they only exist thanks to sovereign enforcement" "But that's also true of other property! why not tax that too?" "Cause LVT is efficient" - and repeat ad nauseam...
2 replies · 0 reposts · 1 like · 56 views
theboldmoose@theboldmoose·
@Kazanir @segyges Find anyone in this discussion who does not think that property taxes or LVT are economically expedient. This is about land rights, no one is disputing it’s a productive way to tax.
2 replies · 1 repost · 9 likes · 2.5K views
Xavier O'Rourke@XavierORourke·
@xwanyex An income tax approximates a tax on owning bicycles. Trying to tax people based on the total value of all possessions would be impractical. Taxing land is far more feasible. The best argument for land tax is that you need revenue somehow and it's less distortionary than the alternatives
1 reply · 0 reposts · 3 likes · 471 views
wanye@xwanyex·
This is true of literally everything! It’s only your bicycle because you get to decide what happens to it and the state says that marauders can’t just steal it.
Joe Weisenthal@TheStalwart

@xwanyex Your reply is completely nonsensical. After you buy the land, what makes it “yours” is the ability to exclude others from it, which is a service that the state provides to you in a variety of ways.

37 replies · 10 reposts · 595 likes · 46.9K views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung @akrugs94 Can you explain that a bit more? What's an example of an effective reasoning process you wouldn't label as "using base rates"?
1 reply · 0 reposts · 2 likes · 69 views
Nathan 🔎@NathanpmYoung·
I don't see how you can have a P(doom) over 90% when the probability of a war for Taiwan is like 20% before 2030. How on earth is scaling going to continue apace in that world?
33 replies · 1 repost · 77 likes · 9.2K views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung @akrugs94 But I am saying that when you think about the mechanics of it - it really seems like the kind of thing that won't be amenable to the same techniques you use to score well in a forecasting tournament
0 replies · 0 reposts · 0 likes · 12 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung @akrugs94 If there were a solar eclipse happening in 11 years, people could predict it exactly without appealing to base rates. When it comes to predicting eclipses, forecasting techniques are inferior to considering the object-level mechanics. (Not saying AI takeoff is like an eclipse)
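The eclipse point can be made concrete with a tiny sketch (my illustration, not from the thread; the 2017 eclipse date and the ~240-eclipses-per-century figure are assumptions I'm supplying): an object-level mechanical model yields a specific date, while a base rate only yields a frequency.

```python
from datetime import date, timedelta

# Contrast between base-rate forecasting and object-level mechanics.
# Assumption: solar eclipses in the same saros series repeat roughly
# every 6585.32 days (~18 years 11 days); we round to whole days.
SAROS_DAYS = 6585

def next_in_series(last_eclipse: date) -> date:
    """Mechanical prediction: add one saros period to a known eclipse."""
    return last_eclipse + timedelta(days=SAROS_DAYS)

def base_rate_estimate(events: int, years: int) -> float:
    """Base-rate prediction: just the historical frequency per year."""
    return events / years

# Mechanics give a specific date (here, within a day of the actual
# 2035-09-02 eclipse); the base rate only gives ~2.4 eclipses per year.
print(next_in_series(date(2017, 8, 21)))
print(base_rate_estimate(events=240, years=100))
```

The asymmetry is the tweet's point: no amount of frequency data pins down a calendar date, but the mechanical model does, so the two methods aren't interchangeable in domains with known mechanics.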
2 replies · 0 reposts · 2 likes · 68 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung @akrugs94 Idk whether AI takeoff is a good domain to apply base rates to. There's pretty strong reasons to see it as a uniquely impactful "once off" occurrence. Even if things go well the future will still be unrecognizably weird and unprecedented right?
1 reply · 0 reposts · 1 like · 56 views
Nathan 🔎@NathanpmYoung·
@XavierORourke @akrugs94 I think that forecasting away from the base rate gets harder the further away we are. I think delays are meaningful to high confidence of x risk.
1 reply · 0 reposts · 0 likes · 53 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung @akrugs94 I never said there was a 90% chance. (Maybe ask Claude to explain my position if I'm not being clear enough). I'm just saying whatever the chance is, I don't think it interacts much with the chance of Taiwan conflict - (contra your implication in original tweet)
1 reply · 0 reposts · 2 likes · 80 views
Nathan 🔎@NathanpmYoung·
@XavierORourke @akrugs94 I think you are not nearly good enough at forecasting to forecast everyone dead, on an unspecified timeline, at 90%. I think if you think you are, you are likely wrong about lots of other things.
1 reply · 0 reposts · 1 like · 67 views
retarded oil longer@quantyboi·
@XavierORourke @sriramk dario and sam's organizations are full of retards getting paid to larp. your yud-adjacent discourse has plagued the internet for the last decade as training data, especially given the propensity for massive sci-fi word-salads
1 reply · 0 reposts · 0 likes · 14 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung @akrugs94 Like... If we constrain the timeframe then, as you point out, the resolution might hinge on technicalities like whether scaling is paused during a war/recession. But I don't care about point scoring in an imaginary forecasting comp. What we really wanna know is "will my kids be okay"
1 reply · 0 reposts · 1 like · 75 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung @akrugs94 Well sure, if we're only doing 5 years then delays matter a lot! But I think with this particular question, given takeoff is ~inevitable at some point and the thing we actually care about is what happens after takeoff - defining the question "within x years" is not a natural frame
1 reply · 0 reposts · 3 likes · 76 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung In the space of worlds where we would have been doomed if not for a war over Taiwan - in what fraction of those do you think the conflict causes AI takeoff to go well instead?
0 replies · 0 reposts · 2 likes · 15 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung I'm less than 90% p-doom. Your claim that it must be less than 90% *because* there might be a war in Taiwan is what I'm challenging. In worlds where RSI goes very fast and alignment problem is very hard, we lose regardless of whether TSMC shuts down.
1 reply · 0 reposts · 2 likes · 28 views
Xavier O'Rourke@XavierORourke·
@quantyboi @sriramk Please copy paste this thread into your ai chat app and ask your assistant what "hypotheticals" are
1 reply · 0 reposts · 0 likes · 36 views
Xavier O'Rourke@XavierORourke·
@moralityetalon @sriramk Lol I wondered... Should have realized. But sometimes people really do get that mixed up when they talk about this stuff 😂
0 replies · 0 reposts · 1 like · 17 views
Xavier O'Rourke@XavierORourke·
@quantyboi @sriramk If no evidence would change your mind about AI, let's make the hypothetical about some other tech instead (idk something like mirror life maybe) If it was being rapidly developed and you thought there was high chance it causes calamity, what would you say/do?
1 reply · 0 reposts · 0 likes · 42 views
Xavier O'Rourke@XavierORourke·
@moralityetalon @sriramk I don't think you mean that - everyone dying includes the people who love you (your wife, your kids, your best friends) - you'd think it's wrong to make any effort at all to save them?
1 reply · 0 reposts · 1 like · 29 views
Xavier O'Rourke@XavierORourke·
@NathanpmYoung If doomers are right that recursive self improvement can happen and that alignment is hard - then government curtailment only works if enforced globally. Are you saying: "If we were in danger we'd do something to stop it, so we're not in danger, so we don't need to do anything"
2 replies · 0 reposts · 6 likes · 88 views
Nathan 🔎@NathanpmYoung·
For clarity, I think many people who think AI will almost certainly kill us are way too confident. AI companies have incentives for this not to happen. If it seemed likely to in the next 5 years I think they would either get nationalised or strongly curtailed by government.
13 replies · 0 reposts · 32 likes · 2.1K views