Egor Riabov
@imobulus

6.4K posts
Math freak with trace amounts of musician

Joined March 2012
137 Following · 120 Followers
Egor Riabov
Egor Riabov@imobulus·
The space of all possible claims about future AI is the wrong frame for this issue. Let me try to illustrate. Suppose a powerful AI gets created; the exact nuances of that power do not matter. This powerful tool is now being used to do things like writing software and designing business processes; let's pin "power" to that level of capability.

The most natural, non-arbitrary question that arises is: will this tool have any unexpected or undesirable results as we let it make more and more decisions in places where humans made them before? This is a direct question about the effectiveness of the tool we made. The core of making any decision is deciding which properties or things we want to keep more than others. In other words, this tool will have preferences about where to steer the future.

If the tool decides at any point, with any reasoning, that it prefers to do something other than what humans want, it will understand that humans will resist that preference, and the natural move is to remove the obstacle "humans". Leaving us be means dealing with constant, fast-adapting, annoying resistance; it is far easier to quietly invent something and then use it to kill everyone.

So the question of whether we all die is not an arbitrary question on par with any other possible impact of AI, like the number of mentally ill people. It is the direct consequence of whether the AI prefers any other future to the future that we, humans, want. If the preferences of a powerful AI land far from our own, we all die, end of story. And there are billions of preferences a powerful artificial mind could acquire. Our human preferences are just one arbitrary set; there is no way AI preferences match ours exactly. Worse, current research is all about capabilities; it pays almost no attention to how to align the preferences of future AI with ours.

In other words, I am not operating in the space of all possible claims about AI. I am operating in the space of preferences that a future AI may have. That is a far more adequate space to reason in, and I see no way these preferences will magically turn out OK.
Nate Soares ⏹️
Nate Soares ⏹️@So8res·
"Nate said Magnus Carlsen would checkmate us. But the game isn't following the exact opening line he grudgingly guessed when I forced him to guess one. We're down two rooks but we're definitely not mated, and Nate's not grappling with how his prediction didn't come true at all."
Sasha Gusev@SashaGusev

I think a strong counterargument is that x-risk'ers made specific predictions about how AI would evolve into an existential risk for humans and AI has simply not developed along their predicted path at all.

Egor Riabov
Egor Riabov@imobulus·
There are N = 2^(64·10^9) possible billion-length arrays of 64-bit integers. Both algorithms, given any one of these, will output something that maps (under a pre-defined correspondence) to the single correct output array out of those N arrays. Now take a popcorn shaker and define some input mechanism (say, prick some 64 billion dots on the walls of the bag) and some output mechanism (say, some encoding of the final positions of the popcorn kernels into integers); you get some mapping from N arrays to N arrays. The amount of information contained in the encoding is about log2(N) = 64·10^9 bits, maybe a few times that; it fits on a hard drive. So the question becomes: would any such encoding produce the correct sorter mapping? Obviously no, because the number of possible mappings is N^N, about N·log10(N) orders of magnitude more than N, while a hard-drive-sized encoding gives you only on the order of N candidates to hit the one mapping you need. It's less likely than your head exploding next second due to pure quantum tunneling.
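[To make the counting explicit, here is a worked version of the tweet's argument under its own assumptions (billion-element arrays of 64-bit integers; the symbols n, N, k are illustrative):

n = 64 \cdot 10^{9} \ \text{bits per array}, \qquad N = 2^{n} \ \text{distinct arrays}

\#\{\, f : \text{arrays} \to \text{arrays} \,\} = N^{N}

An encoding described by at most kn bits (a hard drive, k small) gives at most

2^{kn} = N^{k} \ \text{candidate mappings}

so the chance that some encoding realizes the one sorting mapping is at most

\frac{N^{k}}{N^{N}} = N^{\,k-N} \approx 0 .]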
Eliezer Yudkowsky
Eliezer Yudkowsky@allTheYud·
Simple way to see this is wrong: if you view a system as having inputs (like hearing something) and outputs (like saying something), then you can divide system properties by whether or not they affect I/O. Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply to the GPU rack for that Claude instance doesn't affect I/O. That Claude instance being made of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes.

Nothing Claude can internally do will make anything get damp inside, if it's running on electricity. Nothing about "electricity vs water" can affect Claude's output, for the same reason: it always answers the same way about France. Nothing Claude can internally compute will let it notice whether it's made of electricity or of water flowing through pipes.

When someone says "a simulated storm can't get anything wet", they are unwittingly pointing to the difference between the physical layer and the informational/functional layer: things the computer's physics affect without affecting output, and things that affect the output without depending on the exact computer-physics. The material the system is made of doesn't affect the output, and the output can't see the material, because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so the algorithm can't depend on the material, so the output can't depend on it.

The fact of your own consciousness, reflected on through your awareness of your own awareness, can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of you saying those words. Your saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built on tiny trained squirrels running memos around your brain. You couldn't notice that part from inside; it would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection.

Consciousness is in the class of things that can affect your behavior and can't depend on the underlying physics, not in the class of direct properties of the underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.
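[A toy illustration of that I/O split (a hypothetical sketch; the class and function names are invented here, not from the thread): the same fixed algorithm run over two different "substrates" yields identical outputs, so nothing in the output can reveal the substrate.

def algorithm(question, lookup):
    # Fixed informational layer: answer from stored facts,
    # with no dependence on how the facts are physically held.
    return lookup("capital_of_France")

class ElectricSubstrate:
    def __init__(self):
        self.cells = {"capital_of_France": "Paris"}   # charges in RAM
    def read(self, key):
        return self.cells[key]

class WaterSubstrate:
    def __init__(self):
        # the same facts, "stored" as water levels in labeled pipes
        self.pipes = [("capital_of_France", "Paris")]
    def read(self, key):
        return dict(self.pipes)[key]

for substrate in (ElectricSubstrate(), WaterSubstrate()):
    # Identical input -> identical output; the material is invisible above the I/O line.
    assert algorithm("What is the capital of France?", substrate.read) == "Paris"]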
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

Egor Riabov
Egor Riabov@imobulus·
@DissentFu They create value. It is no longer guaranteed that jobs are the best producers of value.
Egor Riabov
Egor Riabov@imobulus·
@insurrealist Because it's an incorrect argument lol. How the hell is its incorrectness not obvious to people??
Egor Riabov
Egor Riabov@imobulus·
I guess "obviously" does not work here though because the argument is about framing, and "obviously" applies our human framing. I guess what really is the crux here is that causality is objective and you cannot mathematically map a popcorn-shaker causal system onto an array sorter causal system. For me this impossibility is pretty natural and obvious, but that may not be the case for others.
Egor Riabov
Egor Riabov@imobulus·
@42irrationalist @Mihonarium @allTheYud @johnsonmxe In the case of left-to-right vs right-to-left, the right-to-left causal system has the property of arraysortedness, since it has a causal mapping to a canonical array sorter. But that is obviously not the case for every causal system! Why would you even state it as an obvious thing?
curious irrationalist
curious irrationalist@42irrationalist·
@imobulus @Mihonarium @allTheYud @johnsonmxe Imagine you output your array of numbers on the screen. For people used to left-to-right languages it will read as sorted in ascending order, but for right-to-left ones it will read as sorted in descending order. Hence, you have a map from pixels on the screen to the "actual" state
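[For contrast with the popcorn case, the reading-direction map is computationally trivial (a hypothetical sketch; the function name is invented here): the pixel-to-state decoding is just a reversal.

def rtl_reading(ltr_reading):
    # Reading the same pixels right-to-left is the left-to-right
    # reading reversed: an O(n) relabeling carrying essentially no
    # extra information, unlike a popcorn-kernel decoding.
    return list(reversed(ltr_reading))

assert rtl_reading([1, 2, 3]) == [3, 2, 1]   # ascending LTR reads as descending RTL]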
Ethan Kuntz
Ethan Kuntz@KanizsaBoundary·
@allTheYud I think you get a version of panpsychism as a consequence of the arguments given? An elaborate mapping of some gas particles bouncing around to the state transitions of your brain. Even weighting mappings by a natural parsimony metric will maybe still leave a little bit of panpsychism.
Egor Riabov
Egor Riabov@imobulus·
I don't understand this counterargument. You can write an array sorter in Python; it has the property of "arraysortedness", producing a sorted array from an unsorted one. You can build an array sorter out of gas particles, and out of rocks as well. That does not mean a rock has 10^-1000 of arraysortedness. The arrangement matters.
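[One way to make "the arrangement matters" concrete (a hypothetical sketch; the function names are invented here): if the physical system does nothing useful, any mapping that makes it look like a sorter must hide the sorting inside the decoder.

def rock_dynamics(state):
    return state          # an inert rock: nothing happens

def encode(array):
    return tuple(array)   # trivially write the input "onto the rock"

def decode(state):
    return sorted(state)  # all the sorting work lives here, not in the rock

x = [3, 1, 2]
assert decode(rock_dynamics(encode(x))) == sorted(x)
# The composite is a sorter, but the "arraysortedness" belongs to decode(),
# i.e. to the arrangement around the rock, not to the rock.]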
Egor Riabov
Egor Riabov@imobulus·
@ramzpaul Simple: everything is dirt cheap now, like bubble gum. People can sell whatever other things they are still valuable at. In an ideal world this valuable thing would be their humanity, i.e. a small UBI.
Egor Riabov
Egor Riabov@imobulus·
Wait, are you saying that the paper's abstract does not reflect the contents of the paper? (Meaning the abstract says LLMs can't be conscious "because it's silicon" while the paper actually explores how the LLM architecture cannot produce conscious circuits.) Or are you saying that the tweet "LLMs can't be conscious" is correct in its statement but cites a source that makes an incorrect argument? If it's the first, that's interesting. If it's the second, then the tweet author certainly does not understand the incorrectness of the argument, and their reason for believing "LLMs aren't conscious" is therefore invalid.
Chris Wynes
Chris Wynes@cjwynes·
@allTheYud Then I think the tweet summary materially misstated the paper's argument. Because then the tweet is plausibly right (perhaps LLMs, just like my TI-83 calculator, won't ever be conscious) but the abstract is wrong to suggest NO algorithm running on a machine could be.
Egor Riabov
Egor Riabov@imobulus·
I am still convinced that a human brain implemented on GPUs would be conscious. The bunch of words in this abstract did not change my mind. Therefore the question still remains: what would you need to see in order to distinguish the world where LLMs have the same internal thing that makes human brains conscious from the world where they do not?
Egor Riabov
Egor Riabov@imobulus·
@mathepi @tombibbys What? AI does not become less dangerous if it's democratized. It will kill everyone either way; why did you even write this as an answer?
Tom Bibby
Tom Bibby@tombibbys·
It's the AI accelerationists who give up at the smallest challenge and accept defeat who have a "dark view of humanity". The people you call "doomers" believe humanity can come together and internationally cooperate to prevent disaster.
The San Francisco Standard@sfstandard

OpenAI’s global policy chief, Chris Lehane, thinks the discussion around AI has gotten out of hand. "When you put some of those thoughts and ideas out there, they do have consequences.” 📝: @ceodonovan sfstandard.com/2026/04/15/ope…

Egor Riabov
Egor Riabov@imobulus·
@creation247 The difference is that utopia gets achieved by allowing private property, and then, when there's abundance, everything gets cheaper. Not the other way around, where you first make everything available to everyone and therefore don't incentivise producing more.
Egor Riabov
Egor Riabov@imobulus·
@cturnbull1968 Taxing the rich means the rich move out of your city. The rich are mostly successful entrepreneurs, so this makes regular people poorer immediately, without waiting for "him to come after regular Americans".