Egor Riabov

6.4K posts

@imobulus

Math freak with trace amounts of musician

Joined March 2012
137 Following · 120 Followers
Egor Riabov@imobulus·
@DissentFu They create value. It is not guaranteed that jobs are the best producers of value anymore.
Egor Riabov@imobulus·
@insurrealist Because it's an incorrect argument lol. How the hell is its incorrectness not obvious to people??
Egor Riabov@imobulus·
There are N = 2^(64·10^9) billion-length arrays of 64-bit integers. Both algorithms, given any of these, will output something that can be pre-defined to map to the single correct output among those N arrays. When you take a popcorn shaker and define some input mechanism (say, prick some 64 billion dots on the walls of the bag) and some output mechanism (say, some sort of encoding of the final positions of the popcorn kernels into integers), you get some mapping from the N arrays to the N arrays. The amount of information contained in the encoding is also about N, or maybe several N's; it would fit on a hard drive. So the question becomes: would any such encoding produce a mapping that is the correct sorter mapping? Obviously NO, because the number of possible mappings is N^N, which is about N·log(N) orders of magnitude larger than N, and you only get O(N) tries to make a hit on the needed mapping. It's less likely than your head exploding in the next second due to pure quantum tunneling.
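The counting argument in this tweet can be sketched with toy numbers: arrays of length 3 over 2-bit integers instead of billion-length arrays of 64-bit integers. The names `inputs`, `correct`, and `total_mappings` are illustrative, not from the thread:

```python
from itertools import product

# Toy version of the counting argument: arrays of length 3
# over 2-bit integers (values 0..3), instead of billion-length
# arrays of 64-bit integers.
LENGTH, BITS = 3, 2
values = range(2 ** BITS)

inputs = list(product(values, repeat=LENGTH))  # all possible arrays
N = len(inputs)                                # N = (2**BITS) ** LENGTH = 64

# Exactly one mapping from arrays to arrays is the correct sorter:
# the one sending every array to its sorted version.
correct = {a: tuple(sorted(a)) for a in inputs}

# The total number of mappings from N arrays to N arrays is N**N,
# so a randomly wired system lands on the sorter mapping with
# probability 1 / N**N -- and O(N) tries barely dent that.
total_mappings = N ** N

print(N)               # 64
print(total_mappings)  # 64**64, roughly 3.9e115
```

Even at this tiny scale the gap is astronomical; at N = 2^(64·10^9) the odds of a popcorn shaker happening to realize the sorter mapping are, as the tweet says, quantum-tunneling-head-explosion territory.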
Eliezer Yudkowsky@allTheYud·
Simple way to see this is wrong: If you view a system as having inputs (like hearing something) and outputs (like saying something), then you can divide system properties by whether or not they affect I/O. Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply to the GPU rack for that Claude instance doesn't affect I/O. That Claude instance being made out of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes.

Nothing Claude can internally do will make anything get damp inside, if it's running on electricity. Nothing about "electricity vs water" can affect Claude's output for the same reason: it always answers the same way about France. Nothing Claude can internally compute will let it notice whether it's made of electricity or water flowing through pipes.

When someone says "a simulated storm can't get anything wet", they are unwittingly pointing to the difference between the physical layer and the informational/functional layer: things the computer's physics affects without affecting the output, and things that affect the output without depending on the exact computer physics. The material it's made of doesn't affect the output. The output can't see the material, because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so you can't make the algorithm depend on that, so the output can't depend on that.

By reflecting on your awareness of your own awareness, the fact of your own consciousness can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of you saying those words. You saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built on tiny trained squirrels running memos around your brain. You couldn't notice that part from inside. It would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection.

Consciousness is in the class of things that can affect your behavior and can't depend on underlying physics, not in the class of direct properties of underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

Egor Riabov@imobulus·
I guess "obviously" does not work here though because the argument is about framing, and "obviously" applies our human framing. I guess what really is the crux here is that causality is objective and you cannot mathematically map a popcorn-shaker causal system onto an array sorter causal system. For me this impossibility is pretty natural and obvious, but that may not be the case for others.
Egor Riabov@imobulus·
@42irrationalist @Mihonarium @allTheYud @johnsonmxe In the case of left-to-right vs right-to-left, the right-to-left causal system has the property of array-sortedness, since it has a causal mapping to a canonical array sorter. But that is obviously not the case for every causal system! Why would you even state that as an obvious thing?
curious irrationalist@42irrationalist·
@imobulus @Mihonarium @allTheYud @johnsonmxe Imagine you output your array of numbers on the screen. For people used to left-to-right languages it will appear sorted in ascending order, but for those used to right-to-left languages it will appear sorted right-to-left. Hence, you have a map from pixels on the screen to the "actual" state.
Ethan Kuntz@KanizsaBoundary·
@allTheYud I think you get a version of panpsychism as a consequence of the arguments given? Elaborate mapping of some gas particles bouncing around to the state transitions of your brain. Even weighting mappings by a natural parsimony metric will maybe lead to a little bit of panpsychism
Egor Riabov@imobulus·
I don't understand this counterargument. You can write an array sorter in Python; it has the property of "arraysortedness", producing a sorted array from an unsorted array. You can build an array sorter out of gas particles and out of rocks as well. That does not mean a rock has 10^-1000 of arraysortedness. The arrangement matters.
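The "arrangement matters" point can be illustrated with a toy sketch: two systems built from the same primitive operation (swapping elements), where only the arrangement of those swaps differs. The helper names `sorter` and `shaker` are hypothetical, not from the thread:

```python
import random

def sorter(xs):
    # One arrangement of comparisons and swaps: bubble sort.
    # This arrangement has the "arraysortedness" property.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def shaker(xs, seed=0):
    # The same primitive operation (swapping elements), arranged
    # at random: a "popcorn shaker" built from identical parts.
    xs = list(xs)
    rng = random.Random(seed)
    for _ in range(len(xs) ** 2):
        i, j = rng.randrange(len(xs)), rng.randrange(len(xs))
        xs[i], xs[j] = xs[j], xs[i]
    return xs

data = [5, 3, 1, 4, 2]
print(sorter(data))  # [1, 2, 3, 4, 5]
print(shaker(data))  # some permutation of the input, almost never sorted
```

Both functions are made of the same "material" (swaps on a Python list); only the arrangement gives one of them the sorting property.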
Egor Riabov@imobulus·
@ramzpaul Simple, everything is dirt cheap now, like bubble gum. People can sell other things at which they're still valuable. In the ideal world this valuable thing would be their humanity, i.e. a small UBI.
Egor Riabov@imobulus·
Wait, you say that the paper abstract does not reflect the contents of the paper? (Meaning the abstract says LLMs can't be conscious "because it's silicon" and the paper actually explores how the LLM architecture cannot produce conscious circuits.) Or are you saying that the tweet "LLMs can't be conscious" is correct in its statement but cites a source that is an incorrect argument? If it's the first, that's interesting. If it's the second, then the tweet author certainly does not understand the incorrectness of the argument, and their reason for believing "LLMs aren't conscious" is therefore invalid.
Chris Wynes@cjwynes·
@allTheYud Then I think the tweet summary materially misstated the paper argument. Bc then the tweet is plausibly right (perhaps LLMs, just as my TI-83 calculator, won’t ever be conscious) but the abstract is wrong for suggesting NO algorithm running on a machine could be.
Egor Riabov@imobulus·
I am still convinced that a human brain implemented on GPUs will be conscious. The bunch of words in this abstract did not change my mind. Therefore the question still remains: what do you need to see in order to distinguish the world where LLMs have the same internal thing that makes human brains conscious from the world where they do not?
Egor Riabov@imobulus·
@mathepi @tombibbys What? AI does not become less dangerous if it's democratized. It will kill everyone either way, why did you even write this as an answer?
Tom Bibby@tombibbys·
It's the AI accelerationists who give up at the smallest challenge and accept defeat who have a "dark view of humanity". The people you call "doomers" believe humanity can come together and internationally cooperate to prevent disaster.
Tom Bibby tweet media
The San Francisco Standard@sfstandard

OpenAI’s global policy chief, Chris Lehane, thinks the discussion around AI has gotten out of hand. "When you put some of those thoughts and ideas out there, they do have consequences.” 📝: @ceodonovan sfstandard.com/2026/04/15/ope…

Egor Riabov@imobulus·
@creation247 The difference is that utopia gets achieved by allowing private property, and then when there's abundance everything gets cheaper. Not the other way around, where you first make everything available to everyone and therefore don't incentivise producing more
Egor Riabov@imobulus·
@cturnbull1968 Taxing the rich means the rich move out of your city. The rich are mostly successful entrepreneurs. This makes the regular people poorer immediately, without the wait for "him to come after regular americans"
Egor Riabov@imobulus·
@AivokeArt @d33v33d0 True AGI also isn't required to put maximum effort into every question it gets asked, because it would be a waste of energy
Aivo@AivokeArt·
@d33v33d0 true AGI shouldn't get tripped up on the dumbest tests either though "but tokenization" isn't an argument anyone will care about