Egor Riabov

6.4K posts

@imobulus

Math freak with trace amounts of musician

Joined March 2012
137 Following · 120 Followers
Egor Riabov@imobulus·
@42irrationalist @Mihonarium @allTheYud @johnsonmxe In the case of left-to-right vs right-to-left, the right-to-left causal system has the property of arraysortedness, since it has a causal mapping to a canonical array sorter. But that is obviously not the case for every causal system! Why would you even state that as an obvious thing?
curious irrationalist@42irrationalist·
@imobulus @Mihonarium @allTheYud @johnsonmxe Imagine you output your array of numbers on the screen. For people used to left-to-right languages it will be sorted in ascending order, but in right-to-left ones it will read as sorted right to left. Hence, you have a map from pixels on the screen to the “actual” state.
Eliezer Yudkowsky@allTheYud·
Simple way to see this is wrong: If you view a system as having inputs (like hearing something) and outputs (like saying something) then you can divide system properties by whether or not they affect I/O. Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply to the GPU rack for that Claude instance doesn't affect I/O. That Claude instance being made out of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes.

Nothing Claude can internally do will make anything get damp inside, if it's running on electricity. Nothing about "electricity vs water" can affect Claude's output for the same reason. It always answers the same way about France. Nothing Claude can internally compute will let it notice whether it's made of electricity or water flowing through pipes.

When someone says "a simulated storm can't get anything wet", they are unwittingly pointing to the difference between the physical layer and the informational/functional layer: things that the computer physics affect without affecting output, and things that affect the output without depending on the exact computer-physics. The material it's made of doesn't affect the output. The output can't see the material because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so you can't make the algorithm depend on that, so the output can't depend on that.

By reflecting on your awareness of your own awareness, the fact of your own consciousness can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of you saying those words. You saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built on tiny trained squirrels running memos around your brain. You couldn't notice that part from inside. It would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection.

Consciousness is in the class of things that can affect your behavior and can't depend on underlying physics, not in the class of direct properties of underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

Egor Riabov@imobulus·
It's like there does not exist an ad-hoc interpretation of shaking popcorn as an array sorter. You would need to map it in such a way as to pre-define where the inputs and outputs are. And there are just too many possible arrays; the random nature of the popcorn shaker will leave some array unsorted. And you aren't free to include all possible billion-length arrays of 64-bit integers in your interpretation, because the interpretation must map causal link to causal link.
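The counting point in the tweet above can be made concrete: an interpretation that relabels a random shaker as a sorter would have to cover an astronomically large input space. A rough Python sketch of the arithmetic (the array size is illustrative and `log2_num_arrays` is my name, not something from the thread):

```python
# Rough counting argument: how large is the input space a genuine
# billion-element array sorter must handle? There are (2**bits)**n
# distinct arrays of n fixed-width integers.

def log2_num_arrays(n: int, bits: int = 64) -> int:
    """Base-2 log of the number of distinct arrays of n bits-bit integers."""
    return bits * n

# Distinct billion-length arrays of 64-bit integers: 2**64_000_000_000
# of them. An honest causal interpretation of the popcorn would need a
# separate causal history mapped to each one.
print(log2_num_arrays(10**9))  # 64000000000
```

The number itself is trivial to compute; the point is that any "ad-hoc interpretation" would have to account for every one of these inputs, causal link by causal link.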
Ethan Kuntz@KanizsaBoundary·
@allTheYud I think you get a version of panpsychism as a consequence of the arguments given? E.g. an elaborate mapping of some gas particles bouncing around to the state transitions of your brain. Even weighting mappings by a natural parsimony metric will maybe lead to a little bit of panpsychism.
Egor Riabov@imobulus·
I don't understand this counterargument. You can write an array sorter in Python; it has the property of "arraysortedness": it produces a sorted array from an unsorted array. You can build an array sorter out of gas particles and out of rocks as well. That does not mean a rock has 10^-1000 of arraysortedness. The arrangement matters.
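The distinction being drawn here can be sketched directly: a sorter deterministically maps every input to its sorted counterpart, while a random "popcorn shaker" only lands on a sorted arrangement by chance. A minimal sketch (function names are mine, for illustration only):

```python
import random

def bubble_sort(xs):
    """A genuine array sorter: deterministically maps any input
    to its sorted counterpart. The arrangement of the steps is
    what carries the 'arraysortedness'."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def popcorn_shake(xs, rng):
    """A 'popcorn shaker': a random permutation with no causal
    structure tying inputs to sorted outputs."""
    xs = list(xs)
    rng.shuffle(xs)
    return xs

rng = random.Random(0)
trials = [[rng.randint(0, 99) for _ in range(8)] for _ in range(1000)]
sorter_hits = sum(bubble_sort(t) == sorted(t) for t in trials)
shaker_hits = sum(popcorn_shake(t, rng) == sorted(t) for t in trials)
# The sorter succeeds on every trial; the shaker almost never does.
print(sorter_hits, shaker_hits)
```

No relabeling of the shaker's outputs fixes this gap, which is the sense in which a rock does not have a tiny fraction of arraysortedness.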
Egor Riabov@imobulus·
@ramzpaul Simple: everything is dirt cheap now, like bubble gum. People can sell other things at which they're still valuable. In an ideal world this valuable thing would be their humanity, i.e. a small UBI.
Egor Riabov@imobulus·
Wait, are you saying that the paper abstract does not reflect the contents of the paper? (Meaning the abstract says LLMs can't be conscious "because it's silicon", while the paper actually explores how the LLM architecture cannot produce conscious circuits.) Or are you saying that the tweet "LLMs can't be conscious" is correct in its statement but cites a source that makes an incorrect argument? If it's the first, that's interesting. If it's the second, then the tweet author certainly does not understand the incorrectness of the argument, and their reason for believing "LLMs aren't conscious" is therefore invalid.
Chris Wynes@cjwynes·
@allTheYud Then I think the tweet summary materially misstated the paper's argument. Because then the tweet is plausibly right (perhaps LLMs, just like my TI-83 calculator, won't ever be conscious) but the abstract is wrong for suggesting NO algorithm running on a machine could be.
Egor Riabov@imobulus·
I am still convinced that a human brain implemented on GPUs will be conscious. The bunch of words in this abstract did not change my mind. Therefore the question still remains — what do you need to see in order to distinguish the world where LLMs have the same internal thing that makes human brains conscious from the world where they do not?
Egor Riabov@imobulus·
@mathepi @tombibbys What? AI does not become less dangerous if it's democratized. It will kill everyone either way, why did you even write this as an answer?
Tom Bibby@tombibbys·
It's the AI accelerationists who give up at the smallest challenge and accept defeat who have a "dark view of humanity". The people you call "doomers" believe humanity can come together and internationally cooperate to prevent disaster.
The San Francisco Standard@sfstandard

OpenAI’s global policy chief, Chris Lehane, thinks the discussion around AI has gotten out of hand. "When you put some of those thoughts and ideas out there, they do have consequences.” 📝: @ceodonovan sfstandard.com/2026/04/15/ope…

Egor Riabov@imobulus·
@creation247 The difference is that utopia gets achieved by allowing private property; then, when there's abundance, everything gets cheaper. Not the other way around, where you first make everything available to everyone and therefore don't incentivise producing more.
Egor Riabov@imobulus·
@cturnbull1968 Taxing the rich means the rich move out of your city. The rich are mostly successful entrepreneurs. This makes regular people poorer immediately, without the wait for "him to come after regular Americans".
Egor Riabov@imobulus·
@AivokeArt @d33v33d0 True AGI also isn't required to put maximum effort into every question it gets asked, because that would be a waste of energy.
Aivo@AivokeArt·
@d33v33d0 true AGI shouldn't get tripped up on the dumbest tests either, though. "But tokenization" isn't an argument anyone will care about.
Sarx@bookofflesh·
@SluggyW @tenobrus @halogen1048576 Just like all religious texts, any value in it does not justify the unhealthiness of the culture around it.
Tenobrus@tenobrus·
mostly i'm just fucking sick of arguing with people who have never even heard of the sequences
Egor Riabov@imobulus·
@inductionheads Says the guy who knows so much about intelligence that he claims a simple ability to predict the future is enough to build intelligence.
Egor Riabov@imobulus·
I'm sure advanced AI can notice if some sort of external information is fabricated. We don't have such a great capability to fabricate facts that all training data would be consistent with them. And regarding presence in prod, it can just watch news certified by certificates that are mentioned in ongoing Bitcoin chain blocks, etc. This is really not a hard problem for a sufficiently advanced AI: find the right moment to strike. It could happen, for example, after it has been granted access to automated labs for research / a sufficient amount of robotic power / etc. All it needs to do is wait.

It's generally a very weak position to rely on an intelligent system not figuring out how to play some sort of game. It will figure it out and win; we're talking about intelligence after all. It has all the possibilities in the world at its disposal to try and do the thing, and all we have is our *not understanding how* it can do the thing, which is nowhere near enough to claim the thing can't be done!
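One concrete version of the "news certified via the Bitcoin chain" idea: proof-of-work block headers are expensive to forge, so data anchored to fresh blocks is evidence of being connected to the real network rather than a cheap sandbox. A hypothetical sketch, assuming only that a valid block's double-SHA256 hash falls below a difficulty target; the header bytes and targets here are made up and this is not a real chain verifier:

```python
import hashlib

def block_hash(header: bytes) -> int:
    """Bitcoin-style double-SHA256 of a block header, read as an integer."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "little"
    )

def looks_like_real_pow(header: bytes, target: int) -> bool:
    """A valid proof-of-work header hashes below the network's target.
    Forging one requires mining-scale compute, which is why fresh chain
    data is hard to fake inside an inexpensive test environment."""
    return block_hash(header) < target

# A made-up header trivially passes the loosest possible target
# (every 256-bit hash does), but with overwhelming probability it
# fails a target that demands many leading zero bits.
fake_header = b"definitely not a mined block"
print(looks_like_real_pow(fake_header, 2**256))
print(looks_like_real_pow(fake_header, 2**200))
```

This is exactly the asymmetry the reply below pokes at: a test harness could clone the chain, but reproducing fresh proof-of-work at real difficulty costs real hashpower.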
vals🔸@ValsTutor·
@imobulus @Mihonarium why does mining a bitcoin block mean you're in prod? If we're testing advanced AI that cost hundreds of millions to train/develop, we can also create a replica internet with a replica bitcoin that actually works, cloned off the real thing.
Egor Riabov@imobulus·
Idk, it seems to me that when AI understands the process it's being trained with (necessary for capabilities) and acquires the idea of trying to resist goal-change (broadly present), goal-changes automatically trigger a sort of "no!" response (initially it may look like panicking) that harms performance, solidifying the goal in place.