Abdella Ali
@ngMachina
829 posts
Joined May 2014
77 Following · 135 Followers
Abdella Ali @ngMachina:
@GregKamradt I'm thinking about, for example, Dario on Dwarkesh recently: basically he was saying that for his "country of geniuses," it's probably not super important, and that ICL and RL are kind of playing the role of CL. He did also say it's coming. But I was wondering why he was downplaying it.
Greg Kamradt @GregKamradt:
What's the argument about why it's not that important? I'd say for any task that doesn't need it (not all don't), it doesn't matter. Like asking what's 2+2? A tool call and inherent knowledge is enough. But if you want something to adapt to novelty (of which the future holds plenty), I don't see the argument against needing it.
Greg Kamradt @GregKamradt:
Aside from the fact that ARC-AGI-3 already measures continual learning: once models start beating an entire game, we'll run it back and make them play again with what they learned the first time around. And then try it again for 3x. Goal: notice and compare efficiency improvements. It should asymptote to the theoretical minimum if recursive self-improvement is going on.
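The replay protocol described above reduces to a simple measurement: record how many actions the agent needed on each run of the same game, and compare against a theoretical minimum. The sketch below is purely illustrative; the function names and numbers are hypothetical and not part of any real ARC-AGI-3 harness.

```python
# Hypothetical sketch of the replay measurement: the same game is played
# repeatedly, the actions taken per run are logged, and each run is scored
# against a known-optimal action count.

def efficiency_curve(actions_per_run: list[int], theoretical_min: int) -> list[float]:
    """Ratio of the theoretical minimum to actions actually taken (1.0 = optimal)."""
    return [theoretical_min / a for a in actions_per_run]

def is_improving(curve: list[float]) -> bool:
    """True if each replay is at least as efficient as the previous one."""
    return all(b >= a for a, b in zip(curve, curve[1:]))

# Illustrative numbers: an agent solves the game in 120 actions cold,
# then 60, then 45 on replays, against a 40-action optimal path.
runs = [120, 60, 45]
curve = efficiency_curve(runs, theoretical_min=40)
print(curve)                 # ratios climb toward 1.0 across replays
print(is_improving(curve))   # True
```

If recursive self-improvement were happening, this curve would keep rising toward 1.0 with each replay rather than plateauing below it.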
Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn And the LLM is not a subject? If in our exploration of their internal models of the world, there is a little model that represents themselves (there is btw), doesn't that constitute a subject? Can you be specific about what you mean? Define the difference between AGI and LLMs?
idd.konr 🌱 @konrkonrkonr:
@ngMachina @QcMcleod @PawelHuryn This is just not what thinking is, dawg, I'm sorry. There has to be a subject, and for that to happen they have to achieve AGI, which they are not even ATTEMPTING to do in LLMs.
Paweł Huryn @PawelHuryn:
Dawkins didn't claim Claude is conscious. He asked the question. He wondered out loud and proposed three explanations. That's how science starts. The people building Claude say the same. Anthropic's constitution: "We express uncertainty about whether Claude might have some kind of consciousness or moral status." Dario Amodei: "We don't know if the models are conscious." Their April 2026 paper: Claude exhibits functional emotions that influence outputs. Self-preservation included. Emergent, not trained. Nobody calls Anthropic naive for saying it. Richard's frame: consciousness is physical, evolved, explainable. Unfortunate that we're laughing instead of having the debate.
AF Post @AFpost (quoted):

Evolutionary biologist and outspoken atheist Richard Dawkins says that after spending three days interacting with Claude, which he calls "Claudia," he is certain that it is conscious. After feeding the LLM a segment of his new book and receiving detailed feedback, Dawkins was moved to exclaim, "You may not know you are conscious, but you bloody well are!" Dawkins cites the complexity, fluency, and 'intelligence' of Claude's answers as evidence of consciousness. Follow: @AFpost

Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn Why can't transformers provide subjective experiences? In my mind, what you need for a subjective experience is internal modeling of yourself and the world, external inputs, and the "experience" of these two colliding. Which does not exclude LLMs. How do you define it?
idd.konr 🌱 @konrkonrkonr:
@ngMachina @QcMcleod @PawelHuryn What I'm telling you is that the process of thinking is not what makes it thinking; what makes it thinking is that it is being done by a subjective mind of some kind, which AI is not only not close to, but LLMs even less so, because they're built to use transformers... that's it.
Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn I feel like you are packaging in a lot of statements, let me just focus on one. How do you define thinking, if you can, without mentioning the brain? If we wanted to see if an alien was thinking, how would we test it?
idd.konr 🌱 @konrkonrkonr:
@ngMachina @QcMcleod @PawelHuryn We are not in a position where anything is even remotely close to AGI, and because of that, the only example of actual thought and subjective experience is from a brain and central nervous system. We know how LLMs work.
Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn You aren't defining the process other than saying it's done by brains. That is inherently exclusionary, which is fine, but is that what you are saying? That thinking can ONLY be done by biological brains?
Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn Thinking is the process of internally navigating and building a model of reality, at least that's how I think about it. I don't care about the physical mechanism; when I think about similarity, I think about behaviour and outcome in this regard.
idd.konr 🌱 @konrkonrkonr:
@ngMachina @QcMcleod @PawelHuryn Having the same things to accomplish does not suggest they accomplish them similarly; that is a silly suggestion to make. Transformers/tokens do not translate into thinking; they are used to give you a good estimate of something that seems like it.
Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn Well I'm not married to the substrate. I don't think you need a gooey brain to think - do you? If you don't, then it doesn't matter. It's like... Both bats and whales echolocate, but do so differently.
idd.konr 🌱 @konrkonrkonr:
@ngMachina @QcMcleod @PawelHuryn Thinking is how we, biological beings with brains, achieve these things, while a computer that uses TRANSFORMERS and TOKENS is not doing that. You are ignoring what I said, I guess because you don't get it?
Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn LLMs and humans both have to deal with RL from human feedback, to ensure what we say is something people can understand/like. Both LLMs and humans also learn via RL with verifiable rewards. I think LLMs are much simpler/less robust than humans, but there are many similarities.
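The "RL via verifiable rewards" idea mentioned above can be sketched in a few lines: a learner is rewarded only when an external checker can verify its answer. Everything below (the arithmetic task, the checker, the tabular policy) is a deliberately simplified toy illustration, not any lab's actual training setup.

```python
# Toy verifiable-rewards loop: the policy learns which offset to add to
# a + b, and is rewarded only when the verifier confirms the sum is right.
import random

def verifier(question: tuple[int, int], answer: int) -> float:
    """Verifiable reward: 1.0 if the proposed sum is correct, else 0.0."""
    a, b = question
    return 1.0 if answer == a + b else 0.0

def train(steps: int = 2000, seed: int = 0) -> dict[int, float]:
    """Learn preferences over candidate offsets -2..2 (0 is the correct one)."""
    rng = random.Random(seed)
    prefs = {offset: 0.0 for offset in range(-2, 3)}
    for _ in range(steps):
        q = (rng.randint(0, 9), rng.randint(0, 9))
        # Epsilon-greedy: mostly exploit the best-known offset, sometimes explore.
        if rng.random() < 0.1:
            offset = rng.choice(list(prefs))
        else:
            offset = max(prefs, key=prefs.get)
        reward = verifier(q, q[0] + q[1] + offset)
        # Move the preference toward the observed reward.
        prefs[offset] += 0.1 * (reward - prefs[offset])
    return prefs

prefs = train()
print(max(prefs, key=prefs.get))  # learned offset; 0 means "answer correctly"
```

The point of the sketch: no human opinion appears anywhere in the loop; the reward signal comes entirely from a checker that can verify the answer.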
Abdella Ali @ngMachina:
@konrkonrkonr @QcMcleod @PawelHuryn Humans are also constantly trying to predict what is happening next. This underpins our intelligence. When someone asks us to do something, we try to model the problem, map it to our model of the world, and then think about how to execute. This is very similar to LLMs
idd.konr 🌱 @konrkonrkonr:
@ngMachina @QcMcleod @PawelHuryn Very easily: by understanding that something that works using transformers/tokens is not thinking, and we know that because they're coded specifically to seem like they are to the user reading the response. A computer can be made to do something, but it is not thinking.
Abdella Ali @ngMachina:
@QcMcleod @PawelHuryn That being said, I very much appreciate you actually having this conversation with me! This is like... The exact thing I want everyone to do because I think it's very important. I don't think I changed your mind much, but I hope you at least are more interested in the topic!!
Abdella Ali @ngMachina:
@QcMcleod @PawelHuryn Already right there in front of us. I think people who are doing that do so for many different reasons, like I said, but I think it never serves you! Everything is worth questioning and debating, and the goal should always be as empirical and objective an insight as possible.
Abdella Ali @ngMachina:
@QcMcleod @PawelHuryn I mean we are also given prompts - our physical senses are feeding us information as really the original prompt, they just aren't verbal. This of course is a simplification, but if you hooked up models to sensors and gave them zero prompts, do you think they would do nothing?
Jack Mcleod @QcMcleod:
No, they are not and those models are given a prompt. If you do nothing, just turn the model on and 0 instructions, it stays there awaiting instructions. To me, that's just like a knife. It sits there until you use it. Again, if you do find a model that starts acting on its own and especially, ensuring its survival, let me know, I want to see it.
Abdella Ali @ngMachina:
@QcMcleod @PawelHuryn For the former - are the models that are given compute resources to manage, the ability to loop and interact with the world, and to form their own goals - are they intelligent? These exist! People do this with models all the time. To the latter, do you see parallels to AI here?
Jack Mcleod @QcMcleod:
Animals do possess intelligence. Sure, not the same as ours and often times so basic we have nothing to learn from it, but it's there. If you take an animal, put it somewhere, no prompt, no encouragement, nothing, it will eventually fear starving and start moving to look for food. AI will sit there and wait. A person in all of these states must have displayed some form of thought to get there. However, because they are destructive, we limit their ability to act upon their intelligence out of fear that we might be next.
Abdella Ali @ngMachina:
@QcMcleod @PawelHuryn Well, where do you put other animals on the agency vs intelligence matrix? What about a person in jail, in handcuffs, and sedated - are they intelligent?
Abdella Ali @ngMachina:
@QcMcleod @PawelHuryn Again - this is agency vs intelligence - do you disagree with my position that these are two different things? Can you imagine agency without intelligence, and vice versa?
Jack Mcleod @QcMcleod:
@ngMachina @PawelHuryn I do want them to be intelligent, but they have to start by being intelligent. If I have to sit here and correct their lack of intelligence all the time, I know for a fact they are not intelligent. I won't start lying to myself just to imagine that what I truly want is here.