Hebb Rule is Enough for AGI/A Creative I - Jayan

1.5K posts


@isolvedagi2

creation = subjective higher-order novelty maximization = the objective function of all life/intelligence/AGI. A Hebbian SNN reservoir does this by default. See blog.

Joined May 2015
1.1K Following · 544 Followers
Pinned Tweet
Hebb Rule is Enough for AGI/A Creative I - Jayan
Creation = higher-order novelty maximization = the objective of all intelligence/AGI. Random action may create at times too, but a Hebbian SNN reservoir is much more efficient and has the instinct to maximise creativity. Other abilities emerge automatically. youtu.be/COV0yWfcQME
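The pinned claim rests on the plain Hebbian update Δw_ij = η·x_i·x_j applied inside a random recurrent reservoir. A minimal sketch of that idea, using rate neurons rather than spiking ones; the size, learning rate, and norm clamp are illustrative assumptions, not the author's actual model:

```python
import numpy as np

# Sketch of a Hebbian recurrent reservoir: dw_ij = eta * x_i * x_j,
# with a crude norm clamp so weights stay bounded. N, eta, and the
# clamp are assumptions for illustration only.
rng = np.random.default_rng(0)
N, eta = 50, 0.01
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random reservoir weights
x = rng.random(N)                               # initial activations

for _ in range(100):
    x = np.tanh(W @ x)            # reservoir dynamics
    W += eta * np.outer(x, x)     # Hebb: co-active pairs strengthen
    norm = np.linalg.norm(W)
    if norm > np.sqrt(N):         # keep runaway growth in check
        W *= np.sqrt(N) / norm
```

A spiking (SNN) version would replace the rate units with spike trains and the outer product with a spike-timing rule, but the strengthen-what-fires-together principle is the same.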
Sophia @sopharicks
Is the current AI research approach wrong? Karl Friston thinks that the reward function is the wrong incentive for agents. The value function should be the expected information gain of an agent with embedded constraints that allow it to navigate a given environment. Similar to babies having a comfortable temperature: deviation from that temperature (the constraints) gives a signal that something is wrong. Karl calls these agents "curious baby AGIs": they try to learn as much as possible about the world and interact with each other and with humans. Humans are very important in this loop; they play the role of parents and allow the human mental world to be transferred to the baby AGIs. My question is: how do we translate this approach into practical AI research?
Stanford S @stanfordseq
@rpraggnachess Dude, where is India's flag and the word India? Adani is your sponsor; that is not India.
Praggnanandhaa @rpraggnachess
Wearing this jersey feels like carrying a piece of India's dream with me. Garv Hai is more than just an initiative. It's a belief in all of us who've worked our way up from the grassroots, chasing excellence on the global stage. Proud to be part of this journey.
[image]
Dewy.dee @deebayleaf
This is so embarrassing: Gal Gadot was ignored by Dua Lipa, Jake Gyllenhaal, and others at a Bulgari luxury event. Celebrities should ignore such war apologists.
Hebb Rule is Enough for AGI/A Creative I - Jayan
@indicmawntee Why choose this video from this jealous guido midget, rushing to ridicule the first positive news about Indian truckies here? The fact that a lot of Indians don't even understand this snake's intentions is what makes us deserving of all that we get here.
Kording Lab 🦖 @KordingLab
@aran_nayebi I think I have to disagree. I do think the vast majority of scientists have a causal component to their meaning of understanding - and for a reason, we need causality to successfully intervene.
Aran Nayebi @aran_nayebi
I don't see why prediction has to be framed as necessarily at odds with "understanding". The two naturally go hand-in-hand. Prediction is the *minimal* scientific prereq for anything you want to further investigate. We didn't even have successfully predictive systems of large-scale neural population responses in the neurosciences until ML started working. Furthermore, "understanding" isn't an objective measure -- it's aesthetically in the eye of the beholder. So it's not clear there's a well-defined global notion here to begin with, besides prediction alone. If you ask 10 scientists what they mean by "understanding", you'll get > 10 different answers 🙂 Not to mention, causal manipulations are naturally supported in ANNs because they're mechanistic models by construction: you have the entire network graph available to you to perturb as you choose. As the saying goes: “Everything should be as simple as it can be, but not simpler.” And it's quite clear there isn't anything simpler than ANNs without losing tons of predictive power. Why bother "understanding" a system that doesn't even predict the scientific phenomenon at hand?
The Transmitter@_TheTransmitter

Neuroscience has become increasingly concerned with prediction, and machine learning with causal explanation, with each field adopting methods from the other, writes @gershbrain. Will this bring us closer to understanding neural systems? thetransmitter.org/the-big-pictur…

RapperPandit @RapperPandit
🚨 G.N. Ramachandran (Iyengar) made one of the fundamental discoveries in molecular biology: the shape of proteins. • Every protein structure used in drug discovery is validated with his work. • He deserved a Nobel Prize. Did you know about him? An Indian genius who quietly shaped modern biology, he founded molecular biophysics in India and hailed from Ernakulam, Kerala.
[image]
TrdrJrnl @TrdrJrnl
@RapperPandit @PMOIndia @OfficeDp Don't get too attached to the 'Nobel' :). We will recognize and remember, especially now, thanks to handles like yours. In time, wealthy Indian houses will likely set up their own awards. If there are already some Indian awards, we should make those famous.
SagasofBharat @SagasofBharat
@maverick_nk Another thing was responsible for the industrial revolution that you missed: the money stolen from us, and the resources stolen from us.
AGIHound @TrueAIHound
@thereisnome369 @isolvedagi2 It's all sci-fi fruitcake talk. The machines will do exactly what we tell them to do. They will not be conscious and they will not have free will.
AGIHound @TrueAIHound
"Former OpenAI researcher Scott Aaronson: AI superintelligence may render humans as obsolete as baboons in zoos. It is a potential successor species." Another sci-fi fruitcake heard from. Dear Lord. 🤦‍♂️ I wonder. Why did Scott leave OpenAI? OpenAI is a cesspool of hardcore sci-fi fruitcakes working hard on achieving superintelligence and machine consciousness. He should have felt right at home, no? Could it be because the fruitcake-in-chief, Sam Altman, didn't think that Scott's fruitcake level was high enough? 😀
Rohan Paul@rohanpaul_ai

Former OpenAI researcher Scott Aaronson: AI superintelligence may render humans as obsolete as baboons in zoos. It is a potential successor species. Says, global leadership is unprepared to manage this existential shift over the next 25 years.

Hebb Rule is Enough for AGI/A Creative I - Jayan
Hehehe, so which part is illogical: that AGI is possible, that we would create AGI, or that it may have a preference? Oh, I know, it's gonna be a 'collective' AGI coming out of all the cellphones of the world, and it's gonna have a 'soul' and be a Buddhist and hence good, right?
AGIHound @TrueAIHound
@isolvedagi2 Dude. You speak like a sci-fi fruitcake. You know I don't like sci-fi fruitcakes. Why do you follow me? 🙄
Earl K. Miller @MillerLabMIT
This paper argues against higher-order theory (HOT) of consciousness. But its conclusion matches my understanding of HOT: conscious experience requires both a higher-order model and its linkage to posterior cortex. doi.org/10.1080/095150… #neuroscience
Hebb Rule is Enough for AGI/A Creative I - Jayan
Just wondering if there is a homeostatic super-rule that drives other phenomena, like the Hebb rule for weight updates, STDP, neurogenesis, connection formation, axonal delays, firing thresholds, etc.
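For concreteness, the rules named here differ mainly in what drives the weight change. A pair-based STDP window with slightly dominant depression is one textbook-style way a purely local rule can lean self-stabilizing; all constants below are illustrative assumptions, not the super-rule the tweet is asking about:

```python
import math

# Illustrative pair-based STDP window: pre-before-post potentiates,
# post-before-pre depresses. A_MINUS slightly > A_PLUS biases the rule
# toward stability (a mild homeostatic effect). Constants are assumptions.
A_PLUS, A_MINUS = 0.010, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants, ms

def stdp_dw(dt_ms: float) -> float:
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)   # potentiation
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)     # depression
```

Plain Hebbian co-activity is roughly the dt ≈ 0 slice of this window; homeostatic mechanisms such as synaptic scaling would then renormalize total input weight on a slower timescale.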
Hebb Rule is Enough for AGI/A Creative I - Jayan
Thirukkural: 7 words per kural, and all consecutive words have to have an edhugai-monai relationship; 1330 kurals in 133 adhikarams. Aathichudi: 2 words per chudi. Naladiyar: 4 lines per verse. Agananooru: 400 verses on home life. Purananooru: 400 verses on outside life. The list goes on...
melancholy @Ashok_19
@Fintech03 Shankaracharya wrote a lesser-known work comprising 54 sutras (Sadachar, or how the realised person lives in the world). It fits into 2 pages, but a whole book has been written about it in Marathi (I have yet to see an English translation), and it still can't do justice to it.
Parimal @Fintech03
The most extreme historical example of high-IQ writing is the Indian sutra style. Ancient Indian grammarians, specifically Pāṇini (c. 4th century BC), were so obsessed with brevity that there is a famous maxim among them: "A grammarian rejoices as much over the saving of half a short vowel as over the birth of a son." They believed that truth should be coded in the most compressed format possible. Pāṇini's Aṣṭādhyāyī, the foundation of Sanskrit grammar, is essentially a computer program written more than 2000 years before computers. It uses algebraic meta-rules so dense that it can be recited in just 2 hours, yet it describes the entire infinite complexity of a language.
David Shapiro (L/0)@DaveShapi

Books would be shorter if the average IQ were higher. Higher fluid intelligence seems to be highly correlated with more rapid generalization from fewer samples. Thus, the length of books is often just to repeat ideas enough times, and in enough different ways, for more typical brains to grasp them.

AGIHound @TrueAIHound
Superintelligence (the know-it-all AGI in the cloud) is the wet dream of the Singularity people and other sci-fi fruitcakes. The attentional and situational bottlenecks forbid it. Only a collective or society of cooperating intelligences can be superintelligent. Moreover, knowledge is built on a foundation that is learned early. It's hard to add structures that the foundation was not designed to support. This is why adults find it difficult to learn a new language fluently. The same will be true of intelligent machines. PS. No, LLMs are not intelligent. They are massive statistical cheating machines.
Pedro Domingos @pmddomingos
You can’t get to superintelligence by mimicking humans.