
SingularityNET @SingularityNET
Transformer-based large-language models display striking competencies, yet they lack flexible, dynamic long-term memory, explicit goal systems, and the self-reflective loops needed for open-ended adaptation. Symbolic systems, in contrast, offer clarity and self-modification but have struggled to match the perceptual breadth of deep learning. OpenCog Hyperon seeks to bridge this divide by embedding neural, logical, and evolutionary processes inside a single metagraph whose elements can rewrite one another in real time.
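To make the "elements rewrite one another" idea concrete, here is a minimal plain-Python sketch, not actual Hyperon or MeTTa code; all names are illustrative. The point it demonstrates is that in a metagraph-style atomspace, rewrite rules are stored as ordinary atoms in the same space as the data they act on, so rules can in principle produce or rewrite other rules.

```python
# Minimal sketch (plain Python, NOT the real Hyperon API) of a metagraph-style
# atomspace: symbols and expressions are both "atoms", and rewrite rules live
# in the same space as the data they act on.

Atom = tuple  # an expression atom, e.g. ("greet", "World"); strings are symbols

class AtomSpace:
    def __init__(self):
        self.atoms = set()

    def add(self, atom):
        self.atoms.add(atom)

    def rules(self):
        # A rule is itself an atom of the form ("=", pattern, template).
        return [a for a in self.atoms if isinstance(a, tuple) and a and a[0] == "="]

    def reduce(self, atom):
        # One rewrite step: try each stored rule against the atom.
        for _, pattern, template in self.rules():
            bindings = match(pattern, atom, {})
            if bindings is not None:
                return substitute(template, bindings)
        return atom

def match(pattern, atom, bindings):
    # Unify a pattern with an atom; "$"-prefixed strings are variables.
    if isinstance(pattern, str) and pattern.startswith("$"):
        bindings[pattern] = atom
        return bindings
    if isinstance(pattern, tuple) and isinstance(atom, tuple) and len(pattern) == len(atom):
        for p, a in zip(pattern, atom):
            bindings = match(p, a, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == atom else None

def substitute(template, bindings):
    # Replace variables in the template with their bound values.
    if isinstance(template, str):
        return bindings.get(template, template)
    return tuple(substitute(t, bindings) for t in template)

space = AtomSpace()
space.add(("=", ("greet", "$x"), ("Hello", "$x")))  # a rule, stored as data
print(space.reduce(("greet", "World")))             # ("Hello", "World")
# Because rules are ordinary atoms, a rule's output can itself be a new rule,
# which is the self-modification property the post describes.
```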
SingularityNET @SingularityNET
Hyperon is a highly flexible implementation substrate, but one of the guiding motives behind its development has been a specific cognitive architecture known as PRIMUS. PRIMUS organizes the Atomspaces of a Hyperon instance into modules for episodic memory, working memory, procedural memory, perception, and action selection, associating specific learning and reasoning methods with each module and carefully designing them for "cognitive synergy" with one another. The dynamics are designed to mirror human cognitive cycles while also supporting robust long-term background thinking and reflective self-improvement.
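The module list above maps naturally onto a cognitive-cycle control loop. The sketch below is illustrative only: the module names follow the PRIMUS description in the post, but the classes, methods, and loop structure are hypothetical, not Hyperon code. It shows the shape of a cycle in which perception writes to working memory, action selection reads working memory against procedural skills, episodes accumulate in long-term memory, and a slower reflective step feeds episodes back into procedural knowledge.

```python
# Illustrative sketch only: module names follow the PRIMUS description above,
# but every class, method, and rule here is hypothetical, not Hyperon code.

from collections import deque

class CognitiveAgent:
    def __init__(self):
        # One store per PRIMUS module described in the post.
        self.episodic_memory = []               # long-term record of past cycles
        self.working_memory = deque(maxlen=7)   # small, fast-changing focus
        self.procedural_memory = {              # condition -> action skills
            "obstacle": "turn",
            "goal_visible": "approach",
        }

    def perceive(self, observation):
        # Perception writes into working memory.
        self.working_memory.append(observation)

    def select_action(self):
        # Action selection reads working memory against procedural skills.
        for percept in reversed(self.working_memory):
            if percept in self.procedural_memory:
                return self.procedural_memory[percept]
        return "explore"

    def cycle(self, observation):
        # One cognitive cycle: perceive -> select action -> record episode.
        self.perceive(observation)
        action = self.select_action()
        self.episodic_memory.append((observation, action))
        return action

    def reflect(self):
        # Slower background loop: promote repeated episodes into skills,
        # a crude stand-in for reflective self-improvement.
        for observation, action in self.episodic_memory:
            self.procedural_memory.setdefault(observation, action)

agent = CognitiveAgent()
print(agent.cycle("obstacle"))      # turn
print(agent.cycle("goal_visible"))  # approach
agent.reflect()                     # background update of procedural memory
```

In the real architecture each module is an Atomspace with its own associated learning and reasoning methods, and "cognitive synergy" refers to those methods assisting one another across modules; this toy loop only conveys the control-flow shape.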
Dariusz Swierk PhD @swierk
For now, this seems like the most sensible solution. LLMs lack memory and certain functions (e.g., self-learning and meta-learning); if we want to move toward superintelligence, our solutions must have them. Symbolic systems are based on what we teach them manually, although, as I write this, I realize we could try to teach them using LLMs, which seems sensible. On the other hand, LLMs may not be the final solution; in the future there may be solutions based on sub-quadratic systems or, for example, Hilbert spaces. A very interesting lecture.