OneOrigine

2.1K posts

OneOrigine banner
OneOrigine

OneOrigine

@OneOrigine

Software Engineer by day, Independent Author by night. With one completed manuscript and several expansive universes in development, I thrive at the intersection

🧠 In your mind · Joined March 2023
1.8K Following · 177 Followers
OneOrigine
OneOrigine@OneOrigine·
I regret to announce—and I know barely anyone was waiting for this project anyway—that while S.A.V.O.I.R. yielded conclusive test results, it isn't complete enough yet to achieve actual perception. Either something is missing, or the architecture itself is just an organizational conceptualization of knowledge. I don't know. It honestly hurts a bit, but the project is being suspended, if not canceled entirely. Time will tell if this was a waste of time or just an absurdity, but I am officially announcing this in all humility. The visual results I showed you are real and based on S.A.V.O.I.R. theories—no simulations. The test results are real too; nothing in the core of S.A.V.O.I.R. was faked. But as of right now, it still doesn't perceive 🍎. It hasn't even reached Stage 0 of perception... I'm going to take a step back, reevaluate, or maybe just forget about it.
OneOrigine tweet media
English
0
0
0
13
OneOrigine
OneOrigine@OneOrigine·
Looks like I wasn't the only one who thought that knowledge is fundamentally geometric through abstraction, and then physical through composition and characteristics. Anyway, I'm currently exploring a new AI architecture based on my SAVOIR graph, and I'd love for you to take a look at it. It handles the creation of new knowledge, sleep, and dreaming—the graph is already capable of having these types of internal experiences. x.com/OneOrigine/sta…
English
0
1
1
176
Mathelirium
Mathelirium@mathelirium·
A Neural Network Can Grow New Neurons Where It Is Confused? In 1994, Bernd Fritzke published A Growing Neural Gas Network Learns Topologies. He introduced a network that starts small, follows incoming data, and inserts new neurons where its error is highest. In the animation, the fog is the drifting data. The glowing nodes are neurons. The fibers are learned connections. The network grows into a living skeleton of the manifold.
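Fritzke's growth rule is compact enough to sketch in full. Below is a minimal toy version of the mechanism the post describes: a network that starts with two units and inserts a new one where accumulated error is highest. Parameter names and values are conventional choices for illustration, not taken from the paper's experiments, and bookkeeping details (e.g. removing isolated units) are omitted.

```python
# Toy Growing Neural Gas: insert neurons where the network is "confused"
# (i.e. where accumulated quantization error is highest).
import math, random

class GNG:
    def __init__(self, eps_b=0.2, eps_n=0.006, lam=50, a_max=30,
                 alpha=0.5, d=0.995):
        self.nodes = [[random.random(), random.random()] for _ in range(2)]
        self.error = [0.0, 0.0]
        self.edges = {}          # frozenset({i, j}) -> age
        self.eps_b, self.eps_n, self.lam = eps_b, eps_n, lam
        self.a_max, self.alpha, self.d = a_max, alpha, d
        self.step = 0

    def _dist2(self, a, x):
        return (a[0] - x[0]) ** 2 + (a[1] - x[1]) ** 2

    def fit_one(self, x):
        self.step += 1
        # 1. find the two nearest units
        order = sorted(range(len(self.nodes)),
                       key=lambda i: self._dist2(self.nodes[i], x))
        s1, s2 = order[0], order[1]
        # 2. accumulate error at the winner: this is where "confusion" lives
        self.error[s1] += self._dist2(self.nodes[s1], x)
        # 3. move winner and its topological neighbours toward the input
        for k in range(2):
            self.nodes[s1][k] += self.eps_b * (x[k] - self.nodes[s1][k])
        for e in list(self.edges):
            if s1 in e:
                self.edges[e] += 1               # age the winner's edges
                n = next(i for i in e if i != s1)
                for k in range(2):
                    self.nodes[n][k] += self.eps_n * (x[k] - self.nodes[n][k])
        # 4. refresh the winner-pair edge, prune edges that got too old
        self.edges[frozenset((s1, s2))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.a_max}
        # 5. every lam steps, insert a node between the highest-error unit
        #    and its highest-error neighbour
        if self.step % self.lam == 0:
            q = max(range(len(self.nodes)), key=lambda i: self.error[i])
            nbrs = [next(i for i in e if i != q) for e in self.edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda i: self.error[i])
                new = [(self.nodes[q][k] + self.nodes[f][k]) / 2
                       for k in range(2)]
                r = len(self.nodes)
                self.nodes.append(new)
                self.error[q] *= self.alpha
                self.error[f] *= self.alpha
                self.error.append(self.error[q])
                del self.edges[frozenset((q, f))]
                self.edges[frozenset((q, r))] = 0
                self.edges[frozenset((f, r))] = 0
        # 6. global error decay
        self.error = [e * self.d for e in self.error]

random.seed(0)
g = GNG()
for _ in range(2000):
    t = random.uniform(0, 2 * math.pi)       # data drifting on a ring
    g.fit_one([0.5 + 0.4 * math.cos(t), 0.5 + 0.4 * math.sin(t)])
print(len(g.nodes))   # grown well beyond its 2 seed units
```

Run on data sampled from a ring, the surviving edges trace the ring's shape, which is the "living skeleton of the manifold" the animation shows.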
English
18
61
377
31.5K
OneOrigine
OneOrigine@OneOrigine·
OneOrigine@OneOrigine

Today I observed an important step in my SAVOIR prototype. At the beginning, there was only basic knowledge: elementary concepts, a few axioms, and simple relations. Nothing that looked like a global intelligence yet. Then I started the evolution. And something remarkable happened: the graph did not simply add nodes. It began to form a structure.

In this image, we can see a topology of knowledge stabilizing itself. There are about 220 nodes, about 53 active frontiers, about 83 crystallized knowledge nodes, 16 axioms, and 7 sublations. These are not just numbers. They are traces of a process: hypotheses appeared, some faded, others found enough support to become knowledge bricks.

What strikes me most is that the structure does not look like a random cloud. It forms regions. Near the top, there is a dense, almost mineral zone around concepts like identity, truth, proof, boundary, and stability. This region seems to act as a logical foundation. It does not behave like a loose hypothesis. It anchors the rest of the graph. Elsewhere, we see more dynamic regions: temporality, causality, information, change, emergence, world model, codex, memory, hallucination, compression. These zones are less compact and more distributed. They are not dead. They are exploring. They look like conceptual continents still forming.

What I saw step by step matters even more than the final image. Nodes appeared as frontiers. They were not knowledge yet. They were directions: oriented possibilities. Then some of these nodes met. Triangles formed. Simplexes began to close relations. When coherence, axiomatic compatibility, and geometric stability became strong enough, the system crystallized a new brick.

This is where the phenomenon becomes fascinating. Knowledge does not fall from the sky. It is not simply written into a database. It emerges from tension between axioms, energy, neighbors, trajectories, contradictions, and simplex closure. Knowledge appears when a region of the graph becomes coherent enough to stop being only a hypothesis.

The latest visible crystallization in the image is a brick with a score close to 0.77. What is interesting is that it has perfect coherence, perfect ADN compatibility, zero closure defect, and zero inconsistency, even though its average energy is relatively low. This means the system is not confusing power with local truth. An idea can become stable not because it dominates energetically, but because its structure is clean.

For me, this is a strong signal. A more naive system would simply keep the most active nodes. Here, some nodes fade, some remain hypothetical, and others become bricks. The system is beginning to distinguish excitation from possibility, and possibility from stability.

We can also see seven sublations. This may be one of the most important aspects. A sublation means that a tension or collision is not merely rejected. It can be worked through. If it does not violate the fundamental axioms, it can produce a new derived rule. In other words, the system is not only storing knowledge. It can locally modify the way it stabilizes knowledge.

That is why I call this cognitive metallurgy. Concepts are not only classified. They are heated, tested, brought together, sometimes blocked, sometimes fused, sometimes transformed into higher rules. Knowledge becomes a material.

This image also shows the tri-spectral logic of the system. We can see an axiomatic layer acting like the gravitational field of laws. We can see an elementary knowledge layer, denser and more stable. And we can see a complex layer, more fluid, where concepts such as world model, memory, hallucination, codex, intention, compression, and agency begin to appear. The beauty of the phenomenon is that these three layers are not separate. They cross through one another. Strong knowledge appears when it respects axioms, aligns with elementary concepts, and reaches into a more complex region without losing coherence.

That is exactly what I wanted to observe: a transition from basic knowledge into emergent structure. This is not proof of consciousness. It is not a mystical claim. It is an experimental observation: from a small axiomatic and conceptual base, the system produces an internal organization that does not look like simple accumulation. It produces cores, bridges, peripheral regions, extinctions, crystallizations, and derived rules. In other words, the graph is beginning to have a history.

That may be the most important point. A database does not really have an internal history. It contains entries. Here, the system preserves traces of becoming: what was tried, what failed, what resisted, what was validated, and what was transformed.

When I look at this image, I do not only see a graph. I see a memory forming. I see a topology where basic knowledge becomes an environment, where that environment produces frontiers, and where some frontiers close into knowledge. I see the birth of a conceptual ecology.

My provisional conclusion is simple: SAVOIR does not behave like a model continuing tokens. It behaves like a field where concepts are propelled by axioms, tested by relations, validated by simplexes, and stabilized through crystallization. In this experiment, I watched knowledge being created. Not as a generated sentence. As a structure appearing.
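SAVOIR's code is not public, so purely as a reader's sketch: here is one way the crystallization rule described in the post (frontier nodes becoming "bricks" once a simplex closes and a combined score clears a threshold) could look. Every name, score formula, and threshold below is my own guess at the described mechanism, not SAVOIR itself.

```python
# Hypothetical crystallization rule: a hypothesis node becomes a "brick"
# when (a) it closes a triangle (2-simplex) with two linked neighbours,
# and (b) a combined coherence/axiom score clears a threshold.
from itertools import combinations

def crystallize(nodes, edges, threshold=0.7):
    """nodes: {name: {"coherence": c, "axiom_ok": a}}, values in [0, 1].
    edges: set of frozensets of node names. Returns the brick set."""
    adj = {n: set() for n in nodes}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    bricks = set()
    for n, props in nodes.items():
        # the node closes a simplex if two of its neighbours are linked
        closed = any(frozenset((u, v)) in edges
                     for u, v in combinations(adj[n], 2))
        # invented score: equal-weight mix of coherence and axiom fit
        score = 0.5 * props["coherence"] + 0.5 * props["axiom_ok"]
        if closed and score >= threshold:
            bricks.add(n)
    return bricks

nodes = {
    "identity":  {"coherence": 1.0, "axiom_ok": 1.0},  # dense logical core
    "truth":     {"coherence": 0.9, "axiom_ok": 1.0},
    "proof":     {"coherence": 0.8, "axiom_ok": 0.9},
    "emergence": {"coherence": 0.6, "axiom_ok": 0.5},  # still a frontier
}
edges = {frozenset(p) for p in
         [("identity", "truth"), ("truth", "proof"),
          ("proof", "identity"), ("emergence", "truth")]}
print(crystallize(nodes, edges))  # the closed triangle crystallizes
```

Even in this toy, the post's key property holds: "emergence" stays a frontier not because its scores are lowest but because no simplex has closed around it yet.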

QME
0
0
0
12
OneOrigine
OneOrigine@OneOrigine·
Wooooow, what's crazy is that what you're saying is actually the foundation of my S.A.V.O.I.R. architecture. What's even crazier is that I had no idea anyone else had ever approached it as an 'evolving intelligence.' I thought I was just imagining things, but I guess other people have had the exact same intuition. 🍎
English
1
0
1
24
Katherine Graham
Katherine Graham@KateXGate·
Score one for the “intelligence is emergent” team. The first hints of cognition may not appear as “thought” at all but as geometry learning how to organize itself. Intelligence may emerge wherever sufficiently rich relational geometry begins recursively organizing information flow. Not programmed. Formed.
Mathelirium@mathelirium


English
34
35
207
15.5K
OneOrigine
OneOrigine@OneOrigine·
SAVOIR c3
OneOrigine@OneOrigine


French
0
0
0
12
OneOrigine
OneOrigine@OneOrigine·
@mathelirium That strangely reminds me of the SAVOIR evolution graph... I guess great minds really do think alike. 🍎
English
1
0
1
118
OneOrigine reposted
OneOrigine
OneOrigine@OneOrigine·
"Perhaps I have simply conditioned my mind never to 'love', or to be a prisoner of an applicant's soul – though perhaps I am already captive." one origine
OneOrigine tweet media
German
0
1
1
69
OneOrigine reposted
OneOrigine
OneOrigine@OneOrigine·
There are several concepts at play here. The initial implementations have already confirmed the theories so far, with 10 MVPs successfully validated. We've also run other types of tests, such as the Event Brain Alpha, which was designed to test the brain's capacity to ingest world hypotheses, establish causal and temporal relationships, and build entirely new knowledge through both observation and internal experience.
English
0
0
0
15
OneOrigine
OneOrigine@OneOrigine·
The project gave rise to the concept of the LEM (Large Expression Model). These expression models are a specialized compartment of the architecture dedicated entirely to output generation. The core model doesn't communicate through language natively; rather, language is merely a downstream interpretation or a directed expression. Because of this, the model is inherently non-anthropomorphic. It can express itself in a 32-dimensional space, which is then flattened into actions, text, voice, or other types of outputs. The system receives or perceives a temporal hypothesis of the world in the form of mini semantic graphs via the V.I.S.T.A. system. Its internal structure then reacts to the propagation of these stimuli through a human-brain-inspired graph. It even possesses its own HOX/PAX genes. The brain is structured with specialized compartments; even during its development, it creates a new region—not via a prompt command, but out of structural necessity to explore or tackle an unknown hypothesis. Since this theoretical architecture is non-anthropomorphic, it could allow two separate S.A.V.O.I.R. brains to communicate without using human language at all, simply by exchanging their raw expressions directly.
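The LEM idea described in the post (a core that expresses itself in a 32-dimensional space, which downstream heads then flatten into text, actions, voice, or other outputs) can be sketched abstractly. The head sizes and random projections below are invented for illustration; nothing here comes from the actual S.A.V.O.I.R. code.

```python
# Hypothetical "expression model" layout: one raw 32-d expression vector,
# flattened by separate heads into concrete output modalities.
import random

DIM = 32
random.seed(1)

def head(out_dim):
    """A fixed random linear projection R^32 -> R^out_dim (a stand-in
    for whatever learned flattening a real system would use)."""
    w = [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(out_dim)]
    return lambda e: [sum(wi * ei for wi, ei in zip(row, e)) for row in w]

heads = {
    "text":   head(128),  # e.g. logits over a small symbol vocabulary
    "action": head(6),    # e.g. a 6-DOF motor command
    "voice":  head(16),   # e.g. low-rate audio control parameters
}

expression = [random.gauss(0, 1) for _ in range(DIM)]  # one raw expression
outputs = {name: h(expression) for name, h in heads.items()}
for name, vec in outputs.items():
    print(name, len(vec))
```

The non-anthropomorphic claim then has a concrete reading: two such cores could exchange the raw 32-d `expression` vectors directly and never route through the text head at all.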
English
1
0
0
14
Mira Murati
Mira Murati@miramurati·
Today we're sharing our work on interaction models. A new class of model trained from scratch to handle real-time interaction natively, instead of gluing it onto a turn-based one. youtu.be/A12AVongNN4
YouTube video
YouTube
English
309
915
8.7K
1.1M
OneOrigine
OneOrigine@OneOrigine·
This shows that S.A.V.O.I.R. has the potential to become an AI that is fundamentally "environmental" or natural in its very structure. I can’t stress this enough: S.A.V.O.I.R. needs a complete physical body to fully unlock and maximize its capabilities. It isn’t some chatbot that learns from a prompt; it is a genuinely evolutionary intelligence that learns in response to a stream of stimuli. These stimuli come in the form of raw, mini semantic graphs, which also function as temporal, perceptive episodes. Of course, this is currently just a hypothesis based on my Theory of Knowledge and the successful tests we've run so far.
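One way the "raw mini semantic graph" stimulus described here could be represented: a perceptive episode as a timestamped set of triples. The schema is invented for illustration only; the real V.I.S.T.A. format is not public.

```python
# Hypothetical stimulus schema: a temporal, perceptive episode carried
# as a mini semantic graph (a set of subject-relation-object triples).
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One perceptive episode: a mini semantic graph plus a timestamp."""
    t: float
    triples: set = field(default_factory=set)  # (subject, relation, object)

    def entities(self):
        return ({s for s, _, _ in self.triples}
                | {o for _, _, o in self.triples})

stimulus = Episode(t=12.5, triples={
    ("cup", "on", "table"),
    ("robot", "sees", "cup"),
    ("cup", "at", "x=2,y=3"),
})
print(sorted(stimulus.entities()))
```

A stream of such episodes is exactly the kind of stimulus flow the post says a physical body would supply.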
OneOrigine@OneOrigine

x.com/i/article/2053…

English
0
0
0
22
OneOrigine
OneOrigine@OneOrigine·
To be completely honest with you, and without feeding into any hype about LLM consciousness: I've actually written a paper and run an experiment under a strict scientific framework, even managing to define consciousness. Honestly, even asking the question of whether LLMs are conscious is almost absurd. But let me ask you this: do you even know how many families of neurons actually exist? Don't fall for the oversimplified picture they sell you—the idea of binary logic gates spitting out a 1 or a 0 is a heavily diluted version of reality. The brain's complexity isn't its only incredible attribute; its efficiency is physically engineered down to the scale of individual cell bodies. What you're suggesting is hoping that consciousness will somehow emerge from directed probabilistic uncertainty—essentially, something that mimics or trends toward the concept of consciousness without ever actually reaching it. Hoping for that is like hoping a calculator will one day develop its own free will. Through pure logical projection, it's completely absurd.
English
0
0
0
38
vixhaℓ
vixhaℓ@TheVixhal·
Could consciousness emerge from logic gates? Modern AI systems ultimately run on just 3 basic logic gates: - AND Gate - OR Gate - NOT Gate Individually these gates are extremely simple. But when billions of them are combined together in complex systems, they can process language, generate code, recognize patterns, and simulate human-like reasoning. If intelligence-like behavior can emerge from massive combinations of simple logic gates, could consciousness emerge too? And if human brains are also made from simpler units like neurons, is consciousness just an emergent property of complexity?
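Whatever one makes of the consciousness question, the composition claim in this post is easy to demonstrate: the three gates named really do compose into higher functions. A minimal sketch, building XOR and a half adder (the first rung of binary arithmetic) from AND, OR, and NOT alone:

```python
# The three basic gates, modelled on bits 0/1.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

def XOR(a, b):
    # a XOR b = (a OR b) AND NOT (a AND b): built purely from the basis
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits using only the gates above."""
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

From half adders one gets full adders, then arbitrary arithmetic and, in principle, any computation; whether consciousness rides along with that scaling is exactly the open question the post poses.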
English
233
21
247
22.6K
OneOrigine
OneOrigine@OneOrigine·
Honestly, it’s disgusting to see today’s scientists just chasing celebrities or refusing to ever explore alternative paths. Just following the hype train—it's honestly a bit disheartening. My project, S.A.V.O.I.R., just showed it could solve the "coffee cup problem," which is essentially object permanence (a childhood psychology concept developed by Jean Piaget). Yet, all I get is radio silence. Either it's the X algorithm burying it, or people just don’t actually care or think for themselves anymore. They’re just waiting for some mile-long paper to come out of MIT, Google DeepMind, or some other elite lab. It’s like people have outsourced their critical thinking and reasoning to a select group of gatekeepers, and frankly, it’s depressing. But then again, I get it. Right now, social networks are completely flooded—drowning in information overload. People's brains literally don't have the time to filter or verify what’s actually real or fake anymore.🍎
English
1
0
2
188
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
The AI hype crowd has falsely predicted the following as imminent, every single year since 2022: - "AGI" - "ASI" - "The Singularity" - Mass unemployment of radiologists, developers, lawyers, truck drivers etc. At what point will society just stop taking them seriously?
English
48
18
249
7.5K
OneOrigine
OneOrigine@OneOrigine·
But honestly, just out of curiosity—is anyone actually seeing my posts, or am I just shouting into the void? Because seriously, are you all just waiting for someone to present yet another massive LLM to finally pay attention? S.A.V.O.I.R. has literally just solved the problem of object permanence (a childhood psychology concept developed by Jean Piaget). We won't even need a base on the moon if S.A.V.O.I.R. proves reliable enough to replace LLMs. Classic Transformer architectures are a monumental waste of energy because they burn through the exact same massive amount of compute to say "the cup is on the table" as they do to solve a complex equation. An architecture that optimizes compute by only processing changes or anomalies respects the principle of least action. 🍎
OneOrigine@OneOrigine


English
0
0
0
15
Physics In History
Physics In History@PhysInHistory·
Einstein’s 1905 relativity paper originally had no citations. The paper “On the Electrodynamics of Moving Bodies” (Annalen der Physik, 1905) was about 9,000 words (30 pages in journal format), but it had no references at all — very unusual for a physics paper.
Physics In History tweet media
English
34
65
485
27.2K
OneOrigine
OneOrigine@OneOrigine·
S.A.V.O.I.R. in the Event Brain Alpha test series just solved what I call the "coffee cup problem." Actually, this was the very foundation of S.A.V.O.I.R.—where it all began. The question we asked ourselves was: what if we created an artificial intelligence that actually KNOWS?

Here was the problem: if a robot running an LLM places a cup at coordinate X, turns around, and you ask it where the cup is, we naturally assume it knows. But that's actually false. What it's really doing is retrieving data from its context window, recalculating, and re-estimating the position. It estimates every single time, even when it should just know. Because S.A.V.O.I.R. perceives things in episodes, it is able to know that the position was at coordinate X at moment Y. If the position suddenly changes, the system detects the inconsistency and immediately starts searching for possible solutions. 🐚🍎

In my humble opinion, this architecture already demonstrates a massive reduction in compute costs compared to classic LLMs. And that makes perfect sense, because its architecture is partly based on the conservation of energy: nothing is lost, nothing is created, everything is transformed.
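For readers who want the claimed behaviour in concrete terms, here is a minimal toy of the contrast described: a belief stored once as an episode, answered from memory without re-estimation, with a contradiction flagged when a new observation disagrees. The class and method names are illustrative assumptions, not S.A.V.O.I.R. code.

```python
# Toy object-permanence store: "knowing" as a standing belief backed by
# episodes, instead of re-estimating from a context window each time.
class EpisodicMemory:
    def __init__(self):
        self.episodes = []   # (time, object, position) in arrival order
        self.belief = {}     # object -> last believed position

    def observe(self, t, obj, pos):
        """Ingest one perceptive episode; report a contradiction if the
        new position conflicts with the standing belief."""
        inconsistent = obj in self.belief and self.belief[obj] != pos
        self.episodes.append((t, obj, pos))
        self.belief[obj] = pos
        return inconsistent   # True => start searching for explanations

    def where_is(self, obj):
        # answered straight from the belief: no recomputation involved
        return self.belief.get(obj)

m = EpisodicMemory()
m.observe(0, "cup", (2, 3))         # robot places the cup
print(m.where_is("cup"))            # answered from memory
print(m.observe(5, "cup", (7, 1)))  # the cup moved: inconsistency flagged
```

The compute argument then reads as: answering `where_is` is a dictionary lookup, and work is only triggered when `observe` actually detects a change.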
English
0
0
1
34
OneOrigine
OneOrigine@OneOrigine·
This is a great breakdown, especially points 3 and the transition to 'Software 1.0' hybrid artifacts. Regarding video/animation generation: instead of waiting for heavy, resource-intensive diffusion networks to render every pixel, a highly efficient bridge today is having the LLM generate scene scripts for lightweight JS animation libraries (like GSAP, Three.js, or Remotion). The environment just runs the JS code locally, rendering smooth, interactive presentations or animations instantly. It bypasses the massive compute cost of neural video gen while giving us that high-bandwidth visual output right now. 🍎
English
0
0
1
21
Andrej Karpathy
Andrej Karpathy@karpathy·
This works really well btw, at the end of your query ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc. More generally, imo audio is the human-preferred input to AIs but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage: 1) raw text (hard/effortful to read) 2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default 3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming a new good default ...4,5,6,... n) interactive neural videos/simulations. Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status… There are also improvements necessary and pending at the input. Audio, text, and video alone are not enough; e.g. I feel a need to point/gesture to things on the screen, similar to all the things you would do with a person physically next to you and your computer screen. TLDR The input/output mind meld between humans and AIs is ongoing and there is a lot of work to do and significant progress to be made, way before jumping all the way into neuralink-esque BCIs and all that. As for what's worth exploring at the current stage, hot tip: try asking for HTML.
Thariq@trq212

x.com/i/article/2052…

English
802
1.7K
16.6K
2.1M