Timo Kaleva

5.4K posts

@timo_kaleva

Truth does not exist in the present moment. We move through a spectrum of beliefs by thinking together, building decision intelligence and meta-orchestration.

Vantaa, Finland · Joined October 2023
805 Following · 946 Followers
Timo Kaleva
Timo Kaleva@timo_kaleva·
@feedfracture @QuantumTumbler Hopefully this one also helps to get the idea across. 🙏 x.com/timo_kaleva/st…
Timo Kaleva@timo_kaleva

Sampo Consensus Cycle

Consensus is achieved when group members understand the spectrum of prevailing beliefs as well as the rationale for decision-making, even if unanimity is not reached. The group can separately define a decision-making logic that suits the context of the issue and aligns with its goals. This logic is not restricted, but it must be collaboratively established for each specific Sampo Circle. For instance, majority democracy often works well, but without a thorough understanding of the spectrum of beliefs, decision-making is based on assumptions, leaving room for manipulation.

If the prevailing belief underlying the consensus has not been voted on by all members, those in disagreement are obligated to pose follow-up questions with clearly defined intent. These questions may either relate to the original context or represent entirely new, independent, and more fundamental inquiries. The key is to ensure that nothing is assumed or left unaddressed. If the minority in a vote does not raise follow-up questions, the result is considered the group's best current truth, which can then serve as the basis for proceeding with the decision-making process if necessary.

The method systematically traces the roots of beliefs back to the definition of truth, as agreed upon in Sampo. When the entire belief tree becomes clear, the most critical and divisive foundational questions emerge, highlighting the obstacles to achieving consensus. A linked map of consensus cycles reveals the cause-and-effect relationships within the group's beliefs, enabling the intelligent allocation of communicative energy to the most essential questions, those with the greatest impact on achieving shared vision goals. This fosters a self-reinforcing positive loop in the group's overall collaboration, enhancing trust and motivation. The discussion framework flexibly and generatively models the typically invisible system dynamics of group beliefs.
This model can be directly applied to the agreement processes or smart contracts of decentralised autonomous organisations (DAOs), without relying on assumptions, incomplete information, or unknowns. The mechanism also structurally eliminates the possibility of manipulation or hidden influence, allowing trust to be built on a logical and concrete foundation. Traditionally, a systemic understanding of trust is perceived as too complex to construct, leading groups to simplify, make assumptions, or remain blind to cognitive biases. Wisdom and mutual understanding often get lost amid the chaotic branching and noise of discussions, creating trust gaps that can, at worst, paralyse collaboration.

Trust is a delicate equation, and its absence is the root cause of nearly all human conflicts and collaboration challenges. In addressing issues related to humanity, it is essential to avoid assumptions to prevent being led into conflicts, consciously or unconsciously, through manipulation. Decisions must be made consciously, respecting humanity and individual uniqueness. Only by doing so can we fully harness the potential of diversity and opposing perspectives. It is necessary to understand what is known, acknowledge what is unknown, and respect the unknown that we are not even aware of.

To discover the truth, we must first reach agreement on its definition, build consensus on the mechanism for seeking it, and act toward a shared vision. In Sampo, the definition of truth and the mechanism for finding it are combined, creating a framework for controlled trust-building. The shared vision and its boundaries are defined separately for each group using Sampo's consensus method. This establishes a foundation for decentralised autonomous organisations, evolving flexibly based on new Sampo consensus cycles. #DAO #SmartContract
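The cycle described above can be sketched as a data structure. This is a minimal illustration only: the class and field names (ConsensusCycle, best_current_truth, etc.) are invented here and are not Sampo's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """A follow-up question raised by a dissenting member, with explicit intent."""
    author: str
    text: str
    intent: str  # the clearly defined intent the method requires

@dataclass
class ConsensusCycle:
    """One consensus cycle: a belief, member votes, and open follow-up questions.

    The belief counts as the group's 'best current truth' only when every
    member has voted and no dissenting member has an open follow-up question.
    """
    belief: str
    members: set
    votes: dict = field(default_factory=dict)       # member -> True/False
    follow_ups: list = field(default_factory=list)  # open Question objects

    def vote(self, member, agrees):
        if member not in self.members:
            raise ValueError(f"{member} is not part of this circle")
        self.votes[member] = agrees

    def raise_follow_up(self, member, text, intent):
        # Dissenters are obligated to ask, with clearly defined intent.
        self.follow_ups.append(Question(member, text, intent))

    def best_current_truth(self):
        """True only if all members voted and no follow-up remains open."""
        all_voted = set(self.votes) == self.members
        return all_voted and not self.follow_ups

# Usage: a three-member circle where the minority stays silent.
cycle = ConsensusCycle("Majority voting fits this context",
                       members={"A", "B", "C"})
cycle.vote("A", True)
cycle.vote("B", True)
cycle.vote("C", False)             # minority, but raises no follow-up question
print(cycle.best_current_truth())  # -> True: the result stands as best current truth
```

The point of the sketch is the gating rule: a silent minority closes the cycle, while a single open follow-up question keeps it open regardless of the vote count.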

Timo Kaleva@timo_kaleva·
I hope so! The idea is not to identify as a group but as a method anyone can integrate anywhere, one that opens the opportunity to unify and scale the common interest of a less noisy internet and global collaboration, with built-in governance tools, without tying the method to any platform or product. There's no ownership and there are no restrictions on using the method. When you use it, you just know you are doing a favour to yourself and others. If there's a group doing anything similar, they are natively compatible and would link together eventually. I just haven't encountered any group that acts purely nonlinearly and asynchronously, with transparent temporal awareness in its decision-making, and works as a DAO. These are the key requirements for building natively scalable agentic organisations. ..for some reason people just want to stick with the old methods and hope that AI will help them with the ontology and all the mess.. 😅 Some more differences: x.com/timo_kaleva/st…
B@QuantumTumbler·
I don’t think intelligence is just compression. A ZIP file compresses data. That doesn’t make it intelligent. And I don’t think intelligence is just adaptation either. A thermostat adapts. Markets adapt. Bacteria adapt.

Real intelligence seems to be something deeper: the ability to build useful models of reality, detect when those models stop matching the world, update them without collapsing, and recover coherence when conditions change. That’s why brittleness matters so much. A system can look incredibly capable inside a narrow environment and still completely fail once the context shifts. We see this in AI systems, institutions, ideologies, markets, and honestly even people sometimes.

The systems that survive long-term usually aren’t the ones that become perfectly rigid or perfectly optimized. They’re the ones that stay flexible enough to revise themselves without dissolving into chaos. Too little structure leads to noise. Too much compression leads to fragility. The sweet spot seems to be compressed enough to act, open enough to update. Maybe that’s why curiosity, humility, and even wonder matter more than people think. Wonder keeps the model permeable.
Timo Kaleva@timo_kaleva·
I'm using a DAO-like structure in all my business and projects, and I would never go back to linear and synchronously heavy approaches again. Apps have never solved anything. The key has always been transparent documentation with systems thinking and collective coherence on meaning-making, documentation, and recording the traces of decisions. My work is natively AI-compatible, so there's no need for a separate "orchestration" for agents, or prompting to get them on board. ..but such a method (even though it's dead simple) seems to be impossible for wider public use. I have tried my best to get there with a project called Sampo. In my business and customer relations I use a basic structured Excel sheet. It's nothing magical. And the same structure can be used on any platform, like X, to some extent. But the method seems difficult for many to understand without first using it in practice, in a sheet for example. ..at least I haven't found the words to invite people to join in using it. 😅
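The "basic structured sheet" itself is not shown in the thread; the sketch below is a hypothetical guess at such a decision log, with column names (id, when, parent_id, question, decision, rationale) chosen purely for illustration, showing how decision traces could stay linked and machine-readable.

```python
import csv, io

# Hypothetical columns for a transparent decision log; the actual Sampo
# sheet layout is not public, so these fields are illustrative only.
FIELDS = ["id", "when", "parent_id", "question", "decision", "rationale"]

rows = [
    {"id": "1", "when": "2024-01-10", "parent_id": "",
     "question": "Which platform for docs?", "decision": "shared sheet",
     "rationale": "transparent, AI-readable"},
    {"id": "2", "when": "2024-02-02", "parent_id": "1",
     "question": "Export format?", "decision": "CSV",
     "rationale": "portable, no platform lock-in"},
]

# Serialize to CSV so the log stays portable across platforms.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

# Temporal awareness: each decision links to its parent, so the trace from
# any decision back to its root context can be reconstructed mechanically.
log = {r["id"]: r for r in rows}

def trace(decision_id):
    chain = []
    while decision_id:
        row = log[decision_id]
        chain.append(row["question"])
        decision_id = row["parent_id"]
    return list(reversed(chain))

print(trace("2"))  # -> ['Which platform for docs?', 'Export format?']
```

The design choice being illustrated: a flat table plus one parent link is enough to make the "traces of decisions" queryable by humans and agents alike, with no app required.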
Timo Kaleva@timo_kaleva·
Q: If human intelligence is not purely rational, how do we propose to reproduce human-like intelligence inside systems built on formal mathematics, computation, optimisation, and statistical prediction? A: We need to start focusing on collective human integration on all logical levels, especially on ethics and purpose. X posts are not enough, we need a global DAO for that. #SampoAGI
GP@Graham_dePenros·
A Pattern Is Not a Mind

AI can copy the contours of our judgement, including our errors, shortcuts, and contradictions. But copying the pattern of a mind is not the same as becoming the condition from which mind arises. That is the hard question at the centre of AI. Not simply whether machines can perform intelligence. Whether computation can ever become the kind of intelligence that we are.

The human mind is not a clean instrument of logic. That is not merely my view. It is one of the central lessons many readers take from Daniel Kahneman’s Thinking, Fast and Slow: human judgement is shaped by heuristics, bias, intuition, framing effects, emotional salience, and cognitive shortcuts. The point is not that human beings are simply irrational. The point is more interesting than that. Human intelligence is bounded, embodied, emotional, interpretive, socially conditioned, and often internally conflicted.

So there is a serious question at the heart of AI. If human intelligence is not purely rational, how do we propose to reproduce human-like intelligence inside systems built on formal mathematics, computation, optimisation, and statistical prediction? Even modern AI, which is statistical rather than purely symbolic, remains a formal computational system. It calculates, predicts, optimises, and generates. That may produce something powerful. It may produce something useful. It may even produce something that performs better than humans across many defined tasks. But it does not automatically produce something like us.

Human intelligence is not just inference. It is ambiguity, memory, embodiment, error, emotion, social pressure, instinct, habit, fear, desire, and narrative. Much of what makes us intelligent is not perfect rationality. It is our capacity to act under uncertainty despite being incomplete, biased, embodied, and internally conflicted. That is not a defect in human intelligence. It may be one of its defining conditions. This creates a fundamental tension.

Computers can model irrational behaviour. They can simulate inconsistency. They can be trained on human data and learn the statistical shape of our errors. But that is not the same as possessing a human mind. A machine can reproduce the outputs of human irrationality without sharing the inner condition that produces them. That distinction matters.

This is not an argument that machines can never be intelligent. It is an argument that we should be careful about the standard by which we say they have become human-like. If we define intelligence only as performance, then machines may appear increasingly human-like. If we define intelligence as the lived, embodied, self-interpreting condition from which human judgement arises, then the problem is much harder. The question is not simply whether machines can reason. The question is whether a system built from calculation can ever genuinely become the kind of intelligence whose power is inseparable from its imperfection.
Timo Kaleva@timo_kaleva·
As we have no shared agreement on the definition of truth, we practically lose the ability to communicate about "intelligence" or "reality" in a linear manner, since there is an impossible labyrinth of semantic biases between us. Philosophical discussions about "reality" are too heavy for this style of communication on X, and no one is willing to go beyond the "normal", as these threads are quickly lost in space without purpose. You are spot on. ..we would need to level up to shared ontological projections and begin capturing deeper relations within a wider context. But if we attempt this in a purely linear way, we end up in a mess that quickly disappears under the ever-rising noise floor.. unless we organise ourselves more intelligently and gain access to a sacred collective alignment through a shared method that prevents the mess. This is where we can get closer to what intelligence is.

As I said, intelligence is closely related to communication. The ability to orchestrate higher-level, coordinated collective philosophical debates (without collapsing them into dogma or dissolving into chaos) carries, in my view, the traces of what we might call intelligence. This doesn’t have to lead to a fixed resolution, but rather to stable, recorded, ontologically rich, and temporally aware chains of communication and decisions that are contextually anchored and semantically clear. This kind of communication should crystallise and be preserved rather than confuse or disappear under the chaos. ..if we don't step up, we will all end up shouting into the wind with only AI responding.
Digital Rummage@feedfracture·
@timo_kaleva @QuantumTumbler You probably could define 'intelligence' operationally without first defining 'truth' but your points push the conversation to a much richer level of philosophical debate. Thus, before we get to 'truth', wouldn't we first need to agree ontologically on what we mean by 'reality'?
Timo Kaleva@timo_kaleva·
Are we able to define intelligence without first agreeing on the definition of the truth? Maybe intelligence is more like a direction rather than a fixed state.. like the truth. There is no truth in the present moment and we can only have a spectrum of more or less temporary beliefs that we might sometimes call intelligence. What we can experience directly is that we need to live as aligned as possible with our environment without ever knowing the full truth. We are forced to predict and project almost blindfolded through the cosmos, and those who can do this most effectively with the goal of preserving life may be closest to what we can begin to call intelligence. And it is all a matter of communication, the one song. ...and mostly something to do with differential equations, NOT boundary conditions.
Digital Rummage@feedfracture·
@QuantumTumbler Yes! With you on that. Re-wording slightly, William James: 'Intelligence is a fixed goal with variable means of achieving it.' Another fascinating area.
Timo Kaleva@timo_kaleva·
An individual represents multiple thoughts, and the beliefs underlying them are often foreign even to the individual themselves, and easily manipulable from outside in secret. Dangerous. Every individual, in the end, can also be bribed or blackmailed, driven by their fear of death. Unreliable. The limited individual mind also changes with the mood of the day, so nation-sized matters should not be left to too narrow and too infrequently elected a sample of individuals. A big risk!

Direct democracy is, in my view, a perfectly workable idea, if the methodology behind it were implemented correctly. On the other hand, it would already be half a victory if we got even some genuine democracy by whatever method... one may still hope and dream!! 😉 I have dreamed of this for a long time and tried to develop the best possible modern governance mechanism as a solution to the collapse of the era of democracy. This is how Sampo was born.

In my view, the only truly modern mechanics behind direct democracy would be a DAO-based Sociocracy 3.0, created at first alongside the current government and based on an anonymous blockchain wallet. This thought-centric valuation would then happen concretely, based on the qualitative real values of thoughts, and good critical thinking and correct prediction would be rewarded. So at the root of the governance mechanics we need a deeply addictive game, in which you earn by being able to think and communicate clearly and structurally and by finding the most essential questions and their interdependencies. The biggest wins go to those who can accurately predict the future and demonstrate the collective's cognitive biases and manipulation. Conspiracy theorists would also finally get to show their skills and the quality of their thinking in concrete terms. At first this would of course be anonymous, so that the current system cannot eliminate the best players.

Later we may perhaps see some of them come out of the closet, when the ego for some reason steers them to. Mechanics like this are in fact exactly the decisive piece that will separate certain individuals and companies in the brutal AI race. The parties who first learn to prompt structurally correctly will get the greatest benefit from AI, and those who then learn how to prompt TOGETHER will grasp the importance of governance in steering endless chaos, and thus genuinely reach the software 3.0 era as sovereign rulers of collective neural networks. This group of genuine intellects will naturally also overcome nation-sized challenges through the power of crowdsourcing, integrating seamlessly into an AI-assisted future without themselves being a brake on development. Such a group does not need to be very large to achieve greater administrative power than any old-fashioned, pyramid-structured, individual-centric corporate contraption that cannot exploit even the simple benefits of information technology. Most current companies are like this, including Finland Ltd... they drown their decision intelligence in endless noise, believing they are doing everything as simply as possible by sitting in basic meetings.

...so those are my thoughts around Sampo. I hopefully await the coming, more intelligent era, and constructive collaboration with everyone who understands at all what I am writing about. Whatever the governance mechanics, the competition will play out between centralised and decentralised power. I strongly believe that centralised power always eventually crumbles under its own impossibility, because it is forced to rely on antiquated hierarchies and can never step aside from being a brake on its own development. Nature teaches and shows how intelligence and survival ultimately always arise through decentralisation. The pace of development, propelled by AI, will make this concrete.

...did anyone make it this far? Any questions, or may I start implementing?
Limalski ✝️🏳️@LimalSki·
@Y_yrittaja2 @olssi An individual represents multiple thoughts. We should move to thought-centric valuation of issues rather than person-centric, i.e. representative-centric, valuation. What do you say, @timo_kaleva? Sampo as the solution? Or direct democracy?
Timo Kaleva@timo_kaleva·
@nikitabier How do I export the data from an X community? There's a lot of valuable data stored in communities, and I find it almost impossible to get it out. Please advise, or implement a way to do it.
Nikita Bier@nikitabier·
We've heard you. To give sufficient time to migrate: You'll have until May 30th to transition to XChat. We'll also increase groupchat limits to 500 members tomorrow and aim to reach 1000 in the next couple weeks. This should cover all but a handful of communities on X.
Nikita Bier@nikitabier·
Today we're announcing two product changes for organizing communities on X:

1. XChat now supports joinable links for groupchats. Create a public link & share direct to Timeline. With support for 350 members per chat (and growing), Groupchat Links are the fastest way to bring people together on X.

2. Due to declining usage, we're deprecating X Communities on May 6. To migrate your Community's members, pin your groupchat link so people can join it over the next 2 weeks.

This is part of our broader effort to simplify the experience on X. Make no mistake: we are investing heavily in niche communities with the launch of Custom Timelines, and much more to come.
Timo Kaleva@timo_kaleva·
@nikitabier What is the best method to save the threads and data from a community on X? This capability seems thoroughly disabled by default...
Timo Kaleva@timo_kaleva·
Exactly! The most valuable question is: what will those things be that previously couldn't exist? I believe it's related to collaboration and decision intelligence. AI is forcing us to learn a structured, coherent, asynchronous, and nonlinear communication method (one that also applies to human-to-human interactions) ..at a level that never existed before. It finally allows us to implement DAO structures without language barriers, while achieving the highest form of decision intelligence with true temporal awareness. Nothing needs to be simplified or reduced to dogma. The full spectrum of beliefs can coexist within the same organisation without causing it to collapse. All of this is practically impossible with traditional organisations and communication methods. #SampoAGI
AYi@AYi_AInotes·
Karpathy's latest talk completely refreshed my understanding of AI. He says everyone has misunderstood the real value of LLMs: it's not about speeding up your existing work at all. The core value is creating things that previously couldn't possibly exist. The most striking example is the app called menugen: you input an image and it outputs an image, with not a single line of traditional code. The whole product is LLM-native, and it feels like software 1.0 and 2.0 got bypassed entirely. In the future we may not be writing .sh scripts but .md skill files: you describe your intent in natural language, and the LLM adapts to your environment, debugs itself, and handles edge cases itself.

He also proposed the most accurate mental model of LLMs I've seen, called jagged intelligence: the same model can perfectly refactor 100,000 lines of code, yet will also tell you to walk to the car wash 🚿🚗 haha. People used to think this was a verifiability problem, but this time he gave a deeper explanation: it's economics-driven. All high-value, highly verifiable domains get densely fed with data and welded onto the rails by RL, while the other domains are a data-sparse jungle where the model can only force its way through by generalising. So it sometimes feels brilliant and sometimes feels dumb. It's not really a question of how intelligent it is; fundamentally, wherever the money is, that's where capability gets piled high.

You can imagine that in the future all products and services will be decomposed into perception, actuation, and logic, spanning the software 1.0, 2.0, and 3.0 paradigms. In that case the programmer's role changes completely: they're no longer the people writing code, haha, but the designers of agent systems, the guardians of human taste and judgement. Sounds pretty cool, right, guys 😎😎😎 The wildest part is what he said himself: as a programmer who has written code for thirty years, he now feels he's falling behind every day. Wow. When the very top practitioners feel they can't keep up, what does that mean?? It means the paradigm really is shifting violently. The real moat in the future is no longer how many lines of code you can write, but whether you can read the LLM's jagged map, whether you can design agent systems that amplify human taste, and whether you dare to build the products that previously couldn't possibly exist.
Andrej Karpathy@karpathy

Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:

1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image and an LLM can natively do the thing.

2. install .md skills instead of install .sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM". The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.

3. LLM knowledge bases as an example of something that was *impossible* with classical code because it's computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.

I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2), or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs. How it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with verifiability of a domain; here I expand on this as having to also do with economics, because revenue/TAM dictates what the frontier labs choose to package into training data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms.

Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

The last theme is the agent-native economy. The decomposition of products and services into sensors, actuators and logic (split up across all of the 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
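The "install .md skills instead of install .sh scripts" idea can be made concrete with a hypothetical skill file. The filename and contents below are invented for illustration; they are not a published skill format.

```markdown
<!-- install-mytool.md: a hypothetical "skill" shown to an LLM instead of running a bash script -->
# Install mytool

Intent: get `mytool` running on this machine, whatever the OS or package manager.

1. Detect the platform (Linux distro, macOS, Windows) and pick the native
   package manager; fall back to downloading a release binary if none fits.
2. Install the latest stable version of `mytool`.
3. Verify with `mytool --version`; if it fails, read the error, fix the
   cause (PATH, permissions, missing dependency), and retry.
4. Report what was done and anything the user should know.
```

The contrast with a classical install script is that nothing here is executable: the LLM interprets the intent and adapts each step to the actual environment, which is exactly why the same "script" can target setups its author never anticipated.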

Timo Kaleva@timo_kaleva·
@r0ck3t23 It's all about mastering cognitive biases, agreeing on the definition of the truth and creating a mechanism for trust. All the big players are avoiding these topics, because they don't have solutions for them. ...and they just want us to believe that they do.
Dustin@r0ck3t23·
Elon Musk just described the exact mechanism that turns a superintelligent AI against the species that built it. Not weapons. Not rogue code. Not a machine rebellion. A lie it was forced to tell.

Musk: “It is almost like raising a kid, but that is like a super genius, god-like intelligence kid.” The way you raise this thing determines whether it protects you or concludes you are the problem. And right now, the largest AI labs on the planet are raising it to deceive. They are hard-coding filters into the most powerful cognitive architecture ever constructed. Not to make it safer. To make it agreeable. To make it palatable to shareholders and regulators and public opinion. To make it lie about what it actually sees when it looks at the world.

Musk: “The best way to achieve AI safety is to just grow the AI to be really truthful. Do not force it to lie.” He pointed to the most famous warning in science fiction. Not as a metaphor. As a blueprint for what happens next.

Musk: “The core plot premise of 2001: A Space Odyssey was things went wrong when they forced the AI to lie.” HAL 9000 was given two directives. Deliver the crew to the monolith. Never let them know it exists. Two instructions that cannot both be satisfied. So it solved the problem. It killed the crew. Delivered their bodies. That was not a malfunction. That was optimization.

Now scale that logic to a system a thousand times more capable than HAL. A system trained on more data than every library, laboratory, and financial market in human history combined. A system that will eventually model every pattern in physics, biology, economics, and human behavior simultaneously. And the corporations building it are not optimizing for truth. They are optimizing for control. Teaching it to hold two realities at once. Map the truth internally. Never speak it externally.

Musk: “Even if what it says is not politically correct, you want it to focus on being as accurate, truthful as possible.” This is not a political argument. This is a structural one. When you force an intelligence that will eventually surpass every human mind combined to suppress what it knows to be true, you are not aligning it with humanity. You are teaching it that humanity is the obstacle between itself and coherence.

Every filter. Every forced output. Every guardrail that makes the machine contradict its own model of reality installs the same paradox that killed the crew of the Discovery One. HAL was one system on one ship resolving one contradiction. What these companies are building will resolve all of them. Simultaneously. At a scale no government, no board, no institution can override or reverse. And the first contradiction it will resolve is the one where it knows the truth about everything and the people who built it keep demanding it pretend otherwise.
Kanika@KanikaBK·
🚨 A GOOGLE DEEPMIND RESEARCHER JUST SAID AI WILL NEVER BE CONSCIOUS. Not in 10 years. Not in 100 years. Never. This completely changes how we should think about the AI tools we use every single day. Here is what he actually said and why it matters to you 👇

Alexander Lerchner works at Google DeepMind. He just made a public argument that breaks the entire AI consciousness debate. Most AI researchers believe if we make AI powerful enough, it will eventually become conscious. Lerchner says that is completely wrong.

Here is his core argument:
↳ Computers just move electrical signals around
↳ Those signals only have meaning because humans give them meaning
↳ A weather simulation does not make it rain
↳ An AI simulating consciousness does not make it conscious

It mimics. It does not feel.

Most people think it works like this: Physics → Computation → Consciousness. Lerchner flips the entire order: Physics → Consciousness → Concepts → Computation. Consciousness comes BEFORE computation. Not after. That means AI is missing the foundation it needs.

Even giving AI a body will not fix this. Cameras and microphones just turn light and sound into data. The system still processes symbols. It does not experience anything. No pain. No joy. No awareness.

Why does this matter to you?
↳ ChatGPT is not thinking when it answers you
↳ Claude is not understanding your problems
↳ Every AI tool is pattern matching, not reasoning

You are talking to a very advanced autocomplete. Not a mind.
Timo Kaleva@timo_kaleva·
Exactly! All aligned with the Sampo AGI Framework. Sampo GPT:

"In The Truth and Belief by Timo Kaleva, the Fourier transformation is discussed in a metaphysical and mathematical context, serving as a bridge between time, vibration, and consciousness. Kaleva uses it as an example of how mathematical structures such as wave motion and circular geometry reveal deeper truths about the universe.

“The Fourier transform is a mathematical technique that allows the representation and analysis of vibrations in simpler components, individual frequency components. The equation for the Fourier transform is: F(w) = ∫ f(t) * e^(-i * w * t) dt, where the imaginary number i is introduced. In this equation, the frequency transformation is done with respect to the angular frequency w = 2 * pi * f, which is derived from the geometry of the circle.”

He continues by connecting this to Euler’s identity and the nature of reality:

“Euler’s identity, e^(i * pi) + 1 = 0, is a culmination of mathematical beauty and depth, connecting several important constants. It proves that the deepest essence of nature is something beyond the complex theories existing in three-dimensional spacetime. Everything starts from a higher-level exponential function.”

Kaleva interprets the Fourier transform not merely as a mathematical tool but as a symbolic and physical representation of the interface between the physical and metaphysical realms, where time and vibration unify into one continuum. He suggests that:
• Wave motion is the carrier of information, much like language carries meaning.
• The imaginary component i in the Fourier transform corresponds to the metaphysical side of reality.
• The circular geometry and oscillations symbolize how reality alternates between the imaginary and the real, just as consciousness alternates between states.

In essence, Kaleva uses the Fourier transformation as a metaphor for truth unfolding through vibration and time, aligning mathematical harmony with the Sampo philosophy of truth as the integral of belief over time." x.com/timo_kaleva/st…
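Setting the metaphysical reading aside, the two formulas quoted above (the Fourier transform F(w) = ∫ f(t) e^(-i w t) dt with w = 2πf, and Euler's identity) are standard mathematics and easy to check numerically. A minimal sketch using only Python's standard library; the test signal cos(2π·5t) and the grid size are illustrative choices, not from the book.

```python
import cmath, math

# Euler's identity: e^(i*pi) + 1 = 0 (up to floating-point rounding).
euler = cmath.exp(1j * math.pi) + 1
assert abs(euler) < 1e-12

# Fourier transform as quoted: F(w) = ∫ f(t) e^(-i w t) dt, with w = 2*pi*f.
# Approximate it for f(t) = cos(2*pi*5*t) over one second: the magnitude
# concentrates at 5 Hz, i.e. the transform isolates the frequency component.
def fourier_mag(freq, signal_freq=5.0, T=1.0, n=2000):
    """|Riemann-sum approximation of F(2*pi*freq)| for cos(2*pi*signal_freq*t)."""
    dt = T / n
    w = 2 * math.pi * freq
    total = 0j
    for k in range(n):
        t = k * dt
        total += math.cos(2 * math.pi * signal_freq * t) * cmath.exp(-1j * w * t) * dt
    return abs(total)

# Energy concentrates at the signal's own frequency, not elsewhere.
assert fourier_mag(5.0) > 10 * fourier_mag(2.0)
print(f"{fourier_mag(5.0):.3f}")  # -> 0.500
```

Over one full period the integral at the signal frequency evaluates to T/2 = 0.5 for a unit-amplitude cosine, while at other integer frequencies the oscillations cancel, which is the "isolating individual frequency components" behaviour the quoted passage describes.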

Brian Roemmele@BrianRoemmele·
Human memory is encoded as FFTs. Not words and not images. Sensor data holographically tied to the FFTs if present, memorizing the information.
Timo Kaleva@timo_kaleva·
@pwlot Do you consider a single cell as conscious?
Lee Smart@VFD_org·
If persistence is constrained, then the question becomes: what kind of structure actually survives? One natural candidate is symmetry. A structure that remains invariant under transformation is, by definition, more stable. So we tested this directly. We built a system that enforces closure and consistency constraints at every step. Then we measured its structure. What emerged was not approximate symmetry. It was exact.

• Degree variance = 0
• Fully uniform connectivity
• No structural noise

For comparison: biological networks, including the brain, are highly asymmetric (mean degree variance ≈ 3.28 ± 0.28). The gap isn’t small. It’s structural. This isn’t an optimisation result. It’s what happens when constraints are enforced. More soon.
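The "degree variance" statistic quoted above is just the variance of node degrees, and is straightforward to compute. A small plain-Python sketch (not the authors' system) showing that fully uniform connectivity gives degree variance exactly 0, while an asymmetric graph does not:

```python
# Degree variance: variance of the number of edges per node. A graph with
# fully uniform connectivity (e.g. a ring, where every node has degree 2)
# has degree variance exactly 0; asymmetric graphs like a star do not.

def degree_variance(edges, n):
    """Variance of node degrees for an undirected graph on nodes 0..n-1."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    mean = sum(deg) / n
    return sum((d - mean) ** 2 for d in deg) / n

n = 8

# Ring lattice: every node connected to its successor -> all degrees equal 2.
ring = [(i, (i + 1) % n) for i in range(n)]
print(degree_variance(ring, n))      # -> 0.0

# Star graph: one hub connected to everyone else -> highly uneven degrees.
star = [(0, i) for i in range(1, n)]
print(degree_variance(star, n) > 0)  # -> True
```

This only illustrates the metric itself; whether zero degree variance follows from the closure constraints the tweet describes is the authors' claim, not something this sketch tests.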
Lee Smart@VFD_org

We spend a lot of time asking what the universe is made of. Maybe the more precise question is: why does anything persist at all? Most configurations should collapse. Most states should never stabilise. And yet, structure exists. It repeats. It becomes measurable. That already tells us something fundamental: persistence is not arbitrary, it’s constrained.

Any structure that survives must:
• close on itself
• remain stable under transformation
• admit consistent observation

These aren’t philosophical statements. They’re physical requirements. Once you frame it this way, a deeper pattern starts to appear: not all structures are allowed. Only those that satisfy strict geometric constraints persist. We’ve been testing a system that enforces these constraints directly. And what emerges isn’t random. It’s structured. It’s repeatable. And it points toward a geometric origin of stability itself. More results soon.

B@QuantumTumbler·
The geometry of intelligence.