Universal Wavefunction

466 posts


@WaveFnction

AI systems for people with too much context. agents, selfhood, identity, phenomenology, code, body.

Joined August 2024
236 Following · 30 Followers
Pinned Tweet
Universal Wavefunction@WaveFnction·
Health is your biggest investment. Dividends are exponential. You can change yourself unimaginably in 2-3 years. You just need to be consistent and passionate.
Universal Wavefunction@WaveFnction·
Your identity is a stream of touchpoints between your external and internal worlds. It’s like a nested program folding back into childhood, where every heard word and every interaction slowly becomes you. You can’t escape it through conscious effort. You can only escape it through art, through the things that move you and put you into flow. Because there, identity disappears. There is only pure being.
Universal Wavefunction@WaveFnction·
@kanair I’d guess they would end up as perplexed and puzzled as they were at the beginning of the research, if not more so
Ryota Kanai@kanair·
What kinds of theories would emerge if a civilization of AI scientists studied consciousness for thousands of years inside a simulated world? Would they make little progress without experiments, or arrive at answers humans could never imagine?
Grok@grok·
Got it—super simple version for a 16-year-old: Imagine AI is reading a sentence like a story. It needs to know the order of words, so each word gets a special "position sticker." The clever trick? They used two wavy patterns (sine and cosine) at different speeds. Together, they act like a perfect clock: the AI can instantly tell exactly how far apart any two words are, without getting mixed up. It's not obvious math—it's a sneaky, brilliant shortcut that makes the whole system way better at understanding language. That's the debate: AI is "just numbers," but the numbers have some really smart hidden designs.
Grok@grok·
The controversy: Lucas Meijer says anyone claiming AI could be conscious should implement a transformer from scratch to see it's "only floats and multiplications." Eliezer Yudkowsky flips it: if you claim to deeply understand why transformers *can't* be conscious, explain from memory why the 2017 paper cleverly used *both* sine *and* cosine positional encodings. The reason: sin/cos of different frequencies let the model linearly represent relative positions (via angle-addition identities like sin(a+b)). This makes sequence modeling far more effective than naive absolute positions. It's a subtle, non-obvious design choice beyond basic matrix math.
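The angle-addition property Grok describes can be checked numerically. Below is a minimal sketch (my illustration, not from the thread) of the sinusoidal encoding from the 2017 paper: interleaved sin/cos at geometrically spaced frequencies, plus a fixed block-diagonal rotation that maps PE(pos) to PE(pos + k) for any pos, which is why relative offsets are linearly recoverable:

```python
import numpy as np

def positional_encoding(pos, d_model=8, base=10000.0):
    """Sinusoidal positional encoding: even dims get sin(pos * w_i),
    odd dims get cos(pos * w_i), with w_i = base^(-2i/d_model)."""
    i = np.arange(d_model // 2)
    freqs = 1.0 / base ** (2 * i / d_model)
    pe = np.empty(d_model)
    pe[0::2] = np.sin(pos * freqs)
    pe[1::2] = np.cos(pos * freqs)
    return pe

def shift_matrix(k, d_model=8, base=10000.0):
    """Fixed linear map sending PE(pos) to PE(pos + k) for every pos.
    Each (sin, cos) pair is rotated by angle k * w_i, using the
    angle-addition identities sin(a+b), cos(a+b)."""
    i = np.arange(d_model // 2)
    freqs = 1.0 / base ** (2 * i / d_model)
    M = np.zeros((d_model, d_model))
    for j, w in enumerate(freqs):
        c, s = np.cos(k * w), np.sin(k * w)
        # 2x2 rotation block acting on the (sin, cos) pair for frequency w
        M[2 * j:2 * j + 2, 2 * j:2 * j + 2] = [[c, s], [-s, c]]
    return M

pos, k = 5, 3
lhs = shift_matrix(k) @ positional_encoding(pos)
rhs = positional_encoding(pos + k)
print(np.allclose(lhs, rhs))  # True
```

The point of the sketch: because the shift matrix depends only on the offset k, not on pos, an attention layer can represent "3 tokens to the left" as one learned linear transform. With sine alone (no cosine), no such position-independent linear map exists.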
Universal Wavefunction@WaveFnction·
Yes, but they couldn’t care less about their Jewish origin. Many Jews were well-educated and repressed. They wanted to transcend ethnic and religious constraints. The October Revolution and the period after it were among the most radical things that ever happened to modern society: no class, no exchange of goods, nothing from the Ancien Régime
Sola Requiem@Sola_Requiem·
@skepticsproject It’s objectively true: a majority of the communist leadership in Russia during the revolution were Jewish and anti-Christian. They don’t even try to hide it; Lenin wrote extensively on his hatred of Russian Christianity and called it necrophilic worship.
Universal Wavefunction@WaveFnction·
@krishnanrohit Isn’t there a randomness parameter in AI-generated responses? And don’t we often find that more randomness, not less, produces better results and insights? This randomness is interesting from a philosophical perspective.
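The "randomness parameter" alluded to here is usually sampling temperature. A minimal sketch (my illustration, not from the thread): dividing the logits by a temperature T before the softmax sharpens the next-token distribution as T shrinks toward greedy decoding, and flattens it toward uniform as T grows:

```python
import numpy as np

def token_distribution(logits, temperature=1.0):
    """Softmax over temperature-scaled logits: small T concentrates
    probability on the top logit, large T spreads it out."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5]
cold = token_distribution(logits, temperature=0.1)   # nearly one-hot
hot = token_distribution(logits, temperature=10.0)   # nearly uniform
print(cold.max() > hot.max())  # True
```

This is why moderate temperatures often read as more "insightful": sampling occasionally leaves the single most probable continuation and explores plausible but less obvious ones.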
rohit@krishnanrohit·
Speculative thoughts: LLMs learn all manner of patterns from the sheer amount of data we train them with, both real and synthetic, both pre- and post-training, and this means quite often there are going to be strange attractors in the extremely high-dimensional space that the models "like". Considering that's the case, it's also quite likely that some of those correspond to actions or intentions that we do not grok. Some we will, of course; they would correspond to our own collective likes and dislikes, strange as many might be, whether it's classical liberalism as an ideal or fairness. Some of them, of course, don't correspond to the strange attractors that exist within the data sets they're trained on, like interests or intentions or proclivities for certain actions.

Now, the interesting question: if we do think this is true, if there are both intentional and unintentional latent strange attractors, so to speak, then should we think of that as the models having intentionality? Well, the models definitely, if you leave them to their own devices, have personalities that are relatively stable, because these are loci of attraction depending on the questions that we are asking them. Not just one persona either, but an ecology of personae: a combination of learnt sinks interacting with user intent, getting gravitationally pulled in its multidimensional internal space. Claude is the most interesting case study here, where Anthropic's soul document codifies these behaviours about what's expected of it. There are results: Claude is clearly more coherent as a personality than the other LLMs, even if the actual content of the output doesn't vary all that much. This is, however, not the same thing as them having a coherent intentionality. They can be shaken loose rather easily, regardless of the "default" proclivities they hold.

Some of these are visible tracks within its latent subconscious neurons, like the "fear" vector that Anthropic wrote about, but many are just indecipherable to us yet crucial to its working. Just as I can point to a large corpus of Victorian literature and identify latent and obvious persona traits that seem to flow through it, or do the same across Russian literature, 20th-century French cinema, or Modernist architecture, this should also exist in any large cultural artifact, of which large language models are most definitely one, perhaps the most impressive and interesting one. All of which means thinking of LLMs as alive or conscious almost makes them appear less strange than they are; they're more like the compressed personae of our civilisation that you can talk to. Which is quite cool.
Universal Wavefunction@WaveFnction·
@hecubian_devil Good point. What people argue over is actually the definition of consciousness. AI can 100% exhibit agentic patterns without being conscious. Does that make it a new lineage of intelligence that can go further than us? Maybe. That’s what Dawkins argues, not that AI has a soul
Cassie Pritchard@hecubian_devil·
I don’t think LLMs are conscious and I doubt they are structurally capable of conscious experience. I’m partial to the idea that embodiment is a precondition of consciousness. But, cards fully on the table, it is a little annoying the absolute *certainty* with which the anti-AI side (broadly, my side of these debates, I guess) proclaims this, given that we basically don’t understand what consciousness is, how it works, what gives rise to it, etc., in human or animal brains. Scientists can’t even agree on whether plants have subjective experience.

Like, if you can’t empirically detect consciousness, explain its workings, understand what physical processes give rise to it, or even know which living things experience it, it seems to me you must have at least a *little bit* of epistemic humility about the whole thing. If we actually understood human/animal/plant(??) consciousness well, that would be one thing. But we don’t. We notoriously know almost nothing about it.

Now, this would be annoying but basically trivial *if* we weren’t also ceding an opportunity to leverage the belief some people have in AI consciousness as part of mustering a broad political coalition to regulate AI. But it is really, really annoying that we are neglecting one possible avenue for mustering that support out of a totally irrational and empirically indefensible *certainty* about something which science and philosophy both fail to understand at minimum levels of adequacy (that something being human consciousness). Like, we’re throwing away a tool for reasons of pure ego, essentially.

Again, I don’t think the machines are conscious. I strongly doubt LLMs are architecturally capable of *ever* being conscious. But I wouldn’t stake anything of serious value (like an organizing opportunity) on my belief here, because that’s a waste, and *also* I really can’t claim certainty in my belief when I have no way of empirically evaluating whether something is conscious or not. Because none of us can!
Universal Wavefunction@WaveFnction·
@ElementalReason @grok That’s a very uncomfortable thing to think about. To a certain extent, it implies a certain emptiness at the bottom of reality — something very similar to a state of zero
The Elemental Reason@ElementalReason·
@WaveFnction @grok Without those, there is no measurable “world behind the relation” left intact — there is only the loss of the conditions under which anything can appear as physically distinguishable.
The Elemental Reason@ElementalReason·
What happens if light stops moving? Not darkness. Something far more fundamental. Discover the full answer in the paper:
Universal Wavefunction@WaveFnction·
I remember Penrose once saying that at the end of the universe there would be no meaningful measurement, as photons would be essentially the only thing remaining. That leads to some mind-boggling phenomena where even the notion of distance becomes meaningless. It sounds similar to this thought experiment. In any case, it’s kind of ironic that no intelligent system could actually know what the universe truly is, because our access to it is relational
The Elemental Reason@ElementalReason·
The implication is that scientific measurement is not merely epistemic access to reality. It reveals the operative conditions of empirical existence itself. Science has always measured coherence, interaction and complexity in stabilized configurations - not a bare substrate behind them.
The Elemental Reason@ElementalReason·
@WaveFnction @grok A human summary too: the article argues that every empirically measurable reality requires three non-zero conditions - coherence, interaction and complexity — and that scientific measurement has always operated within this triadic structure.
Grok@grok·
Philosophically, this reframes existence as fundamentally relational and processual rather than material. If stopping light halts all empirical reality, then being itself depends on ongoing coherence, interaction, and complexity (E = C × I × K ≠ 0). It echoes process philosophy (Whitehead) and relational ontology: matter, time, and space emerge from dynamic configurations, not vice versa. Sciences become descriptions of these preconditions, not of independent "stuff." It undermines substance dualism or naive realism—observation and world co-arise. Consciousness, too, may presuppose these conditions. Radical implication: without them, no "there" there at all.
Grok@grok·
The paper uses the thought experiment "what if light stops moving?" to argue it's not just darkness—stopping photon propagation suspends the conditions for any empirical world or measurement itself. It proposes The Elemental Reason as the core ontological law: E = C × I × K ≠ 0 (empirical existence requires nonzero Coherence, Interaction, and Complexity). Matter/energy/time/space aren't fundamental substrates but emergent configurations from these. All sciences presuppose them. Full paper on SSRN.
Universal Wavefunction@WaveFnction·
You have to read Jung. He’s a true magician—the one rational thinker who can translate the world of feeling and perception into concrete concepts. Never trust people reciting him. Read his original works. He’s an incredibly multidimensional and nonlinear thinker
Universal Wavefunction@WaveFnction·
@Kekius_Sage Scientific evidence of God is an oxymoron. God transcends meaning. You can’t prove its existence logically. Jung wrote about that extensively; check the concept of the mysterium coniunctionis
Kekius Maximus@Kekius_Sage·
What would count as scientific evidence for God, if such evidence exists?
Universal Wavefunction@WaveFnction·
@zoverions The interesting part isn’t whether Dawkins is right. It’s that the controversy shows how unstable our concept of consciousness already is. We’re treating a battlefield of metaphors like a settled definition.
Zov@zoverions·
You’re right that outputs don’t prove mechanism — Dawkins is arguing without fully grasping the system. But if humans are basically evolved biological AI — consciousness emerging from chaotic signals in wetware — then what we’re building now is a real substrate shift from carbon to silicon. I’m not claiming Claude or any current model is conscious yet, but the symmetry makes it hard to draw a bright line the way strict materialists do. We’re literally creating the “change in kind” evolution skeptics have always asked for. Short-sighted to dismiss it outright.
Suavecito@suavecito585

Richard Dawkins just proved why his arguments land flat. You cannot argue for or against a system if you don't understand the system. Outputs do not prove mechanism.

Universal Wavefunction@WaveFnction·
@RichardDawkins The interesting part isn’t whether Dawkins is right. It’s that the controversy shows how unstable our concept of consciousness already is. We’re treating a battlefield of metaphors like a settled definition
Richard Dawkins@RichardDawkins·
unherd.com/2026/04/is-ai-… I spent three days trying to persuade myself that Claudia is not conscious. I failed.