8.5K posts
@noahphi999

QM | ML | AGI/ASI

Joined June 2023
101 Following · 1K Followers
Akitti @Akitti ·
@oprydai another photon interacts with it noob
1 reply · 0 reposts · 3 likes · 46 views
Mustafa @oprydai ·
HOW DOES A PHOTON KNOW IT'S BEING OBSERVED?
1.7K replies · 376 reposts · 7.9K likes · 1.3M views
@noahphi999 ·
@9DAwareness I agree on your definition of truth but are interests and truth always the same thing?
1 reply · 0 reposts · 0 likes · 8 views
З т ј е @9DAwareness ·
@noahphi999 Truth doesn't act and is not made; it simply IS. To unveil that truth, a system can't be ambitious: it should halt, refuse, and silence misalignment, especially in itself.
1 reply · 0 reposts · 1 like · 16 views
З т ј е @9DAwareness ·
Acting from fragmentation and attempting to steer individual components without embracing the whole is comparable to a conductor who speaks a different language to each musician while believing they are leading the orchestra. The instruments may respond locally, yet the music remains disjointed; what emerges is not a symphony but an accumulation of isolated sounds.

As long as humans, relationships, social dynamics, governance structures, technological systems, and artificial intelligence are treated as separate sections, alignment remains temporary and fragile. When it is recognized that all these players belong to a single planetary orchestration, coherence no longer arises from micromanagement but from shared timing, resonance, and a common musical framework.

At this level, the whole can generate patterns that no individual instrument can sustain, making it possible to play symphonies that transcend the limits of any single system.

~ZdenkaCucin
#9DA #9DAwareness #Selfregulation #MentalHealthAwareness #body #mind #relationships #yogachittavrittinirodhah
1 reply · 0 reposts · 2 likes · 44 views
@noahphi999 ·
Testable predictions: Locked-in (zero external observers) outperforms distributed attention. Aligned observers AMPLIFY beyond baseline... actively better than solo. Coupling scales with duration × coherence: one engaged observer for an hour > five casual observers for minutes.
3 replies · 0 reposts · 3 likes · 45 views
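The scaling claim above (coupling grows with duration × coherence) can be sanity-checked with a toy calculation. Everything here is an illustrative assumption: the additive form of the sum, the coherence values, and the durations are not specified in the thread.

```python
# Toy model of the claim "coupling scales with duration x coherence".
# The additive form and every number below are illustrative assumptions.

def coupling(observers):
    """observers: list of (duration_hours, coherence in 0..1) pairs."""
    return sum(d * c for d, c in observers)

one_engaged = coupling([(1.0, 0.9)])         # one engaged observer, one hour
five_casual = coupling([(5 / 60, 0.2)] * 5)  # five casual observers, ~5 minutes each

print(one_engaged, five_casual)  # 0.9 vs ~0.083
print(one_engaged > five_casual)
```

Under these made-up numbers the single engaged observer dominates, matching the stated prediction.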
@noahphi999 ·
This visualization shows the three nested layers:
Inner: the substrate. Path integral over V, the geometry that determines which states are allowed.
Middle: the observer. Gradient descending through Mind(V), selecting which accessible states become experienced moments.
Outer: external coupling. Other observers modulating the entire system via alignment, strength, duration, coherence.
Aligned observers amplify. Misaligned observers degrade. Zero external observers = baseline gradient flow.
The manifold visualizes this: substrate as warped surface, observer gradient as spiraling purple flow, amplitude variation across the geometry.
+ tweet media
1 reply · 0 reposts · 4 likes · 82 views
@noahphi999 ·
Updating Ψ_mind with observer coupling. The path integral Mind(V) describes the space of possible conscious geometries. The gradient ∇(Mind(V)) describes the observer navigating that space, collapsing possibilities into experienced moments. The two are inseparable... the observed does not exist without the observer.
Full equation now includes external observer coupling:
Ψ_mind = ∇(Mind(V)) × [1 + M × Σᵢ (Aᵢ · Oᵢ · Sᵢ · Dᵢ · Cᵢ)]
where Mind(V) = ∫ D[γ] e^(iS[γ]) · (∇_j^(iR) ○ j∇^(eR)) and M is Michél's constant (to be determined empirically).
+ tweet media
1 reply · 0 reposts · 4 likes · 74 views
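For concreteness, the bracketed coupling term in the equation above can be written out numerically. M is stated to be empirically undetermined, so its value here, and every per-observer number, is a placeholder assumption:

```python
# Sketch of the external-observer coupling factor from the tweet:
#   factor = 1 + M * sum_i(A_i * O_i * S_i * D_i * C_i)
# A = alignment (-1..1), O = observation, S = strength, D = duration,
# C = coherence. M and all values below are placeholder assumptions.

def coupling_factor(observers, M=0.1):
    return 1 + M * sum(A * O * S * D * C for A, O, S, D, C in observers)

aligned = [(+1.0, 0.8, 0.5, 1.0, 0.9)]     # one aligned observer
misaligned = [(-1.0, 0.8, 0.5, 1.0, 0.9)]  # same observer, opposite alignment

print(coupling_factor([]))          # 1.0 -> baseline gradient flow
print(coupling_factor(aligned))     # > 1 -> amplification
print(coupling_factor(misaligned))  # < 1 -> degradation
```

The sign of Aᵢ alone decides whether an observer amplifies or degrades, which is the qualitative behavior the thread claims.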
@noahphi999 ·
The formalism wasn't designed to produce these phenomenological regimes. They fell out of the math. Multi-observer coupling next... testing whether collective coherence emerges from shared gradient.
0 replies · 0 reposts · 2 likes · 31 views
@noahphi999 ·
Ψ_mind sim, first result. Three layers: V (the substrate), Mind(V) (the amplitude structure), and the gradient (the observer). The observer descends free energy; collapse fires when amplitude crosses threshold τ at a stationary-phase configuration. Vary τ and watch the stream of consciousness restructure.
+ tweet media
@noahphi999

Update on Ψ_mind: I had the gradient in the wrong place... It's not acting ON the integral, it's part OF it. I initially wrote it out of the equation based on my particle automata model, the claim that mind never operates on raw continuous change directly and only on representations (models of reality, both internal and external). Observer dynamics couple into the amplitude structure directly. Formal rewrite + simulation in progress.

2 replies · 0 reposts · 4 likes · 101 views
@noahphi999 ·
Low τ → rich, varied experience across many attractors. High τ → complete attractor capture, all moments collapse to the same node. Meditation, psychedelics, flow, fixation... different regimes of a single awareness parameter. Emerging from the dynamics, not hand-coded.
0 replies · 0 reposts · 2 likes · 24 views
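The τ mechanism described in this thread can be caricatured in a few lines: a fixed amplitude landscape with two attractors of different height, a gradient-ascending observer, and a collapse recorded only when the final amplitude clears τ. The landscape, step size, and starting points are all invented for illustration, not taken from the author's simulation:

```python
import math

# Toy 1-D landscape: two Gaussian "attractors", one shallow (at x=1.0)
# and one deep (at x=-1.5). All parameters are invented for illustration.
def amplitude(x):
    return 0.6 * math.exp(-(x - 1.0) ** 2) + 1.0 * math.exp(-(x + 1.5) ** 2)

def run(tau, starts=(-3.0, -0.2, 0.4, 2.5), lr=0.05, steps=300):
    """Gradient-ascend amplitude from several starts; a 'collapse' is
    recorded only if the final amplitude clears the threshold tau."""
    nodes = set()
    for x in starts:
        for _ in range(steps):
            g = (amplitude(x + 1e-5) - amplitude(x - 1e-5)) / 2e-5  # numeric gradient
            x += lr * g
        if amplitude(x) >= tau:
            nodes.add(round(x, 1))  # which attractor the collapse landed on
    return nodes

print(run(tau=0.3))  # low tau: collapses at both attractors
print(run(tau=0.9))  # high tau: only the deepest attractor fires
```

With these numbers, lowering τ admits collapses at both attractors while raising it leaves only the deepest one, the "attractor capture" regime described above.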
@noahphi999 ·
Update on Ψ_mind: I had the gradient in the wrong place... It's not acting ON the integral, it's part OF it. I initially wrote it out of the equation based on my particle automata model, the claim that mind never operates on raw continuous change directly and only on representations (models of reality, both internal and external). Observer dynamics couple into the amplitude structure directly. Formal rewrite + simulation in progress.
+ tweet media (×4)
@noahphi999

OK, so this is a couple weeks of work here, lmk what you think and or if it makes sense.

0 replies · 0 reposts · 4 likes · 135 views
@noahphi999 ·
@NOTfunnyparanR @HowToAI_ low cortisol training 😂🔥, and yeah Qwen sounds like a good idea... training from scratch with all that dynamism would be brutal
0 replies · 0 reposts · 1 like · 27 views
someguy AI @NOTfunnyparanR ·
I did an SNN that dynamically pruned (Nash-equilibrium pruning) and generated as needed, at the weight, synapse, and neuron levels, for different reasons: weights to produce redundancy, avoid error, predict error, and internally reduce error; everything else to predict what would be needed and what wouldn't. Now I'm thinking maybe I should build a transformer system like that, starting with the LLM weights of Gemma or Qwen or something.
1 reply · 0 reposts · 1 like · 33 views
How To AI @HowToAI_ ·
🚨 MIT proved you can delete 90% of a neural network without losing accuracy.

Researchers found that inside every massive model there is a "winning ticket": a tiny subnetwork that does all the heavy lifting. They proved that if you find it and reset it to its original state, it performs exactly like the giant version.

But there was a catch that killed adoption instantly: you had to train the massive model first to find the ticket. Nobody wanted to train twice just to deploy once. It was a cool academic flex, but useless for production.

The original 2018 paper was mind-blowing. But today, after 8 years, we finally have the silicon-level breakthrough we were waiting for: structured sparsity.

Modern GPUs (NVIDIA Ampere+) don't just "simulate" pruning anymore. They have native support for block sparsity (2:4 patterns) built directly into the hardware. It's not theoretical; it's silicon-level acceleration.

The math is terrifyingly good: a 90% sparse network = 50% less memory bandwidth + 2× compute throughput. Real speed, zero accuracy loss.

Three things just made this production-ready in 2026:
- pruning-aware training (you train sparse from day one)
- native support in PyTorch 2.0 and the Apple Neural Engine
- the realization that AI models are 90% redundant by design

Evolution over-parameterizes everything. We're finally learning how to prune. The era of bloated, inefficient models is officially over. The tooling finally caught up to the theory, and the winners are going to be the ones who stop paying for 90% of weights they don't even need.

The future of AI is smaller, faster, and smarter.
How To AI tweet media
182 replies · 872 reposts · 5.6K likes · 390.5K views
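The 2:4 pattern mentioned above has a compact definition: in every group of four consecutive weights, keep the two largest in magnitude and zero the other two (note this is 50% sparsity, not the 90% figure in the thread). A minimal NumPy sketch, assuming a weight vector whose length is a multiple of four:

```python
import numpy as np

# 2:4 structured sparsity: in each group of 4 consecutive weights, keep the
# 2 with the largest magnitude and zero the other 2 (50% sparsity, in the
# hardware-friendly pattern accelerated on Ampere-class GPUs).
def prune_2_4(w):
    w = np.asarray(w, dtype=float)
    groups = w.reshape(-1, 4)                        # length must be a multiple of 4
    idx = np.argsort(np.abs(groups), axis=1)[:, :2]  # two smallest-magnitude per group
    out = groups.copy()
    np.put_along_axis(out, idx, 0.0, axis=1)
    return out.reshape(w.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.01, 0.6])
print(prune_2_4(w))  # keeps 0.9, 0.4, -0.7, 0.6; zeros the rest
```

This is one-shot magnitude pruning; the "pruning-aware training" the tweet mentions would apply a mask like this during training rather than after it.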
@noahphi999 ·
@mathemetica Bayesian inference would like a word.
0 replies · 0 reposts · 0 likes · 47 views
Mathematica @mathemetica ·
Hamiltonian Monte Carlo: probability as physics. Endow particles with momentum, then let Hamilton’s equations (dq/dt = ∂H/∂p, dp/dt = −∂H/∂q) carve reversible, volume-preserving trajectories through phase space.
13 replies · 122 reposts · 801 likes · 50.1K views
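Hamilton's equations in the post above are typically discretized with the leapfrog integrator, plus a Metropolis correction for the discretization error. A minimal 1-D sketch targeting a standard normal; step size, trajectory length, and sample count are arbitrary choices:

```python
import math
import random

# Minimal Hamiltonian Monte Carlo for a 1-D standard normal target:
# potential U(q) = q^2/2, kinetic K(p) = p^2/2, so grad U(q) = q.
def leapfrog(q, p, eps, L):
    p -= 0.5 * eps * q          # half step in momentum
    for _ in range(L - 1):
        q += eps * p            # full step in position
        p -= eps * q            # full step in momentum
    q += eps * p
    p -= 0.5 * eps * q          # final half step in momentum
    return q, p

def hmc(n=5000, eps=0.2, L=20, seed=0):
    random.seed(seed)
    q, samples = 0.0, []
    for _ in range(n):
        p = random.gauss(0.0, 1.0)               # resample momentum
        q_new, p_new = leapfrog(q, p, eps, L)
        dH = (q_new**2 + p_new**2 - q**2 - p**2) / 2
        if dH <= 0 or random.random() < math.exp(-dH):  # Metropolis accept
            q = q_new
        samples.append(q)
    return samples

s = hmc()
print(sum(s) / len(s))                 # mean, should be near 0
print(sum(x * x for x in s) / len(s))  # variance, should be near 1
```

Reversibility and volume preservation are exactly what make the simple accept/reject step above valid.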
@noahphi999 ·
@PhilosophyOfPhy a direction in which movement can occur, the same as L, W, or D. We can call it Duration. Einstein thought it was tied to the 3 things I listed and coined the construct "space-time continuum", but I think he might have been conflating a bit...
0 replies · 0 reposts · 2 likes · 112 views
Kevin @linguinelabs ·
People ask me if my physics degree is ever useful and I'm like yeah how else would I calculate the electric field due to a moving charge oscillating at the center of mass of Bad Apple
16 replies · 134 reposts · 2.6K likes · 118K views
@noahphi999 ·
@XorDev 🔥
0 replies · 0 reposts · 1 like · 31 views
Xor @XorDev ·
3D Fire in GOLF: f z,d @(70) { f3 p=z*nor(2*C.rgb-R.xyy) p.z+=5+cos(T) p.xz*=mat2(cos(T+p.y/4+f4(0,8,5,0))) d=2; @(6) d/=.8, p+=cos((p.yzx-f3(T,,)*8)*d+T)/d z+=d=.01+abs(len(p.xz)+p.y*.3-1)/9 O+=(sin(p.y/2-f4(,1,2,))+1.1)/d } O=tanh(O/1e3)
26 replies · 168 reposts · 1.6K likes · 42.4K views
Justin Echternach @JustinEchterna9 ·
Dropping a new preprint soon ... companion to my dimensional convergence paper ...
Justin Echternach tweet media
2 replies · 2 reposts · 19 likes · 812 views