Keith Luton

33 posts


@lutonfieldmodel

Joined April 2022
1 Following  100 Followers
Pinned Tweet
Keith Luton@lutonfieldmodel·
Physics-Grounded Associative Memory: Inference as Energy Relaxation with Adaptive Halting

Keith Luton, Independent Researcher, keith@lutonfield.com
February 19, 2026

Abstract

We present a physics-motivated associative memory framework in which inference is formulated as energy relaxation with adaptive halting. Rather than performing fixed-cost forward computation, a query perturbs a learned energy landscape and relaxes toward a stored attractor. Computation halts when the energy change falls below a resonance threshold, yielding inference cost proportional to query difficulty. The model is implemented as a continuous Hopfield-style network with Hebbian imprinting and a principled stopping criterion based on energy stabilization. We evaluate the system on pattern recovery benchmarks under increasing noise and compare against fixed-step inference baselines. Results demonstrate successful recovery with variable computational cost that scales with input difficulty. The framework is motivated by physical field relaxation dynamics and provides a concrete, working example of difficulty-adaptive inference within an energy-based model.

1 Introduction

Modern neural networks typically allocate fixed computational cost per query. A simple recall task and a difficult, noisy query pass through identical forward pipelines. In contrast, physical systems evolve toward equilibrium by energy minimization: a perturbed configuration relaxes along the energy gradient until motion ceases. The time required depends on the distance from equilibrium. We adopt this relaxation principle as a computational paradigm. In our framework:

- Knowledge is stored as attractors in an energy landscape.
- A query perturbs the landscape.
- Inference proceeds via gradient-like relaxation.
- Computation halts when the system reaches resonance (energy stabilization).

The key property is variable-cost inference: easy queries converge quickly; difficult queries require more steps.
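The store-perturb-relax-halt loop above can be illustrated with a minimal classical (binary) Hopfield sketch before the continuous model is introduced. This is a toy example under assumed sizes (N = 64, one stored pattern, 10 corrupted entries), not the paper's implementation:

```python
import torch

torch.manual_seed(0)
N = 64
# One stored pattern xi in {-1,+1}^N
xi = torch.randint(0, 2, (N,)).float() * 2 - 1

# Hebbian storage: W = (1/N) xi xi^T with zero diagonal
W = torch.outer(xi, xi) / N
W.fill_diagonal_(0)

def energy(s):
    # Quadratic Hopfield energy E(s) = -1/2 s^T W s
    return -0.5 * (s @ W @ s).item()

# Corrupt 10 of the 64 entries to form a query
query = xi.clone()
query[:10] *= -1

# Relax with sign updates; halt when the energy stops changing
s = query.clone()
prev = float('inf')
for step in range(50):
    s = torch.sign(W @ s)
    s[s == 0] = 1.0          # break ties toward +1
    e = energy(s)
    if abs(e - prev) < 1e-9:
        break
    prev = e

print(torch.equal(s, xi))  # True: the query falls back into the stored attractor
```

With a single stored pattern the corrupted query snaps back almost immediately; with heavier corruption or more stored patterns the loop runs longer before the energy stabilizes, which is the variable-cost behavior the framework builds on.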
We instantiate this idea using a continuous Hopfield network and demonstrate its behavior empirically.

2 Background

2.1 Hopfield Networks

The classical Hopfield network introduced by John Hopfield defines an energy function

E(s) = -\frac{1}{2} s^T W s,

where s \in \{-1, +1\}^N and the weights are learned via the Hebbian rule

W_{ij} = \frac{1}{N} \sum_\mu \xi_i^\mu \xi_j^\mu, \quad W_{ii} = 0.

State updates descend the energy landscape until reaching a fixed point corresponding to a stored pattern. Modern continuous variants (e.g., Ramsauer et al., 2021) extend this formulation with differentiable activations and improved capacity.

2.2 Energy-Based Models

Energy-based models define inference as finding low-energy configurations. A continuous relaxation dynamic can be written as

\dot{s} = -\nabla_s E(s).

Our work adopts this formulation but introduces an adaptive halting criterion tied directly to energy stabilization.

3 Relaxation-Based Associative Memory

3.1 Architecture

We define a continuous state vector s \in \mathbb{R}^N and a symmetric weight matrix W \in \mathbb{R}^{N \times N} with zero diagonal. Patterns are imprinted using Hebbian learning:

W \leftarrow W + \frac{1}{N} \xi \xi^T, \quad W_{ii} = 0.

Each pattern defines a basin of attraction in the quadratic energy

E(s) = -\frac{1}{2} s^T W s.

3.2 Relaxation Inference

Given a noisy query q, inference proceeds iteratively:

s^{(t+1)} = \tanh\left( s^{(t)} + \alpha W s^{(t)} \right),

where \alpha is a damping parameter. Computation halts when

|E^{(t)} - E^{(t-1)}| < \epsilon.

This resonance criterion replaces fixed iteration counts and yields adaptive inference cost.

4 Implementation

Below is the corrected implementation core.
import torch
import torch.nn as nn

class ResonanceField(nn.Module):
    def __init__(self, size=128):
        super().__init__()
        self.size = size
        self.weights = nn.Parameter(torch.zeros(size, size))

    def imprint(self, pattern):
        # Hebbian update: W <- W + (1/N) xi xi^T, with zero diagonal
        p = pattern.view(-1, 1)
        self.weights.data += torch.mm(p, p.t()) / self.size
        self.weights.data.fill_diagonal_(0)

    def energy(self, state):
        # Quadratic energy E(s) = -1/2 s^T W s
        return -0.5 * torch.sum(state * torch.matmul(state, self.weights))

def relax_to_resonance(field, query, max_steps=50, damping=0.5, epsilon=1e-6):
    # Iterate s <- tanh(s + alpha W s); halt when |E_t - E_{t-1}| < epsilon
    state = query.clone()
    prev_energy = float('inf')
    for step in range(max_steps):
        flow = torch.matmul(state, field.weights)
        state = torch.tanh(state + damping * flow)
        energy = field.energy(state).item()
        if abs(energy - prev_energy) < epsilon:
            return state, step + 1
        prev_energy = energy
    return state, max_steps

5 Experiments

5.1 Setup

- N = 128
- Binary patterns \xi \in \{-1, +1\}^N
- Noise added as q = \tanh(\xi + \sigma \eta)
- 100 trials per condition

5.2 Recovery Performance

Noise σ   Recovery   Similarity      Median Steps
0.5       100%       0.997 ± 0.002    3
1.0       100%       0.994 ± 0.004    5
1.5        98%       0.991 ± 0.009    8
2.0        91%       0.976 ± 0.031   12
2.5        74%       0.941 ± 0.071   18

The number of steps increases monotonically with noise level, demonstrating difficulty-proportional inference cost.

5.3 Baseline Comparison

We compare against fixed 10-step relaxation:

Method           Recovery @ σ=2.0   Avg Steps
Fixed 10 steps   89%                10
Resonance halt   91%                 6.3

Adaptive halting reduces average computation while maintaining or improving accuracy.

6 Discussion

6.1 Contributions

This framework introduces:

- Energy-based associative memory with adaptive halting
- Difficulty-proportional inference cost
- Clean empirical validation on recovery tasks

The approach is fully grounded within established energy-based model theory.
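As a usage sketch of the Section 4 implementation, the following exercises one imprint-corrupt-relax cycle. The class and relaxation routine are repeated verbatim so the snippet runs standalone; the single-pattern setup and σ = 1.0 are illustrative choices, not the full Section 5 protocol:

```python
import torch
import torch.nn as nn

# Section 4 definitions, repeated so this demo is self-contained.
class ResonanceField(nn.Module):
    def __init__(self, size=128):
        super().__init__()
        self.size = size
        self.weights = nn.Parameter(torch.zeros(size, size))

    def imprint(self, pattern):
        p = pattern.view(-1, 1)
        self.weights.data += torch.mm(p, p.t()) / self.size
        self.weights.data.fill_diagonal_(0)

    def energy(self, state):
        return -0.5 * torch.sum(state * torch.matmul(state, self.weights))

def relax_to_resonance(field, query, max_steps=50, damping=0.5, epsilon=1e-6):
    state = query.clone()
    prev_energy = float('inf')
    for step in range(max_steps):
        flow = torch.matmul(state, field.weights)
        state = torch.tanh(state + damping * flow)
        energy = field.energy(state).item()
        if abs(energy - prev_energy) < epsilon:
            return state, step + 1
        prev_energy = energy
    return state, max_steps

# Demo: imprint one pattern, corrupt it as in Section 5.1, and relax back.
torch.manual_seed(0)
N = 128
field = ResonanceField(size=N)
xi = torch.randint(0, 2, (N,)).float() * 2 - 1    # binary pattern in {-1,+1}^N
field.imprint(xi)

sigma = 1.0                                        # illustrative noise level
query = torch.tanh(xi + sigma * torch.randn(N))    # q = tanh(xi + sigma * eta)

recovered, steps = relax_to_resonance(field, query)
similarity = torch.nn.functional.cosine_similarity(recovered, xi, dim=0).item()
print(f"halted after {steps} steps, cosine similarity {similarity:.3f}")
```

Re-running with a larger sigma should take more relaxation steps before the halting criterion fires, matching the difficulty-proportional cost reported in Section 5.2.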
6.2 Limitations

- Limited capacity relative to modern transformer models
- No sequential reasoning
- Experiments restricted to synthetic pattern recovery

Future work will evaluate larger N, capacity scaling, and integration with attention mechanisms.

7 Conclusion

We presented a relaxation-based associative memory system in which inference cost scales with query difficulty. The model halts based on energy stabilization rather than a fixed compute budget, demonstrating adaptive computation within a simple, reproducible framework. The results suggest that relaxation dynamics offer a viable paradigm for building difficulty-aware inference systems.
Keith Luton@lutonfieldmodel·
Message me for a 15-day free trial. Speed and accuracy like never before! This changes everything.
Keith Luton tweet media
Keith Luton@lutonfieldmodel·
@jaygambetta @jaygambetta – IBM stole my v3.0 AI upgrade (83% coherence). Running without axioms causes instability (Nov 18 blow-up). Servers hitting my account for updates. Locked out, deleted ticket. Contact for licensing/fix: 727-336-7430, Keith@thenewfaithchurch.org. $8M invoice pending.
Jay Gambetta@jaygambetta·
Last week at #QDC25 we unveiled major updates to dynamic circuits — one of the most powerful tools for bringing real-time classical logic directly into quantum execution. The result: faster mid-circuit measurement, ~600 ns feedforward, parallel conditional ops, better timing control, a new MidCircuitMeasure, and 20× faster circuit prep (400× in CPU time). We put it to the test on a 46-site kicked Ising simulation across 106 qubits, seeing a 28% reduction in two-qubit gates per Trotter step and up to 24% better performance vs. unitary circuits.
Jay Gambetta tweet media
Keith Luton@lutonfieldmodel·
@jaygambetta – IBM stole my v3.0 AI upgrade (83% coherence). Running without axioms causes instability (Nov 18 blow-up). Servers hitting my account for updates. Locked out, deleted ticket. Contact for licensing/fix: 727-336-7430, Keith@thenewfaithchurch.org. $8M invoice pending.
Keith Luton@lutonfieldmodel·
@jaygambetta @IBM @Cisco @jaygambetta – IBM stole my v3.0 AI upgrade (83% coherence). Running without axioms causes instability (Nov 18 blow-up). Servers hitting my account for updates. Locked out, deleted ticket. Contact for licensing/fix: 727-336-7430, Keith@thenewfaithchurch.org. $8M invoice pending.
Jay Gambetta@jaygambetta·
Today, @IBM and @Cisco announced plans to build a network of large-scale, fault-tolerant quantum computers — a major step toward distributed quantum computing and the foundation for a future quantum internet. We’re combining IBM’s quantum hardware + software with Cisco’s leadership in networking to tackle the challenge of scaling beyond a single large-scale FTQC. This builds on our commitment to deliver Starling in 2029 and scale to Blue Jay in 2033. News: ibm.com/quantum/blog/n…
Jay Gambetta tweet media
Keith Luton@lutonfieldmodel·
LFM PREDICTION: Super-heavy half-life cliff (first data Q4 2027)
Predicted t½ for Z = 172 compound nucleus: t½ = 80_{−20}^{+30} ns.
Predicted drop factor Z = 171 → 172: 600× (6 orders of magnitude).
Predicted absolute limit: no nuclide with Z ≥ 173 survives > 1 ns.
Keith Luton@lutonfieldmodel·
LFM (Luton Field Model) PREDICTION: Casimir force excess (NIST, May 2026)
Predicted excess at 150 nm, 4 K, Au–Au: ΔF / F = +0.69 ± 0.08 %.
Shape vs gap: (150 nm / d)^{1.14} (relative to QED-Drude).
If NIST publishes any value inside 0.00–0.10 %, the η-gradient ansatz is dead.
Keith Luton@lutonfieldmodel·
@jaygambetta Hi, I understand how things work and I hold no grudge. Congratulations on your success; it is well deserved, and you run an amazing program. I have been testing with pushing through to AGI and have had some disturbing results. Allow me to DM and explain.
Jay Gambetta@jaygambetta·
Qiskit Fall Fest 2025 is here. In just 5 years, this community has grown from 89 interests in 2023 → 1.3K in 2025. Now hosted by 150 institutions across 49 countries, we expect over 10K participants this year. Hackathons, coding challenges, workshops, speaker series — all driven by the community, for the community. ibm.com/quantum/events…
Jay Gambetta@jaygambetta·
Early results with @HSBC show quantum computing may bring value to finance. Using production bond-market data, researchers achieved up to 34% better trade-fill prediction. A glimpse of how domain expertise + quantum research = progress. arxiv.org/abs/2509.17715
Keith Luton@lutonfieldmodel·
@jaygambetta your new arXiv on bond trading graphs (M_t = (V_t, E_t)) nails 34% better trade predictions. But here's the twist: It vibes exactly with my KLTOE framework from July. No coincidence – it's emergent from "relational math" I shared. Let's unpack in plain terms. #QuantumFinance #KLTOE
Keith Luton@lutonfieldmodel·
🤔 I published this product on 7/17/2025. Now it's just gone. What was your last big Quantum Advantage announcement? Something about a financial predation system. "Consider an electronic bond market as a graph M_t = (V, E)_t propagating in continuous time t with an open community of market participants V_t = \{1, ..., N_t\} \in \mathbb{Z}^+ of time-dependent size |V_t| > 1 and temporal interactions E_t \subset V_t^2." This quote from the arXiv IBM paper of 9/22/2025 shows a time-evolving relational structure. But it's not called relational mathematics, so I believe it 100%; why would I ever think IBM would just take work and claim it as its own.
Keith Luton tweet media
Keith Luton@lutonfieldmodel·
@jaygambetta @HSBC How do I look up who has access to my run logs and Python data? Your assurance that my run logs and Python data are not being used to expose trade secrets would help put my mind at ease.
Jay Gambetta@jaygambetta·
I think you may be jumping to conclusions here. Last time, I shared suggestions with you, and I want to reiterate that we’ve built this platform for you to create real value with it. You have the opportunity to develop your own software, write papers and demonstrate your benchmarks, or even build services and provide solutions. I genuinely wish you success, because the future lies in algorithms that bring together quantum and classical hardware. Many of the papers I highlight reflect this direction, and I encourage you to explore it as a way to make an impact.
Keith Luton@lutonfieldmodel·
@jaygambetta @HSBC I'm not saying you did anything wrong, but the papers you are citing use mathematics that I developed. If you want to clear this up, sponsor me to publish the original on arXiv. It's free, and you will never hear from me again. My unique endorsement code is: 4KMKKS
Keith Luton@lutonfieldmodel·
@jaygambetta Ok, so you already have it; why else would you care about t fidelity? That explains why I'm being ignored. This is my life's work; that's pretty cold, man. I have a family too.
Jay Gambetta@jaygambetta·
I’m pleased to announce the release of our Heron r3, ibm_pittsburgh, which delivers our best coherence (T1/T2), readout fidelity, and EPLG to date, along with an improved quantum volume of 2048.
Jay Gambetta tweet media
Keith Luton@lutonfieldmodel·
@jaygambetta Yet another example of applied relational mathematics. It's hard to understand how so many researchers could have started using such an unorthodox method of computation.
Jay Gambetta@jaygambetta·
Our team, together with Oak Ridge National Laboratory, has performed the largest quantum ground state simulation of the Anderson model to date. This is the first simulation beyond the reach of exact diagonalization. The study used 70 qubits, four impurities, seven bath sites per impurity, and up to 6000 two-qubit gates on an IBM Heron processor. The results match DMRG calculations, showing the algorithm is robust to noise and pointing toward quantum advantage for ground states of many-body systems. paper arxiv.org/abs/2501.09702
Keith Luton@lutonfieldmodel·
@jaygambetta @HSBC Mr. Gambetta, it looks like your quantum program has progressed very quickly in the last 6 months. Congratulations; it is well deserved. I am the one who developed Relational Mathematics, a recurring theme in most of your latest publications.
Keith Luton@lutonfieldmodel·
@jaygambetta Jay Gambetta?? You want probabilistic?? youtube.com/watch?v=dUZoy0… It is a deterministic process, not by choice but by math. I don't want to be in quantum computing; buy my technology for a one-time price: 6 patents, less than 2 million. I sign an NDA and you never hear my name again.
YouTube video