James Kovalenko

588 posts

@deburdened

Author of the Progress Function. I convert epistemic debt into usable structure. 0% noise, 100% signal.

Charlottesville · Joined November 2024
834 Following · 132 Followers
Pinned Tweet
James Kovalenko @deburdened
Structure is compressible regularity that survives verification.
1 reply · 0 reposts · 1 like · 373 views

James Kovalenko @deburdened
@DavidePiffer @charlesmurray Cheap answers just increase variation. Most of it is noise. The scarce skill is: knowing which outputs survive stress, recombination, and falsification. A good question generates candidates. A strong operator kills most of them.
0 replies · 0 reposts · 1 like · 12 views

Davide Piffer @DavidePiffer
AI is about to change research in a subtle but radical way. When answers become cheap and abundant, the scarce skill isn’t producing them. It’s knowing which questions are actually worth asking. davidepiffer.com/p/ai-and-the-c…
5 replies · 4 reposts · 15 likes · 2.6K views

James Kovalenko @deburdened
@QuantumTumbler Hilbert space encodes possibilities. Physics is the invariants preserved under evolution and observation. Without that, the question "what can be true" is not predictive.
0 replies · 0 reposts · 0 likes · 5 views
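The reply above can be made concrete with a small sketch (my own toy example, not anything from the thread): a plane rotation stands in for unitary evolution, and the squared norm of the state is the invariant that survives it.

```python
import math

# Toy illustration of "invariants preserved under evolution" (mine,
# not the author's formalism). A rotation is a simple unitary map on
# a real 2-vector; iterating it changes the state, but the squared
# norm is preserved at every step.
def rotate(state, theta):
    a, b = state
    c, s = math.cos(theta), math.sin(theta)
    return (c * a - s * b, s * a + c * b)

def norm_sq(state):
    return sum(x * x for x in state)

psi = (0.6, 0.8)                  # norm_sq = 0.36 + 0.64 = 1.0
evolved = psi
for _ in range(5):                # iterate the "evolution"
    evolved = rotate(evolved, 0.7)

print(norm_sq(psi), norm_sq(evolved))  # both 1.0 up to float error
```

The state after five rotations looks nothing like the start, which is the point: prediction rests on the quantity that does not change.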
B @QuantumTumbler
Most people think physics is about tracking objects in space. It’s not. At the deepest level, it’s about mapping what can be true before anything even happens. That’s what Hilbert space actually is. And once you see it that way, a lot of things click. I wrote a clear breakdown; it’s free right now 👇 open.substack.com/pub/omnilensco…
9 replies · 16 reposts · 81 likes · 4.7K views

James Kovalenko @deburdened
@StuartHameroff Binding is not causation. Discrimination requires necessity and sufficiency. If microtubules are the locus, modulating them alone must control consciousness in vivo.
0 replies · 0 reposts · 0 likes · 15 views

Stuart Hameroff @StuartHameroff
Anesthetics do bind to many molecules nonspecifically, ‘promiscuously’, and yet their actions are specific and selective, affecting consciousness almost exclusively. Binding to membrane receptors and ion channels isn’t the cause of loss of consciousness. The cause is quantum binding to microtubules. The studies showing multiple receptor effects for ketamine, propofol, and other soluble anesthetics don’t recognize that they are also binding and acting on microtubules. As for your contention that GNW has predictions (ignition, global broadcast…), fMRI correlates with neither metabolism nor neural activity. So WHAT is GNW broadcasting? Firings, local field potentials, synaptic transmissions, traveling waves, ephaptic fields? The origin of EEG is unknown and likely comes from microtubules. Predictive coding happens at multiple scales, including among microtubules. IIT and causal structure? Of what? Collapse is causal.
B @QuantumTumbler

Meyer–Overton is just a correlation though. It doesn’t point to a single target, and definitely not specifically to tubulin. The “one target” idea kinda fell apart because anesthetics clearly hit multiple systems: GABA, NMDA, K2P, etc. And those effects have actually been tested directly and line up with people losing and regaining consciousness. That’s the difference for me: those mechanisms show up in real brains; you can measure them, tweak them, and watch behavior change. The tubulin side just isn’t there yet. Right now it’s mostly correlations and modeling, but no clear demonstration that anesthetics are actually disrupting microtubules in living neurons in a way that tracks consciousness. That’s the gap. Saying Orch OR predicts it is one thing, but until it shows up clearly in vivo, it’s not really competing with what we already know; it’s just layered on top. So it’s not about bias, it’s just a simple check. If tubulin is the main thing, it should show up clearly. So far it doesn’t.

4 replies · 3 reposts · 19 likes · 1.5K views

James Kovalenko @deburdened
@kennethd_harris Fitness can rise while coherence decays. Propagation is not validity. Only what remains self-consistent under iteration survives without drift.
0 replies · 0 reposts · 0 likes · 33 views

Kenneth D Harris @kennethd_harris
New preprint, on a very different topic: a mathematical theory of evolution for self-designing AI. AI is increasingly designed by AI. What systems might emerge after generations of self-designing AIs competing for computing resources? ↓ arxiv.org/abs/2604.05142
4 replies · 14 reposts · 67 likes · 4.5K views

Susan Zhang @suchenzang
the problem with knowing too much or too little is that both are prone to entropy collapse
7 replies · 3 reposts · 108 likes · 6.5K views

James Kovalenko @deburdened
@anderssandberg Verification of composability: internal consistency, constraint satisfaction, error detectability, stability under iteration.
0 replies · 0 reposts · 0 likes · 10 views

Anders Sandberg @anderssandberg
This is a cool scale framing: categorizing civilizations by their ability to run a simulation of a simpler civilization.
4 replies · 2 reposts · 28 likes · 2K views

James Kovalenko @deburdened
/3 The cost of debt grows faster than debt. This is the first axiom you should disbelieve until you test it.
0 replies · 0 reposts · 0 likes · 24 views

James Kovalenko @deburdened
/2 Debt is a stock, not a flow. Its units are claims, not claims per time.
1 reply · 0 reposts · 0 likes · 28 views

James Kovalenko @deburdened
/1 Epistemic Debt is what a system carries when it generates faster than it verifies.
1 reply · 0 reposts · 0 likes · 24 views
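The three-tweet thread above can be turned into a toy numerical model (my own sketch; the flow numbers and the 1.5 cost exponent are assumptions, not the author's):

```python
# Toy model of the thread's claims. Debt is a STOCK fed by the flow
# gap between generation and verification (/1 and /2), and the
# carrying cost is assumed superlinear, here cost(D) = D ** 1.5,
# one concrete reading of /3 ("cost grows faster than debt").

def step(debt, generated, verified):
    # Stock update: only the net flow changes the stock.
    return max(debt + generated - verified, 0.0)

def cost(debt, exponent=1.5):
    # Assumed superlinear carrying cost; the exponent is illustrative.
    return debt ** exponent

debt = 0.0
for _ in range(10):          # generate 5 claims per step, verify only 3
    debt = step(debt, 5, 3)

print(debt)                  # stock after 10 steps: 20.0
print(cost(20) / cost(10))   # doubling the stock ~2.83x the cost, not 2x
```

Under these assumptions the stock grows linearly while its cost grows superlinearly, which is exactly the asymmetry /3 asks you to test rather than believe.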
James Kovalenko @deburdened
The paper accurately models the collapse, but treating AI automation purely as a Pigouvian externality (pollution) misdiagnoses the substrate. This is a textbook Sheaf Condition Failure. Each firm’s local decision to automate is mathematically rational and locally verified. The global interaction topology (consumer demand) is incompatible with these isolated local patches. The reconciliation cost diverges to infinity, tearing the macroeconomic sheaf apart. A robot tax just adds arbitrary friction. To survive the fold, we have to structurally rebuild institutional Verification to match the new, unbounded Variation AI provides.
0 replies · 0 reposts · 0 likes · 10 views

Priyanka Vergadia @pvergadia
🤯BREAKING: Researchers just mathematically proved that AI layoffs will collapse the economy, and every CEO already knows it. The AI Layoff Trap. A game theory paper from UPenn + Boston University is glaringly important! 100K+ tech layoffs in 2025. 80% of US workers exposed. And no market force can stop it.
→ Every company fires workers to cut costs
→ Every fired worker stops buying products
→ Revenue collapses across every sector
→ The companies that fired everyone go bankrupt
It's a Prisoner's Dilemma with math behind it. Automate and you survive short-term. Don't automate and your competitor kills you. But everyone automating destroys the demand that makes all companies viable. UBI (universal basic income) won't fix it. Profit taxes won't fix it. The researchers found only one solution: a Pigouvian automation tax, a "robot tax". The AI trap on the economy is here!
535 replies · 2.1K reposts · 8.7K likes · 1.4M views

James Kovalenko @deburdened
@rapid_rar2 @ElliotLip Independence is a structural constraint that makes repeated composition stable. That’s why it sits at the center of probability and barely matters in raw measure theory.
0 replies · 0 reposts · 0 likes · 15 views
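A minimal worked example of the point (mine, not from the thread), using two fair dice: independence is exactly the property that lets joint probabilities be composed by multiplication, and with exact rationals the composition stays stable under iteration.

```python
from fractions import Fraction
from itertools import product

# Sample space for two fair dice, built by composition.
space = list(product(range(1, 7), repeat=2))

def prob(event):
    # Probability under the uniform measure on the product space.
    return Fraction(sum(1 for w in space if event(w)), len(space))

A = lambda w: w[0] % 2 == 0    # first die even: probability 1/2
B = lambda w: w[1] > 4         # second die in {5, 6}: probability 1/3

# The joint probability factors -- the definition of independence.
print(prob(lambda w: A(w) and B(w)))   # 1/6
print(prob(A) * prob(B))               # 1/6, the product rule exactly
```

A measure space of total mass one has no such multiplication rule built in; independence is the extra structure that makes probability more than Tao's "strings of digits" quip suggests.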
Elliot Lipnowski @ElliotLip
Recently reminded of a beloved line from Terrence Tao's blog: "At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate."
9 replies · 68 reposts · 1.2K likes · 53.8K views

James Kovalenko @deburdened
Green numbers let you feel good today. The shape of the curve tells you whether tomorrow will be easier or harder.
0 replies · 0 reposts · 0 likes · 21 views

James Kovalenko @deburdened
@anderssandberg Exploration increases state space. Verification must scale with it or drift dominates. Without that balance, exploration accelerates collapse.
1 reply · 0 reposts · 0 likes · 12 views

Anders Sandberg @anderssandberg
@deburdened I think you are assuming the simulation is intended to follow a particular trajectory rather than exploring new states (which was the reason in the original marathon bluesky thread).
1 reply · 0 reposts · 0 likes · 31 views

James Kovalenko @deburdened
@ToKTeacher Two modes:
Theory-dominant: you only see what your framework allows; novel signals get filtered out as noise.
Data-dominant: you generate patterns without constraint; you overfit noise and accumulate contradictions.
Both are incomplete.
0 replies · 0 reposts · 0 likes · 56 views

Brett Hall @ToKTeacher
As if Popper never existed (again). A crucial sense in which theory comes first in science is: any data collected will be collected according to pre-existing theories whether anyone acknowledges them or not. Eg: how data collection devices work, theories of uncertainties, etc.
Itai Yanai @ItaiYanai

There's a strange myth about science: that theory comes first, and that data cannot show anything new. But anyone who's ever done science knows the truth that there's a long conversation between data & hypotheses. Back & forth.. until the discovery. And if you think about it, it has to be this way! (Night Science recap, Day 6)

5 replies · 6 reposts · 105 likes · 17.3K views

James Kovalenko @deburdened
The distinction matters only if it tracks a real difference in what must be explained. The easy problems concern functions that are externally observable and verifiable. The hard problem claims that even after every one of those functions is fully explained, a separate fact of subjectivity remains. To make this a rigorous analysis, one must identify what specifically is left unaccounted for once the functional account is closed.
0 replies · 0 reposts · 1 like · 36 views

James Kovalenko @deburdened
@prathoshap yes, the real work is building systems where those equations enforce themselves. If correctness still depends on interpretation, intuition, or cleanup after the fact, it doesn’t scale.
0 replies · 0 reposts · 0 likes · 73 views

prathosh ap @prathoshap
@deburdened Of course. It's a necessity for becoming a good researcher, but not sufficient. You obviously know the difference.
1 reply · 0 reposts · 4 likes · 735 views

prathosh ap @prathoshap
The tech industry convinced an entire generation of developers that they can skip the math. They are wrong. You cannot build foundational architecture with a "GenAI in 5 days" bootcamp. Real engineering requires staring at the equations until they make sense. If you actually want to build state-of-the-art models, you cannot skip the math. No fluff, no quick-bytes. Just the mathematical foundations of Deep Generative Modeling. Here is a course I designed to cover everything you need to learn.
34 replies · 159 reposts · 1.4K likes · 78.4K views

James Kovalenko @deburdened
Asserting the primacy of consciousness while biology serves as its coupling fails to define a mechanism. The central problem is the maintenance of coherence. Systems must generate candidates, verify them, propagate structure without loss, and retain verified forms. If variation outpaces verification, errors accumulate and the system collapses. This failure mode is independent of the substrate. Biological systems persist because they regulate intake, correct deviations, and preserve structure. This reliability sustains coherence under continuous change. Persistence depends on viability. A system that cannot filter, correct, and preserve its own structure will not last.
0 replies · 0 reposts · 1 like · 34 views

OCEAN @OCEANVIN10
If consciousness is bottom-up — preceding life rather than emerging from it — then the cognitive light cone ran in reverse from our usual assumptions. The boundary wasn't expanding outward from a brain. It was contracting *toward* biology, phase-locking to a substrate that could sustain it. Fine tuning as a coupling problem, not an anthropic selection effect.
17 replies · 6 reposts · 65 likes · 2.8K views