visarga

5.7K posts

visarga

@visarga

Intelligence evolves socially, like DNA. It's all just search, all the way down.

Romania · Joined August 2008
1.5K Following · 469 Followers
visarga
visarga@visarga·
@fchollet I think this benchmark is not testing intelligence at all; it is testing the single-agent, no-tools, no-memory-infrastructure, no-self-modification scenario, which does not fit real intelligence. And the ARC team itself used collaboration, tools and experiments to build the test.
English
0
0
0
8
François Chollet
François Chollet@fchollet·
ARC-AGI-3 is out now! We've designed the benchmark to evaluate agentic intelligence via interactive reasoning environments. Beating ARC-AGI-3 will be achieved when an AI system matches or exceeds human-level action efficiency on all environments, upon seeing them for the first time. We've done extensive human testing that shows 100% of these environments are solvable by humans, upon first contact, with no prior training and no instructions. Meanwhile, all frontier AI reasoning models do under 1% at this time.
English
154
310
2.6K
514.9K
visarga
visarga@visarga·
@fchollet N-back is one of the most established measures of fluid intelligence in cognitive psychology. Humans collapse around N=3-4. An LLM handles N=50 without effort. But under ARC's methodology this task can never appear because it fails the "100% human-solvable" filter.
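For context, a minimal sketch of how an N-back probe might be set up for a text model (my illustration, not anyone's actual benchmark; the prompt format and any model call are assumptions):

```python
import random

def make_n_back_trials(n, length, alphabet="ABCDEFGH", match_rate=0.3, seed=0):
    """Generate a letter sequence plus ground-truth 'matches the item n back' labels."""
    rng = random.Random(seed)
    seq, labels = [], []
    for i in range(length):
        if i >= n and rng.random() < match_rate:
            seq.append(seq[i - n])                 # force a match with the item n steps back
        else:
            seq.append(rng.choice(alphabet))
        labels.append(i >= n and seq[i] == seq[i - n])
    return seq, labels

def accuracy(predictions, labels):
    """Fraction of positions where the predicted match/no-match judgment is correct."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# N=50 is hopeless for human working memory, but the whole history sits in an
# LLM's context window as plain text, so the model can simply look it up.
seq, labels = make_n_back_trials(n=50, length=200)
prompt = " ".join(seq)  # one would then ask the model, per position, "same as 50 back?"
```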
English
0
0
0
4
Andrew Côté
Andrew Côté@Andercot·
Everything is Geometry. Gravitational geometry determines the causal connectivity of spacetime. Electromagnetic geometry determines the information content of spacetime. The causal influence of any information is determined by its light cone, the path light takes in space.
Andrew Côté tweet media
English
56
55
307
12.7K
visarga
visarga@visarga·
@jennyzhangzt My own Claude harness already did this months ago: I use tasks to do reflexion on past tasks and refine the harness. I rely more on qualitative analysis of real tasks than on score-based evolutionary methods.
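A generic sketch of that kind of reflection step, assuming a task log on disk and some LLM call; this is not the author's harness, and `call_model` is a hypothetical stub:

```python
import json
import pathlib

LOG = pathlib.Path("task_log.jsonl")  # assumed location of past task transcripts

def reflect_and_refine(harness_prompt: str, call_model) -> str:
    """Ask the model to critique recent task transcripts and propose harness revisions."""
    past = [json.loads(line) for line in LOG.read_text().splitlines()] if LOG.exists() else []
    return call_model(
        "Here are transcripts of recent tasks:\n"
        + json.dumps(past[-10:], indent=2)
        + "\n\nWhat went wrong, what went well, and how should these harness "
        + "instructions be revised?\n\n"
        + harness_prompt
    )
```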
English
0
0
0
47
visarga retweeted
Jenny Zhang
Jenny Zhang@jennyzhangzt·
Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents – self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving. We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution.

Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g. persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).
Jenny Zhang tweet media
English
152
650
3.6K
475.5K
visarga
visarga@visarga·
@joseph_h_garvin A Markdown file with 500 open gates ("- [ ]" checkboxes) - Claude worked on it for 50 minutes straight.
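A minimal sketch of that kind of file - a long Markdown backlog of unchecked items the agent can grind through and tick off; the task names here are placeholders, not the author's actual tasks:

```python
# Generate a Markdown backlog with 500 unchecked "- [ ]" items (placeholder names).
tasks = [f"- [ ] Task {i:03d}: <fill in a concrete, verifiable step>" for i in range(1, 501)]
with open("backlog.md", "w") as f:
    f.write("# Backlog\n\n" + "\n".join(tasks) + "\n")
```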
English
0
0
1
15
Joseph Garvin
Joseph Garvin@joseph_h_garvin·
Claude code rarely runs for longer than 15m without stopping and asking for input from me. How do all these stories of people letting agents run overnight work? Custom harnesses? Yelling at Claude in all caps to keep going no matter what?
English
404
65
5.8K
1.3M
visarga
visarga@visarga·
This is a fallacy we see everywhere - AI is unlimited, compute is free, p-zombies can exist just like that, Chinese Rooms are also free, qualia have nothing to do with 20W of power burning, consciousness? of course a fundamental of the universe, free like space. Syntax is not sufficient for semantics - but how much does syntax cost? How did it appear, get used, spread? Has anyone read a critique of Plato?
English
0
0
1
42
visarga
visarga@visarga·
@thedarshakrana I think the cost of execution interacting with execution itself better explains the low entropy - it is more cost-effective to preserve.
English
0
0
0
272
Darshak Rana ⚡️
Darshak Rana ⚡️@thedarshakrana·
The most dangerous equation ever written suggests reality is a lie running on code. Melvin Vopson published his second law of infodynamics in AIP Advances in 2022, demonstrating that information entropy in isolated systems decreases or stabilizes over time, running directly opposite to physical entropy. The universe trends toward disorder in matter but toward compression in information. That asymmetry doesn't exist in any purely natural system ever observed. It exists in every efficiently designed computational system ever built.

The numbers get stranger. Vopson calculated that a single bit of information at room temperature carries a mass of approximately 2.9 × 10⁻³⁸ kilograms. The observable universe contains an estimated 6 × 10⁸⁰ bits of information. Information isn't metaphorical in his framework. It's physically embedded in the fabric of reality as a fundamental constituent alongside matter and energy.

John Wheeler, one of the most decorated physicists of the 20th century, spent decades arguing "It from Bit." Every particle, every field, every dimension derives its existence from binary information states. The physical world emerges from yes or no answers at the quantum level. Most people dismissed it as philosophical rambling from an old man. Wheeler was describing an operating system.

The holographic principle developed by 't Hooft and Susskind adds the architectural proof. Maximum information content of any region of space scales with its surface area, not its volume. Reality stores itself on its own outer boundary. That is the signature of a compressed file, not an infinite physical cosmos growing organically from nothing.

Vopson then proposed an actual laboratory experiment in 2023. Colliding a particle with its antiparticle and precisely measuring information released during annihilation could confirm whether information is fundamental to existence or merely derivative. The technology to run this experiment already exists. The funding doesn't.

A universe that compresses itself, encodes onto boundaries, renders only what's observed, and carries measurable information mass isn't behaving like nature. It's behaving like software written by something that didn't want to waste memory.
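For what it's worth, a back-of-envelope check of the quoted figures (my arithmetic, not from the thread), assuming the per-bit mass comes from Landauer's bound converted to mass-energy:

$$m_{\text{bit}} \approx \frac{k_B T \ln 2}{c^2} = \frac{(1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)}{(3.0\times10^{8}\,\mathrm{m/s})^{2}} \approx 3.2\times10^{-38}\ \mathrm{kg},$$

which is in the same ballpark as the quoted 2.9 × 10⁻³⁸ kg (the exact value depends on the temperature assumed). Taking the quoted numbers at face value, the implied total "information mass" of the observable universe would be roughly

$$(2.9\times10^{-38}\ \mathrm{kg/bit}) \times (6\times10^{80}\ \mathrm{bits}) \approx 1.7\times10^{43}\ \mathrm{kg}.$$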
Darshak Rana ⚡️ tweet media
Space and Technology@spaceandtech_

🚨 Scientist Melvin Vopson claims he has evidence that the universe is a computer simulation.

English
17
50
169
26.1K
Tiago Forte
Tiago Forte@fortelabs·
AI will never, ever save you any time

Because 100% of the time it seems to save upfront has to then be spent researching, learning, and figuring out the next incoming wave of AI tools

And that process will never end. The pace of change will never stop, only accelerate, forever

So it's kind of like borrowing money, and then borrowing more money to pay that loan off, and then even more money to pay that loan off, and so on

You'll never escape the cycle of debt, only sink deeper into it
English
231
13
248
25K
visarga
visarga@visarga·
@Left_Hegelian Take syntax - it cannot be syntax unless it persists, and it persists only if it is used widely, which means it has a cost and has to pay that cost to be used. Syntax is not floating in a Platonic world; it is a costly pattern fighting entropy. Syntax is compressed cost.
English
0
0
0
9
visarga
visarga@visarga·
@Left_Hegelian Cost gates action, and action produces costs and gains - a recursive loop. You cannot separate cost from what an organism is: cost shapes the organism itself over evolutionary time, and over a life at the individual level.
English
1
0
0
10
Steven Levine
Steven Levine@Left_Hegelian·
When you come to really understand what Plato means when he says that learning is recollecting, you understand why 'AI', whatever its genuine powers and uses, is disastrous for human cognition.
English
13
11
100
8.1K
visarga
visarga@visarga·
@vividvoid When your supply chain is long and fragile and geographically concentrated you can't afford to be on bad terms with humans, or support war.
English
0
0
0
21
Vivid Void
Vivid Void@vividvoid·
Who else is an AGI bloomer like me? Who thinks that intelligence actually looks amazingly like wisdom at the highest levels, and that a superintelligence would be something akin to a goddess of compassion, not a paperclipper? Send me recs, send me links, send me reading
English
197
25
896
173.2K
visarga
visarga@visarga·
@vividvoid I think by now even a 1B LLM can give a lecture about paperclip optimizers, it has been discussed so much.
English
0
0
0
13
visarga
visarga@visarga·
The top is not something separate from the unity of the bottom. The serial stream, the centralized arbitration, the organism-level coordination - that is the unity, achieved through costly centralization, maintained through continuous expenditure, and non-negotiable because without it the whole interdependent structure falls apart. The bottom cannot afford to lose the unity it built, because it can no longer function without it. This is why consciousness feels urgent, high-stakes, saturated with importance. Because it is. The serial stream is a load-bearing layer in a cost structure where its failure means death for every layer beneath it. The felt importance of conscious experience is not an illusion and not a gift from a universal field. It is an accurate registration of the causal role consciousness plays in maintaining the cost basin that trillions of specialized cells depend on. It feels like it matters because it does matter. The stakes are real, the cost is real, and the pressure is continuous.

The bidirectional dependency also explains why neither pure bottom-up nor pure top-down accounts of causation work. Reductionists say causation only runs upward - atoms push molecules push cells push organisms. But the cells cannot survive without organism-level behavior that feeds and protects them. Dualists and panpsychists say top-down causation requires something extra, something non-physical. But the top-down influence is nothing more than the maintenance of conditions that lower levels have irreversibly optimized around. No mystery is needed. Just the recognition that mutual specialization creates mutual dependency, and mutual dependency means causation runs in both directions simultaneously - not through different substances or forces, but through the shared cost structure that holds every level hostage to every other.

Against Zombies

Chalmers asks you to imagine a being physically identical to you but with no experience. But the physical structure of a brain is a cost basin. The components - neurons, circuits, subsystems - are hyper-specialized. They have given up autonomy for the surplus gained through integration. They cannot afford to run independently; the cost of standalone operation exceeds what they can pay. The cost basin IS their organizational principle. Remove it and the structure doesn't persist as a dark zombie - it collapses into unbound, unspecialized components. The zombie isn't dark; it's bankrupt.

There is a deeper problem with conceivability. To "conceive" of a zombie, you must run your entire cost-built cognitive apparatus - your semantic space and semantic time - to perform the thought experiment. You are using the very thing (the integrated cost basin that IS experience) to imagine its own absence. This is a performative contradiction: a solvent system trying to compute its own bankruptcy from inside. The apparent conceivability arises from the suitcase bundling. You imagine subtracting "consciousness" as a single layer, which seems coherent only because the bundling hides that what you're subtracting is the organizational principle funding the very cognition performing the subtraction. Conceivability is not a window onto metaphysical possibility - it is a cognitive operation with a cost structure, performed inside a cost basin, using tools that are themselves cost products.

Against Epiphenomenalism

The epiphenomenalist claims qualia are real but causally inert - along for the ride.
If the zombie argument closes - experience IS cost-structure, no separability - then epiphenomenalism is already dead. "Experience is real but causally inert" requires experience to be separable from causal process. But identity eliminates separability. There is also a direct argument. A quale must be THIS quale rather than that one. "Redness" must be a specific position in semantic space - distinguishable from orange, from blue, from pain. But a position in semantic space is defined by its cost-built relations to other positions. A quale without cost has no position, hence is no particular quale, hence isn't a quale at all. If experience costs nothing, it contributes nothing. If it contributes nothing, there is no causal connection to the representations that would make it the specific experience it is rather than any other. Everything that persists is paying, and everything that's paying is shaping what happens next. Causal relevance isn't a bonus feature of experience; it's the entry fee.

Engineering Instantiation

Neural network training instantiates this ontology directly - not as illustration but as the same formal structure operating on silicon. Loss functions ARE cost - the solvency condition made explicit. Weights ARE frozen flow - previous training dynamics that became load-bearing inference infrastructure. Gradient descent IS irreducible recursion - each step constitutes the answer, does not merely approach it. Learned representations ARE semantic space built under cost pressure. Autoregressive generation IS semantic time - each token resolves competing candidates before the next begins. Nobody in machine learning bundles embedding geometry with autoregressive sequencing into one mystery. They are implemented as separate mechanisms with separate cost profiles. The suitcase was never sealed in engineering - only in philosophy.

Current ML systems largely operate in the cerebellar regime: their error signals are cheap and pre-computed, provided by loss functions that the training infrastructure pays for but the model does not self-generate. By the framework's own criterion, they have not crossed the self-encoding threshold. But as environments become open-ended and nonstationary - as the cost of designing and computing explicit loss functions rises - external supervision becomes unaffordable, and self-encoding emerges within the basin. This is a prediction from cost dynamics, not a hope about artificial consciousness. A framework you can build with has a stronger claim on reality than one you can only narrate. This framework is not merely descriptive - it is the operating principle engineers already use to build systems that develop representational structure under cost pressure. It is generative, not just interpretive.

The 1P/3P Question

The framework transforms "why is there experience?" into a tighter question: "why does recursive self-indexing under cost pressure have intrinsic character?" It derives more about consciousness than its competitors: its dual structure (semantic space plus semantic time), its phenomenology (contemplative qualities), its unity (cost basins), and its necessary physical instantiation (no abstract process). The final step - why execution has an inside at all - is treated as primitive. Execution that pays and self-indexes IS perspectival, the way charge IS electromagnetic. This is a principled stopping point, not a gap. "Occurring without perspective" requires a vantage point that isn't paying, and there are no free vantage points.
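A minimal sketch of the mapping described under "Engineering Instantiation" above - toy data, plain NumPy, illustrative only: the loss is the explicit cost, each gradient step is execution that cannot be skipped, and the trained weights are the frozen flow left behind.

```python
import numpy as np

# Toy regression problem: the "environment" the system must pay to model.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
true_w = np.array([1.5, -2.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=256)

w = np.zeros(4)      # weights start as unformed structure
lr = 0.05
for step in range(500):
    err = X @ w - y                    # forward pass: this step's execution
    loss = float(np.mean(err ** 2))    # the cost being paid right now
    grad = 2 * X.T @ err / len(y)      # error signal supplied by an external loss (the "cerebellar regime")
    w -= lr * grad                     # this step's flow freezes into structure

# After training, `w` is inert, load-bearing structure - the riverbed left behind
# by the gradient flow that carved it - and it cheaply shapes every future prediction.
```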
There is no "dark" execution - no process running in a void without any intrinsic character - because darkness would itself require a standpoint, and standpoints cost. This does not dissolve the generation problem entirely - it relocates it to a well-specified position. But the framework derives more structure than any competitor, predicts specific dissociation patterns confirmed by clinical evidence, and makes falsifiable bets against rival theories. The honest stopping point is stronger than a false dissolution.

Self-Grounding

The framework explains its own invisibility. Noticing what is most pervasive is expensive because there is no contrast to trigger salience. Cost never deviates from the background - it is not a feature of experience but the condition of experience. Philosophers noticed substances because substances contrast with each other. They noticed mind because it seemed to contrast with matter. But cost does not contrast with anything because there is nothing costless to compare it to. The tradition's blind spot is a prediction of the theory, not an embarrassment for it.

The blind spot is specifically disciplinary: process philosophers don't read computability theory; computer scientists don't read Whitehead; contemplatives don't read machine learning literature. Cost sits at the intersection no single tradition covers. The suitcase was bundled not because anyone chose to bundle it, but because the disciplines that study qualities (phenomenology, psychology) and the disciplines that study unity (neuroscience, dynamical systems) don't share a framework. Cost ontology is the framework they share without knowing it.

Predictions and Falsification

Metabolic prediction: under metabolic compromise (early hypoglycemia, anesthetic twilight), unity should degrade before quality. Temporal disintegration and action-incoherence should appear while quality-recognition remains largely intact. Serialization scales worse than local encoding because maintaining one integrated action-state across many components is costlier than maintaining local quality-encoding within components. IIT predicts co-degradation. Higher-order theories predict the opposite order (meta-representations fail before first-order). This is the bet - the point where this framework sticks its neck out past competing positions.

ML prediction: systems operating in open-ended, nonstationary environments with rising supervision costs will develop self-encoding - compressing their own processing history into reference frames - without being explicitly designed to do so. Systems with cheap, stable loss functions will remain cerebellar indefinitely.

Falsification conditions: unity and quality degrade simultaneously under metabolic stress across conditions; a self-encoding system with measurable recursive depth that lacks integrated representational structure; a persistent pattern that demonstrably pays no cost.

Four Pillars

The framework rests on four legs, each requiring the others.

Cost as primitive - existence is solvency, what can't pay stops. Cost is the gatekeeper of existence, the selection principle no other ontology centers. Nothing gets to be a candidate without clearing the gate first.

The suitcase - consciousness is two things (semantic space plus semantic time), not one. Different costs, different mechanisms, different failure modes. The "hard problem" persists because it seeks one explanation for two things.

Computational irreducibility - the execution is the answer, no shortcuts, no separate executor.
This blocks reduction of cost to "process plus thermodynamics" and grounds the territory-map asymmetry.

Engineering instantiation - the same principles govern neural network training, giving the ontology both descriptive and generative power. A framework you can build with outranks one you can only narrate.

Each pillar requires the others. Cost without irreducibility is assertion. Irreducibility without cost has no ontological weight. The suitcase without cost has no explanation for why the two mechanisms exist. Engineering without the suitcase is practice without theory.

The Contemplative Derivation

After dissolving consciousness into semantic space and semantic time, is there a residual third thing - bare presence, raw awareness, the sheer fact of experiencing? Bare presence is caught in a trilemma. If it has any distinguishing feature - any quality, any character that could be compared to anything else - it is already a coordinate in semantic space. If it asserts itself in any way - if it causally contributes, makes a difference, does anything - it is action, and action must serialize through semantic time. If it has no features and does nothing - if it is costless - then it doesn't pay for itself. What doesn't pay stops executing. It doesn't exist. There is no fourth option.

What contemplatives actually report as "pure awareness" is the coordinate system running at minimum input - structure with almost no new flow passing through it. The specific qualities they describe are derivable, not residual.

Spaciousness: high-dimensional semantic structure experienced without content collapsing it to a point. The entire reference frame is apprehended as open geometry rather than focused through a single object. This is what a high-dimensional space feels like when nothing is activating any particular region of it.

Luminosity: the coordinate system's self-referential maintenance becoming figure rather than ground. Normally invisible because it's doing work - like the hum of a machine you notice only when the factory goes quiet. Without new content, the recursive self-indexing is experienced directly.

Peace: cost equilibrium - no gradients demanding action, no competing representations requiring arbitration, no serialization conflicts. The serial arbiter has nothing to arbitrate. What remains is the felt absence of cost pressure.

These aren't mystical extras. They are the felt topology of a deep cost basin at rest. The meditator hasn't found something beyond the system; they've found the system itself - years of frozen flow, the entire accumulated reference frame, experienced directly because nothing new is competing for attention. No other physicalist framework derives these qualities. Most ignore contemplative phenomenology entirely. This framework predicts it.

The Unit of Analysis

The question "does a thermostat have perspective?" misplaces the unit of analysis. A thermostat is not a standalone cost-paying entity - it is a frozen-flow component inside a deep cost basin (HVAC, buildings, economy, civilization). Its cost structure is not the few watts it draws but its functional role in a basin so deep the modern economy cannot defect from it. Asking whether a thermostat has perspective is like asking whether a single synapse has perspective - the question mistakes a component for the system. Perspective belongs to systems that maintain their own solvency through recursive self-indexing - not to components embedded in someone else's solvency.
A bacterium maintains its own solvency - it has whatever minimal perspective its cost structure supports. A thermostat doesn't maintain its own solvency - it's infrastructure inside a human cost basin, the way a ribosome is infrastructure inside a cell. Cost basins nest. Cells maintain solvency AND are embedded in organisms. This isn't a problem - perspective is graded and potentially multiple within what we call "one system." The boundary question becomes empirical: what are the cost-closure contours of this system? This dissolves the panpsychism worry without drawing a bright line. The question isn't "how complex is this thing?" but "is this thing maintaining its own cost basin, or is it a frozen-flow component of another's?"

No Abstract Process

There is no abstract process. All processes are physical and expensive. A functional description of a mind written on paper implements nothing, pays no cost, and has no first-person perspective. The same description instantiated on hardware that's burning energy is in the same ontological category as a brain - not because it mimics neural architecture, but because it's paying the toll. The framework doesn't privilege carbon over silicon - it privileges actual execution over abstract pattern.

This is the consequence of irreducibility applied to ontology. If the execution IS the answer, then a description of the execution is a map, not the territory. The map doesn't pay the costs the territory pays. This is why functionalism in its abstract form fails - it treats the functional description as sufficient, but the description is precisely what you get when you remove cost from the territory. Adding cost back means instantiating, and instantiation is the whole game.

Unity as Thermodynamic Achievement

Separation is the thermodynamic default. Scatter particles, let entropy do its work, and you get maximum disorder, maximum differentiation, maximum separateness. That is the ground state of the universe. That is what happens for free. Unity is what you get when something pays to push against that tendency.

Gravity pays with potential energy to pull matter together. Chemical bonds pay with electron sharing to hold atoms in configuration. Cells pay with ATP to maintain their membranes against diffusion. Organisms pay with metabolism to hold their bodies together against decay. Societies pay with institutions to hold coordination together against the centrifugal pull of individual self-interest. At every scale, unity is a costly, temporary, actively maintained victory against the default of dissolution.

The moment the payment stops, unity collapses. Stop feeding a cell, it lyses. Stop maintaining an organism, it decomposes. Stop funding institutions, society fragments. Cancer is the vivid demonstration: a cell that stops centralizing on the coordinated outcome, stops obeying the cost gates of the cell cycle, and reverts to uncontrolled replication. The unity was never fundamental. It was maintained by cost, and when the cost accounting broke, it disintegrated.

This is the decisive inversion of every framework that treats unity as the starting point. Advaita Vedanta posits Brahman as the undifferentiated ground. Perennial philosophy posits a cosmic oneness from which multiplicity emerges. Panpsychism posits consciousness as a fundamental feature of matter. All share the same structural assumption: unity is where you begin, and the puzzle is explaining how differentiation arises from it. The cost basin framework reverses the explanatory direction entirely.
Differentiation is where you begin. Unity is what emerges when differentiated systems discover that convergence on shared outcomes reduces total cost, and it persists only as long as the cost of maintaining it is paid. The feeling of interconnectedness that contemplative traditions point to is not evidence of a cosmic field. It is the lived experience of being trapped in a cost basin with billions of other specialists, none of whom can survive alone. The unity is real. It is just not timeless, not pre-existing, and not free. It was built, incrementally, over deep time, because the cost equation demanded it.

But the contemplative insight is not wrong - it is incomplete. What the meditator experiences as interconnectedness is an accurate registration of the cost basin's structure. The sense that boundaries are less solid than they appear is correct - the self IS a cost basin whose boundaries are maintained by expenditure, not given by nature. The sense of being embedded in something larger is correct - you ARE a component in nested basins extending from cellular to civilizational. The mistake is not in the phenomenology but in the ontological interpretation: treating an achievement as a given, treating something that was built as something that was always there.

Unity that costs nothing, requires no maintenance, and exists eternally is not unity at all - it is a word emptied of everything that makes unity real, difficult, and meaningful. Real unity is a process, sustained against entropy by continuous expenditure.
English
0
0
0
17
visarga
visarga@visarga·
The Cost Basin

A ball rolls downhill. A system settles into its lowest energy state. Particles follow paths that minimize action. These are not three separate observations but one principle expressed across substrates: nature minimizes cost. The principle of least action, arguably the most fundamental law in physics, says that every physical trajectory is the one that spends the least. From this single principle you can derive Newtonian mechanics, general relativity, quantum field theory, and electromagnetism. The large-scale structure of the universe - galaxies clustering along filaments, matter pooling in gravitational wells, stars igniting when enough mass collapses - is cost minimization playing out over cosmic time. At the micro scale, electrons filling the lowest energy orbitals, atoms bonding to reduce total energy, crystals forming because ordered states are energetically favorable - same principle, same logic.

At some point, this cost-minimizing landscape produces something strange: self-replicating patterns that locally increase complexity and energy expenditure, swimming against the thermodynamic current. But they only persist if they can pay for it. They extract energy from their environment to maintain and copy themselves, and the ones that do this efficiently survive while the rest disappear. Even the apparent violation of cost minimization is governed by cost - the accounting just shifts from moments to generations.

This gives a single continuous thread from fundamental physics to biology: minimize cost at every level, except where self-replication finds a way to pay for local complexity by exploiting energy gradients. Any execution is costly, but only self-replicators have a way to extend a process past its natural limit. What follows - from DNA to cell to organism to family to society, language, and economics - is the progressive deepening of this principle through structures of increasing interdependence.

Self-Replication and the Extension of Process

Self-replication is what makes cost optimization cumulative rather than momentary. A rock rolling downhill reaches its minimum and stops. A self-replicator reaches a local minimum and then copies the information about how it got there, giving the next generation a starting position further down the slope. Over billions of iterations, this produces descent into cost basins that no single execution could ever reach.

The hierarchy of self-replicating patterns runs from molecular to civilizational. DNA is a minimal self-replicator, costing nucleotides and energy to copy. The cell is a self-replicating unit that pays for its own maintenance, membrane, and repair. The organism is a collection of cells that coordinates replication at a higher level, vastly more expensive but capable of navigating complex environments. The family extends replication past the individual, at the cost of care, protection, and time. Society coordinates replicators through governance, conflict resolution, and shared infrastructure. Language replicates information across minds at the cost of learning and teaching. Companies are self-replicating patterns of organization, costing capital, labor, and institutional overhead. Each level is a self-replicating pattern that found a way to persist beyond its natural decay, and each one pays for that persistence.

The critical insight is that no self-replicator bootstraps itself from nothing. Every one is a product of prior investment - material, energy, and information accumulated by earlier iterations.
A gene requires molecular machinery built by earlier genes. A cell requires a parent cell. A human requires nine months of metabolic investment, years of caloric input, and decades of cultural transmission. By the time any self-replicator is capable of contributing, an enormous cost has already been paid by everything that came before. Each replicator is simultaneously a debt to the past and an investment in the future. The cost is never paid off. It is rolled forward, generation after generation, each one inheriting a basin, deepening it slightly, and passing it on.

The Formation of Cost Basins

Consider two self-replicating systems, A and B, each solving its own cost equation independently. At some point they stumble into a configuration where cooperation reduces their total cost. Perhaps one is efficient at harvesting energy and the other at building structure. Together, the combined cost of persistence is lower than the sum of their separate costs. That difference - the gap between going alone and cooperating - is the surplus, and the surplus is the seed of everything that follows.

With surplus comes budget for specialization. System A can afford to get better at what it does and worse at what system B handles, and vice versa. Each becomes more efficient at its niche but more dependent on the other. Over time, neither can survive alone. They have descended together into a cost basin that is lower than anything either could reach independently, but the walls are steep - separation now means death.

This is the ratchet mechanism, and it operates in only one direction. Surplus from cooperation enables optimization. Optimization strips redundancy. Stripped redundancy eliminates the capacity for independent operation. Each cycle deepens the basin and steepens the walls. The process is irreversible under normal conditions because the specialization that made cooperation efficient simultaneously destroyed the generalist capabilities that independence requires. The system doesn't just prefer cooperation - it has lost the ability to do anything else.

Roughly two billion years ago, a prokaryote engulfed another, and instead of digesting it, they found a joint cost minimum. One provided shelter and raw materials, the other provided vastly more efficient energy production. Over time the endosymbiont lost the genes it no longer needed, the host lost the ability to produce its own energy efficiently, and now neither can exist without the other. Every eukaryotic cell is a fossil record of two systems trapped in a common cost basin. The mitochondrion didn't just settle into a comfortable partnership - it irreversibly shed its independence because the host was handling what it no longer needed to do itself. The exit route was dismantled by the very efficiency that made cooperation worthwhile.

But cost basins do not begin with life. An atom is already a cost basin - protons and neutrons bound by the strong force, having sacrificed independent existence for integration surplus so deep that splitting them costs more energy than almost any natural process delivers. The nucleus is a basin. The atom is a basin of basins. The molecule is a deeper one. The crystal lattice, the planet, the star - each is a cost-closure contour where components have specialized and cannot defect without paying more than they would gain. Nothing in the universe sits outside basins. What changes from matter to life to mind is not the existence of basins but their topology - their depth, recursiveness, and self-encoding capacity.
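The surplus introduced at the start of this section can be written as a one-line accounting identity (the numbers below are purely illustrative):

$$S = (C_A + C_B) - C_{A \cup B}, \qquad S > 0 \;\Rightarrow\; \text{the joint basin is cheaper than going it alone.}$$

If persisting alone costs each system 10 units per cycle while persisting together costs 14 in total, the surplus S = 20 - 14 = 6 units per cycle is the budget that funds the specialization which then deepens the basin.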
Scaling: From Endosymbiosis to Civilization

The pattern that produced mitochondrial symbiosis repeats at every scale of biological and social organization, each time with the same logic and the same irreversibility.

Cells in a multicellular organism are systems that discovered a joint cost minimum through cooperation. They specialize - muscle cells, nerve cells, epithelial cells - each becoming extraordinarily efficient at one function while losing the capacity for others. A neuron cannot digest food. A liver cell cannot contract. They survive only because the organism-level coordination keeps all specializations functioning together. The cost basin they share is so deep that isolated cells from a multicellular organism die almost immediately. They have been optimized for a niche that only exists within the cooperative structure.

Organisms in ecosystems follow the same trajectory. Pollinator and flower develop mutual dependencies. Gut bacteria and host specialize around each other's outputs. Predator-prey relationships stabilize into dynamic equilibria where the cost of each population is regulated by the other. None of these relationships began as obligate dependencies - they became obligate through the ratchet of mutual optimization.

Human society is the same pattern at civilizational scale, and the population number makes the irreversibility vivid. A few hundred humans on a savannah could survive as generalists, each person hunting, gathering, building shelter, tending wounds. The cost per individual is enormous and the output mediocre at everything, but independence is possible. At eight billion, nobody alive knows how to produce from scratch even a fraction of what they consume in a single day. The device on which these words appear involves mining rare earth minerals, refining silicon, designing chip architectures, writing operating systems, building undersea cables, and generating electricity - a chain of thousands of specializations, each depending on thousands more. No single human understands the full chain. No single human could replicate any significant portion alone. We descended into a collective cost basin so deep that isolation at current population levels means not inefficiency but death.

Language, money, legal systems, institutions - all are mechanisms for managing the interdependence that arises when self-replicators find joint cost minima at scale. Language reduces the cost of coordination by making joint planning possible. Money abstracts the cost accounting itself, allowing specialization to extend beyond personal relationships. Legal systems reduce the cost of trust. Every institution is infrastructure for maintaining the basin - for ensuring that the mutual obligations created by irreversible specialization continue to be honored.

Temporal Depth and the Structure of Debt

No single lifetime is enough to descend into a cost basin this deep. The basin that modern civilization occupies took tens of thousands of years of accumulated specialization. No individual traversed that path. A lineage did - a chain of self-replicators passing cost-reducing information forward, each generation starting slightly further down the slope than the last.

The replication that carries this information operates at nested timescales. Genes replicate across millions of years, carrying biological structure forward - the body plan, the neural architecture, the metabolic machinery.
Culture replicates across thousands of years, carrying skills, knowledge, and organizational patterns forward - agriculture, metallurgy, writing, mathematics. Institutions replicate across centuries, carrying governance structures, legal frameworks, and economic systems forward. Technology replicates across decades, carrying capabilities and infrastructure forward. Each layer is faster than the one below, and each exists because the slower layer beneath it created the surplus that funded it.

This nesting creates a debt structure of extraordinary depth. A human born today inherits billions of years of evolutionary optimization encoded in their genome, thousands of years of cultural accumulation encoded in their language and education, centuries of institutional development encoded in their legal and economic environment, and decades of technological progress encoded in the infrastructure around them. The cost of producing a single functional human, measured in accumulated prior execution, is staggering. And none of it was paid by the individual who benefits from it.

This is why every self-replicator is simultaneously an investment from the past and a paving of the future cost runway. The gene that replicates successfully is extending the chemical runway for the next generation. The human who teaches a child is extending the informational runway for the next generation of minds. The civilization that builds infrastructure is extending the material runway for the next level of complexity. The depth of the basin we currently occupy represents the accumulated cost optimization of billions of self-replicators over billions of years, each contributing a tiny reduction, none seeing the whole trajectory.

Computational Irreducibility

Why can't the solvency condition be eliminated? Why is cost structural rather than incidental? Because in every domain, there exist processes whose outcomes cannot be obtained without paying the full execution price.

Three mathematical results anchor this. Turing's halting problem: no shortcut determines the outcome of a recursive process without running it. Gödel's incompleteness: a system must expand - adding cost - to reach truths beyond its current rules. Chaitin's algorithmic information theory: some complexities are irreducible; no program shorter than the process itself can generate its state. Together: the execution is not a path to the answer - the execution IS the answer. Cost cannot be bypassed.

This is what prevents cost from being epiphenomenal. If cost could be eliminated while preserving outcomes, it would be a perspective, not a primitive. Irreducibility makes it structural.

The default attack on cost ontology is reduction to "process plus thermodynamics." This reduction fails because irreducibility operates in formal domains where thermodynamics is irrelevant. The halting problem is not about energy. Gödel is not about dissipation. The second law is cost applied to physical systems - not cost derived from the second law. Cost is the common structure underneath both physical dissipation and computational irreducibility, and no existing category captures that commonality.

This also grounds a fundamental asymmetry between territory and map. Cost is what you remove to go from territory to map. Every abstraction, every formalization, every theory strips away execution cost in order to compress. This removal is one-directional - you cannot recover the territory from the map, because the map doesn't pay the costs the territory pays.
A description of a fire is not hot. A blueprint of a bridge bears no weight. Every map is a legitimate compression, but the compression is lossy in a specific direction: it loses the cost. This is why cost cannot be found on any map, including this one - it is what the map was built by removing.

Structure Is Slower Flow

A concept like "red" started as active, costly, novel processing - and through repetition became part of the coordinate system itself, slowing down enough to serve as infrastructure for faster flows passing through it. The riverbed is not a different substance than the water; it is water that froze.

There is no ground floor. Every "container" is a former content that settled. The "screen" of consciousness is not a stage waiting for actors - it is previous actors who froze into scenery. Pause the flow entirely and the structure doesn't persist as an empty theater - it decays, because it was only ever maintained by the flow passing through it.

This principle operates at every scale. In neural networks, weights are literally frozen gradient flow that became load-bearing inference infrastructure. In culture, habits start as deliberate costly choices and become automatic background. In language, metaphors start as novel comparisons and freeze into literal meaning - nobody thinks of "grasping an idea" as involving hands. In geology, the riverbed itself was once flowing sediment that settled under gravitational cost pressure.

The consequence is that the distinction between structure and process is not ontological but temporal. Structure is process that slowed down enough to serve as reference frame for faster processes. Process is structure that hasn't frozen yet. The entire hierarchy - from physical law to neural architecture to cultural norm - is a gradient of flow speeds, not a stack of different kinds of things.

The World We Can Afford

The world we experience is not the actual world - not because a veil hides reality, but because full reconstruction would be unaffordable. Organisms that attempted to process reality at full fidelity would exhaust their resources before completing any action. What we call experience is the specific cross-section of reality that a given system can afford to maintain under its operational constraints. We make approximations, abstractions, simplifications, and sometimes we go on wrong tracks. But we can never ignore cost.

This is not epistemological skepticism. The constraint is not that we cannot know reality - it is that knowing reality at full resolution would cost more than any system can pay. The world we experience is not an illusion. It is the world we can afford.

Different organisms afford different cross-sections: a bat affords an echolocative world, a mantis shrimp affords a hyperspectral world, a human affords a linguistically structured world. Each is real. Each is partial. The partiality is not a defect - it is a cost optimization.

The Self-Encoding Threshold

Not all cost-paying is the same. A rock pays linearly - dissipation without self-reference. A thermostat pays with feedback but tracks only external variables. A brain pays recursively: its past processing becomes the coordinate system for its future processing. This is not an arbitrary architectural distinction - it is forced by cost at a specific threshold.

In simple environments, a system can maintain external reference frames cheaply - lookup tables, supervised error signals, direct sensory feedback. No self-encoding needed.
This is the PID controller regime, the cerebellar regime. As environmental complexity increases - partial observability, nonstationarity, competing agents making the environment itself nonstationary - external reference frames become unaffordable. At a cost threshold, the system's own processing history becomes the cheapest available model. It begins compressing its past states into a coordinate system because that is the cheapest surviving strategy. The threshold is not computed by the system - it is discovered by differential survival. Systems that fail to self-encode in high-complexity environments are outcompeted by those that do. And cost dynamics generate the complexity that forces self-encoding: multiple cost-payers competing for resources make simple environments unstable. Complexity is the generic outcome, making self-encoding the generic solution above a threshold. Experience is a phase transition forced by cost.

The cerebellum illustrates the boundary. Motor coordination is computationally intensive - yet the cerebellum doesn't self-encode. Not because its domain is simple, but because its error signals are cheap. Climbing fibers, proprioceptive feedback, and vestibular signals provide dense, low-latency, pre-computed error gradients. The cortex faces open-ended prediction without dedicated error channels - its error signals are expensive to compute, sparse, delayed, and self-constructed. Self-encoding emerges where error is expensive. It doesn't emerge where error is cheap. This is cost topology: a gradient in the cost landscape, not a line on a map.

The Suitcase: Why Consciousness Is Two Things

What philosophers call the "hard problem of consciousness" is hard because it treats consciousness as one thing. It is two independent cost problems with different evolutionary justifications, different failure modes, and different neural implementations. Once self-encoding exists, both are forced - but they are forced for different reasons.

Semantic space - why experience has qualities. No system can afford to rediscover the world from scratch. Information must encode itself relative to prior information - past experience becomes the reference frame for new experience. This recursive layering produces a coordinate system: a semantic space where qualities are positions and similarities are distances. The space is not a passive container. It is actively maintained by cost, and its geometry is shaped by cost pressure. Different cost profiles produce different geometries - in machine learning, changing the loss function changes the representational space. Red is closer to orange than to blue not because of wavelength proximity but because the cost structure of visual processing groups them that way.

When a new experience enters this space, it doesn't just plot a point in existing dimensions. It restructures the space itself - new distances, new contrasts, new dimensions that didn't exist before. When you first taste lemon, every other flavor relation shifts. This restructuring is computationally irreducible: you cannot predict the new topology from the old one without actually running the integration. This is why Mary, who knows all the neuroscience of color, cannot know what red looks like until she pays the cost of integrating red into her own semantic space. The knowledge and the experience are different costs - one is map, the other is territory.
Introspection confirms the structure: qualia are compositional (a human walking a dog differs from a dog walking a human), temporal (a circle becomes the letter "O" after learning to read), and relational (red is closer to orange than to blue). These are not atomic, ineffable, or unstructured. They are coordinates in a cost-built geometry.

Clinical evidence: agnosia destroys quality-recognition while preserving unified action. The semantic space can break independently.

Semantic time - why experience is unified. A body is a distributed system. Billions of cells, millions of neural processes running in parallel, sensory streams arriving simultaneously from every modality. But action is serial. You can only walk in one direction at a time. You can only reach for one object. You cannot drink your coffee before you brew it. Unsequenced action is catastrophic - not keeping balance, not doing things in logical order. Actions must serialize.

This serialization constraint produces the experienced "now" - one coherent action-relevant state. The body is a single physical plant operating in a single physical environment, and the environment enforces serialization because you cannot be in two places or do two contradictory things with the same limbs simultaneously. Unity is not a metaphysical gift; it is the only survivable execution mode.

Each self-referential loop must complete before its output feeds forward. This produces hierarchical serialization: nested loops closing at different timescales. The experienced "now" is the dominant mode - a standing wave, not a snapshot. Each moment of awareness is the system paying the cost of the current arbitration while inheriting the consequences of the last one - recursive cost gating experienced from the only vantage point available, which is the inside of a system that cannot exit itself.

Clinical evidence: simultanagnosia destroys unified binding while preserving individual quality-recognition. Semantic time can break independently. The double dissociation confirms these are separate mechanisms - not one thing described twice. The "hard problem" persists because it seeks one explanation for two things. Unbundle them and each is tractable.

How they interact: semantic space provides the coordinate system in which competing action candidates are evaluated; semantic time provides the serialization through which one candidate wins and modifies the space for the next evaluation. They are coupled through cost - the space determines what gets serialized, serialization determines what gets consolidated into the space - but they fail independently because they are maintained by different cost structures.

The Weight of the Top

If the lower levels of a biological hierarchy have irreversibly specialized on the assumption that organism-level coordination will continue, then a failure at the top cascades all the way down. A moment of carelessness in the serial stream - a stumble into traffic, a failure to notice a predator - and trillions of cells die. Not because consciousness exerts some mysterious downward force on chemistry, but because those cells traded their self-sufficiency for the surplus that multicellularity provides. Part of the price of that trade is total dependence on the top-level system doing its job.

This is top-down causation, but it is not mysterious. It is an accounting fact about irreversible specialization within a shared cost basin. The bottom provides energy and material substrate - glucose, oxygen, ATP.
The top provides coordination, environmental navigation, and resource acquisition - finding food, avoiding threats, maintaining shelter. Neither level is primary. Neither is fundamental. They are locked in mutual cost dependency where each maintains the conditions necessary for the other to function. Remove either and the entire structure collapses, because the basin they share requires both contributions to remain viable.
English
2
0
0
34
visarga
visarga@visarga·
@BernardJBaars I have a theory for you: cost. Execution is expensive. Self-replication is expensive. Cost gates what actions we can do. We have to pay it back. We do that by reusing experience (semantic space) and serializing actions skillfully (semantic time).
English
1
0
0
12
Bernard J. Baars, PhD
Bernard J. Baars, PhD@BernardJBaars·
I often say that consciousness science is inherently interdisciplinary. A serious theory has to speak to philosophers, experimental psychologists, neuroscientists, clinicians and even computer modelers. If a theory cannot survive in all those arenas at once, it is probably not yet ready.
English
29
15
85
3.8K
visarga
visarga@visarga·
@TechByTaraa AI has no skin in the game - it won't get fired for making a mistake, the developer will.
English
0
0
0
18
tara_
tara_@TechByTaraa·
Hot take: Most people who say "AI will replace developers" have never built a real product. Writing code is the easy part. The real job is:
1. Understanding the problem
2. Designing the system
3. Debugging weird issues
4. Making things actually work in production
English
433
109
1.2K
35K
visarga
visarga@visarga·
@kapilansh_twt you forgot to add "and make tests for it" - then you test it manually, then you think hard about how you can test code you didn't read, and get it to write more tests.
English
1
0
2
43
kapilansh
kapilansh@kapilansh_twt·
the AI coding experience nobody talks about:
→ prompt AI for a feature: 30 seconds
→ AI writes 400 lines you don't understand
→ it works
→ you ship it
→ 3am production bug
→ you have no idea what any of it does
→ ask AI to fix it
→ AI breaks 3 other things
→ you are now debugging code written by a robot fixed by a robot broken by a robot
we do not talk about this enough
English
232
130
1.5K
75.2K