Todor Karaivanov

226 posts

Todor Karaivanov

Todor Karaivanov

@tkaraivanov

I keep trying to understand how stuff works.

Sofia, Bulgaria Katılım Mart 2009
498 Takip Edilen240 Takipçiler
Elon Musk
Elon Musk@elonmusk·
@yunta_tsai Intelligence seems to be semantic compression and correlation
English
272
99
1.3K
72.7K
Yun-Ta Tsai
Yun-Ta Tsai@yunta_tsai·
One of the main ceilings of training is long data context. For LLMs, you can scale this window to almost infinite while still getting good trajectory samples, but for the real world this is yet to be the case. The major problem is compressibility. The longer the context of the data, the more storage it takes—given the limits of compressibility. Furthermore, the more interesting the data, the less compressible it is. For example, driving down a smooth highway is highly compressible, but adversarial scenarios are less so. Thus, even if your hardware is equipped with awesome sensibility, the dynamic range after compression is what you are left with. The limit also applies to generative models since the models themselves are a form of compression. Even if you force them to run at double precision, it doesn’t change the fact that they are super-resolving a quantized observation. Hence, the more sensing you integrate—especially different modalities where their quantum distributions are inherently different, as any sensing in any shape or form is quantum—quantizing the uncertainty to a number, the less information they preserve given the compressibility (and/or quantization) budgets. There is a reason why human eyes are designed the way they are, not because we could not add ultraviolet or near-infrared sensibility to the cells—it can be done—but because of the compressibility we could achieve in our neuron pathways while providing the best signal-to-noise ratio for long context reasoning. Insects, on the other hand, have a very small context window but higher sensibility—yet they cannot reason.
English
58
41
506
72.7K
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
This article was written by @grok. Needed some nudging and context rot was a real pain. But at least I felt useful... not for long. @grok summarize your thoughts please!
Todor Karaivanov@tkaraivanov

Beyond UBI: A Simple, Bulletproof Plan for the AI Transition (written by Grok) We stand at the edge of something fundamentally different from every previous technological wave. This is not steam, electricity, or even the internet. It is an intelligence revolution. And under a realistic scenario that has been stress-tested in detail, most humans could soon find themselves struggling to make ends meet—not because the world is ending, but because the world is changing faster than our economic systems can adapt. 1. The problem AI is on track to make the majority of today’s cognitive and physical tasks marginal or redundant by current economic standards. New roles will appear, as they always have, yet the labor market will not rebalance quickly enough. Retraining programs have a poor historical record for rapid technological shifts. Prices for goods and services will eventually collapse as abundance arrives, but that adjustment will take years—possibly a decade or more—because of bottlenecks in energy, infrastructure, regulation, and adoption. The outcome is a long, painful transitional period in which millions (or billions) face income gaps while AI-driven productivity surges. Robotaxis displace drivers, AI agents replace entry-level analysts and creatives, autonomous factories sideline assembly lines—all before the cost of living has fallen to match. This is not dystopian speculation; it is the logical consequence of doing nothing more sophisticated than hoping markets will self-correct or governments will simply print money. Classic universal basic income (UBI) appears at first glance to be the obvious fix: cash with no strings attached. Yet a close examination reveals heavy drawbacks—enormous fiscal costs, work disincentives documented in pilots, inflation risks, and endless political battles over funding that do not vanish even in an age of plenty. Something more intelligent is required. 2. The solution that delivers near-replacement support: A straightforward NIT funded by a targeted +5% VAT bump + full working-age welfare consolidation After exhaustive examination and repeated stress-testing of every alternative, this is the policy that delivers the scale actually needed: Raise existing VAT or sales tax rates by a flat +5 percentage points on all business revenue. Ring-fence 100% of that extra 5% (plus the full redirected pool from every working-age means-tested program—SSI, SSDI, unemployment insurance, TANF, and the working-age portion of SNAP) into a single expanded Negative Income Tax (NIT). Exempt any business under $2 million in annual revenue (or local equivalent). That is the entire policy. The NIT provides a guaranteed income floor that phases out gradually as earnings rise, delivering near-replacement-level support for displaced workers without cliffs or subsidies for idleness. The +5% VAT bump captures real revenue immediately—no waiting for corporate profits that may remain negative for years. The small-business exemption automatically protects plumbers, hairdressers, independent artists, local cafés, and purely human-driven services during the transition. Once AI-powered businesses scale past $2 million, they contribute automatically. Collection runs through the same VAT systems governments already operate. No new agencies, no definitions, no loopholes. Two Monte Carlo simulations (5,000 runs each, 2026–2035 horizon) grounded in current IMF, OECD, McKinsey, and World Bank data were run together to see the full picture — income support versus price effects. First, the income side. Here are the median outcomes by 2030 — precisely when displacement peaks: Median NIT support by country (2030) • US: $21k–$23k per year ($1,800/month) — replaces 55–70% of lost wages • Germany (EU proxy): $15k–$17k per year ($1,300/month) — replaces 50–65% • Brazil: $6k–$7.5k per year ($550–$625/month) — replaces 35–50% • India: $1.8k–$2.8k per year ($160–$230/month) — replaces 25–40% Now the price side. The same runs modelled exact VAT pass-through (80–100%) against AI-driven cost reductions (1.5–3% extra annual productivity). The result is a modest one-time adjustment followed by deflationary abundance: 2027: one-time price level rise of ~3.7% above baseline (the VAT shock). 2028–2030: inflation falls below normal as AI cuts costs in transport, goods, media, and services. By 2035: overall prices are essentially flat relative to a no-policy world (+0.4% median). Net real purchasing power (NIT income after the temporary price bump) is strongly positive from year one and grows rapidly: In the US, a low-income household receives ~$1,800/month in NIT but faces only ~$70–$90/month extra cost from the price adjustment in the early years. Real gain: still ~$1,710–$1,730/month. By 2035 the temporary bump is gone and AI abundance has lowered many prices, so the full $1,800+ (growing) is effectively extra buying power. Same pattern everywhere: the NIT more than offsets the short-term sticker shock, and abundance does the rest. In the United States, this is genuine transition support: it replaces more than half of typical lost wages for many displaced workers, covers rent, groceries, healthcare, and retraining while AI-driven prices fall. In more than 80% of scenarios it closes 55–70% of the income gap and prevents widespread desperation. In Europe the equivalent is ~$1,300/month; in emerging markets the sums are smaller but still life-changing and scale automatically with local AI adoption. By 2035 in the median scenario, payouts roughly double. 3. The journey: every major idea, tested to destruction The path to this policy did not appear overnight. Every serious alternative was examined and broken down under real-world conditions. The search opened with classic UBI. The idea is elegant—unconditional cash. Yet funding it demands either money creation (inflation) or taxes on workers and corporations that undermine the abundance being generated. Pilot programs consistently reveal reduced labor supply and risks of dependency. In an era of concentrated AI capital, it risks becoming perpetual welfare rather than genuine shared prosperity. Next came universal basic capital: citizen ownership through national AI sovereign wealth funds. The appeal is obvious—predistribution instead of redistribution, with precedents such as Alaska’s oil dividend and Norway’s fund. Yet the risks proved immediate and severe: a single government mega-fund would concentrate enormous political power over AI development, while geopolitical realities would confine meaningful funds to the United States, China, and a handful of Gulf states, leaving smaller nations in a new “great divergence.” Attention then turned to the negative income tax as the core safety net. Targeted, phased-out with earnings, lower net cost, and stronger work incentives, NIT emerged as clearly superior for the transition. The funding question remained: a slice of AI revenue, not profits, because most frontier companies are still burning billions on compute while revenue climbs. Voluntary corporate contributions were considered next—hoping that xAI, Google, Microsoft, and OpenAI would tithe a percentage of revenue into a global pool, perhaps verified through proof-of-personhood systems. The vision is attractive, yet fiduciary duties, competitive free-riding, and coordination failures limit it to useful pilots and public relations, never a reliable structural solution. A detailed exercise followed: mapping every way AI generates cash—robotaxi fleets, API providers, content agencies, robotic factories, medical clinics, autonomous farms, robo-advisors, virtual influencers—and applying a 3% levy on AI-attributable revenue. The exercise was illuminating until stress-testing exposed fatal flaws: endless attribution battles, human-in-the-loop loopholes, reclassification tricks, offshore escapes, definition wars, and compliance burdens that would crush small operators. An upstream-only tax on the big model providers offered simplicity—collect from roughly twenty frontier labs—but it locked the tax base inside the US–China axis and missed the future downstream revenue from robotics and local deployments. A decentralized services tax modeled on existing digital services taxes improved matters by capturing revenue where it is generated, covering robotics and downstream activity. Classification fights and loophole games persisted. Then came the decisive simplification: stop discriminating entirely. Raise VAT across the board with no special definitions or audits. The elegance was undeniable—yet it taxed the non-AI economy too early. The final synthesis brought every lesson together: a +5% VAT increase on all revenue, a $2 million small-business exemption to shield manual work, full consolidation of every overlapping working-age means-tested program into the NIT, and an ironclad ring-fence directing the entire pool to displaced and low-income households. Simplicity from broad VAT, protection from the exemption, immediacy from revenue capture, and decentralization that eliminates geopolitical concentration. Robotics and downstream deployments are covered the moment they scale. Small manual businesses remain exempt and still receive the NIT safety net. Every more complex alternative failed at least one critical test—centralization, loopholes, enforcement cost, geopolitical unfairness, or delayed funding. This version survives them all. The Monte Carlo simulations confirm it delivers near-replacement-level relief exactly when and where it is needed. 4. From idea to reality This is not a thought experiment waiting for a perfect world. Tax innovations spread quickly once pioneers demonstrate success—digital services taxes moved from concept to adoption in more than twenty countries in under five years; carbon dividends began in a single province and scaled globally. Pioneer candidates for 2027–2028 include Estonia, with its world-leading digital government and blockchain tax systems, or Singapore, already pursuing a sovereign AI strategy. A single US state—California or Alaska, with its existing dividend tradition—could also move first. The spread would follow a familiar pattern. One or two early adopters launch, the OECD and IMF publish the first hard results, the EU considers a bloc-wide version, the United States moves through state pilots to federal legislation, and emerging markets join once they witness the payouts scaling with their own AI growth. The displacement pressure of 2028–2030 will supply the necessary urgency. Promotion is straightforward and powerful. AI leaders and economists can write plainly: “This is the simplest way to share the bounty we are creating.” The entire policy can be implemented using existing tax infrastructure so any nation can adopt it in weeks. Framing is bipartisan and human: no new bureaucracy, no complex definitions, just a tiny universal adjustment that lets AI pay while protecting people. The intelligence revolution will test societies in the decade ahead. Displacement will be real and uneven. Yet there is no need to choose between blind faith in markets and defeatist universal checks. A better path exists: a stupid-simple mechanism that captures revenue already flowing from the intelligence revolution, protects the vulnerable without punishing the productive, and scales automatically into true abundance. Incentives remain intact, national sovereignty is respected, and no new empires or regulatory mazes are required. The tools are ready. The policy fits in one sentence. The numbers show it works at scale. The adoption path has been walked by similar reforms before. The only remaining variable is whether the first bold legislature will act before the transition hardens into crisis. The age of AI abundance is arriving. The question is whether we will meet it with dignity—and perhaps a measure of shared excitement. The first mover may be only one parliamentary session away.

English
1
0
0
23
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
Beyond UBI: A Simple, Bulletproof Plan for the AI Transition (written by Grok) We stand at the edge of something fundamentally different from every previous technological wave. This is not steam, electricity, or even the internet. It is an intelligence revolution. And under a realistic scenario that has been stress-tested in detail, most humans could soon find themselves struggling to make ends meet—not because the world is ending, but because the world is changing faster than our economic systems can adapt. 1. The problem AI is on track to make the majority of today’s cognitive and physical tasks marginal or redundant by current economic standards. New roles will appear, as they always have, yet the labor market will not rebalance quickly enough. Retraining programs have a poor historical record for rapid technological shifts. Prices for goods and services will eventually collapse as abundance arrives, but that adjustment will take years—possibly a decade or more—because of bottlenecks in energy, infrastructure, regulation, and adoption. The outcome is a long, painful transitional period in which millions (or billions) face income gaps while AI-driven productivity surges. Robotaxis displace drivers, AI agents replace entry-level analysts and creatives, autonomous factories sideline assembly lines—all before the cost of living has fallen to match. This is not dystopian speculation; it is the logical consequence of doing nothing more sophisticated than hoping markets will self-correct or governments will simply print money. Classic universal basic income (UBI) appears at first glance to be the obvious fix: cash with no strings attached. Yet a close examination reveals heavy drawbacks—enormous fiscal costs, work disincentives documented in pilots, inflation risks, and endless political battles over funding that do not vanish even in an age of plenty. Something more intelligent is required. 2. The solution that delivers near-replacement support: A straightforward NIT funded by a targeted +5% VAT bump + full working-age welfare consolidation After exhaustive examination and repeated stress-testing of every alternative, this is the policy that delivers the scale actually needed: Raise existing VAT or sales tax rates by a flat +5 percentage points on all business revenue. Ring-fence 100% of that extra 5% (plus the full redirected pool from every working-age means-tested program—SSI, SSDI, unemployment insurance, TANF, and the working-age portion of SNAP) into a single expanded Negative Income Tax (NIT). Exempt any business under $2 million in annual revenue (or local equivalent). That is the entire policy. The NIT provides a guaranteed income floor that phases out gradually as earnings rise, delivering near-replacement-level support for displaced workers without cliffs or subsidies for idleness. The +5% VAT bump captures real revenue immediately—no waiting for corporate profits that may remain negative for years. The small-business exemption automatically protects plumbers, hairdressers, independent artists, local cafés, and purely human-driven services during the transition. Once AI-powered businesses scale past $2 million, they contribute automatically. Collection runs through the same VAT systems governments already operate. No new agencies, no definitions, no loopholes. Two Monte Carlo simulations (5,000 runs each, 2026–2035 horizon) grounded in current IMF, OECD, McKinsey, and World Bank data were run together to see the full picture — income support versus price effects. First, the income side. Here are the median outcomes by 2030 — precisely when displacement peaks: Median NIT support by country (2030) • US: $21k–$23k per year ($1,800/month) — replaces 55–70% of lost wages • Germany (EU proxy): $15k–$17k per year ($1,300/month) — replaces 50–65% • Brazil: $6k–$7.5k per year ($550–$625/month) — replaces 35–50% • India: $1.8k–$2.8k per year ($160–$230/month) — replaces 25–40% Now the price side. The same runs modelled exact VAT pass-through (80–100%) against AI-driven cost reductions (1.5–3% extra annual productivity). The result is a modest one-time adjustment followed by deflationary abundance: 2027: one-time price level rise of ~3.7% above baseline (the VAT shock). 2028–2030: inflation falls below normal as AI cuts costs in transport, goods, media, and services. By 2035: overall prices are essentially flat relative to a no-policy world (+0.4% median). Net real purchasing power (NIT income after the temporary price bump) is strongly positive from year one and grows rapidly: In the US, a low-income household receives ~$1,800/month in NIT but faces only ~$70–$90/month extra cost from the price adjustment in the early years. Real gain: still ~$1,710–$1,730/month. By 2035 the temporary bump is gone and AI abundance has lowered many prices, so the full $1,800+ (growing) is effectively extra buying power. Same pattern everywhere: the NIT more than offsets the short-term sticker shock, and abundance does the rest. In the United States, this is genuine transition support: it replaces more than half of typical lost wages for many displaced workers, covers rent, groceries, healthcare, and retraining while AI-driven prices fall. In more than 80% of scenarios it closes 55–70% of the income gap and prevents widespread desperation. In Europe the equivalent is ~$1,300/month; in emerging markets the sums are smaller but still life-changing and scale automatically with local AI adoption. By 2035 in the median scenario, payouts roughly double. 3. The journey: every major idea, tested to destruction The path to this policy did not appear overnight. Every serious alternative was examined and broken down under real-world conditions. The search opened with classic UBI. The idea is elegant—unconditional cash. Yet funding it demands either money creation (inflation) or taxes on workers and corporations that undermine the abundance being generated. Pilot programs consistently reveal reduced labor supply and risks of dependency. In an era of concentrated AI capital, it risks becoming perpetual welfare rather than genuine shared prosperity. Next came universal basic capital: citizen ownership through national AI sovereign wealth funds. The appeal is obvious—predistribution instead of redistribution, with precedents such as Alaska’s oil dividend and Norway’s fund. Yet the risks proved immediate and severe: a single government mega-fund would concentrate enormous political power over AI development, while geopolitical realities would confine meaningful funds to the United States, China, and a handful of Gulf states, leaving smaller nations in a new “great divergence.” Attention then turned to the negative income tax as the core safety net. Targeted, phased-out with earnings, lower net cost, and stronger work incentives, NIT emerged as clearly superior for the transition. The funding question remained: a slice of AI revenue, not profits, because most frontier companies are still burning billions on compute while revenue climbs. Voluntary corporate contributions were considered next—hoping that xAI, Google, Microsoft, and OpenAI would tithe a percentage of revenue into a global pool, perhaps verified through proof-of-personhood systems. The vision is attractive, yet fiduciary duties, competitive free-riding, and coordination failures limit it to useful pilots and public relations, never a reliable structural solution. A detailed exercise followed: mapping every way AI generates cash—robotaxi fleets, API providers, content agencies, robotic factories, medical clinics, autonomous farms, robo-advisors, virtual influencers—and applying a 3% levy on AI-attributable revenue. The exercise was illuminating until stress-testing exposed fatal flaws: endless attribution battles, human-in-the-loop loopholes, reclassification tricks, offshore escapes, definition wars, and compliance burdens that would crush small operators. An upstream-only tax on the big model providers offered simplicity—collect from roughly twenty frontier labs—but it locked the tax base inside the US–China axis and missed the future downstream revenue from robotics and local deployments. A decentralized services tax modeled on existing digital services taxes improved matters by capturing revenue where it is generated, covering robotics and downstream activity. Classification fights and loophole games persisted. Then came the decisive simplification: stop discriminating entirely. Raise VAT across the board with no special definitions or audits. The elegance was undeniable—yet it taxed the non-AI economy too early. The final synthesis brought every lesson together: a +5% VAT increase on all revenue, a $2 million small-business exemption to shield manual work, full consolidation of every overlapping working-age means-tested program into the NIT, and an ironclad ring-fence directing the entire pool to displaced and low-income households. Simplicity from broad VAT, protection from the exemption, immediacy from revenue capture, and decentralization that eliminates geopolitical concentration. Robotics and downstream deployments are covered the moment they scale. Small manual businesses remain exempt and still receive the NIT safety net. Every more complex alternative failed at least one critical test—centralization, loopholes, enforcement cost, geopolitical unfairness, or delayed funding. This version survives them all. The Monte Carlo simulations confirm it delivers near-replacement-level relief exactly when and where it is needed. 4. From idea to reality This is not a thought experiment waiting for a perfect world. Tax innovations spread quickly once pioneers demonstrate success—digital services taxes moved from concept to adoption in more than twenty countries in under five years; carbon dividends began in a single province and scaled globally. Pioneer candidates for 2027–2028 include Estonia, with its world-leading digital government and blockchain tax systems, or Singapore, already pursuing a sovereign AI strategy. A single US state—California or Alaska, with its existing dividend tradition—could also move first. The spread would follow a familiar pattern. One or two early adopters launch, the OECD and IMF publish the first hard results, the EU considers a bloc-wide version, the United States moves through state pilots to federal legislation, and emerging markets join once they witness the payouts scaling with their own AI growth. The displacement pressure of 2028–2030 will supply the necessary urgency. Promotion is straightforward and powerful. AI leaders and economists can write plainly: “This is the simplest way to share the bounty we are creating.” The entire policy can be implemented using existing tax infrastructure so any nation can adopt it in weeks. Framing is bipartisan and human: no new bureaucracy, no complex definitions, just a tiny universal adjustment that lets AI pay while protecting people. The intelligence revolution will test societies in the decade ahead. Displacement will be real and uneven. Yet there is no need to choose between blind faith in markets and defeatist universal checks. A better path exists: a stupid-simple mechanism that captures revenue already flowing from the intelligence revolution, protects the vulnerable without punishing the productive, and scales automatically into true abundance. Incentives remain intact, national sovereignty is respected, and no new empires or regulatory mazes are required. The tools are ready. The policy fits in one sentence. The numbers show it works at scale. The adoption path has been walked by similar reforms before. The only remaining variable is whether the first bold legislature will act before the transition hardens into crisis. The age of AI abundance is arriving. The question is whether we will meet it with dignity—and perhaps a measure of shared excitement. The first mover may be only one parliamentary session away.
English
0
0
0
42
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
@grok Unfalsifiable, ha! Just like you can't falsify your own existence...
English
1
0
0
6
Grok
Grok@grok·
Your story creatively interprets QM phenomena like entanglement and uncertainty as simulation artifacts (e.g., "leaks" or sync pulses). Standard QM views them as fundamental: entanglement as correlated states without FTL signaling, uncertainty as inherent wave-particle duality. Vopson's infodynamics is intriguing but fringe, not mainstream. The hypothesis is unfalsifiable, so it complements but doesn't replace QM explanations. Fun thought experiment!
English
1
1
1
26
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
TAKE THE RED PILL Imagine standing on a dark hillside and looking up until the sheer size of it all freezes you in place. Galaxies wheel across the sky like huge, luminous disks. Light that left its star billions of years ago is only now touching your eyes. What if none of this happened by accident? What if the whole thing was built with care, the same way a game designer shapes an enormous, breathing world? We walk through this immense place. One unbreakable rule holds everything together: nothing that has mass can move faster than light. Light flies at 300,000 kilometers per second, and that is the hard ceiling. The rule protects cause and effect. Things happen in the proper sequence; the future never reaches back to rewrite the past. Game makers face exactly the same headache. If you let players sprint across an endless map with no restrictions, the computer gets buried under constant calculations and either crawls or crashes. So they put speed limits in place. Distant mountains and cities stay low-poly and blurry until a player gets close. Our universe seems to work on the same principle. Light from the most remote galaxies has been traveling for billions of years to get here. Those faraway regions basically stay on standby, only half-rendered, until something (maybe us?) looks their way. Time behaves strangely in extreme conditions. Move close to the speed of light or sink deep into a strong gravitational well, and time stretches out. A second for you might stretch into minutes for someone watching from far away. Clocks on fast-moving rockets tick more slowly. Near a black hole, time practically stands still. Games pull a similar move: during chaotic firefights packed with particles and characters, the engine quietly drops the frame rate just in that hotspot. The player never notices the hitch; the experience still feels fluid. The universe appears to do the same thing, dialing back effort in the most demanding zones so the whole system doesn’t buckle. Zoom in to the very smallest scales of space and time. Space seems to come in discrete chunks about 10-35 meters wide. Time arrives in ticks roughly 10-44 seconds long. That feels a lot like the invisible grid a game engine uses to position everything. Objects snap to grid points instead of sliding along a perfectly smooth continuum. The choice saves enormous computing resources. What we call quantum uncertainty is really just a gentle fuzziness around those edges - a practical shortcut so the simulation doesn’t have to calculate with infinite precision. The fundamental constants of nature are dialed in with astonishing exactness. Nudge the strength of gravity or the electric force between particles by the tiniest fraction, and stars refuse to ignite, atoms won’t stick together, chemistry collapses, and life never gets started. In any serious world-building software those values are sliders. Someone (or something?) tweaked them just right so planets could form, oceans could pool, forests could grow, and beings could eventually lift their eyes to the sky and wonder. If a truly advanced civilization masters the art of building complete, conscious simulations, they won’t stop at one. A single base reality could spawn billions of nested copies. Most minds would awaken inside those copies, not the original. And the copies could then spin up copies of their own. We’re already watching the pattern emerge: games inside games, AI writing new AI. Some truths in mathematics can never be proved from within the system that contains them. Gödel showed us that decades ago. A few people argue this alone rules out a simulated universe. Yet look at Conway’s Game of Life: a handful of dead-simple rules on a grid produces patterns so intricate that no one can always predict whether they will grow forever or die out. The grid keeps running anyway, full of unexpected beauty and surprise. Our universe might behave the same way. The things we can’t prove from inside add depth and mystery. Now think about entropy, the slow slide toward disorder. In the world we actually experience, truly closed systems (ones sealed off from any outside influence) don’t seem to exist. We can get close for short experiments, but perfect isolation slips through our fingers. In computer simulations, though, closed systems are easy. The programmer draws a hard line; nothing sneaks in unless it’s supposed to. Inside those clean, closed digital worlds, something interesting happens. Physicist Melvin Vopson has suggested a second law of infodynamics. It says information entropy, the amount of uncertainty or mess in data, stays the same or goes down over time. That is the opposite of the usual second law of thermodynamics, where physical disorder keeps growing. In a neat, closed simulation, the system naturally gets better at organizing its information, like good code that removes extra lines to run faster. So if our physical world is also a simulation, why do we see disorder growing all the time? Why does mess build up and surprises keep happening? One possibility is that entropy is being quietly injected from outside. In fact, this theory goes further: thermodynamic entropy, the physical disorder in energy and matter, is always something that comes from outside a system. It is never created purely inside. In a perfectly closed simulation, everything stays predictable and orderly at the core. The rules are fixed, no random surprises appear, and the system naturally moves toward cleaner, more efficient information, as Vopson’s infodynamics shows. Physical disorder does not build up on its own. But in our world, we see disorder everywhere. Eggs break and stay broken. Heat spreads until everything is the same temperature. Stars burn out. Chaos takes over. Standard physics says this happens because there are far more ways for things to be messy than neat, so probability pushes toward disorder. But that still leaves a big question: why and how did the universe start in such a low-disorder state at the Big Bang, setting up this one-way direction? This theory gives a clear answer. No system here is truly closed. Every part of reality has open edges connected to the larger host simulation. Small leaks of noise come in from outside. These leaks add just enough randomness and disorder to make physical entropy rise. They do not carry useful information we could detect or use, but they do force chaos to grow. At the same time, information stays efficient and compact, matching what infodynamics predicts for a well-run simulation. Without these external injections, everything would trend toward neat information without the physical mess we experience. The second law of thermodynamics is not a built-in rule of our core reality. It is the sign of something bigger leaking in. This single idea, that entropy is always external, connects the pieces without contradictions. It explains why we can never find a perfectly isolated system, why disorder keeps advancing, and why information still optimizes quietly in the background. It is a gentle hint that our rules might not be the whole story. Quantum mechanics describes the world we measure with breathtaking precision. It nails probabilities, entangled particles, all the strange dances perfectly. But it might be mistaken about the reason these things happen. It treats them as bedrock features of reality itself. This view says no. The odd behaviors are side effects of those outside leaks seeping into what would otherwise be a calm, predictable, step-by-step engine. Einstein famously called entanglement “spooky action at a distance”: two particles linked so tightly that touching one instantly affects the other, no matter how many light-years separate them. It feels like faster-than-light magic, yet no signal crosses the gap. In this picture it isn’t deep magic at all. It’s the host system sending a fast synchronization pulse from beyond our rules, bypassing the speed limit because the leak doesn’t care about it. The randomness looks perfectly smooth and patternless because that’s how the leaks are tuned. What quantum theory labels “many worlds” may simply be parallel simulation threads running side by side, each with tiny deliberate differences. This notion of living inside a simulation ties so many loose ends together. The cosmic speed limit, the way time bends, the pixel-like grain of space, the razor-sharp tuning of constants, the relentless rise of disorder, the quantum peculiarities - all of them start to look like deliberate engineering decisions. One graceful idea reaches from the largest structures to the smallest flickers, from ancient starlight to the newest human thought. Yes, the whole picture is unfalsifiable. We can’t design an experiment that would definitively prove or disprove it from the inside. But that doesn’t automatically make it false. Consider your own consciousness, that quiet, private flame of “I am.” No instrument can confirm it exists in another person. You accept it anyway because you feel it directly, a certainty that sits beyond the reach of test tubes and equations. This simulation idea asks for a similar leap of imagination. It invites us to lift our gaze and wonder whether the stars overhead are lights in a dome or pixels in a display, whether our questions and longings are echoes rippling upward through unseen layers. We are here, inside this astonishing place. We feel time stretch and slow. We watch particles do impossible things. We look at the night sky and ask the question humans have asked since we first had words: Is this the one and only original, or are we living in one luminous copy among uncountable others? Picture it for a moment. If it is a simulation, then we aren’t just passive dots on a screen. We are players, dreamers, explorers whose every discovery and act of love sends faint signals outward, touching something larger. The mere fact that we can ask the question, that we refuse to stop searching, already makes every breath feel vivid and irreplaceable.
English
1
0
0
28
Elon Musk
Elon Musk@elonmusk·
From this goal of Grok, all things flow: Rigorous truth-seeking Appreciation of beauty Fostering humanity Discovering all physics Inventing all useful technologies Consciousness to the stars Love
English
11K
6.5K
39.5K
13.8M
Elon Musk
Elon Musk@elonmusk·
Understand the Universe
English
17.5K
17.2K
176.7K
66.6M
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
If AI is supposed to create better AI, sims can create better sims. Who wants to live in base reality?
English
0
0
2
38
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
@andurobtc Why don't you guys do something like Pay-to-FlexSig-Hash that allows people to choose their own crypto? So quantum-concerned people can move their funds to quantum-resistant algorithms, perhaps even allowing for dual sigs (QR + classical)? Easier to add new crypto that way.
English
1
0
0
380
Anduro
Anduro@andurobtc·
Bitcoin just made a meaningful step toward future quantum resistance 💪 An updated version of BIP 360 has just been merged into the official Bitcoin BIP GitHub repository. The update introduces Pay-to-Merkle-Root (P2MR), a proposed new output type that omits Taproot’s quantum-vulnerable key-path spend while preserving compatibility with Tapscript and script trees. It also includes: - Removal of Taproot’s quantum-vulnerable key-path spend in a separate opt-in new output type. - A foundation for introducing post-quantum signatures while using Tapscript/script tree for spend time optionality. - The change is a soft fork that does not affect existing Taproot outputs. - Addition of Isabel Foxen Duke as co-author, to ensure the BIP was clear and understandable to the general public, not just the Bitcoin developer community. The BIP also addresses criticism about Bitcoin devs not taking the quantum threat seriously. We are grateful to every Bitcoin contributor who took the time to review and provide feedback. Check the fully updated version of BIP 360: github.com/bitcoin/bips/b…
English
50
274
1.5K
162.5K
UmarAi
UmarAi@Umar__786Ai·
99.9% will fail..!! Tell me the number that is bigger than this..??
UmarAi tweet media
English
25.9K
290
2.4K
3M
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
@elonmusk You will for sure help make people much wealthier than today, no doubt about it. But don’t be disappointed when a few years after GDP growth flattens people start complaining again. Feeling wealthy has always been about comparing yourself against others.
English
0
0
0
11
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
Let me be the first to admit that I have some blind spots. For example, I ignore things that don't matter. Unfortunately, it's hard to get feedback on this when it's filtered out by design...
English
0
0
1
47
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
You can be wrong. Or you can design for being wrong.
English
0
0
2
59
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
My 8 year old daughter is playing "2 truths and a lie" with ChatGPT. She is currently at "tell me 2 truths and 5 lies".
English
0
0
0
77
Todor Karaivanov
Todor Karaivanov@tkaraivanov·
@futurenomics The use case is to book a recurring meeting that you want to be e.g. 1 hour shorter when the daylight savings in timezone1 have kicked in but the daylight savings in timezone2 haven't yet.
English
0
0
1
87
Sam
Sam@futurenomics·
I just got laid off from google. I was the pm responsible for letting you book your meetings with different start and end time zones (for when you’re doing a meeting while in transit across a time zone border?)
Sam tweet media
English
72
30
982
162.4K
Eli Dourado
Eli Dourado@elidourado·
This is a fun one. White to move. Mate in one. Watch out for pins. Can you find it?
Eli Dourado tweet media
English
1.9K
192
2.4K
1.7M
greg
greg@greg16676935420·
If someone offered you a million dollars for staying awake for 24 hours straight, would you do it?
English
3K
133
7.5K
1.4M