🎃 Mark 👻 🟡⚪️🟣⚫️

16.9K posts


@meditationstuff

Enlightened via https://t.co/bRbwiodDZz Groundlessness, but, like, hyperanalytical. 10k-20k+ hours like water flowing downhill

Earth · Joined June 2014
7.1K Following · 6.4K Followers
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff·
I’ve been SLOWLY acclimating to having LOW-intensity 365nm (UVA) and 660nm (red) INDIRECT light on for 4-5 hours per day, without corrective eyewear. And, this wasn’t even my main goal, but holy crap my visual acuity seems to be really obviously improving. (I can’t be bothered to objectively measure improvements rn, sadly.) Will it plateau?? There’s a whole peer-reviewed literature on delaying myopia with setups like this. I was in fact wondering if you could actually reverse it. We shall see.
4
0
24
885
🎃 Mark 👻 🟡⚪️🟣⚫️
👀
Unrealrealist⏸️@PDoomOrder1

I want to respond directly to this article because I think it gets the central issue wrong in a very specific way. My disagreement is not with the claim that intelligence is not omniscience or omnipotence. Of course it is not. My disagreement is with the way Dean repeatedly treats the failure of omnipotence as reassurance. He argues against a picture of AI risk that is stronger, stranger, and less necessary than the one he actually needs to answer, and then writes as if defeating that stronger picture settles the real dispute. I do not think it does. In fact, I think that move is what allows him to sound so confident that machine takeover is overwhelmingly unlikely. The confidence does not come from grappling with the strongest forms of the concern. It comes from misunderstanding the mechanisms by which advanced AI could become dangerous, toppling a strawman, and then treating the collapse of the strawman as though it were a refutation of the underlying issue. The real issue is not whether AI becomes God. It is whether it becomes sufficiently more capable than humans, in enough strategically important domains, to become catastrophically dangerous. There is an enormous space between human-level intelligence and omnipotence, and that is where most serious concern lives. To dismiss the doomers, you need something much stronger than the observation that AI will not be magical and that experiments take time. That does not touch the actual claim. A system does not need to be all-powerful to become uncontrollable. It only needs a large enough edge, exercised through the right channels, for long enough. Part of the problem is that the doomer position is not a monolith. There is no single package of assumptions that rises or falls together. Different people emphasize different mechanisms, different timelines, different thresholds, and different end states. Some focus on extinction. Some focus on permanent loss of control. 
Some focus on lock-in under machine-mediated institutions. Some focus on radical human disempowerment. These are not interchangeable, and refuting one caricature does not refute the whole landscape. Even if Dean had shown that some people overstate what superintelligence could infer or do, that still would not take down Yudkowsky’s view in particular, much less the broader class of arguments about catastrophic AI risk. I also do not think Dean engages Yudkowsky fairly. He seems to read him as though he were committed to a guaranteed and highly specific path by which AI inevitably conquers the world, or as though the entire argument depends on AI possessing something close to omnipotence. But that is not the heart of the position. Yudkowsky’s point is not that there is one scripted route that superintelligence must follow. It is that we should not be surprised if a system operating beyond our cognitive range finds routes to dominance that we do not foresee. That could involve cyber operations, persuasion, automation, institutional capture, scientific acceleration, robotics, novel engineering, or combinations of capabilities we do not currently know how to model well. Had Dean read Yudkowsky more carefully, he would have seen that the issue is not a guaranteed pathway. It is almost the reverse. The issue is that we should expect to be surprised by pathways that occur to a superintelligence and not to us. That is one of the main difficulties here. A superintelligence is hard to think about defensively precisely because, by definition, it may have access to strategies, abstractions, tools, and routes through the problem space that are beyond our understanding. The challenge is not merely that it might know more facts than we do. It is that its best ideas may be ideas we cannot generate, cannot anticipate, and may not even fully understand after the fact. That asymmetry matters enormously. We are not trying to stop a very smart opponent with known capabilities. 
We are trying to reason about a system whose most dangerous capabilities may consist in finding moves we literally do not know how to think of in advance. This is one reason the repeated example about inferring relativity from a few frames of a falling apple does not do the work Dean wants it to do. At most, it shows that one especially strong formulation is implausible. But the AI risk case does not depend on whether a superintelligence could infer general relativity from a tiny visual sample. Even if that exact claim were false, the central concern remains intact. AI does not need to reconstruct all of modern physics from a handful of observations in order to become highly dangerous. So as a rebuttal, the example is mostly beside the point. But I also think Dean is too dismissive of the broader point behind examples like that. High intelligence can take you surprisingly far in science and engineering. A great deal of progress does not come from mindlessly accumulating data. It comes from finding the right abstraction, the right symmetry, the right invariance, the right formalism, the right thought experiment, the right question, or the right variable to treat as load-bearing. That is especially obvious in physics. Anyone who has spent time with Landau and Lifshitz knows how much theoretical structure can be extracted from a relatively compact set of principles once the right mathematical framework is in hand. A large share of physics is not brute-force induction from giant piles of observations. It is seeing the structure. That is also why I think Dean is too anthropocentric about what counts as a simple theory. Newtonian mechanics is simpler for us in the sense that the mathematics is easier to learn and use. But in terms of background assumptions it is not obviously the simpler worldview. Newton gives you absolute space, absolute time, and action at a distance. General relativity gives you a more unified and, in an important sense, more principled picture. 
A sufficiently powerful intelligence might find the more invariant and conceptually economical theory more natural, and then recover Newtonian mechanics as a limiting case. Human scientific history should not be treated as though it maps the uniquely natural order in which intelligence must discover the world. It maps the order that happened to be accessible to minds like ours, with the tools we had, under the path-dependent conditions we inherited. Einstein is actually a good illustration of the broader point. Some of his deepest advances were not the product of gigantic new datasets but of thought experiments, conceptual clarity, and an unusual ability to notice which principles were doing the real work. Imagining what it would be like to ride alongside a beam of light, or reasoning through the equivalence of gravitational and inertial effects, was not a substitute for contact with reality. It was a way of extracting much more from the reality already available by using better concepts. That is exactly the kind of thing Dean’s framing understates. Scientific progress is not simply a matter of waiting for the world to reveal itself one experiment at a time. It is also a matter of having minds capable of seeing what the evidence already constrains. The same point appears in more ordinary scientific cases. Take the origin of the Moon. For a long time, scientists lacked the kind of direct observational access one would ideally want. Yet they could still make real progress because some facts were highly diagnostic. The Moon’s density being close to the density of Earth’s outer layers is not just another datum in a pile. It sharply constrains the explanation space. That was not mere brute-force experimentation; rather, it was recognizing which observation had unusually high evidential leverage. Science is full of cases where the decisive step is not more data in the abstract but identifying which feature of the available data matters most. 
This is why I think Dean badly understates how much intelligence matters in science and engineering. He says that better models of the world do not usually come from thinking about the problem really hard, but instead mainly from testing ideas in the real world. That is much too crude. Of course experiments matter. Of course reality gets a vote. But a huge amount of scientific progress consists in identifying the right experiment, choosing the right framing, seeing which fact is actually diagnostic, constructing the right mathematical representation, and drawing the right inference from the outcome. Often the hard part is not physically running the test. It is knowing what test to run. Experimental design is itself an intellectual achievement. The setup is not some fixed, mindless bottleneck. Better reasoning, better planning, and better robotic dexterity could all change how quickly and effectively experiments are carried out. Two agents can run the same experiment and learn very different amounts from it depending on how well they understand what they are seeing. Even if each individual experiment has an irreducible duration, intelligence still helps determine how many experiments are needed, how well they are chosen, how many can be run in parallel, how informative they are, and how much you extract from the results. Nature sets the duration of each trial. Intelligence helps determine how many trials you need, how well they are sequenced, and what you learn from each one. This is why the repeated claim that experiments take time does not come close to answering the concern. At most, it suggests that the world imposes friction. Fine. But friction is not safety. Saying that dangerous experimentation would take time is not an argument that catastrophe is unlikely. It is, at best, an argument that some dangerous trajectories would unfold over months or years rather than hours or days. Then you still need to show that this extra time would save us. 
You need to show that human institutions would recognize the threat, coordinate effectively, and act successfully within that window. Dean does not show that. He gestures at delay as though delay were equivalent to safety, but those are entirely different claims. More importantly, I do not think the right reply is that machine takeover would “not be effortless.” It might be effortless in some domains. Or rather: even where there is no straight line from capability to dominance, intelligence may still help you identify the geodesic through a constrained landscape. The world may impose bottlenecks, detours, and local obstacles. That does not mean a sufficiently capable system wanders blindly through them. Greater intelligence may consist, in part, in seeing the shortest feasible path through a space of real constraints, finding the route humans miss, and exploiting it before we have even conceptualized it. The absence of a straight line is not much comfort if the system is unusually good at finding the curve. Even if fabrication, industrial buildout, and iterative experimentation slow a system down, the conclusion is not therefore safe. The conclusion is only that the timescale may be extended in some cases. Whether that extension matters depends on whether humans can use it to solve alignment, coordinate politically, and resist displacement before the system becomes too deeply embedded or too capable to control. That is the real question. It is much harder than pointing out that labs and factories cannot be conjured instantly. The same overreach appears in Dean’s appeal to computational irreducibility. Even if some processes are irreducible in a strong sense, that does not imply that humans are anywhere close to optimal inferers. Kolmogorov complexity can tell you that there are lower bounds. It cannot tell you that our species is near those bounds. 
It certainly cannot tell you that the historical development of science is anything like the shortest path through idea space. There is a huge difference between saying reality cannot be compressed arbitrarily and saying humans have already explored it in something close to the most efficient possible way. Dean repeatedly slides from the first claim to rhetoric that would only make sense if the second were true. There seems to be a picture in the background of his argument in which humanity had to run something close to the right set of experiments, in something like the right order, to get where we are. I do not think that picture is credible. A huge number of experiments happen because people are confused, because they lack the right abstraction, because they fail to notice what is really load-bearing, or because institutions and personalities channel inquiry inefficiently. Much of scientific history reflects human limitation, path dependence, and clumsiness. Intelligence does not abolish experimentation, but it can radically change how much experimentation is needed and how efficiently the search through hypothesis space is conducted. This matters because Dean keeps making the same move. He points out that the world has bottlenecks, that not all knowledge is online, that tacit knowledge exists, that experiments take time, that institutions resist, and then he leans toward the conclusion that machine takeover is overwhelmingly unlikely. But none of that follows. At most, he has established that the world pushes back. Fine. But reality pushing back is not a safety guarantee. The real question is whether a much smarter, faster, more scalable, more persistent system could overcome enough of those bottlenecks in enough strategically important areas to become catastrophically dangerous. That is a far lower bar than omnipotence. Humans themselves are the obvious example. We did not need omnipotence to dominate the planet. 
We did not need perfect foresight, perfect knowledge, or infinite power. We needed an edge in reasoning, coordination, tool use, and cumulative learning. That was enough to reshape ecosystems, subordinate other species, wipe out species, and become the dominant strategic force on Earth. So when Dean says, in effect, that AI will still face resource constraints, physical bottlenecks, hidden knowledge, institutional friction, and uncertainty, my reaction is simply: yes, and so did humans. That does not remotely imply safety. It only means that the threat would be mediated through the world rather than operating outside it. If you imagine a dodo trying to reason about whether humans posed an existential threat, it could easily have comforted itself with arguments structurally similar to Dean’s. Humans are not omnipotent. They cannot instantly reach every island. They face logistical bottlenecks. Building ships takes time. They do not understand everything. Their institutions are messy. They make mistakes. Experiments fail. None of those observations would have saved the dodo. They would only have described real frictions in the mechanism of extinction. Dean is making too strong a claim when he treats the existence of such frictions as evidence that machine takeover is overwhelmingly unlikely. Frictions are compatible with disaster. Often they merely describe the route by which disaster arrives. I would not try to rank extinction against disempowerment with spurious precision, and I do not think the exact taxonomy matters very much here. The point is simply that both extinction and durable loss of human control are live concerns. They may be distinct risks, they may come apart, or one may lead into the other; that is not really the issue. The issue is that Dean has not shown that advanced AI is unlikely to produce either. 
Showing that AI is not omnipotent, or that experimentation takes time, does not resolve the possibility of extinction, and it does not resolve the possibility that humans remain alive while losing meaningful control over civilization’s future. Once you think in those terms, Dean’s framing starts to look too narrow. His picture is too much “computer versus man,” as though the only serious concern is a clean confrontation in which a machine mind visibly overpowers humanity. That is one imaginable path, and it should not be dismissed just because it sounds dramatic. But it is hardly the only one, and probably not the only one worth worrying about. Perhaps a more legible concern to Dean is computer systems deeply embedded in an increasingly difficult-to-understand world, with humans trying to retain control from inside that world. The danger is not just a single model in a data center trying to outthink us from a distance. It is the gradual fusion of machine cognition with the infrastructure that governs finance, logistics, military systems, research, communications, administration, manufacturing, robotics, and resource allocation. In that world, the issue is not a dramatic showdown. It is that the environment itself becomes more opaque, more machine-legible, more optimized for nonhuman processes, and less governable from a human point of view. That concern becomes even more vivid in the kind of world Elon Musk sometimes gestures toward, where there may eventually be as many robots as humans, or something in that vicinity. In that world, the relevant unit of analysis is not a chatbot. It is a civilization saturated with AI-linked robots, sensors, factories, labs, vehicles, supply chains, bureaucratic systems, and possibly weapons systems. At that point the comparison is no longer simply between a computer and a person. 
It is between human agency and a densely networked techno-industrial order increasingly optimized, interpreted, and coordinated by systems we do not fully understand. That is the kind of picture serious AI risk arguments are trying to get people to look at. And it is far more unsettling than the cartoon Dean keeps arguing against. The article’s treatment of sample efficiency, tacit knowledge, and distributed expertise also feels much too comforting. Yes, humans are remarkably sample-efficient in many contexts. Yes, firms like TSMC contain hard-to-formalize know-how. Yes, important knowledge is distributed across people and institutions. But none of that is a knockdown argument. Lower sample efficiency is only reassuring if it imposes a real ceiling on capability. Otherwise it just means machines compensate with scale, speed, and parallelism until the inefficiency stops mattering much. If a system can already become highly capable while being less sample-efficient than humans, then it is already competitive on brute force alone. If it later becomes more sample-efficient as well, the advantage compounds. Likewise, the fact that not all knowledge is online does not show safety. The relevant question is not whether every relevant fact is public. It is how much capability can be assembled from public information, inference, targeted experimentation, and active acquisition of what remains. There is already an extraordinary amount in the open on semiconductors, lithography, materials science, metrology, control systems, robotics, cyber operations, and related domains. Missing pieces can be inferred, elicited, bought, stolen, experimentally reconstructed, or bypassed. Public information is only one path. There is also a deeper problem with the way Dean talks about distributed knowledge. When knowledge is spread across a large human organization, that is not just a moat against outsiders. It is also evidence of human cognitive limitation. 
No one inside such a system can read everything, retain everything, integrate everything, or keep the whole relevant design space in mind at once. A system that can absorb much more of the public record, retain it perfectly, reason across it continuously, and identify neglected connections may gain a substantial advantage even before it accesses genuinely secret information. So “not all the knowledge is online” is nowhere near the reassurance Dean seems to think it is. All of this comes together in what I take to be the central mistake of the piece. Dean wants to conclude that even a misaligned AI system with no AI-specific safeguards would still fail to eradicate or enslave humanity because there are too many steps, too much hidden knowledge, too much complexity, too much capital required, too much institutional resistance, and too much human oversight. But that simply does not follow. At most, he establishes that the world pushes back. Fine. But reality pushing back is not a safety guarantee. The question is whether those frictions are decisive, and he does not show that they are. To dismiss the doomers, you need something much stronger. You need to show not just that obstacles exist, but that they reliably prevent systems with large capability advantages from crossing the threshold into catastrophic strategic superiority. You need to show that delays introduced by experimentation and physical bottlenecks translate into actual human rescue rather than confusion, racing, institutional paralysis, or capture. You need to show that the mechanisms by which humans remain in control are robust even under severe asymmetries in cognition, speed, scale, persistence, and coordination. Dean does not do that. I find the alignment discussion similarly unpersuasive. He suggests that powerful AI does not need to be gotten right on the first try, as though we can just muddle through with incremental improvement. But why exactly should we believe that? 
Why assume we can afford repeated failures and still converge safely? Why assume increasingly capable systems will remain corrigible, interpretable, and governable long enough for that gradualist story to work? Why assume markets and institutions will move us toward safety rather than toward opacity, race pressures, lock-in, and the deployment of systems that nobody truly controls? Those are the crucial claims, and I do not think Dean demonstrates them. In some ways that is because the deeper disagreement sits upstream. If you already believe that even a misaligned superintelligence would still fail because the world is too complex, too institutionally thick, too bottlenecked, and too full of tacit knowledge, then of course alignment will not look especially urgent. But that only means the real dispute is earlier. I think that earlier judgment is badly mistaken. My basic problem with the piece, then, is that it keeps aiming at the wrong target. It argues against omnipotence when the real issue is strategic superiority. It treats the limits of intelligence as though they implied safety. It invokes experimental bottlenecks, tacit knowledge, sample inefficiency, distributed expertise, and computational irreducibility without showing that those remain decisive against a system that is much smarter, faster, more scalable, cheaper to copy, able to operate continuously, and increasingly embedded in the world. It takes the existence of frictions as though that were close to an argument that catastrophe is unlikely. It is not. Yes, intelligence is not omnipotence. Yes, reality pushes back. Yes, experiments take time. Yes, tacit knowledge is real. Yes, institutions are messy. None of that is enough. Humans did not need omnipotence to become an existential threat to other species. They only needed an edge. 
The real question is whether AI could acquire a sufficiently large edge over us, and then translate that edge through infrastructure, institutions, robotics, software, science, persuasion, and automation into a world where humans can no longer meaningfully direct events. That could end in extinction. It could end in radical disempowerment. It could end in some other durable loss of human control. The point is not to pretend we can rank those outcomes with confidence we do not have. The point is that dismissing them requires much more than Dean has provided. Showing that AI is not omnipotent is not enough. Showing that experiments take time is not enough. Showing that one strawman version of machine takeover is implausible is not enough. That is the real issue. And I do not think Dean has actually answered it.

0
0
2
476
Defender
Defender@DefenderOfBasic·
What makes a "thing" a thing is its relative frequency. Fast activity can perceive slow activities. Slow activities cannot (easily) perceive faster activities (or perceive them as "multiple distinct things")
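The fast/slow asymmetry described here has a close analogue in signal processing: an observer sampling more slowly than a process unfolds cannot resolve it, and instead perceives an alias it literally cannot distinguish from a slower process. A minimal sketch (the 9 Hz signal and 10 Hz sampling rate are arbitrary illustrative choices, not from the post):

```python
import math

def sample(freq_hz: float, fs_hz: float, n: int) -> list[float]:
    # Sample a unit-amplitude sine of `freq_hz` at sampling rate `fs_hz`.
    return [math.sin(2 * math.pi * freq_hz * k / fs_hz) for k in range(n)]

fs = 10.0                     # "slow" observer: 10 samples per second
fast = sample(9.0, fs, 50)    # 9 Hz activity, under-sampled
slow = sample(-1.0, fs, 50)   # a 1 Hz activity (mirrored phase)

# The slow sampler cannot tell the two processes apart:
assert all(abs(a - b) < 1e-9 for a, b in zip(fast, slow))
```

The 9 Hz process does not disappear; it shows up as a different, slower "thing," which is one concrete sense in which relative frequency determines what an observer can perceive.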
Defender tweet media
55
89
871
46.5K
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff·
If you’re a VC/angel/etc and want to cure all solid cancer: company to watch, because Jaminet. Disclosure: I have no relationship or financial stake atm, but if they go public there’s a decent chance I’ll put a tiny amount of money in them right off: angiex.com/contact
1
0
2
361
🎃 Mark 👻 🟡⚪️🟣⚫️ retweeted
BowTied Biohacker
BowTied Biohacker@BowTiedUM·
The sun is nature's Ozempic, Retatrutide, Adderall, Morphine, Antibiotic, Anti-Inflammatory, and pretty much every pharmaceutical drug you could need. Here's a breakdown of each wavelength of interest to biohackers, and why Big Pharma has to rig the science so you think the sun is harmful:

UVB (280–315 nm): Peptide factory + Vitamin D
UVA (315–400 nm): Neurotransmitter factory, nitric oxide, opsin regeneration
All UV: 5AR/DHT upregulation, POMC activation
405nm Violet: Antimicrobial, microbiome reset
485nm Cyan: Circadian signal/wakefulness + drive, melanopsin activation
630nm Red: Surface repair, CCO activation, collagen synthesis
670nm Deep Red: Mitochondrial boost, peak CCO absorption, subcellular melatonin
760nm NIR: Transition depth, opsin regeneration, deeper tissue engagement
810nm NIR: The brain wavelength, transcranial PBM, cerebral blood flow
850nm NIR: Deep tissue (joint/muscle/bone), water chromophore
935nm NIR: Extended depth, water layer interaction, organ support
1050nm NIR: Heat-gated channels, lymphatic support

The POMC Cascade - This Is Where the Real Magic Happens:

UVB light stimulates the POMC gene in your skin AND in your brain (specifically the arcuate nucleus of the hypothalamus).

α-MSH (Alpha-Melanocyte Stimulating Hormone): Creates melanin (your solar panel semiconductor), suppresses appetite through MC4R receptors in the brain, increases energy expenditure, has potent antimicrobial and antifungal activity, modulates immunity, influences eNOS/nitric oxide, and regulates cholesterol/bile acid metabolism. Low α-MSH = obesity, metabolic syndrome, immune dysfunction, poor skin pigmentation. This is the anorexigenic peptide - it decreases food desire in the brain.

β-Endorphin: Your endogenous opioid. A landmark 2014 Cell paper proved that UV exposure causes p53-mediated POMC transcription in keratinocytes, producing β-endorphin that enters systemic circulation and activates mu-opioid receptors. They showed that chronic UV exposure produced opioid tolerance and that blocking opioid receptors caused measurable withdrawal symptoms. Nature addicted you to sunlight on purpose. As someone who spent a decade on exogenous opioids, I can tell you - nothing replaces the feeling of genuine endogenous β-endorphin production from UV exposure. The mechanism matters.

ACTH: Stimulates cortisol production (gluconeogenic - makes glucose from sunlight)
CLIP: Insulin secretagogue (raises insulin without food intake)
β-MSH: Controls B cells (humoral immunity)
γ-MSH: Controls T cells (cellular immunity) - this is why skin breakouts often indicate problems with beta/gamma MSH processing

When you expose your skin to UVB, you're activating an entire neuroendocrine cascade that:

Lowers leptin by improving leptin sensitivity through the melanocortin pathway. Leptin resides in subcutaneous fat and its activation is directly controlled by light on skin and eyes. When this pathway works correctly, you get appropriate satiety signaling.

Increases GLP-1 signaling in the brain - the same pathway that drugs like semaglutide (Ozempic) artificially activate. UV exposure does it physiologically. Think about that.

Produces antimicrobial peptides - NB-UVB has been shown to alter antimicrobial peptide expression in the skin, enhancing your innate immune defense.

Stimulates serotonin production - UVB stimulates serotonin in keratinocytes while inhibiting dopamine production locally, allowing serotonin accumulation.

Generates beta-endorphin systemically - not just locally in the skin, but at levels that cross into plasma and affect the entire body.

Upregulates melanogenesis - building melanin, your body's semiconductor solar panel that converts light energy into DC electrical current.
BowTied Biohacker tweet media
19
139
684
30.9K
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff·
Rusty ⚡️: Solar Powered ☀️@ze_rusty

This viral thread is telling you the NTP study proves RF radiation makes you live longer. Ronald Melnick, the senior toxicologist at NIEHS who literally designed this NTP study, just published a new paper this month saying the EXACT OPPOSITE.

His conclusion from his OWN data? CLEAR EVIDENCE of carcinogenic activity: malignant heart schwannomas. "CLEAR EVIDENCE" is the HIGHEST classification NTP uses.

"but the radiated mice lived longer!!" Here, read straight from the NTP report (TR-595): "Survival of all exposed male groups was significantly greater… due to the effect of chronic progressive nephropathy in the kidney of sham control males." It explicitly states the control group died faster from chronic progressive nephropathy, a kidney disease common in aging rats. The exposed animals ate less, weighed less, and got less kidney disease. That's it. That's the "lived longer." It's right there in the report.

And here's what the thread conveniently left out: brain gliomas appeared at 1.5 W/kg, the lowest dose in the study. 1.5 W/kg is below what real iPhones emit during simultaneous antenna use (1.58-1.60 W/kg). Brain tumors showing up at casual phone-level exposure. Let that sink in.

And it doesn't stop at the NTP. The Ramazzini Institute independently found the same heart tumors at 0.1 W/kg; that's 100x lower.

This month (March 2026), the same NTP study author, Melnick, published a new paper with Dr. Joel Moskowitz (@berkeleyprc) from UC Berkeley. Same NTP data. EPA-standard carcinogen risk analysis. Their conclusion? FCC safety limits need to be reduced by up to 200x. Not 2x. Not 10x. Up to TWO HUNDRED times.

The same paper highlights a WHO-commissioned review that found "HIGH-CERTAINTY" evidence that RF exposure destroys male fertility. Sperm count drops. Testosterone drops. Sperm DNA damage. Testicular cell death. That thread mentioned NONE of this.

0
0
2
177
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff·
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff

# Indoor all-day sunlight-mimicking protocol (14 pages)

This document might be useful to people with Long Covid, ME/CFS, or autoimmune conditions. No claims of "treatment" are made. If you are very severe, feedback is especially welcome and encouraged: meditationstuff@gmail.com ; twitter: @meditationstuff. Please distribute freely. Alpha draft; feature complete; last updated: 2026-02-15. Share with / Latest: docs.google.com/document/d/e/2…

### **Summary**:

If you want to spend all day, every day, in the sun—for health or just because it feels good—but you can't, maybe try this.

### **Disclaimer**:

This document is for education and entertainment. No guarantee or warranty is expressed or implied. The author is NOT a licensed health practitioner. There are no claims of diagnosis or treatment in this document. Use at your own risk; it might make you worse.

### **Purpose**:

Speaking informally and anecdotally, some people have had large improvements in quality of life using this protocol. For example, following these guidelines, someone chairbound and housebound for two years became able to stand, walk, and travel by car. Sleep became refreshing. Results may vary. Exercise patience and caution. This might make some people much worse.

### **Rationale**:

Sunlight is composed of infrared radiation, ultraviolet radiation, and visible light. Infrared radiation improves mitochondrial function and reduces the energy needed to sustain core temperature. Ultraviolet light is anti-inflammatory and may even be directly, mildly antimicrobial for blood passing through surface capillaries. High-lux visible light, timed correctly, strongly drives the circadian rhythm, which governs healing and the immune system. (Ultraviolet light also strongly drives the circadian rhythm.) To produce biological effects, long duration and low intensity may be better than short duration and high intensity. Time of day matters.

## **Contents**:

1. Summary, disclaimer, purpose, rationale
2. Preparation
3. Infrared radiation
4. Ultraviolet radiation
5. Visible light
6. Timing
7. Buildup, Nutrition, WARNINGS (7a. Eye Safety)
8. Conclusion

## Preparation

Take your time. Introduce different elements slowly and for very brief periods (even just 10 seconds to a minute, per day, to start). You might wait two to three weeks, or longer, between introducing each major element. Again, in my experience, you don't need to do any of this fast; there's no benefit to speed. This is pure speculation, but you're likely not "overwhelming" a pathogen before it has a chance to entrench or mutate or something like that. Go slow. See the rationale section, too.

Putting all this together will cost money, time, and energy. You may need lots of help, ingenuity, and forbearance from a carer. Also see the "Timing" section and the "Buildup, Nutrition, WARNINGS" section.

I would start by introducing infrared first. I introduced UV before high-lux visible light. It might be better the other way around, for eye safety or otherwise, or not.

## Infrared radiation

Buy a "250w red heat lamp" and place it 5-10 feet (1.5-3 meters) from your bed. "Less is more": 8-15 feet is better if you have the space; buy a lower-wattage heat lamp if you don't. You may need to buy a heat lamp fixture separately from the bulb; you can find videos on the internet showing how to assemble it. You may need to buy some sort of pole or frame to mount it. Be mindful of fire safety. Ideally you'll add a "smart plug" so you can control the light from your phone. You may need lots of help from a carer to do all this. Eventually, I added a second 250w heat lamp at a similar distance.

#### A brief note on "high-NIR" bulbs

These are great. They do something good. You could start with these before introducing a red heat lamp. They don't seem to have enough "oomph" to really move symptoms, but they otherwise definitely improve quality of life.

#### A brief note on "white heat lamps"

For a while I preferred white heat lamps to red, but, once I introduced UV light, I began to exclusively prefer red heat lamps.

## Ultraviolet radiation

For tube lights or LED panels, search for the phrase "365nm," without the quotes. (This is the wavelength of ultraviolet radiation that's most abundant in sunlight.) I have two 100w-input LED panels and two 50w-input LED panels, all placed a good fifteen feet from the bed, from different directions. These get hot. Be mindful of fire safety. (As merely a secondary adjunct, you can also buy some UVA/UVB "reptile lights" to "add a little UVB to the air.")

Infrared radiation travels pretty well through bedding and clothing, but UV "likes" direct skin exposure. Upon introducing UV, consider exposing as much skin as possible and arranging lights from multiple angles, if possible. Get and stay naked, if you can.

## Visible light

Search for "high bay UFO light," without the quotes. I currently have one ~30,000 lumen light and one ~60,000 lumen light. These need to be hung or placed in some sort of stand. I used heavy metal laptop stands. You may want to buy a lux meter. Arrange the lights so that 10,000 lux would enter the eyes if they were looked at directly. But don't look at them directly. Like UV, visible light "likes" lots of surface-area skin exposure, in addition to stimulating peripheral vision. Get and stay naked, if you can. Generally, don't look directly at lights, except for brief, incidental glances from distances of several feet or ideally more.

## Timing

(I sleep with an imperfectly fitting opaque sleep mask.)

(All the below uses "smart plugs," which can be scheduled and controlled with your phone.)

1. At sunrise, I have some normal room lights turn on, on a timer.
2. Fifteen minutes after sunrise, a few 405nm "high-NIR, high violet" bulbs turn on. I'm not sure if these are essential, and they are not detailed in this document.
3. Thirty minutes after sunrise, 50w of UV light turns on, 250w of red heat lamp turns on, and the ~30,000 lumen visible light turns on.
4. Around 10am, I turn on the other high bay light, the rest of the UV lights, and another red heat lamp.
5. Around 12:30pm, my body says "all done," and I turn off the UV and the high bay UFO lights.
6. I turn off the heat lamps and high-NIR bulbs at sunset, at the very latest.
7. I wear "blue blocking red glasses" after 8:30pm.

While it might feel good, infrared radiation after sunset may be too stimulating and may interfere with sleep. Sleep in complete darkness if you can. (Note: 365nm is UVA. If you have a UVB light as well, use it around noon.)

## Buildup, Nutrition, WARNINGS

1. Infrared light made me desperately crave magnesium, vitamin C, and possibly B-spectrum vitamins and choline, starting three days after I introduced the lights and lasting for about two weeks. I wanted several grams of vitamin C and of non-elemental magnesium, each day. Of course, I had to poop more, and urgently, and that could be devastating if you're severe, so you may want to go very, very, very slowly. (For magnesium, I like pure magnesium chloride hexahydrate powder, dissolved in water.)
2. Ultraviolet light did not especially cause any cravings. Please let me know if you experience any.
3. High-lux visible light caused extreme cravings for DHA (fish oil or fish). Eating high ALA (which inefficiently converts to DHA) does not seem to be enough. The retina is very high in DHA. I also generally craved more B vitamins and electrolytes, including calcium and potassium. Please have sources of these that you can consume as much of as needed, without consequences. Or, go very, very, very, very slowly.

For each lighting element, its introduction initially caused a great deal of heat and sweating, which gradually faded over weeks as my body improved its heat dissipation. Some people may find this dangerously overstimulating.
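If your smart plugs are scriptable (e.g. through a home-automation hub), the sunrise-relative timing above can be sketched as plain offsets. This is an illustrative sketch only: the event names are made up for this example, the sunrise time must come from your own location, sunset-anchored events (heat lamps off) are omitted, and the actual plug-control call depends entirely on your hardware.

```python
from datetime import datetime, timedelta

# Sunrise-relative offsets (minutes), mirroring the timing steps above.
SUNRISE_OFFSETS = {
    "room lights on": 0,
    "high-NIR/violet bulbs on": 15,
    "UV (50w) + red heat lamp + ~30k lumen light on": 30,
}

# Fixed clock times, where the text gives them directly.
FIXED_TIMES = {
    "second high bay + remaining UV + second heat lamp on": "10:00",
    "UV and high bay lights off": "12:30",
    "blue-blocking glasses on": "20:30",
}

def schedule_for(sunrise: datetime) -> dict:
    """Return {event name: datetime} for one day, given that day's sunrise."""
    events = {name: sunrise + timedelta(minutes=m)
              for name, m in SUNRISE_OFFSETS.items()}
    for name, hhmm in FIXED_TIMES.items():
        h, m = map(int, hhmm.split(":"))
        events[name] = sunrise.replace(hour=h, minute=m)
    return events

if __name__ == "__main__":
    # Hypothetical sunrise at 06:45; print the day's events in order.
    for name, t in sorted(schedule_for(datetime(2026, 2, 15, 6, 45)).items(),
                          key=lambda kv: kv[1]):
        print(t.strftime("%H:%M"), name)
```

In practice you would feed these times to whatever scheduling interface your smart plugs expose, rather than printing them.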
You can wait weeks between introducing each element. And you can build up each element seconds or minutes at a time, just a little more each day. Listen to your body. If your body says "*done*" after two to eight minutes, or even less, great. Two hours? Great. Four hours? Great.

By chance, I tracked my nutrition for a full year using the Cronometer app before I embarked on this, and I think that made everything much, much, much smoother. (As an aside, the author follows a strict ketogenic diet, which took a full year, and much experimentation, to fully adapt to.)

### Eye Safety

UV, infrared, and high-intensity visible light can all cause eye damage. Also, metabolic diseases can make you more vulnerable to eye damage. If you build up slowly, everything discussed here, if the DISTANCES are followed correctly, should EVENTUALLY be safe to use without eye protection. BUT, AT FIRST, eye protection is recommended, possibly for weeks. Sunglasses aren't enough, even wraparound ones, for some people. You want something more like fully sealing goggles. Sperti brand makes green goggles. Remove eye protection cautiously and for short periods, at first. Note: all these claims are made non-authoritatively and informally. You must do your own due diligence.

Generally, don't look directly at lights, except for brief, incidental glances from distances of several feet or ideally more. You don't even necessarily need lights in your peripheral vision. They can be behind you. You just need skin exposure (UV) and sufficient lux into your eyes (visible light). You can buy an expensive UV meter to make sure your eye UV exposure isn't more than what you'd get outside on a summer day. Lux meters are much cheaper and are also useful for visible light. You might find you're "craving" looking directly into a light. Be cautious and brief, stop earlier than you think, and do note that eye symptoms can take as long as 3-4 hours to become apparent, or more.
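A lux meter is the right tool, but for a rough sanity check on fixture placement you can use the fact that illuminance from a roughly point-like source falls off with the inverse square of distance. The sketch below assumes an idealized isotropic source radiating its rated lumens evenly over a sphere; real high bay fixtures concentrate light into a downward beam, so measured lux along the beam will be considerably higher. Treat this as an order-of-magnitude check, not a substitute for measuring.

```python
import math

def lux_at(lumens: float, distance_m: float) -> float:
    """Illuminance (lux) at distance_m from an idealized isotropic point
    source: total lumens spread over a sphere of area 4*pi*r^2."""
    return lumens / (4 * math.pi * distance_m ** 2)

# A ~30,000 lumen fixture 2 m away gives roughly 600 lux under this
# worst-case isotropic model; halving the distance quadruples the lux.
print(round(lux_at(30_000, 2.0)))
```

The practical takeaway matches the text: small changes in distance change exposure a lot, which is one more reason to keep fixtures far away and build up slowly.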
Please do your own research, but UV radiation and infrared radiation (and blue and violet light), at too-high intensity, over a long period of time—years—can cause cataracts. UV light increases risk of cancer but apparently reduces all-cause mortality, including death by cancer!

## Conclusion

Please let me know how I can improve this document, especially for very severe individuals and their carers. In general, when in doubt, ask, "How can I make the indoors more like natural sunlight, in intensity, spectrum, and timing?" Overall, you want to "dial the intensity" DOWN until you can do many hours a day. Slowly work up to it. Maybe slowly increase intensity toward what's suggested above. This might not be the right thing for everyone. Some people may do very well with something much less "extreme."

Contact: meditationstuff@gmail.com

Document location and version: Currently the document lives at docs.google.com/document/d/1hi… Last updated 2026-02-15; alpha draft. I might put it on GitHub at a later time.

In the above I use "UV radiation" and "UV light" interchangeably (ditto with infrared), but "light" is a misnomer. Please distribute freely.

Todo: Much, much, much less important, but maybe discuss 405nm bulbs, Chromalux bulbs, 10,000 lux panels, UVA/UVB reptile lights, the Sperti Vitamin D lamp, more nutrition stuff, spectral approximation with phone apps, blue light and violet light versus other colors re: melatonin and circadian entrainment, green light, and red light versus infrared re: mitochondria.

🎃 Mark 👻 🟡⚪️🟣⚫️
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff·
I have had insane health benefits from (carefully timed) UV, visible light, and all the way to far-infrared, so yeah, hmm I’ve been much more first-pass wary of microwave-ish-scale radiation (phone, wi-fi); I do find this band a bit phenomenologically unpleasant in bursts, but maybe I should be less wary /
Zane Koch@zanehkoch

for a while i've had a slight fear that the bluetooth from my airpods could be frying my brain this weekend i pulled the raw data from a $30m government study of 1,679 mice blasted with cell phone radiation and reanalyzed it what i found was...not what I expected? 🧵

🎃 Mark 👻 🟡⚪️🟣⚫️
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff·
@seekingyaga I don't think I got the idea from you, but I got the idea from someone, and I want to credit them, the next time I see their account go by on the TL. I'm very glad multiple people have discovered this
Lorah
Lorah@seekingyaga·
@meditationstuff I'm kind of curious if I inspired you to get chicken lamps and if that helped you get over a hump? You've taken the idea way farther than I have but I've found that those simple, cheap lamps have an outsized impact on all aspects of my health. x.com/i/status/19909…
Lorah@seekingyaga

@meditationstuff @health_lighting While trying to keep warm, I discovered that infrared heating bulbs make me feel rly happy. I use white one in livingroom + red one in my bedroom at night. Super cheap and widely available. The light from the white one is gorgeous. (fixed broken link) homedepot.com/b/Lighting-Lig…

MTM 14
MTM 14@mtm14·
@zanehkoch @meditationstuff Chief risk from earbuds is hearing loss. Ever notice how young people all watch TV with subtitles on?
🎃 Mark 👻 🟡⚪️🟣⚫️ retweeted
🎃 Mark 👻 🟡⚪️🟣⚫️
🎃 Mark 👻 🟡⚪️🟣⚫️@meditationstuff·
I knew for the longest time I wasn’t getting enough calcium, but my body always vetoed it, even with D3, K2, etc. I finally got the cofactors right and now my body wants 2000-4000+ mg per day. That’s absolutely in potential toxicity-land, and also, long-term, that will increase all-cause mortality. But I figure it will be temporary? My teeth, while I practice good oral hygiene, were always a little bit “fuzzy” all the time. Now they feel “big” and smooth, like cartoon teeth. And I’m like…was my *skeleton* fuzzy? Is it still fuzzy? It will be interesting to see what happens and for how long this goes on. Not medical advice, don’t do this at home or something, etc.
🎃 Mark 👻 🟡⚪️🟣⚫️ retweeted
yatharth ༺༒༻
yatharth ༺༒༻@AskYatharth·
i can walk into a room and guess CO₂ levels within a couple hundred ppm but that wasn't the important part. the important bit was setting up that visceral, automatic, clear, no self conflict link between "tired feeling? yeah, check windows"
Lorah
Lorah@seekingyaga·
@meditationstuff @sglowmo @loftbruk guessing you have this covered somehow but does the lack of tryptophan affect you or do you supplement that as well? brand rly matters. I've tried a bunch, some is disgusting + almost burnt tasting. Some tastes creamy + satisfying, like good bone broth. Don't have current fav.