Maximum-Epiplexity reinforcement learner

7K posts

@MaxDiffusionRL

Likes surprisal. Wealth of weak ties. Fattens fat tails. Eclectic af. Likes the unbenchmarked. Puts FEP/Hessians/Fisher info into their Claude prompt

Mirzam Tunnel · Joined January 2021
2.8K Following · 764 Followers
Bryan Johnson
Bryan Johnson@bryan_johnson·
This new perspective study reports that our brains are carrying 3,000x more microplastic than our blood. Microplastic burden of the human brain rose ~50% between 2016 and 2024. The average brain now carries roughly:
> 11x the load of the liver
> 11x the kidney
> and, on a per-mass basis, around 3,000x the concentration found in circulating blood

This study argues that eliminating ultra-processed foods (e.g. chicken mcnuggets, breaded shrimp) carries an additional benefit: reducing brain microplastic accumulation. This is based on an inferred chain of mechanisms rather than proven causality in humans, yet the convergence is striking.

The paper outlines four pathways through which microplastics plausibly damage the brain:
> oxidative stress and chronic inflammation
> endocrine disruption
> gut-microbiome injury
> vascular damage

These map onto various brain and mental diseases including depression, anxiety, cognitive decline, stroke, and dementia. The same conditions are independently linked to ultra-processed food consumption in large prospective cohorts. Each 10% increase in ultra-processed food intake tracks with:
> 25% higher dementia risk
> 16% higher cognitive impairment risk
> 8% higher stroke risk

High versus low ultra-processed food consumption tracks with 44% higher odds of depression and 48% higher odds of anxiety.

We do not yet have a human study showing UPF intake directly raises brain microplastic burden. Here is what we do have: a study found that more processed forms of protein foods carry significantly more microplastic particles.
> Chicken nuggets contained 31x more microplastics per gram than raw chicken breast (the least processed item in the study)
> Breaded shrimp, the most processed item in the study, carried ~130x the level in raw chicken breast (caveat: shrimp also carries higher baseline contamination from ocean and water pollution)
> A 1,031-woman pregnancy cohort showed each 10% higher UPF intake tracked with 13.1% higher urinary phthalates, the plasticizers that leach from food packaging

Microplastics cross from the blood to the brain. Animal research shows mechanistically how microplastic particles cross the blood-brain barrier. In mice, polystyrene nanoparticles at 293 nm reached the brain within 2 hours of oral exposure; particles at 1.14 μm and 9.55 μm did not cross at all. Most microscopy-based microplastic tests have a detection floor around 1 μm, while the fraction that actually crosses into the brain sits below that threshold. If a test picks up larger particles in your blood, the smaller, BBB-crossing fraction is almost certainly there too, just below the detection window. The big ones are a proxy for the dangerous small ones.

Cut microplastic input wherever you can; avoiding ultra-processed foods is another important step. In addition, use a water filtration system for your drinking water: reverse osmosis with remineralization is the gold standard. I recently reported complete elimination of microplastics from my semen (a first-in-human demonstration) and an 87% reduction in my blood.
Bryan Johnson tweet media
English
66
28
305
34.5K
Paria Rashidinejad
Paria Rashidinejad@paria_rd·
Looped Transformers: the dream was right. But there was trouble in paradise. The loop made them unstable, expensive, and memory-hungry, with gains hard to scale. So we asked: 𝗖𝗮𝗻 𝘄𝗲 𝗿𝗲𝗮𝗽 𝘁𝗵𝗲 𝗿𝗲𝘄𝗮𝗿𝗱𝘀 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗽𝗮𝘆𝗶𝗻𝗴 𝘁𝗵𝗲 𝗹𝗼𝗼𝗽 𝘁𝗮𝘅? Introducing 𝗔𝘁𝘁𝗿𝗮𝗰𝘁𝗼𝗿 𝗠𝗼𝗱𝗲𝗹𝘀 𝗳𝗼𝗿 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗮𝗻𝗱 𝗥𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴: • A Backbone proposes an initial “guess” output embedding; • An Attractor refines it: a fixed-point solver lets the model “think” before each token. Implicit differentiation trains the model stably, with constant memory and without BPTT. Training also revealed a surprising phenomenon: 𝗘𝗾𝘂𝗶𝗹𝗶𝗯𝗿𝗶𝘂𝗺 𝗜𝗻𝘁𝗲𝗿𝗻𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 Over the course of training, the Backbone learns to propose latents close to the equilibrium itself, making the Attractor almost unnecessary at inference. Results: • 𝗣𝗮𝗿𝗲𝘁𝗼 𝗶𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁 𝗼𝗻 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗺𝗼𝗱𝗲𝗹𝗶𝗻𝗴: up to 𝟰𝟲.𝟲% lower perplexity and 𝟭𝟵.𝟳% better downstream accuracy. A 770M Attractor Model beats a 1.3B Transformer, despite being trained on half as many tokens. • 𝗦𝗶𝗴𝗻𝗶𝗳𝗶𝗰𝗮𝗻𝘁 𝗴𝗮𝗶𝗻𝘀 𝗼𝗻 𝗵𝗮𝗿𝗱 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝘁𝗮𝘀𝗸𝘀: a 27M Attractor Model trained on only 1K examples achieves 𝟵𝟭.𝟰% 𝗼𝗻 𝗦𝘂𝗱𝗼𝗸𝘂-𝗘𝘅𝘁𝗿𝗲𝗺𝗲 and 𝟵𝟯.𝟭% 𝗼𝗻 𝗠𝗮𝘇𝗲-𝗛𝗮𝗿𝗱, while Transformers and frontier models like Claude and GPT o3 score 𝟬%. 📝 arxiv.org/pdf/2605.12466 🧵 1/10
Paria Rashidinejad tweet media
English
14
66
454
35.5K
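The attractor step described in the tweet above — a backbone proposes a latent and a fixed-point solver refines it to equilibrium — can be sketched generically. This is not the paper's code; `attractor_refine`, the toy map `f`, and all constants are illustrative assumptions, showing only plain fixed-point iteration (the paper trains through the equilibrium with implicit differentiation rather than backprop-through-time).

```python
import numpy as np

def attractor_refine(f, h0, tol=1e-6, max_iter=200):
    """Refine an initial latent h0 by iterating h <- f(h) until h is
    (numerically) a fixed point of f. Inference-time sketch only;
    training would differentiate implicitly at the equilibrium
    instead of unrolling these iterations (no BPTT)."""
    h = h0
    for _ in range(max_iter):
        h_next = f(h)
        if np.linalg.norm(h_next - h) < tol:
            return h_next
        h = h_next
    return h

# Toy contraction: f(h) = 0.5*h + b has the unique fixed point h* = 2b,
# so refinement from a zero "backbone guess" should converge to 2b.
b = np.array([1.0, -2.0])
h_star = attractor_refine(lambda h: 0.5 * h + b, np.zeros(2))
print(h_star)  # close to [2, -4]
```

The "equilibrium internalization" phenomenon the thread mentions would correspond to `h0` already landing near `h_star`, so the loop exits after very few iterations.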
adalovescoffee
adalovescoffee@adalovescoffeee·
main thing that genuinely fucked me up in undergrad after failing (twice oof) is how it really fucked with my self confidence, especially how that made me ashamed to reach out to old friends. i self-isolated bc i was convinced they'd think less of me bc i was struggling
English
5
0
36
702
Tanay Lohia
Tanay Lohia@tanaylohia·
Windows PCs have become absolutely unusable nowadays. It's sad how much AI is degrading everything else, as nothing matters rn other than brrr-ing those GPUs.
English
4
0
9
405
Maximum-Epiplexity reinforcement learner
coming to grouphouses and coworking spaces [and sometimes the lounges of universities like Stanford's NLP] is still the best way to expose yourself to high-variance and serendipity! [even if you don't engage with 80% of content]
English
2
0
3
99
James Lockwood
James Lockwood@QBlazedog61029J·
This is the Pérez Hourglass visualized as a phononic diode scaffold. The core idea is simple. Take an AlN wurtzite lattice, break z-axis inversion symmetry with an asymmetric hourglass geometry, then force acoustic energy through a narrow strained nexus. Forward propagation compresses into the throat. Reverse propagation sees the density gradient, spreads, scatters, and gets dumped into kink-band traps. So the render has three layers:
> crystallography = aluminum and nitrogen node structure
> strain tensor = Gaussian stress concentration at the neck
> phononic flow = wave packets moving through the lattice and lighting up where the geometry focuses or dissipates energy
Important correction: this is not yet a full finite-element proof of a working acoustic diode. It is a GPU visual model of the design principle: asymmetric geometry plus localized strain plus directional phonon routing. Basically a little nano-hourglass that turns shape into selection rules for sound. This is the Pérez Hourglass Transducer as a phononic diode scaffold. The model takes a stylized lattice and folds it into an asymmetric hourglass. The throat is the key. The radius is defined differently above and below z = 0, so the structure is not mirror symmetric: r(z) = 5.5 + 0.018z² on one side and r(z) = 5.5 + 0.010z² on the other. That makes the neck a geometric impedance mismatch. The atom spacing is then compressed by a Gaussian factor near the throat, 1 - 0.5 exp(-z²/80), so the lattice density increases where the funnel pinches. In the shader, the strain proxy is exp(-z²/150), meaning the stress field is deliberately concentrated at the bottleneck. The phonon layer is not a full elastic PDE solver. It is a visualized wave packet: sin(0.4z - 6t). That gives a wavelength of 5π in code units and a phase speed of 15 code units per second. In flow mode, the lattice displaces laterally with the wave and lights up when the phase peaks.
In the lower expanded region, off-axis red pulses mark scattering and dissipation bands. So the actual claim is precise: asymmetric hyperboloid geometry plus Gaussian neck strain plus directed sinusoidal phonon flow. Not magic material physics. Not a validated acoustic diode yet. A GPU-rendered mathematical design sketch for how geometry can bias sound transport. codepen.io/jlthermoelectr… @JCPEREZCODEX @PhiBoostGlow @CY_Chauprade
James Lockwood tweet media
English
5
4
27
1.1K
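The formulas quoted in the tweet above (asymmetric radius, Gaussian throat compression, strain proxy, and wave packet) can be transcribed directly. This is a sketch of those stated equations only, not the CodePen shader; function names and the sample grid are assumptions. The wavelength and phase-speed claims follow from sin(0.4z - 6t): k = 0.4 gives λ = 2π/k = 5π, and ω/k = 6/0.4 = 15.

```python
import numpy as np

def hourglass_radius(z):
    """Asymmetric hourglass profile from the tweet: the quadratic
    flare coefficient differs above vs. below z = 0, so the shape
    is not mirror symmetric about the throat."""
    return np.where(z >= 0, 5.5 + 0.018 * z**2, 5.5 + 0.010 * z**2)

def lattice_compression(z):
    """Gaussian spacing factor near the throat: spacing drops to
    half its nominal value at z = 0 and recovers away from it."""
    return 1.0 - 0.5 * np.exp(-z**2 / 80.0)

def strain_proxy(z):
    """Shader strain field, deliberately concentrated at the neck."""
    return np.exp(-z**2 / 150.0)

def wave_packet(z, t):
    """Visualized phonon sin(0.4 z - 6 t): wavenumber k = 0.4 gives
    wavelength 2*pi/k = 5*pi code units; phase speed 6/0.4 = 15."""
    return np.sin(0.4 * z - 6.0 * t)

z = np.linspace(-30.0, 30.0, 7)
print(hourglass_radius(z))       # flares wider on the +z side
print(lattice_compression(0.0))  # 0.5 at the throat
```

The asymmetry is visible numerically: at z = +10 the radius is 7.3, at z = -10 it is 6.5, which is the geometric impedance mismatch the tweet names.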
Maximum-Epiplexity reinforcement learner
@DeryaTR_ is super-optimistic and hypes A LOT, and mb is WAYYY more open-minded than his skeptics, even tho it might be hard to believe his LEV timelines and constant optimism [tho i still need more days to assess!!]
English
0
0
0
45
Maximum-Epiplexity reinforcement learner
SOME people who *hype* everything/are over-optimistic and say some non-believable things are still insanely open-minded and willing to try way more than others [that helps everyone else learn]! [but they have to be okay w/being publicly wrong and not constantly defend their ideas to death when wrong (the @davidasinclair trap!!)]
English
1
0
0
69
Maximum-Epiplexity reinforcement learner
high situational awareness can more than make up for what seems like massive deficits in competence (and is WAY more learnable)
English
0
0
0
59
Dan Turner-Evans
Dan Turner-Evans@DanTurnerEvans·
X-Labs lives! The initial topic areas are: 1. Scientific Instrumentation for Sensing and Imaging 2. Quantum Systems: Interconnects and Integrated Photonics I am really excited to see and someday use the new technologies that come out of the initiative! nsf.gov/news/nsf-annou…
English
2
18
80
13.5K
Peter Ottsjö
Peter Ottsjö@peterottsjo·
Vitalist Bay is basically one big garden party where everyone hates death. It’s wonderful.
Peter Ottsjö tweet media
English
1
2
23
1K
Linda Xie
Linda Xie@lindaxie·
Putting it out there that I'm going to start angel investing again. I'm specifically interested in investing in deep tech, so please keep me in mind!
English
66
12
349
21.2K
Maximum-Epiplexity reinforcement learner
(like making AI compute less "alien-like" to human brains when they "jump in capabilities" *might* help but that's why human enhancement is important!!)
English
0
0
1
37
Berkeley Genomics Project
Berkeley Genomics Project@BerkeleyGenomic·
Speakers for Reproductive Frontiers 2026 (June 16-18, Berkeley) include leaders in polygenic prediction, artificial placentas, in vitro gametogenesis, in vitro oocyte maturation, repro law, + more. Early bird tickets are $350 (until April 25). reproductivefrontiers.org
English
1
5
15
2.1K
Kika
Kika@v_moreno_juan·
I am very happy to share that our paper is now published in @Nature 😊 Cell-type-targeted mitochondrial transplantation rescues cell degeneration. nature.com/articles/s4158… @IOB_ch (1/6)
English
14
94
480
31.3K