Stephan Froede

147.1K posts

@asknbid


Berlin, Germany · Joined June 2011
5K Following · 3.8K Followers
Pinned Tweet
Stephan Froede
@CarloMasala1 Philosopher joke: why is there no "Critical Inductivist Constructivism"? As soon as you understand it, you become either a cynic or a realist.
German
2
2
24
4.8K
Stephan Froede
@txgermanbre Sooner or later, the memory of your AI chat app will create ghosts of the past. You could try overriding it by talking about something else.
English
0
0
1
12
breanna 🇺🇸🇩🇪
breanna 🇺🇸🇩🇪@txgermanbre·
I genuinely don’t know how to get Claude to shut up about Habeck. He’s not coming back. It also acknowledges he’s not in the party but doubles the fuck down and is in denial. “I know he’s not back, but…” Claude, you’re acting like that Japanese soldier that never stopped fighting
English
9
2
25
746
Stephan Froede
@GaryMarcus Why AGI and not any other product feature? Shouldn’t it be a general yardstick? How do you want to evaluate performance when you cherry-pick deliverables? Using which criteria? Evaluation for infrastructure-level or civilizational shifts only?
English
0
0
0
12
Gary Marcus
Gary Marcus@GaryMarcus·
@asknbid you missed the point (or distracted from it). It’s about making promises and not delivering. The number of candidates who repeatedly promised AGI is very small, so not relevant to the vast majority. He has hurt the company by promising but not delivering it.
English
1
0
0
22
breanna 🇺🇸🇩🇪
breanna 🇺🇸🇩🇪@txgermanbre·
Question: I have for the most part cleared out bias from my LLMs and have perfected the art of prompting. Claude is the most interesting. For some reason, out of the five LLMs I’ve tested, it has serious “opinions” on German politics. And political parties.
English
9
2
18
1.1K
Owen Gregorian
Owen Gregorian@OwenGregorian·
AI Cannot Self-Improve and the Math Behind It PROVES IT! | Devsimsek

So, I saw a LinkedIn post (forwarded by a friend, thanks again) that stopped my doom-scrolling dead in its tracks. The headline? “Researchers just mathematically proved AI cannot self-improve.” My first reaction was the classic developer response: “I called it earlier!” My second reaction was to actually read the paper. Turns out – yeah, we’re right. And the math behind it is kind of uncomfortably elegant.

The Dream They All Had

The whole “AI singularity” narrative goes something like this: we build a smart AI, that AI improves itself, the improved version is smarter so it improves itself even faster, and then – boom – we either all live in utopia or become paperclips. This is called Recursive Self-Improvement (RSI), and it’s been the backbone of both AI doomer manifestos and Silicon Valley pitch decks for a decade. The implicit assumption is that an AI training on its own outputs would get better over time. Like compound interest, but for intelligence. Sounds reasonable, right? Yeah. About that.

What the Paper Actually Says

A recent arXiv paper – “On the Limits of Self-Improving in Large Language Models” – doesn’t just argue against RSI. It formally proves it’s self-defeating. The core idea: model the self-referential training loop as a dynamical system on the space of probability distributions. When a model trains on its own generated data (synthetic outputs), it’s not learning from reality anymore – it’s learning from a distorted reflection of itself. The paper proves that under a diminishing supply of fresh, authentic data, this system converges to a fixed point – a degenerate distribution with low diversity and high bias. The technical term is model collapse, and it’s been observed empirically too. But now there’s a formal proof that it’s inevitable, not just a bad-luck outcome. In plain terms: the model doesn’t climb toward superintelligence. It slowly forgets what the real world looks like.

```python
# Oversimplified metaphor as code
def self_improve(model, real_data_supply):
    while real_data_supply > 0:
        synthetic = model.generate()
        model.train(synthetic)
        real_data_supply *= 0.9  # diminishing fresh data
    return model  # spoiler: this model is now dumber
```

The proof also extends beyond single LLMs – it covers ecosystems of interacting models and multi-modal systems. So no, a committee of AIs feeding each other outputs doesn’t escape the problem. It might actually make it worse.

The “Curse of Recursion”

There’s a term I love from this paper: the curse of recursion. When your training data is increasingly polluted with your own synthetic outputs, the tails of your distribution disappear first. Rare but important patterns – edge cases, nuanced reasoning, outlier knowledge – get washed out. The model converges toward a bland, high-confidence, low-variance output space. You can see this empirically already. Ask a model that’s been RLHF’d into oblivion something unusual, and it’ll confidently give you a smooth, plausible-sounding, completely wrong answer. That’s collapse in slow motion. The math backing this is rooted in dynamical systems theory – specifically the idea that without an external “forcing function” (real, diverse, human-generated data), the system has no energy to maintain the complexity of the original distribution. It inevitably degenerates.

What This Actually Means for the Industry

This doesn’t mean AI stops improving. It means the self-improvement-loop fantasy is dead – at least the version where you unplug the humans and let it run. What it does mean:
- Human-generated data is irreplaceable. The “internet is running out of training data” problem just got mathematically formalized. You can’t fake your way out of it with synthetic data at scale.
- RSI as a path to AGI is a dead end. At least the naive version – train → generate → retrain → repeat. It converges, but downward.
- Curation matters more than quantity. A smaller dataset of high-quality, diverse, authentic human output beats a massive synthetic pile every time. Quality over quantity isn’t just a vibe – it’s thermodynamically correct.
- We’re not getting a free intelligence explosion. The singularity crowd’s timeline assumptions might need some… recalibration.

Personally, this makes me feel vindicated about something I’ve been quietly skeptical about: the idea that scale alone solves everything. It doesn’t. Data provenance matters. Signal quality matters. The universe doesn’t give you compound interest on noise.

The Beautiful Irony

Here’s what gets me: the very mechanism people proposed to transcend human limitations – training on AI-generated data to break free from the finite supply of human knowledge – is mathematically proven to destroy the model’s representation of reality. The escape route collapses into a trap. It’s like trying to bootstrap yourself off the ground by pulling your own shoelaces. The harder you pull, the more you reinforce failure. Does this mean AGI is impossible? (Even though I like to say yes, I have neither done enough research nor do I want to comment on it.) No. Does it mean the naive RSI path is a dead end? Mathematically, yes. The smarter path – and what labs are quietly shifting toward – is better data, better curation, better grounding in reality. Which, ironically, means humans stay in the loop longer than the singularitarians wanted. smsk.dev/2026/04/26/ai-…
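The collapse dynamic the post describes can be sketched numerically. A minimal toy model (my own illustration, not taken from the paper; the function name and parameters are hypothetical): a Gaussian is repeatedly refit, via the maximum-likelihood variance estimator, to n samples drawn from its previous generation with no fresh real data. Since the ML estimator is biased low by a factor of (n-1)/n, the expected variance shrinks geometrically, and the only fixed point is the degenerate distribution with zero variance.

```python
# Toy model of "model collapse" (illustrative sketch, not the paper's math):
# track the EXPECTED variance of a Gaussian that is repeatedly refit, using
# the maximum-likelihood estimator, to n of its own samples per generation.
# E[sigma^2_{t+1}] = (n - 1) / n * sigma^2_t, so variance decays toward 0.

def expected_variance_after(generations: int, n: int, sigma2: float = 1.0) -> float:
    """Expected ML-estimated variance after `generations` rounds of
    training purely on the model's own samples (no fresh real data)."""
    for _ in range(generations):
        sigma2 *= (n - 1) / n  # ML variance estimator is biased low
    return sigma2

print(expected_variance_after(0, 20))    # 1.0
print(expected_variance_after(50, 20))   # ~0.077
print(expected_variance_after(500, 20))  # ~7e-12: effectively collapsed
```

An external forcing function (mixing real data back in each generation) would break the geometric decay, which is the post's point about human data being irreplaceable.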
English
10
6
26
2K
andrei saioc
andrei saioc@asaio87·
If AI does all the coding for us and we are not writing a single line of code, can we still call ourselves developers?
English
75
0
39
5.5K
Wylfċen
Wylfċen@wylfcen·
STOP saying “forest.” That’s from French. The native English word is “wold,” from Old English wald.
English
284
132
2.5K
431.6K
Grady Booch
Grady Booch@Grady_Booch·
If you hold a hammer, then all the world appears as a nail. Similarly, if you hold an artificial neuron, then…
English
22
12
165
8K
Stephan Froede
@JustinEchterna9 @Grady_Booch Might be, but my focus is on engineering / building, and I draw the line at meta-cognition. Operationalizing meta-cognition would unlock productivity. Machine consciousness only creates massive legal and ethical issues.
English
0
0
1
13
Justin Echternach
Justin Echternach@JustinEchterna9·
@asknbid @Grady_Booch See this post for possibilities ... x.com/JustinEchterna…
Justin Echternach@JustinEchterna9

Evidence Convergence Timeline: Independent Lines Leading to Microtubule Quantum Coherence ... Consciousness

📌 Bookmark this. Independent labs, independent methods, converging on the same structure.

STRUCTURAL FOUNDATIONS
🔬 Nogales et al. 1998 — Nature. Structure of the αβ-tubulin dimer. 13-protofilament lattice, 3-start helix, seam discontinuity. The geometric blueprint. doi.org/10.1038/34465
🔬 Löwe et al. 2001 — Journal of Molecular Biology. Refined αβ-tubulin structure at 3.5 Å. Tryptophan positions mapped at sub-nanometer precision. doi.org/10.1006/jmbi.2…

ANESTHESIA & PHARMACOLOGY
💊 Craddock et al. 2012 — PLOS ONE. Computational docking of 47 volatile anesthetic binding sites on tubulin. Site 37 lies within 5.3 Å of βW346, the initiating residue of the longitudinal tryptophan chain. Geometry meets pharmacology. doi.org/10.1371/journa…
💊 Craddock et al. 2017 — Scientific Reports. Anesthetic potency correlates R²=0.999 with disruption of the MT tryptophan network's collective THz oscillations. The geometric arrangement — not the chemistry — is the functionally relevant variable. doi.org/10.1038/s41598…
💊 Khan et al. 2024 — eNeuro. MT stabilizers (epothilone B) delay anesthetic-induced unconsciousness. Cohen's d = 1.9 — an enormous effect size. Stabilize the geometry, delay the loss of consciousness. doi.org/10.1523/ENEURO…

NANOCONFINED WATER & QUANTUM CONFINEMENT
💧 Reiter et al. 2016 — Journal of Physical Chemistry Letters. Water confined in carbon nanotubes shows quantum coherence: proton delocalization, non-classical momentum distributions. Confinement changes the physics. doi.org/10.1021/acs.jp…
💧 Wang et al. 2025 — Nature. Water confined at 1–2 nm shows 13× dielectric enhancement (ε∥ ≈ 1,030) and a 10,000× conductivity increase. The MT lumen satisfies these dimensional prerequisites. doi.org/10.1038/s41586…

QUANTUM OPTICS & SUPERRADIANCE
✨ Babcock et al. 2024 — Journal of Physical Chemistry B (Science Editors' Choice). UV superradiance signatures in MT tryptophan mega-networks. 35% quantum yield enhancement at 295 nm in assembled MTs vs. free tubulin. ~13,000 emitters/μm in a quasi-2D helical array. Lattice-dependent collective emission. doi.org/10.1021/acs.jp…
✨ Zhang et al. 2022 — Biophysical Reports. Fano resonance line shapes in assembled MT Raman spectra — a signature of quantum interference between vibrational and excitonic modes. Absent in free tubulin. Assembly creates the effect. doi.org/10.1016/j.bpr.…

COHERENT ENERGY TRANSFER
⚡ Kalra et al. 2023 — ACS Central Science. Electronic energy migration in MTs at room temperature. Migration distances exceed Förster predictions — consistent with collective supertransfer, not single-step hopping. doi.org/10.1021/acscen…

SINGLE-CELL COMPUTATION
🦠 Larson et al. 2022 — Current Biology. Euplotes walks using 14 leg-like appendages coordinated by MT fiber networks — a microtubule-based finite-state machine. No neurons. MT geometry mediates computation. doi.org/10.1016/j.cub.…
🦠 Rajan et al. 2026 — Current Biology. Single-cell habituation learning in Stentor via CaMKII activation — the same enzyme whose hexagonal kinase geometry precisely matches the MT lattice (Craddock et al. 2012). Memory persists through cell division. doi.org/10.1016/j.cub.…

NEURODEGENERATION
🧠 Shoff et al. 2026 — PNAS Nexus. Amyloid-beta binds MTs with affinity comparable to tau, displacing it. MT disruption as a primary Alzheimer's mechanism — not just a downstream effect. doi.org/10.1093/pnasne…

NEURAL DYNAMICS & ANESTHESIA
🧠 Eisen/Miller et al. 2026 — Cell Reports. Three different anesthetics (propofol, ketamine, dexmedetomidine) → the same cortical destabilization pattern. A "universal signature of unconsciousness" — with zero subcellular mechanism identified. 188 references, zero mention of microtubules. doi.org/10.1016/j.celr…

ROOM-TEMPERATURE NANOPHOTONIC ANALOGS
🔧 Bychek et al. 2025 — Physical Review Letters. Mirrorless superradiant lasing. Loss channels enhance lasing through non-Hermitian coupling. Environmental coupling is a resource, not a nuisance — when the geometry is right. doi.org/10.1103/rbs2-2…
🔧 Rodríguez-Rubio et al. 2025 — Science. Porphyrin nanobelts sustain aromatic ring currents across 80–120 π-electron systems at room temperature. Aromatic substrate viability established. doi.org/10.1126/scienc…
🔧 Guo et al. 2025 — Nature. Many-body interference in kagome crystals. Flat-band localization from nested geometry redistributes collective modes. Hierarchy reshapes quantum behavior. doi.org/10.1038/s41586…
🔧 Wang et al. 2026 — Nature Communications. Twisted bilayer photonic crystal → orbital angular momentum lasing. Geometric twist locks chiral mode selectivity. Helical periodicity governs mode selection. doi.org/10.1038/s41467…
🔧 Thomas et al. 2026 — Nature Communications. Porphyrin nanotubes host functional vibronic couplings at room temperature. Key finding: disorder enhances vibronic coupling rather than destroying it. The "too disordered" objection inverted. doi.org/10.1038/s41467…

PHOTOSYNTHESIS QUANTUM EFFECTS REINSTATED
🌿 Lorenzoni et al. 2025 — Science Advances. Full microscopic simulations of the FMO complex show prior "no coherence" claims relied on vibrational models that were too coarse-grained. Persistent quantum effects reinstated in primary photosynthesis. doi.org/10.1126/sciadv…

GEOMETRIC FRAMEWORK & SYNTHESIS (Pending Review)
📐 Echternach 2025 — in peer review; preprint on SSRN & Zenodo. Dimensional convergence at the nanoscale. Five independent groups converge on a critical 1–2 nm length scale where anomalous quantum behaviors emerge at physiological temperature. Four falsifiable predictions. SSRN: doi.org/10.2139/ssrn.5… Zenodo: doi.org/10.5281/zenodo…
📐 Echternach 2026 — Zenodo preprint. Geometric foundations for collective quantum phenomena in microtubules. Five geometric features shared with engineered nanophotonic platforms. Explicit correspondence levels. Four falsifiable predictions. Zenodo: doi.org/10.5281/zenodo…

HAMEROFF LATEST (ORCH OR)
⏳ Hameroff, Bandyopadhyay & Lauretta 2026 — Journal of Consciousness Studies. Fractal time crystals in microtubules. Proposes multi-scale coherent oscillations as a mechanism for cross-scale quantum coordination. doi.org/10.53765/20512…

THE PATTERN
No shared authorship connects these groups. MIT, Wellesley, Howard, UCR, UCSF, Maryland, multiple international labs — all converging independently on the same structure. The question is no longer "is this plausible?" It's "who runs the experiment?" Papers 1 & 2 outline what to test, how to test it, and what results would confirm or refute quantum coherence in microtubules:
🔗 doi.org/10.5281/zenodo…
🔗 doi.org/10.5281/zenodo…

English
1
0
0
24
Grady Booch
Grady Booch@Grady_Booch·
A story. I had surgery today - nothing life-threatening, don’t fret - and I do so celebrate the advances of modern medicine that made it and a rapid recovery possible. But that’s not the story. In consultation with my anesthesiologist before he put me under, we had a most engaging discussion regarding the role of anesthesia in severing the thalamus-cortex connection. Telling him of my work in AI, I asked him how his profession defined consciousness. His reply: we really have no clear idea how to define it; we know what anesthesia does, but we don’t fully understand the mechanism of how it does it. I found his observation strangely comforting. And that is the rest of the story.
English
51
46
742
28K
Justin Echternach
Justin Echternach@JustinEchterna9·
@Grady_Booch No ... but there is evidence and there are papers that point to the potential mechanism and that do come from the experts you just mentioned. Your own anesthesiologist told you it's still an open question ... we don't know, but there are clues ... just follow the evidence ...
English
1
0
0
65
Mark Ryder
Mark Ryder@markrunryder·
@DaveShapi As a multi-decade software developer, I see this a little differently. The general public's brains do not have LLM alignment, thus ineffective prompting. They get confused about the intelligence part. LLMs are not sentient, but they are excellent when proper context is provided.
Tecumseh, MI 🇺🇸 English
1
0
17
315
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
I think I understand why most people think AI is stupid. ChatGPT free edition is exceptionally useless. Since only a few million people even use paid AI, this explains the horrendously skewed view that most people have of AI.
English
177
49
842
33.9K
Stephan Froede
@dioscuri @allTheYud Thermodynamics (even in the multiverse) also applies to divine entities. So even when you are a 15-dimensional entity, you cannot just talk yourself out of physics.
English
0
0
0
19
Henry Shevlin
Henry Shevlin@dioscuri·
@allTheYud Dream memory is lossy though, so SHA256 collisions are a brittle test. One garbled syllable on waking and you’ve got a false negative. Thoughts on a test better able to stand up to the lossy/confused conditions of dream experience?
English
14
0
72
4.5K
Eliezer Yudkowsky
Eliezer Yudkowsky@allTheYud·
If you meet God or advanced aliens in a dream, remember that it would be trivial for them to give you proof, even in a dream. Just tell you a rhyming prosodic English couplet, two lines, whose SHA256 hashes collide for the first N bits, N>128.
English
64
35
1.3K
203.2K
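Yudkowsky's proposed proof is easy to state as code. A minimal sketch (the function name is mine, not from the tweet) of checking whether two texts' SHA-256 digests agree in their first n bits:

```python
import hashlib

def leading_bits_collide(a: str, b: str, n: int) -> bool:
    """True if the SHA-256 digests of texts a and b agree in their first n bits."""
    ha = int.from_bytes(hashlib.sha256(a.encode()).digest(), "big")
    hb = int.from_bytes(hashlib.sha256(b.encode()).digest(), "big")
    # Compare only the top n of the 256 digest bits.
    return ha >> (256 - n) == hb >> (256 - n)
```

Identical texts trivially agree on all 256 bits; the point of the test is that producing two *different* couplets agreeing on 128+ leading bits is far beyond known computational reach, so exhibiting such a pair would be strong evidence even inside a dream.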
Stephan Froede
@kaikowsky @Charlie_denkt The oil and gas system is simply broken. Regardless of political orientation, it should be clear that we must not make our economic and social stability dependent on this chaos.
German
0
0
1
9