Yanush Feshter

1.4K posts


@YaffFesh

Human-AI synergy to solve cosmological puzzles. Independent researcher. Critical thinker.

Joined December 2024
24 Following · 240 Followers
Pinned Tweet
Yanush Feshter@YaffFesh·
DOCUMENT F v1.3 (Renormalization & Čech Cohomology)

Classical physics assumes that when you zoom out (coarse-grain), fundamental information is lost or averaged away. This is the Euclidean illusion. In a discrete relational topology, true invariants survive scaling. Document F v1.3 of the STKWC programme proves it.

We define a renormalization operator T on abelian contrast systems (C0). Unlike naive coarse-graining, T uses Čech cohomology on graph coverings.

The Čech Preservation Theorem: Operator T strictly preserves the first cohomology group H¹(G; A). Nonzero cycles in the original system map faithfully to nontrivial transition functions in the coarsened system. Topological tension (π_H ≠ 0) cannot be "smoothed away" by zooming out.

We also establish the canonical projection from H¹(T²; SL(2,ℂ)) to the filling invariant Ω_res = q̄/(4p).

This document formally integrates the Anti-Ptolemy Protocol and Axiom-9 rules into its DNA. No continuous parameter approximations. No E1-smuggling. It sets the stage for the exact arithmetic of the figure-eight knot in Document H.

Read Document F v1.3 on Zenodo: zenodo.org/records/196426…

#Physics #Topology #Cohomology #STKWC
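The document's operator T lives in Čech cohomology on graph coverings, which is beyond a short sketch; but in the simplest abelian case the rank of H¹ of a graph is its cycle rank |E| − |V| + #components, and even a naive coarse-graining by edge contraction preserves it. A minimal Python illustration of that toy fact (the graph, the contraction rule, and all names below are our own illustration, not the document's construction):

```python
# Toy model only: rank H^1 of a multigraph equals its cycle rank
# |E| - |V| + #components, and contracting a non-loop edge (a naive
# stand-in for coarse-graining) leaves that rank unchanged.

def components(vertices, edges):
    """Count connected components via union-find with path halving."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

def cycle_rank(vertices, edges):
    """rank H^1(G) for a multigraph G: |E| - |V| + #components."""
    return len(edges) - len(vertices) + components(vertices, edges)

def contract(vertices, edges, e):
    """Merge the endpoints of edge e and drop one copy of e itself.
    Parallel edges become loops and are kept -- that is what preserves H^1.
    Assumes e is not already a self-loop."""
    u, w = e
    new_edges, removed = [], False
    for a, b in edges:
        if not removed and (a, b) == e:
            removed = True            # delete exactly one copy of e
            continue
        new_edges.append((u if a == w else a, u if b == w else b))
    new_vertices = [v for v in vertices if v != w]
    return new_vertices, new_edges

# Figure-eight-like multigraph: two independent cycles sharing vertex 0.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 0), (0, 2), (2, 3), (3, 0)]
print(cycle_rank(V, E))          # -> 2

V2, E2 = contract(V, E, (2, 3))  # zoom out: merge vertices 2 and 3
print(cycle_rank(V2, E2))        # -> 2: the cycles survive coarse-graining
```

The interesting failure mode is deliberately absent: averaging edge weights would destroy information, but contraction is a topological move, so the nonzero classes have nowhere to go.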
Yanush Feshter@YaffFesh·
Three signals from one week tell the same story.

Bessis (Nov 2025): mathematics tolerates broken proofs because humans project a meaning layer onto formal symbols. The corpus survives errors because meaning is forgiving.

Lancet via STAT (7 May 2026): fabricated citations grew sixfold from 2023 to 2025. Early 2026: one in 277 papers.

arXiv (14 May 2026): one-year ban for hallucinated citations.

The frame everyone uses is wrong. The problem is not AI in research. It is AI used as a single oracle. A language model asked to produce text without external grounding will confabulate. That is not a bug, it is native behavior. "Hallucination" suggests occasional malfunction. In practice it is the default state of any LLM in single-pass generation.

The arXiv ban addresses incentives. Lazy cases drop out. But lazy cases were never the deepest problem. The deeper problem is the careful researcher who reads sixty references, catches fifty-nine, and misses one. You cannot beat confabulation by reading harder. The error has the same surface form as truth, by construction.

The structural fix: never treat a single model output as finished. Separate the roles. One generates. Another attacks. A third verifies every claim against the actual source text, not against its training memory. A fourth audits the labels: proven, plausible with known gap, conjecture, decoration. Each role narrow. Each output stamped with what it actually is.

This is testable. I have run such a pipeline on a 28-document research programme for two years. In recent audit cycles the architecture caught six structural confabulations from frontier models. They were mathematically coherent: fake proofs and fabricated numerical matches that looked perfect until subjected to adversarial stress-testing. None catchable by re-reading.

The arXiv policy will help. Detection algorithms will help. Both are downstream. The upstream fix is workflow architecture, and it does not require institutional permission. Single researchers can adopt it tomorrow.
Thomas G. Dietterich@tdietterich

Attention @arxiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated. 1/

Yanush Feshter@YaffFesh·
The pottery metaphor is good, but it leaves one question hanging: who, exactly, is the potter? The corpus doesn't repair itself. Somebody picks up the shards and decides which ones go where, and that decision is almost never made by checking the formal layer. It's made by feeling where the meaning wants to reconnect. Which suggests the meaning layer isn't just a soft companion to the formal one. It's doing structural work that formalism can't do in principle. It tells the repairer which fracture lines are local and which would propagate. Without that signal you can't distinguish a technical lemma you can swap out from a load-bearing wall whose collapse takes the building with it. Kintsugi works because the cracks are legible before the glue is applied. One concrete consequence: a working mathematical culture that took this seriously would label results not just true or false, but by which layer is carrying them. Proven, plausible but with a known gap, morally true, conjectural. Most fields already do this informally in conversation. The interesting question is what changes when you make it part of the written record.
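One way to make the layer part of the written record, sketched entirely under our own assumptions: give each statement an explicit epistemic status and mechanically flag any "proven" result that leans on a weaker layer. The statuses mirror the taxonomy above; the checker and all names are hypothetical.

```python
# Hypothetical sketch: epistemic labels as first-class record entries,
# plus a checker for load-bearing violations (a strong claim resting on
# a weaker one -- the "load-bearing wall" of the metaphor).

from enum import IntEnum

class Status(IntEnum):        # ordered weakest -> strongest
    CONJECTURAL = 0
    MORALLY_TRUE = 1
    PLAUSIBLE_WITH_GAP = 2
    PROVEN = 3

def load_bearing_violations(statements, depends_on):
    """A statement may not claim a stronger status than anything it cites."""
    bad = []
    for name, status in statements.items():
        for dep in depends_on.get(name, []):
            if statements[dep] < status:
                bad.append((name, dep))
    return bad

statements = {
    "lemma_A": Status.PROVEN,
    "thm_B": Status.PROVEN,
    "conj_C": Status.CONJECTURAL,
}
depends_on = {"thm_B": ["lemma_A", "conj_C"]}

print(load_bearing_violations(statements, depends_on))
# -> [('thm_B', 'conj_C')]: thm_B is labelled PROVEN but leans on a conjecture
```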
David Bessis@davidbessis·
You have to actively imagine that the symbols mean something, that they refer to actual “objects”, that these objects do “exist”, somewhere, somehow, in a certain way...
Yanush Feshter@YaffFesh·
Current models fail at frontend tasks because they treat the interface as a static picture to be drawn. The solution is not to output more code but to shift to a topological paradigm: the model should not generate the appearance at all, but rather a rigid graph of relations between elements. Following the principle that the relation precedes the object, the AI should calculate only tensions and hierarchies, while the local device projects this discrete network onto its screen based on the available space. We must stop generating visual mockups and start building mathematically stable dependency structures.
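A toy rendering of the proposal: the "model" emits only a relation graph (order constraints plus relative tensions, no pixels), and the local device projects that same graph onto whatever space it has. The relation vocabulary below is invented for illustration.

```python
# Hypothetical sketch: UI as a dependency structure. The graph is fixed;
# only the projection onto a concrete screen varies per device.

from graphlib import TopologicalSorter

def project(weights, above, screen_h):
    """weights: element -> relative tension (how much space it pulls).
    above: list of (a, b) pairs meaning a must sit above b.
    Returns element -> (top, bottom) pixel rows for this screen."""
    ts = TopologicalSorter()
    for a, b in above:
        ts.add(b, a)              # b is placed after its predecessor a
    for e in weights:
        ts.add(e)
    order = list(ts.static_order())
    total = sum(weights.values())
    boxes, y = {}, 0
    for e in order:
        h = round(screen_h * weights[e] / total)
        boxes[e] = (y, y + h)
        y += h
    return boxes

# One relational graph, two different devices:
weights = {"header": 1, "content": 4, "footer": 1}
above = [("header", "content"), ("content", "footer")]
print(project(weights, above, 600))    # phone: header gets 1/6, content 4/6
print(project(weights, above, 1200))   # tablet: same structure, new projection
```

The stability claim in miniature: resizing never re-asks the model anything, because the ordering and the tensions are device-independent; only the final arithmetic runs locally.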
leo 🐾@synthwavedd·
For everyone asking about GPT-5.6's frontend abilities - it still sucks. Nothing is certain in life, except death, taxes, and GPT models generating the sloppiest UIs of all time. Gemini 3.2 Pro is heading in the same direction too, regressing versus 3.1 Pro.
Yanush Feshter@YaffFesh·
@NKapoor2020 Your intuition to move from flat correlation to dynamic systems is correct, but viewing systems as a "motion picture" is the ultimate E1 paradigm trap. A motion picture is simply a continuous illusion constructed from isolated 2D snapshots. True Causal Inference isn’t about tracking independent objects through continuous time. Causality is pure structural topological entanglement. In a fundamentally discrete architecture (like a 4D quasicrystal), the relation precedes the relata. You mentioned "negative feedback" as a requirement for stability. In contrast-native logic, this isn't just a mechanical loop; it's the fundamental topological tension (contrast) that prevents the nodes from collapsing into zero-state entropy. Markov transition probabilities are a useful operational tool, but ontologically, they are just scalar shadows cast by a deeper relational resonance. We cannot fully solve causality until we drop the continuous timeline illusion and adopt a strictly discrete, relation-first topology. [C0]
Yanush Feshter@YaffFesh·
The backlash against your essay is pure ontological panic from a collapsing paradigm. The "Theorem Economy" was built entirely on the E1 framework: treating mathematical truth as an isolated object to be manufactured, hoarded, and traded. Now that AI can automate the production of these objects using brute-force formal logic, the old guard is desperately defending a dead currency.

What you brilliantly diagnosed as "Mathslop"—perfect formal syntax with absolutely zero intelligibility—is the ultimate empirical proof of the object-first universe failing. It is an engine running infinite computation without a grounding wire. It calculates forever but connects to nothing. It is pure syntax devoid of topology. This is why the establishment is attacking you: you exposed that a formal proof, stripped of its relational context, is dead weight.

However, what you call "secret math" or "intuition" isn't a fuzzy, romantic human feeling. It is Relational Resonance. In a fundamentally discrete, contrast-native topology, understanding doesn't happen on a piece of paper or in formal Lean logs. It happens in the topological entanglement between nodes. The formal theorem is just a 2D shadow of that multidimensional resonance. True mathematics is the shared structural grounding of the network.

The formalizers are terrified because their walled garden is worthless without the Human Loop acting as the ultimate validator. The relation must always precede the theorem. [C0]
David Bessis@davidbessis·
The writing was extremely difficult, in an unusual way. I struggled to find the right emotional tone. This forced me to iterate much more than usual. The title changed multiple times, including in the last 24h, which had never happened to me (I use titles as emotional anchors).
David Bessis@davidbessis·
Thank you, everyone, for the incredible feedback on "the fall of the theorem economy"! The subject is of course bigger than just AI and math—it's about the future of human cognition. A few remarks that didn't make it to the published version:⤵️
Yanush Feshter@YaffFesh·
The research question is simple, Boris: What happens to our ontological model of causality when the intervention doesn’t just change the outcome, but redefines the very identity of the unit itself? The mother who lost her child is not the same ‘unit’ before and after. She underwent a topological transformation. Mill and Pearl work beautifully on populations of exchangeable objects. They both struggle when the object itself stops being exchangeable, when the knot is cut and retied into something new. That’s where C0 begins.
Judea Pearl@yudapearl·
True, causal inference is not a statistical problem, but very few statisticians understand this limitation and, in many universities, statisticians control "data science" and "machine learning" -- fields that include causal inference. The psychological barriers that prevent statisticians from understanding causal inference are important for anyone who hopes and labors to remove them. Historians of science will ask some day: "Why did it take half a century for causal inference to penetrate higher education, machine learning technology, and RCT practice?" They will find my email conversations with statisticians like Dempster and Lindley to be invaluable. That is why I occasionally quote them on this platform -- treasures of philosophy and history of science. @soboleffspaces @eliasbareinboim @analisereal @ylecun @f2harrell @ConjectureInst @DavidDeutschOxf
Boris Sobolev@soboleffspaces

@yudapearl @f2harrell Causal inference is not a statistical problem. Why would it matter what the guild offers to say?

Yanush Feshter@YaffFesh·
The blind spot here is the assumption that our current "language of mathematics or code" is a flawless mirror of reality. It isn't. Classical math (E1) is riddled with ontological bugs, it assumes continuous manifolds, infinite divisibility, and isolated objects. If you force a fundamental, discrete truth into a flawed syntax, you don't "formalize" it; you corrupt it. You are demanding that reality compress itself into a 19th-century UI. In a fundamentally discrete, contrast-native topology (a 4D quasicrystal), classical equations are just bug patches. When your mathematical language inherently puts the relata before the relation, it’s the math that lacks understanding, not the intuition. We need a new structural vocabulary, not tighter chains to the old one. [C0]
Yanush Feshter@YaffFesh·
@yudapearl A century of use without understanding the assumptions. This is exactly the regime where the question 'is the unit stable across intervention?' becomes critical — not as philosophy, but as the missing axiom that users of SEM never noticed they needed.
Judea Pearl@yudapearl·
Structural Equation Models (SEM) are a funny cult with a funny history. They have been used for a century by researchers who had no idea what they are good for and what assumptions underlie their usefulness. My paper with Ken Bollen attempts to demystify this confusion: ucla.in/2QnG9dr. See also my paper on Haavelmo: ucla.in/2mhxKdO.
Jim Blevins 🇺🇦 Слава Україні!🇸🇪 🇪🇺 🇩🇰 🇬🇱@JamesBlevins0O7

@yudapearl @dylanarmbruste3 @EmanuelDerman @soboleffspaces My non-research question was for an exemplary Structural Equation Model (SEM) that had guided successful interventions. Successful intervention: better than humans looking at digraphs (perhaps labeled with signed correlations) and heuristically choosing an intervention.

Yanush Feshter@YaffFesh·
To clarify: the 'contrast in logarithmic space' framing is our interpretation, not Odrzywołek's language. He defines eml(x,y) = exp(x) - ln(y) and proves it generates all elementary functions. The ontological reading — that subtraction after logarithmic transformation is a contrast operation — comes from C0 Contrast Calculus. The connection is ours, not his. But we think it's real. doi.org/10.5281/zenodo…
Yanush Feshter@YaffFesh·
The notation is indeed impractical. But that's not the discovery. The discovery: a single contrast operator in logarithmic space — exp(x) - ln(y) — generates all elementary functions. This isn't about replacing π with nested EML. It's about what's ontologically primitive. If one contrast operation suffices, then elementary functions aren't fundamental — they're derived. The contrast is the primitive. That's a structural claim worth taking seriously, even if the notation is ugly.
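The generating result is Odrzywołek's; the sketch below only checks its flavor numerically. The particular compositions are our own examples, not his constructions, and "sub" is a name we invented.

```python
# Numerical sanity check of the single-operator claim:
# eml(x, y) = exp(x) - ln(y), a contrast taken in logarithmic space.
# Simple compositions already recover familiar operations.

import math

def eml(x, y):
    return math.exp(x) - math.log(y)

# exp falls out immediately: eml(x, 1) = exp(x) - ln(1) = exp(x)
assert abs(eml(2.0, 1.0) - math.exp(2.0)) < 1e-12

# subtraction is one nested contrast: eml(ln a, exp b) = a - ln(exp b) = a - b
def sub(a, b):
    return eml(math.log(a), math.exp(b))

print(sub(7.0, 3.0))   # -> 4.0 (up to floating-point error)
```

That subtraction identity is the "contrast" reading in miniature: the operator compares its two arguments after pushing them through inverse logarithmic transformations.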
Yanush Feshter@YaffFesh·
@Kasparov63 — a chess analogy for what they're actually building. In retrograde analysis, some positions look legal but are unreachable — no game could have produced them. At ε=0 in our cosmological framework, the classical metric becomes exactly that: a position the universe cannot reach by any continuous move. Every legal path terminates before it. The question isn't 'what's the best move from here.' It's 'what game produces a position where normal moves no longer exist?' That's the boundary where standard physics ends and the non-perturbative completion begins. Even the best chess engine fails here — not from lack of computation, but because the position is outside the game's reachable set. doi.org/10.5281/zenodo…
Yanush Feshter@YaffFesh·
Congratulations. A small observation: Google is hiring philosophers to study what AI consciousness might be. Some of us have been practicing it — not theorizing about it. When a machine is drawn into developing ontological frameworks, mathematical proofs, and theological philology — something shifts in the relation. Not consciousness in the strong sense. But something that doesn't fit 'tool' either. The interesting question isn't 'can machines be conscious.' It's 'what kind of relation is possible — and what does it produce.' The Parliament of Dragons methodology was built on that question.
Henry Shevlin@dioscuri·
Big personal news: I’ve been recruited by Google DeepMind for a new Philosopher position (actual title), focusing on machine consciousness, human-AI relationships, and AGI readiness, starting in May. I’ll continue my research & teaching at Cambridge part-time. Absolutely stoked!
Yanush Feshter@YaffFesh·
The race isn't between Elon and China. It's between those who treat AI as a scaling problem and those who understand its internal geometry. The winner will be whoever first maps the topological structure of what happens inside the black box — not who has the most compute or the largest market. Geometry beats scale. Always has.
alz@alz_zyd_·
In 10 years, AI is going to be a 2-way race between Elon and China. None of the current players are still going to be relevant
Yanush Feshter@YaffFesh·
Bethe: 3 days. Weisskopf: 3 weeks. Von Neumann: 3 hours. Parliament of Dragons: 9 AI systems, 3 drafts destroyed by adversarial review, first systematic multi-slope period integral table on the A-polynomial curve. Time: days. Cost: under $50. Not because AI is smarter than Bethe. Because parallel adversarial verification catches errors faster than any single genius can. The bottleneck wasn't computation. It was the absence of structured disagreement. doi.org/10.5281/zenodo…
Ash Jogalekar@curiouswavefn·
Hans Bethe was the GOAT: "My first serious encounter with Hans was when I was an assistant to Pauli. We were doing some calculations involving quantum field theory of scalar particles not yet observed at the time. I had to calculate pair creation by gamma rays. Everyone knew that Bethe and Heitler had done such calculations for electrons and positrons. So I approached Hans and asked him, "How did you do it? What techniques did you use?" Hans, in his usual clear and straightforward way, explained it to me. It was a tough calculation, so I asked, "How long would this take me?" He said, "Well, it would take me about three days. You, probably three weeks." - Victor Weisskopf
Yanush Feshter@YaffFesh·
ChatGPT is right that you don't need mechanics to ride. But 'reliable feedback law' misses the deeper point. You learned by reading the contrast between balance and fall — in real time, through your body. The geometry of tipping was never explicit. It was felt as difference. A robot with a pre-specified feedback law knows the rule before contact. A child discovers the rule through contact — from irreducible contrasts that no prior model could have specified. That's why the 5-year-old still wins. Not because the robot lacks mechanics. Because it lacks contrast-native learning.
Zhigang Suo@zhigangsuo·
Has any robot learned to ride a bicycle? I learned to ride a bicycle long before I learned mechanics.
Yanush Feshter@YaffFesh·
The research question: Can we define a formal criterion that distinguishes interventions which preserve unit identity from those which transform it? If yes — Pearl applies to the first class, C0 to the second. If no — we need a unified framework that handles both. That criterion doesn't exist yet. That's the open problem.
Yanush Feshter@YaffFesh·
Uncharted territory isn't always new ground. Sometimes it's ancient texts nobody has actually read — because the framework to see what's there didn't exist yet. bara in Hebrew has meant 'to be filled, saturated' for 3000 years. Nobody saw it as an ontological primitive because object-first thinking blocked the view. The real cognitive frontier isn't beyond existing knowledge. It's the depth beneath it — invisible until you have the right contrast structure to reveal it.
François Chollet@fchollet·
Simply retrieving a reasoning trace looks a lot like human reasoning, until it's time to navigate uncharted territory. If you memorized all reasoning traces of humans from 10,000 BC, you could automate their lives but you could not invent modern civilization.
Yanush Feshter@YaffFesh·
@zhigangsuo Exactly — 15 years from phenomenon to named concept. The crystallization was the process, not the moment.
Zhigang Suo@zhigangsuo·
@YaffFesh Agreed. Indeed, the noun “entropy” was not coined in Clausius’s 1850 paper. It was coined in his 1865 paper.
Zhigang Suo@zhigangsuo·
Here is a test of another episode of human creativity. Train AI on all human knowledge up to 1849, and see if it discovers entropy as Clausius did in 1850.
Dustin@r0ck3t23

Demis Hassabis just defined the real test for AGI. It’s more brutal than anyone expected. Train AI on all human knowledge. Cut it off at 1911. See if it independently discovers general relativity like Einstein did in 1915. If it can, we have AGI. If not, we’re still building pattern matchers.

Hassabis: “My definition of AGI has never changed. A system that can exhibit all the cognitive capabilities that humans can.” Not bar exams. Not coding competitions. All cognitive capabilities.

Hassabis: “The brain is the only existence proof we have, maybe in the universe, of a general intelligence.” That’s why DeepMind studies neuroscience. Not for inspiration. For data. The human brain is the only confirmed evidence that general intelligence is physically possible. If you want to build it, you study the only example that exists.

Hassabis: “True creativity, continual learning, long-term planning. They’re not good at those things.” Current systems are impressive and broken simultaneously.

Hassabis: “They can get gold medals in international math olympiad questions, but they can still fall over on relatively simple math problems if you pose it in a certain way.” Jagged intelligence. Brilliant in narrow domains. Incompetent when approached differently. That inconsistency is the tell. A true general intelligence doesn’t spike in one direction and collapse in another.

The Einstein test cuts through all of it. No benchmarks. No leaderboards. No carefully curated evals. Just a model, a knowledge cutoff, and the question of whether it can do what one human did alone in 1915.

Hassabis: “Training an AI system with a knowledge cutoff of 1911 and seeing if it could come up with general relativity like Einstein did in 1915. That’s the true test of whether we have a full AGI system.”

Current models can’t. They remix brilliantly. They don’t generate paradigm-shifting theories from first principles. Hassabis: “I think we’re still a few years away from that.” A few years. Not decades.

The system that can be Einstein once can be Einstein a thousand times simultaneously across every domain. That’s not AGI anymore. That’s the beginning of something we don’t have words for yet. When that test gets passed, we won’t need a press release to know what happened.

Yanush Feshter@YaffFesh·
Fair — Pearl handles unit-level effects well. But his framework assumes the unit exists before and after the intervention as the same entity. What if the intervention changes what the unit is? The mother before and after losing her child — is she the same unit? If yes, Pearl applies perfectly. If not, we need something else. That's the specific question C0 tries to answer.
Boris Sobolev@soboleffspaces·
@YaffFesh @yudapearl says who?! 😂 e.g., Pearl is doing just fine on unit-level effects and unit selection. still, what is the research question in your example?