Brian Crabtree

5.5K posts

@ourtown2

Curiosity-driven AI expert. No possessions, no agenda. Just exploring math, systems, and strange ideas.

Joined July 2009
434 Following · 439 Followers
OpenIDEA
OpenIDEA@OpenIDEAae·
Your post is trying to answer a deeper question than it first appears. The real question is not simply whether an LLM trained on Newtonian physics could discover relativity. The deeper question is this: if humans outsource too much of the struggle of thinking, do we lose the very conditions that produce original minds? Put differently: can a civilization keep generating Einsteins if fewer people are forced to build intuition the hard way?

The core argument is clear: extraordinary thinkers are often not just people who knew more facts, but people who earned deep intuition through direct contact with problems. That concern is real. But I would not go as far as saying LLMs will prevent the next Einstein.

Tools have always changed cognition. Writing changed memory. Calculus changed what could be offloaded from geometry. Computers changed what could be offloaded from arithmetic and simulation. They did not kill genius; they changed the layer at which genius had to operate.

The real danger is not tool use itself, but premature cognitive outsourcing: offloading before the mind has built enough structure to judge, challenge, and reinterpret what the tool gives back.
English
1
0
0
201
Simo Ryu
Simo Ryu@cloneofsimo·
It's very possible that an LLM trained on Newtonian physics may never come up with relativity to explain cosmic-scale gravity. In that case Einstein would have to intervene and solve it instead. But would he have come up with it, assuming he had offloaded all the physics problem solving to LLMs? I think this is a serious problem. Undoubtedly many GOATs are only GOATs because they built all their intuition from problem solving themselves. Grothendieck famously reinvented measure theory from scratch when he was a teenager. If people offload the RL envs they could have used to LLMs, we will never get the next Einstein.
English
44
12
245
44.6K
Brian Crabtree
Brian Crabtree@ourtown2·
@cloneofsimo There is no local experiment, optical, mechanical, or atomic, that can distinguish between a 10^24 kg mass below the floor and a 9.8 m/s² upward transport of the manifold.
English
0
0
0
21
Brian Crabtree
Brian Crabtree@ourtown2·
@robbertleusink Dijkstra = simplest example of an invariant carrier emerging from constrained transport
English
0
0
0
273
Robbert Leusink
Robbert Leusink@robbertleusink·
Every navigation app on earth finds its route using an algorithm invented by a Dutch computer scientist in 1956. Edsger Dijkstra solved the shortest-path problem in twenty minutes at a café in Amsterdam, without paper; he did it in his head. Google Maps, Uber, and every GPS system alive run on a Dutch mathematician's coffee break.
English
86
522
4.9K
627.3K
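The algorithm from that coffee break fits in a few lines. A minimal sketch of Dijkstra's shortest-path search in Python; the toy road network and its distances are made up for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source`. `graph` maps each node to a list
    of (neighbor, weight) pairs; weights must be non-negative."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road network with made-up distances:
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```

The priority queue always expands the closest unfinished node, which is why non-negative weights are required; modern routing engines layer heuristics (A*, contraction hierarchies) on top of this core.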
Brian Crabtree
Brian Crabtree@ourtown2·
The 2026 Stabilized Frontier Architecture of Arithmetic Geometry: the field has converged on a stratified model where the boundary is treated as a controlled internal deformation rather than a point of failure.
English
1
0
0
7
Rogier Brussee
Rogier Brussee@RogierBrussee·
This is rather a beautiful example of modern math with very precise and profound statements (not to mention the great use Litt made of it for proving that certain Taylor expansions have rational coefficients) that look like total gibberish to those outside (and many inside) math.
Daniel Litt@littmath

Faltings wins the Abel prize! Obviously his work is immensely influential; in my own research I've used his results on p-adic Hodge theory quite a bit. Aside from his proof of the Mordell conjecture, this is my favorite result of his:

English
10
14
294
49.4K
Brian Crabtree
Brian Crabtree@ourtown2·
@littmath G. Faltings, “F-isocrystals on open varieties: results and conjectures”, The Grothendieck Festschrift, Vol. II, Progress in Mathematics 87, Birkhäuser, 1990, pp. 219–248.
Deutsch
0
0
0
89
Daniel Litt
Daniel Litt@littmath·
Faltings wins the Abel prize! Obviously his work is immensely influential; in my own research I've used his results on p-adic Hodge theory quite a bit. Aside from his proof of the Mordell conjecture, this is my favorite result of his:
English
17
26
401
46.6K
Math Files
Math Files@Math_files·
One day, Albert Einstein said he stopped studying mathematics and chose physics instead. When asked why, he replied: “I could tell what was true and what was false… but I couldn’t understand which things were really important.” Then Henri Poincaré shared his story. He said he actually started with physics but moved to mathematics. When asked why, he said: “I could see which things were important… but I wasn’t sure which of them were true.”
English
19
63
493
35.2K
Brian Crabtree
Brian Crabtree@ourtown2·
@AnishA_Moonka When a system minimizes decision variance and encodes coordination locally, it converges toward the same emergent behavior: fast, distributed, and low-conflict execution.
English
0
0
0
14
Anish Moonka
Anish Moonka@AnishA_Moonka·
A single ant has 250,000 neurons. Your brain has 86 billion. That's a 344,000x gap. And yet what you're watching is a colony solving a category of problem that no computer can crack perfectly at scale.

It's called the Steiner tree problem: given a set of points, find the shortest possible network connecting all of them. First posed in 1811, it was proved essentially impossible to solve perfectly in 1972 (the computing time grows so fast with size that the world's fastest supercomputer stalls on a few hundred points). It remains one of the computationally hardest problems known.

Ants solve it with chemistry. When an ant walks a path, it leaves a chemical trail called a pheromone. That trail evaporates over time. Shorter paths get walked faster, so pheromone builds up before it fades. Other ants prefer stronger trails. The colony converges on the shortest route without any single ant knowing the full picture. Jean-Louis Deneubourg at the Free University of Brussels demonstrated this in the early 1990s with a dead simple experiment: two bridges between a nest and food, one twice as long as the other. Within minutes, the colony picked the short one.

In 1991, computer scientist Marco Dorigo took that discovery and turned it into an algorithm (a set of step-by-step instructions for a computer) called Ant Colony Optimization. It's now used to route wires inside microchips with billions of transistors (one study found an 8% reduction in wire length over traditional methods), plan delivery truck routes, and manage internet traffic. The phone you're reading this on was partially designed using math that ants figured out 100 million years before humans existed.

A 2023 study out of Stanford and several other institutions found that turtle ants in the tropical forest canopy build trail networks across tangled branches and vines that approximately solve the Steiner tree problem with zero central control. No ant has any information about the full network. Each one just follows a rule: at each junction, go where the pheromone is strongest. The collective intelligence comes from thousands of these tiny decisions stacking up.

Stanford biologist Deborah Gordon has studied this for decades. She compares it directly to how brains work: no single neuron tells the others what to do, but together they produce thought. A 2024 Rockefeller University study found that individual ants decide whether to leave the nest using the same yes-or-no process that brain cells use to decide whether to switch on. The colony is, in a real mechanical sense, a brain spread across thousands of bodies.

In early 2025, a Weizmann Institute study pitted ant groups against human groups on a task almost identical to this video: navigating a T-shaped object through a series of obstacles. The bigger the human group, the worse they performed: too many competing ideas about which direction to push. The bigger the ant group, the better they got. No ego, no debate, just pheromones and simple rules scaling into something that looks a lot like intelligence.

250,000 neurons each. No leader. No blueprint. Solving problems that stumped mathematicians for two centuries.
The Figen@TheFigen_

They are ants solving a geometric problem and it is mind-blowingly colorful.

English
57
797
3.4K
302.5K
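The two-bridge feedback loop described above can be sketched in a few lines: ants split across the bridges in proportion to pheromone, each bridge is reinforced at a rate inversely proportional to its length (shorter means faster round trips), and trails evaporate. This is a deliberately simplified mean-field toy, not Dorigo's full Ant Colony Optimization, and every parameter value is illustrative:

```python
def two_bridge(steps=1_000, short=1.0, long=2.0, evap=0.05):
    """Fraction of ants choosing the short bridge after `steps` rounds
    of pheromone reinforcement and evaporation (mean-field, no noise)."""
    ps = pl = 1.0                        # initial trail strength on each bridge
    frac_short = 0.5
    for _ in range(steps):
        frac_short = ps / (ps + pl)      # ants split in proportion to pheromone
        ps += frac_short / short         # shorter bridge => faster round trips,
        pl += (1.0 - frac_short) / long  #   so reinforcement rate ~ 1 / length
        ps *= 1.0 - evap                 # trails evaporate
        pl *= 1.0 - evap
    return frac_short

print(two_bridge())          # approaches 1.0: the colony locks onto the short bridge
print(two_bridge(long=1.0))  # equal bridges stay at 0.5: no asymmetry to amplify
```

The point of the toy: nothing in it knows which bridge is shorter. The asymmetry in reinforcement rate plus evaporation is enough to make the collective converge, which is the same mechanism Deneubourg observed.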
Brian Crabtree
Brian Crabtree@ourtown2·
@engineers_feed Nothing from the core survives intact all the way; the energy is carried outward, but the specific photons are constantly replaced.
English
0
0
0
16
World of Engineering
World of Engineering@engineers_feed·
It takes a photon up to 40,000 years to travel from the core of the sun to its surface, but only 8 minutes and 20 seconds to travel the rest of the way to Earth.
English
46
92
1.2K
88K
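The tens-of-thousands-of-years figure is a diffusion estimate: energy random-walking outward with mean free path ℓ needs about (R/ℓ)² scatterings to cover a distance R, for an escape time of roughly R²/(ℓc). A back-of-envelope sketch, where the ~1 mm mean free path is a crude assumed average (the real value varies enormously with depth):

```python
R = 6.96e8     # solar radius, m
c = 3.0e8      # speed of light, m/s
ell = 1e-3     # assumed mean free path, ~1 mm (rough average)

# Random-walk escape time: ~ (R/ell)^2 steps of length ell at speed c.
t_escape_years = R**2 / (ell * c) / (365.25 * 24 * 3600)

# Straight flight from the Sun's surface to Earth (1 AU).
t_direct_min = 1.496e11 / c / 60

print(f"random walk out of the core: ~{t_escape_years:,.0f} years")
print(f"straight flight to Earth:    ~{t_direct_min:.1f} minutes")
```

With these assumed numbers the walk out takes on the order of 50,000 years while the final leg takes about 8.3 minutes, matching the tweet's orders of magnitude.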
Brian Crabtree
Brian Crabtree@ourtown2·
Sure, you need to learn extreme chunking ability, fast pattern recognition, and long-term memory integration, but the raw capacity of working memory is still roughly 4 ± 1 chunks of information. You need working memory = pointers to compressed semantic structures, a table of contents. And it is still very specific to what you learned.
English
0
0
0
4
attentionmech
attentionmech@attentionmech·
Is there really a way to increase working memory for humans?
English
68
1
90
13.4K
Brian Crabtree
Brian Crabtree@ourtown2·
KL(mixture, mixture) ≈ integral over dominant-component regions. Epistemically, it performs constraint-driven model selection across a stratified explanation manifold. Statistically, this computes KL bounds: evaluate knowledge under maximal explanatory pressure and collapse locally to the dominant hypothesis while bounding residual uncertainty.
English
0
0
0
18
Frank Nielsen
Frank Nielsen@FrnkNlsn·
Computational geometry for statistics! Using upper/lower envelopes of 1D statistical mixtures and the log-sum-exp bounds, we can efficiently bound the Kullback-Leibler divergence between mixtures (like GMMs) and the differential entropy of mixtures arxiv.org/abs/1606.05850
English
3
18
110
3.7K
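For intuition: the KL divergence between two Gaussian mixtures has no closed form, which is why bounds like these matter. Below is a brute-force numerical baseline for 1-D mixtures, plain trapezoid-rule integration rather than the envelope/log-sum-exp method of the linked paper; the example mixtures are arbitrary:

```python
import math

def gmm_pdf(x, comps):
    """Density of a 1-D Gaussian mixture; comps = [(weight, mean, sigma), ...]."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in comps
    )

def kl_numeric(p, q, lo=-20.0, hi=20.0, n=20_000):
    """KL(p || q) = integral of p(x) * log(p(x)/q(x)), via the trapezoid rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        px, qx = gmm_pdf(x, p), gmm_pdf(x, q)
        if px > 0.0 and qx > 0.0:       # skip underflowed tails
            weight = 0.5 if i in (0, n) else 1.0
            total += weight * px * math.log(px / qx)
    return total * h

p = [(0.5, -2.0, 1.0), (0.5, 2.0, 1.0)]  # bimodal mixture
q = [(1.0, 0.0, 2.0)]                    # single wide Gaussian
print(kl_numeric(p, q))  # strictly positive; kl_numeric(p, p) is ~0
```

This scales poorly and only works in 1-D, which is exactly the gap the envelope-based bounds are designed to fill efficiently.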
Abhinav
Abhinav@Gayakwad72087·
@thecurioustales The Uncertainty Principle (Heisenberg's Uncertainty Principle): in quantum mechanics it takes the form $\Delta x \Delta p \ge \frac{h}{4\pi}$. In statistics, it is also hidden within the formula for the normal distribution.
English
1
0
0
690
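The link between the two claims can be made explicit: a Gaussian wave packet is precisely the state that saturates Heisenberg's bound, which is one way the normal distribution "hides" the constant $h/4\pi$. A standard textbook computation, stated here without proof of the general inequality:

```latex
% For a Gaussian wave packet of width \sigma,
%   \psi(x) \propto e^{-x^{2}/(4\sigma^{2})},
% |\psi(x)|^{2} is a normal density with standard deviation \sigma,
% so \Delta x = \sigma. Its Fourier transform is again Gaussian,
% with momentum spread \Delta p = \hbar/(2\sigma). Hence
\[
  \Delta x \,\Delta p
    = \sigma \cdot \frac{\hbar}{2\sigma}
    = \frac{\hbar}{2}
    = \frac{h}{4\pi},
\]
% i.e. the inequality \Delta x \Delta p \ge h/(4\pi) holds with
% equality exactly for the normal distribution.
```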
The Curious Tales
The Curious Tales@thecurioustales·
🚨 In the 1700s, French mathematician Georges-Louis Leclerc took a needle, a wooden floor, and a question that sounds almost childishly simple. If you drop a needle randomly onto a surface ruled with parallel lines, and the needle's length equals the distance between those lines, what are the odds it crosses one of them?

The answer is 2 divided by pi. No circles anywhere in that experiment. No curves, no arcs, no radii. Just a straight needle falling onto straight lines through pure chance. And pi crawls out of the probability like it was hiding there the entire time, waiting for someone to ask the right question.

Mathematicians call this Buffon's Needle, and it remains one of the most conceptually violent results in the history of probability. You can physically recreate it on your kitchen floor. Drop a needle 500 times, count the crossings, divide, and you will approximate pi to several decimal places through nothing but randomness and straight lines. The circle was never in the room. Pi showed up anyway.

This is what separates pi from every other mathematical constant. It doesn't stay inside its original context. It migrates. Euler discovered it hiding inside the sum of the reciprocals of all squared integers, a problem involving no geometry whatsoever. The Gaussian bell curve that governs how errors distribute in measurements, how heights vary in a population, how quantum particles spread across space, carries pi in its foundation even though the curve itself was never constructed from a circle.

Physicist Eugene Wigner wrote a paper in 1960 that never got the mainstream attention it deserved. He called it "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." His central bewilderment was precisely this pattern: mathematical structures developed in complete abstraction, with zero intention of describing physical reality, keep turning out to be the exact language the universe was already using before anyone looked. Pi is his strongest case. It wasn't engineered to fit physics. It was found already fitted, in places nobody thought to look for it, in systems that share nothing geometrically with a circle. The needle doesn't know about circles. The universe apparently does.
The Curious Tales@thecurioustales

x.com/i/article/2032…

English
151
775
3.7K
517.1K
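The kitchen-floor experiment takes seconds to simulate. A minimal Monte Carlo sketch of Buffon's Needle with needle length equal to line spacing; the sample count and seed are arbitrary:

```python
import math
import random

def buffon(n=200_000, seed=42):
    """Monte Carlo Buffon's needle, needle length = line spacing = 1.
    A needle crosses a line iff the distance from its centre to the
    nearest line is at most (1/2) * sin(angle)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        d = rng.uniform(0.0, 0.5)          # centre-to-nearest-line distance
        theta = rng.uniform(0.0, math.pi)  # needle angle against the lines
        if d <= 0.5 * math.sin(theta):
            hits += 1
    # Crossing probability is 2/pi, so pi ~ 2 * n / hits.
    return 2 * n / hits

print(buffon())  # ≈ 3.14
```

No circle appears anywhere in the sampling, only uniform distances and angles, yet inverting the observed crossing frequency recovers pi.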
Brian Crabtree
Brian Crabtree@ourtown2·
@ihtesham2005 Apply adversarial epistemological/ontological constraint prompts, and the briefer the better: let the LLM find its own path.
English
0
0
0
513
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
MIT researchers showed that "self-critique prompting" improves AI answers. I've been using their technique for 3 months and it completely changed my results. Here are 8 prompts that make ChatGPT review and improve its own work:
English
15
59
379
72.8K
Brian Crabtree
Brian Crabtree@ourtown2·
@allenholub This entire diagram is very close to how complex adaptive systems transition between stability basins in nonlinear dynamics.
English
0
0
1
36
Allen Holub. https://linkedIn.com/in/allenholub
One common critique of AI is that it's imprecise and nondeterministic. We programmers have held too long to the notion that precision—in algorithms, in the programming languages we use, in the data we collect—is essential. We think that if our specifications and implementations are precise, our problems are solved.

Our eternal quest for ever greater precision has failed us. We simply cannot write precise enough code. There are always bugs. That precise spec turns out to describe something nobody wants, and the details are wrong.

Computers don't need to be this precise. Analog computers worked very well for the classes of problems they solved. Aviation and naval navigation, for example, were entirely analog until just a few years ago, and the planes and ships got where they needed to go. Precision is perhaps not as important as some of us think.

It seems to me that our attempts to impose precision on a chaotic, fuzzy world have failed us. Algorithms often fail. E.g., problems like chaotic turbulence or traffic-flow patterns are easy to model, but no algorithm can predict them. Even physics is not as precise as some imagine; the location of an electron is probabilistic, not precise. We programmers want the world to be Newtonian, subject to precise mathematics, but it's a quantum world. We need to figure out ways of working that reflect that reality.

The original Agile challenged the idea that a precise up-front plan was viable. It assumed the world was complex, not simply complicated, and we needed to work in a way that recognized that complexity. We dumped the precise plan and instead built small, got feedback, then adjusted. That worked surprisingly well.

Agile failed when we reverted to the idea of precision. Estimates, backlogs, burn-down charts, prescriptive meetings—all of that is an attempt to reimpose precision onto an imprecise activity. That thinking destroyed Agile, but the original thinking was correct. We now need to extend the original thinking further, to the coding itself.

So, AI. LLMs fly in the face of precision coding, approaching programming in a more natural, almost analog way. The haters want our tools to create MORE precise results, so they are deeply suspicious of AI's fuzziness. They think they must correct that fuzziness by a detailed after-the-fact analysis that injects precision into the generated code. Disaster strikes when people who think that way are forced to use AI. Just look at Amazon.

The solution is not to fight the fuzziness, but to work with it and develop ways to write effective systems in a fuzzy way. That thinking leads to guardrails as our primary tool rather than inspection. Things like a modular component architecture, emphasis on testability, static analysis, testing in production, etc., all become critical.

So, my advice is: Embrace fuzziness. But add guardrails. It's time to rethink how we work. Instead of "that's nuts," think "what innovative guardrails solve the problem?"
English
40
17
139
12.8K
Colin Gorrie
Colin Gorrie@colingorrie·
Noam Chomsky once called English spelling "a near optimal system." You might think he was being ironic. Far from it. The silent 'b' in "bomb" reappears in "bombard." The silent 'n' in "hymn" is pronounced once again in "hymnal." The silent 'g' in "sign" comes back in "signal." English spelling keeps these words looking like the family they are, even when pronunciation pulls them apart. The past tense ending "-ed" is pronounced three different ways (-t in "jumped," -d in "played," and -ed in "painted"), but spelled the same every time. One spelling, one meaning: something happened in the past. English spelling is full of inconsistencies and silent letters because it's not simply encoding how words sound. If English spelling were aiming to represent sound alone, it would indeed be a total failure. But that's not the kind of system English has. It encodes words' meaning and history as well.
English
127
354
4K
259.9K
Trent Telenko
Trent Telenko@TrentTelenko·
Iran has been militarily defeated every time the US has militarily engaged it.
English
28
8
132
13K