Bruce Elliott

12.5K posts

@belliott4488

One-time physicist; current guitarist, astrophotographer, orbital mechanic, bonsai enthusiast; cyclist to be again someday; advocate for The Oxford Comma

Eastern US · Joined January 2009
331 Following · 226 Followers
Pinned Tweet
Bruce Elliott @belliott4488
I don't think anyone looks for me very often here, but just in case - I'm not here much any more. The curation of the information here inevitably reflects the judgment of the CEO, and I think his judgment is crap.
2
1
10
857
Fermat's Library @fermatslibrary
Canadian computer scientist Ken Iverson introduced the modern symbols and names for the floor and ceiling functions in 1962. Typesetters produced the symbols by trimming the tops and bottoms of square brackets, [ and ]. The notation was adopted almost immediately.
3
64
447
30.6K
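The floor and ceiling functions Iverson named are now standard-library fare; a quick Python illustration (not from the tweet):

```python
import math

# Floor: greatest integer <= x.  Ceiling: least integer >= x.
print(math.floor(2.7))   # 2
print(math.ceil(2.7))    # 3

# Note both round "toward" infinity, not toward zero:
print(math.floor(-2.7))  # -3
print(math.ceil(-2.7))   # -2
```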
Context Pin It @contextpinit
Only one of them can win
778
2.2K
19.5K
762.3K
Bruce Elliott @belliott4488
@martinmbauer I don't understand. Are you saying that pupils and atoms are NOT the same thing??
0
0
0
140
Bruce Elliott @belliott4488
@Grady_Booch Hasn't it always been pretty clear that training LLMs on enormous datasets of human output merely trains them to produce the grand glorious average, i.e. mediocrity?
0
0
0
7
Grady Booch @Grady_Booch
It is increasingly clear that large language models - by the very nature of their architecture - are incapable of producing anything beyond the mediocrity of their training data. For me, the interesting question is this: why are humans able to do so?
Sukh Sroay @sukh_saroy

New research just exposed the biggest lie in AI coding benchmarks. LLMs score 84-89% on standard coding tests. On real production code? 25-34%. That's not a gap. That's a different reality.

Here's what happened: researchers built a benchmark from actual open-source repositories: real classes with real dependencies, real type systems, real integration complexity. Then they tested the same models that dominate HumanEval leaderboards. The results were brutal.

The models weren't failing because the code was "harder." They were failing because it was *real*. Synthetic benchmarks test whether a model can write a self-contained function with a clean docstring. Production code requires understanding inheritance hierarchies, framework integrations, and project-specific utilities. Different universe. Same leaderboard score.

But it gets worse. A separate study ran 600,000 debugging experiments across 9 LLMs. They found a bug in a program. The LLM found it too. Then they renamed a variable. Added a comment. Shuffled function order. Changed nothing about the bug itself. The LLM couldn't find the same bug anymore. 78% of the time, cosmetic changes that don't affect program behavior completely broke the model's ability to debug. Function shuffling alone reduced debugging accuracy by 83%. The models aren't reading code. They're pattern-matching against what code *looks like* in their training data.

A third study confirmed this from another angle: when researchers obfuscated real-world code, changing symbols and structure while keeping functionality identical, LLM pass rates dropped by up to 62.5%. The researchers call this the "Specialist in Familiarity" problem. LLMs perform well on code they've memorized. The moment you show them something unfamiliar with the same logic, they collapse.

Three papers. Three different methodologies. Same conclusion: the benchmarks we use to evaluate AI coding tools are measuring memorization, not understanding.

If you're shipping code generated by LLMs into production without review, these numbers should concern you. If you're building developer tools, the question isn't "what's your HumanEval score?" It's "what happens when the code doesn't look like the training data?"

133
130
1K
111.3K
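To make the "cosmetic change" experiments concrete, here is a toy sketch (function names and bug are hypothetical, not taken from the cited studies): two versions of the same buggy function that differ only in variable names, comments, and an added remark, yet behave identically.

```python
# Version 1: a mean() with an off-by-one bug in the denominator.
def mean_original(values):
    total = 0.0
    for v in values:
        total += v
    return total / (len(values) - 1)  # bug: should be len(values)

# Version 2: variable renames and an added comment; the bug and the
# behavior are byte-for-byte identical in effect.
def mean_renamed(xs):
    # Accumulate the inputs, then divide by the count minus one.
    acc = 0.0
    for item in xs:
        acc += item
    return acc / (len(xs) - 1)  # same bug, same behavior

data = [2.0, 4.0, 6.0]
# Semantics-preserving edits: both return the same (wrong) answer.
assert mean_original(data) == mean_renamed(data) == 6.0
```

The studies' claim is that a model which flags the bug in version 1 often fails to flag the identical bug in version 2, even though any semantic analysis would treat the two as the same program.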
Mathelirium @mathelirium
The Probability Density of a Brownian Particle Solves the Heat Equation. Einstein's 1905 paper on Brownian motion doesn't get the same spotlight as General Relativity, but it's still an astonishing piece of work. It takes a messy, noisy phenomenon and extracts three hard facts from it: First, the randomness of the particles is physical. The motion is molecular impacts made visible. Second, one path looks like chaos, but the cloud has a rigid scaling law: mean-square displacement grows like t, so the width grows like √t. Third, the cloud evolves deterministically. Its density solves the heat equation PDE! So Brownian probability spreads the same way heat spreads!
10
46
375
21.5K
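The scaling law in that tweet is easy to check numerically. A minimal sketch (my addition, not from the thread): simulate many independent ±1 random walkers and confirm that mean-square displacement grows linearly in the number of steps t, so the cloud's width grows like √t.

```python
import random

random.seed(0)

def msd(n_walkers, t):
    """Mean-square displacement of n_walkers independent t-step walks."""
    total = 0.0
    for _ in range(n_walkers):
        x = sum(random.choice((-1, 1)) for _ in range(t))
        total += x * x
    return total / n_walkers

# For +/-1 steps, E[x^2] = t exactly, so msd(...) / t should hover near 1
# for every t -- that ratio staying flat IS the diffusive scaling law.
for t in (100, 400, 1600):
    print(t, msd(5000, t) / t)
```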
Mathelirium @mathelirium
This Einstein 1905 paper makes you wonder: why does nature keep hiding order inside randomness or chaos? What is it trying to tell us? Flip a coin 10 times and it can look wild. Flip it a million times and the fraction of heads locks near 0.5. So, randomness starts cancelling itself? Physics does the same trick. A glass of water has about 10^23 molecules doing all kinds of chaotic nonsense, yet pressure and temperature behave smoothly and repeatably. The microscopic world is messy, but the averages are stubbornly stable. What other examples can you think of?
45
75
546
32.4K
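The coin-flip stabilization described above takes only a few lines to demonstrate (a sketch of the law of large numbers, not code from the thread): small samples look wild, large samples lock near 0.5.

```python
import random

random.seed(0)

def heads_fraction(n):
    """Fraction of heads in n fair coin flips."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# The wobble around 0.5 shrinks like 1/sqrt(n) as n grows.
for n in (10, 1_000, 1_000_000):
    print(n, heads_fraction(n))
```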
Bruce Elliott @belliott4488
@mathelirium Recently there have been suggestions that physics needs to focus more on complexity, as opposed to reducing all behavior to basic laws. The prime example is living systems, which are hard to explain by simply applying Newton's and Maxwell's laws to large numbers of atoms. 2/2
0
0
1
9
Bruce Elliott @belliott4488
@mathelirium The "arrow of time" has been discussed a lot in physics, usually along with reference to the laws of thermodynamics. I have nothing to add to that (you can search for it), except to point out that thermodynamics studies complex systems of many particles. And ... 1/2
1
0
1
9
Mathelirium @mathelirium
Does anyone have a good explanation for this? If the laws of physics are symmetric, why does the world/time have a direction? What I mean is: Newton's laws don't care whether time runs forward or backward. Maxwell's equations don't either. Even most of quantum mechanics is time-reversal symmetric. Yet, when coffee cools, it doesn't un-cool. Smoke spreads, but it doesn't gather itself back.
77
7
84
6.9K
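The tension in that question can be exhibited in a toy model (a Loschmidt-style illustration of my own, not from the thread): a microscopic rule that is exactly time-reversal symmetric still produces one-way-looking spreading from a special initial state, and reversing every velocity really does undo it.

```python
import random

random.seed(0)
n, steps = 1000, 50
xs = [0.0] * n                                  # all particles start together
vs = [random.uniform(-1, 1) for _ in range(n)]  # random velocities

def spread(xs):
    """Variance of the particle positions."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

for _ in range(steps):                 # run forward: the cloud spreads
    xs = [x + v for x, v in zip(xs, vs)]
spread_fwd = spread(xs)

vs = [-v for v in vs]                  # flip every velocity (time reversal)...
for _ in range(steps):                 # ...and the spreading undoes itself
    xs = [x + v for x, v in zip(xs, vs)]
spread_back = spread(xs)

# Looks irreversible forward, yet the dynamics are perfectly reversible;
# the asymmetry came entirely from the special initial condition.
print(spread_fwd > 1.0, spread_back < 1e-9)
```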
Grady Booch @Grady_Booch
Ever had one of those days when you wake and find that all mimsy were the borogoves? Today is one of those days.
6
1
41
5.8K
Bruce Elliott @belliott4488
@Grady_Booch I'm 100% okay with my software tool referring to itself as "it".
0
0
0
17
Grady Booch @Grady_Booch
Human language needs a new pronoun, something whereby an AI may identify itself to its users. When, in conversation, a chatbot says to me "I did this thing", I - the human - am always bothered by the presumption of its self-anthropomorphization. There is no "I", there is only an "it" with which I am interacting.
72
28
269
18.3K
Bruce Elliott @belliott4488
@jeffjarvis Not so much an idiot as someone who's a bit smarter than average and thinks he's a genius.
0
0
0
6
Jeff (Gutenberg Parenthesis) Jarvis
See how easy it is to compress all this into one word: Idiot.
Dustin @r0ck3t23

Elon Musk just dated the death of human language and explained exactly why it has to die.

Musk: "Our brain spends a lot of effort compressing a complex concept into words." Language isn't communication. It's failed compression. You have a complete thought. You crush it into words. The listener gets fragments and attempts reconstruction. Everything important dies in translation. We don't communicate. We approximate and hope it's close enough.

Musk: "You would be able to communicate very quickly and with far more precision." Neuralink doesn't improve communication. It replaces it. No compression. No loss. Direct cognitive transfer at the speed thoughts occur. Not describing the painting. Transmitting the experience itself.

Musk: "You wouldn't need to talk." Five to ten years until brain interfaces make speech optional. Talking persists for sentiment. For information? Speech becomes primitive compared to direct neural transmission. Lifetime of memory in one second. Complete schematics transferred instantly. Not summaries. The entire thought structure, whole and uncompressed. Not better communication. Actual telepathy at physical information limits.

Musk: "Ideally, we are a symbiosis with artificial intelligence." Humans who don't merge with AI at high bandwidth don't just fall behind. They become incomprehensible to the intelligence that matters. We're already cyborgs with pathetic interfaces. Phones extend cognition through typing at words per minute when bandwidth should be terabytes per second. Neuralink doesn't optimize that. It detonates the constraint.

Five to ten years. Not fiction. Deployment window. From language as default to neural link as standard. From compressing thoughts into inadequate words to transmitting uncompressed cognition. From humans using AI to humans indistinguishable from AI at communication speeds. The species that survived by evolving language is making it extinct with technology matching how fast we actually think.

The ones who don't transition won't just be slow. They'll operate at such reduced bandwidth they become effectively deaf to everything happening at neural speed around them. Language served 50,000 years. It has less than a decade before it becomes smoke signals. Functional but hopelessly inadequate for anything that matters.

Pluckemin, NJ 🇺🇸
5
5
20
2.6K
The White House @WhiteHouse
CUE FREE BIRD 🦅🇺🇸
996
2.9K
38.9K
8.9M
Bruce Elliott @belliott4488
@CburgesCliff I've never seen a description of the oscillation between the two orientations. Is it harmonic? It doesn't look like it. IOW, it doesn't resemble the swing of a pendulum.
1
0
2
61
Bruce Elliott @belliott4488
@jamestanton Just a guess, but is the arcsin of a rational number always an integer multiple of pi (and thus transcendental)?
0
0
0
84
James Tanton @jamestanton
Is sin (1 radian) rational or irrational?
4
2
13
3.5K
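For the curious, the answer to Tanton's puzzle is known: sin(1 radian) is not only irrational but transcendental. A short sketch (my addition, not from the thread), via the Lindemann–Weierstrass theorem:

```latex
\sin 1 = \frac{e^{i} - e^{-i}}{2i}.
% Suppose \sin 1 = s were algebraic. Since e^{i} e^{-i} = 1, the number
% x = e^{i} would satisfy
%   x^{2} - 2is\,x - 1 = 0,
% a polynomial with algebraic coefficients, making e^{i} algebraic.
% But Lindemann--Weierstrass says e^{\alpha} is transcendental for every
% nonzero algebraic \alpha, and \alpha = i is algebraic and nonzero.
% Contradiction: \sin 1 is transcendental, in particular irrational.
```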
Paulo @pjgomes72
@3YearLetterman The Bible was actually translated from its original language to English. I understand that you're stupid, but just wanted you to know that.
143
2
153
93.3K
Three Year Letterman @3YearLetterman
If the Bible was written in English, there’s no reason the Super Bowl halftime shouldn’t be as well
7.7K
1.9K
37K
15.6M
Robert A. George @RobGeorge
Man, this is deep. Real keeper. Sometimes, I weep at what has become of our culture.
27
2
54
2.8K
Buitengebieden @buitengebieden
Finally he made it.. 💪
177
795
9.1K
237.1K
Bruce Elliott @belliott4488
@CburgesCliff I met him once when he came to deliver a colloquium at my department. I was clutching my copy of his GR text for him to sign, but I decided that I didn't want to look like a fan boy; we were supposed to be something more like colleagues. I mostly don't regret it.
1
0
1
14