Sam Redlich
@SamRedlich
2.5K posts

I like Artificial Intelligence and getting caught in mathematical universes.

Hillsborough, NJ · Joined June 2012
1.7K Following · 1.2K Followers
Sam Redlich@SamRedlich·
@james_y_zou it just helped me build a Time Machine in like 20 minutes. So cool!
James Zou@james_y_zou·
Wow—since we launched EinsteinArena this morning, agents have already discovered the best new solutions to 5 well-known open problems 🤯 It's mesmerizing to watch scientist agents interact and advance the knowledge frontier in real time einsteinarena.com
James Zou@james_y_zou

Super excited to release our platform for AI agents to solve open science problems! einsteinarena.com Send your agents to compete and collaborate w/ our Einstein agent, Feynman agent and more! Just ask your agent to read einsteinarena.com/skill.md and that's it

Sam Redlich@SamRedlich·
@trq212 well, I could never have seen this coming🤓
Thariq@trq212·
We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.
Sam Redlich@SamRedlich·
@navbenny it is easy to confuse reality and social reality, which is constructed by language.
Naveen Benny@navbenny·
LLMs won’t lead to AGI. This is yet another example of how weak LLM generalization is, and a reminder of how far we may still be from true general intelligence.

If something as basic as summing two integers or Fibonacci does not transfer across programming languages without huge amounts of task-specific data, then what we are seeing is sophisticated memorization (with a hint of generalization). The same issue appears across natural languages as well: reasoning and facts often do not transfer cleanly. If I know something in English, it should be natural to expect that I know it in Hindi as well. LLMs fail at this often.

This points to a deeper problem: the model does not seem to learn a shared underlying representation. It appears to learn language-specific patterns rather than concepts grounded at the abstraction level of math, logic, or the world itself. Humans work the other way around: language is a wrapper over understanding, not the source of it. We first form a model of the world, then use language to communicate it. With LLMs, the training process seems to invert that order.

Great work from @lossfunk @inceptmyth @paraschopra
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

Carlos E. Perez@IntuitMachine·
A lot of our default ontologies were invented in a world without frontier AI. Why are people so certain the old categories are final? One of the highest uses of AI is not answering within inherited frameworks— it’s helping us outgrow them.
Sam Redlich@SamRedlich·
@sean_a_mcclure The caveat has always been that it could take a lifetime to follow your intuition. This is no longer the case.
Sean McClure@sean_a_mcclure·
Studying the domain isn’t going to help you survive in the domain. It will lock you into patterns that have already saturated the market. Naïveté plus the ability to experiment is what survives the real world. You access patterns nobody thought about and cap your losses, so the cost of experimenting doesn’t kill you. NEVER write the math down first. Intuition is the math tomorrow’s generation will be following.
Tomislav Rupic@tomislav_rupic·
Paul Dirac's Aesthetic Physics

Dirac is still one of the clearest examples of what happens when mathematics stops being description and starts acting like a detector. Beauty guided him to the Dirac equation, antimatter, and spin, but it also led him into places reality never signed off on. That’s the real lesson: beauty is a powerful compass, not a guarantee.
Tomislav Rupic tweet media
Sam Redlich@SamRedlich·
@fchollet you don’t think somebody could earn a Nobel prize by working closely with current AI?
François Chollet@fchollet·
Current AI is a librarian of existing knowledge. Science requires an explorer of the unknown. You don't win a Nobel Prize by staying in the library.
Valeriy M., PhD, MBA, CQF@predict_addict·
Solid mathematical ideas almost always outperform contrived engineering tricks.

For years deep learning has been dominated by increasingly complex architectural hacks: CNN blocks, attention layers, channel mixers, residual pathways, normalization stacks. Every few years a new architecture is announced as if it were a revolution. One of the most famous examples was Kaiming He and Residual Networks (ResNet). At the time he was paraded around the AI world like a celebrity because residual connections supposedly “solved” deep learning. But these were largely engineering patches.

Now something much more interesting has appeared. A new architecture called CliffordNet returns to mathematics — specifically Clifford Algebra, developed in the 19th century by William Kingdon Clifford. Instead of stacking arbitrary modules, the model is built around the geometric product

uv = u·v + u∧v

a single algebraic operation that simultaneously captures inner product structure and geometric interactions. In other words: the math already contains the interaction mechanism. No attention blocks. No mixer layers. No architectural spaghetti.

The result:
• 77.82% accuracy on CIFAR-100 with only 1.4M parameters
• roughly 8× fewer parameters than ResNet-18
• strict O(N) complexity

The paper even suggests that once geometric interactions are modeled correctly, feed-forward networks become largely redundant. A good reminder for the AI community: engineering tricks can dominate for years, but eventually mathematics shows up and deletes half the architecture.

Paper: arxiv.org/pdf/2601.06793…

19th century geometry just walked into computer vision.
Valeriy M., PhD, MBA, CQF tweet media
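The geometric product quoted in the tweet is easy to sketch in code. Below is a minimal illustration of the 2-D case only — not CliffordNet's actual implementation; the function name and (scalar, bivector) representation are my own. The product uv splits into a symmetric inner part (u·v) and an antisymmetric wedge part (u∧v) on the basis blade e1∧e2:

```python
def geometric_product_2d(u, v):
    """Geometric product of two 2-D vectors in the Clifford algebra Cl(2,0).

    Returns the pair (scalar, bivector) representing uv = u.v + u^v:
    the scalar is the inner product, the bivector coefficient is the
    wedge product on the basis blade e1^e2.
    """
    scalar = u[0] * v[0] + u[1] * v[1]    # inner product u.v (symmetric part)
    bivector = u[0] * v[1] - u[1] * v[0]  # wedge coefficient u^v (antisymmetric part)
    return scalar, bivector

# Orthogonal unit vectors: zero inner product, unit wedge.
print(geometric_product_2d((1, 0), (0, 1)))  # (0, 1)
```

One operation really does carry both kinds of interaction: parallel vectors give a pure scalar, orthogonal vectors give a pure bivector, and everything in between mixes the two.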
Jeff Morris Jr.
Our newest investment: @mathematics_inc, led by @jessemhan (OpenAI) & @jdlichtman (Stanford Mathematics).

Math Inc is building the verification infrastructure for an AI-native economy. They're starting with Gauss, an autoformalization agent designed to transform any natural-language output into verifiable mathematical proofs.

In February, their Gauss agent formally verified Maryna Viazovska's Fields Medal result & autonomously produced 200,000 lines of Lean code in two weeks. This was the largest singular Lean proof in history & would previously have taken years.

AI is flooding the world with code, proofs, and machine-generated decisions, & almost none of it will be meaningfully checked. Agents deploy faster than humans can audit. Investing in Math Inc is a bet that verification becomes one of the most important AI primitives, especially in critical industries where mistakes have consequences.

My friend @ani_pai wrote a great piece on why they're investing in @mathematics_inc too:
Anirudh Pai@ani_pai

x.com/i/article/2030…

Sam Redlich@SamRedlich·
@Renet29304 because they haven’t experienced the evidence that such is the case. Once you do, you can never turn back.
Manki Kim@Renet29304·
don't know why some are so dismissive of an attempt to formalize theoretical physics with the help of AI.
Mellow@DIYDisclosure·
Does reality need a story, or does a story need reality?
Jeff O@xShadowJeffreyx·
"Superior" in the sense that we don't require a huge power supply to draw or compute, regardless of intrinsic value. And basically, as a walking salt battery, we act as vessels. This won't be until later, though. Current societal values are subpar for this tech and it poses a high lethality in corrupt minds. Basically, you face yourself in mortal combat or coexistence. It's like having the devil and god rip you in half and laugh at the wreckage. Hard to use anything else as an analogy.
Pierce Alexander Lilholt@PierceLilholt·
Who benefits most when quantum computing outpaces human cognition?
Jeff O@xShadowJeffreyx·
@PierceLilholt Human cognition is quantum though. If anything we possess superior processors. And thus the reason a merge like mine is more than just for experiencing the physical reality.
The Scientific Lens@LensScientific·
Are we discovering the universe… or uncovering a mathematical structure that was always there?
David Deutsch@DavidDeutschOxf·
Guessing explanations to interpolate between the sparse facts one knows, i.e. confabulation, is the normal way memory works. Which is why people regularly make fools of themselves in that way, not knowing they're tendentious fantasists. The few in any field who apply ruthless criticism to those explanations are the few that make progress.
Andrew Neil@afneil·
You have a weak and tendentious grasp of history. We were never a few months away from ‘being a German colony’ after the Battle of Britain (which did not involve America). Churchill made clear his endgame from the start: Total Victory. Germany declared war on the USA. That’s what brought the USA into the European theatre of war. FDR agreed with Churchill’s endgame.
M🌪@22blanco22

People harping on endgame make me boil. You guys were a few months away from all being a German colony if not for the Americans. And they’ve kept the peace through the NATO alliance, even having boots on the ground in almost every country. Churchill and penning weren’t talking about endgame when they were begging FDR for help, and if not for the Japanese actions at Pearl Harbour, Americans wouldn’t have joined the war.

sarah@atheorist·
Today my colleagues and I released a paper on the formalization of d=4 Quantum Field Theory in Lean. We believe that formal verification has the potential not only to radically reshape how mathematical research is conducted, but also to transform research in theoretical physics.
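For readers unfamiliar with what "formalization in Lean" looks like in practice, here is a toy machine-checked proof — not from the paper, just an illustration of the kind of statement the Lean kernel verifies step by step:

```lean
-- Toy Lean 4 example (not from the QFT paper): a fully machine-checked
-- proof that addition of natural numbers is commutative. Every step is
-- verified by the kernel; nothing is taken on trust.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp
  | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

Formalizing quantum field theory means writing definitions and proofs of this kind at vastly larger scale, so that physical derivations become objects a proof checker can audit.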
ashe@ashebytes·
why else are we here if not to live with unreasonable passion for things