Laurent F.

56 posts

@hellheff

@Databricks. Interpretative dancing of quantum field theory is my main hobby. Opinions are my own.

San Francisco, CA · Joined September 2014
880 Following · 92 Followers
Laurent F. retweeted
Nikita | Scaling Postgres@nikitabase·
Remember the days when you had to guess how much storage your database needed, and it would just break if you used too much?
Laurent F.@hellheff·
Grothendieck once said he didn’t attack problems head-on; instead, he’d ‘build a new house’ so vast that the problem became just a small room inside. That’s vision. LLMs, by contrast, can only rearrange the furniture in whatever house already exists. I believe @edfrenkel and @EricRWeinstein are pushing for higher standards than remodeling!
Laurent F. retweeted
Edward Frenkel@edfrenkel·
This is an unwise statement that can only make people confused about what LLMs can or cannot do. Let me tell you something: Math is NOT about solving this kind of ad hoc optimization problem. Yeah, by scraping available data and then clustering it, LLMs can sometimes solve some very minor math problems. It's an achievement, and I applaud you for that. But let's be honest: this is NOT the REAL Math. Not by 10,000 miles.

REAL Math is about concepts and ideas - things like "schemes", introduced by the great Alexander Grothendieck, who revolutionized algebraic geometry; the Atiyah-Singer Index Theorem; or the Langlands Program, tying together Number Theory, Analysis, Geometry, and Quantum Physics. That's the REAL Math. Can LLMs do that? Of course not. So, please, STOP confusing people - especially given the atrocious state of our math education.

LLMs give us great tools, which I appreciate very much. Useful stuff! Go ahead and use them AS TOOLS (just as we use calculators to crunch numbers or cameras to render portraits and landscapes), an enhancement of human abilities, and STOP pretending that LLMs are somehow capable of replicating everything that human beings can do. In this one area, mathematics, LLMs are no match for human mathematicians. Period. Not to mention many other areas.

Calling on my friend @EricRWeinstein and @GaryMarcus, who has been one of the few sane expert voices on these matters lately. 🙏 h/t @hellheff
Sebastien Bubeck@SebastienBubeck

Claim: gpt-5-pro can prove new, interesting mathematics. Proof: I took a convex optimization paper with a clean open problem in it and asked gpt-5-pro to work on it. It proved a better bound than the one in the paper, and I checked the proof; it's correct. Details below.

Eric Weinstein@EricRWeinstein·
Imagine you woke up tomorrow and 5 reports were released leaving no question as to the truth: A) COVID's origin. B) Epstein's IC connection, wealth, activity and death. C) JFK's Assassination. D) UFO/UAP. E) Press complicity in statecraft narratives. What happens the day after?
Laurent F.@hellheff·
@DrJimFan John von Neumann would disagree with "Data has historically been seen as separate from compute". Maybe we just forgot. Maybe new facts/papers have more in common with old books than we first envisioned.
Jim Fan@DrJimFan·
Machines will train machines. Never bet against scaling. Never.
Andrej Karpathy@karpathy

I don't have too much to add on top of this earlier post on V3, and I think it applies to R1 too (which is the more recent, thinking equivalent).

I will say that Deep Learning has a legendary, ravenous appetite for compute, like no other algorithm that has ever been developed in AI. You may not always be utilizing it fully, but I would never bet against compute as the upper bound for achievable intelligence in the long run. Not just for an individual final training run, but also for the entire innovation/experimentation engine that silently underlies all the algorithmic innovations.

Data has historically been seen as a separate category from compute, but even data is downstream of compute to a large extent - you can spend compute to create data. Tons of it. You've heard this called synthetic data generation, but less obviously, there is a very deep connection (an equivalence, even) between "synthetic data generation" and "reinforcement learning". In the trial-and-error learning process in RL, the "trial" is the model generating (synthetic) data, which it then learns from based on the "error" (/reward). Conversely, when you generate synthetic data and then rank or filter it in any way, your filter is straight up equivalent to a 0-1 advantage function - congrats, you're doing crappy RL.

Last thought. Not sure if this is obvious. There are two major types of learning, in both children and in deep learning: 1) imitation learning (watch and repeat, i.e. pretraining, supervised finetuning), and 2) trial-and-error learning (reinforcement learning). My favorite simple example is AlphaGo: 1) is learning by imitating expert players; 2) is reinforcement learning to win the game. Almost every single shocking result of deep learning, and the source of all *magic*, is always 2. 2 is significantly, significantly more powerful. 2 is what surprises you. 2 is when the paddle learns to hit the ball behind the blocks in Breakout. 2 is when AlphaGo beats even Lee Sedol. And 2 is the "aha moment" when DeepSeek (or o1, etc.) discovers that it works well to re-evaluate your assumptions, backtrack, try something else, etc. It's the solving strategies you see this model use in its chain of thought. It's how it goes back and forth, thinking to itself. These thoughts are *emergent* (!!!), and this is actually seriously incredible, impressive, and new (as in publicly available and documented, etc.). The model could never learn this with 1 (by imitation), because the cognition of the model and the cognition of the human labeler are different. The human would never know how to correctly annotate these kinds of solving strategies, or what they should even look like. They have to be discovered during reinforcement learning as empirically and statistically useful towards a final outcome.

(Last last thought/reference, this time for real: RL is powerful, but RLHF is not. RLHF is not RL. I have a separate rant on that in an earlier tweet: x.com/karpathy/statu…)

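Karpathy's claim that a synthetic-data filter is "straight up equivalent to a 0-1 advantage function" can be made concrete with a minimal Python sketch. This is an illustration only, not code from the thread: `model.generate`, `model.log_prob`, and `reward_fn` are hypothetical stand-ins. Keeping only the samples that pass a filter and training on them with plain cross-entropy is exactly REINFORCE with advantage 1 on kept samples and 0 on discarded ones.

```python
# Minimal sketch: rejection-sampling finetuning as RL with a 0-1 advantage.
# Hypothetical interfaces - `model.generate`, `model.log_prob`, `reward_fn`
# are stand-ins, not any specific library's API.
import torch


def filtered_finetune_step(model, prompts, reward_fn, optimizer):
    """One step of 'generate synthetic data, filter it, train on the keepers'.

    The policy-gradient loss -A * log p(sample) with a 0-1 advantage
    (A=1 for kept samples, A=0 for discarded ones) reduces to plain
    cross-entropy on the samples that survive the filter - i.e. this
    is REINFORCE with a binary advantage.
    """
    samples = [model.generate(p) for p in prompts]       # RL's "trial"
    kept = [s for s in samples if reward_fn(s) > 0.0]    # RL's "error"/reward
    if not kept:
        return  # nothing passed the filter this round
    loss = -torch.stack([model.log_prob(s) for s in kept]).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```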
Laurent F.@hellheff·
@jason_kint Hi Jason, it's a somewhat well-known document among experts, but it might be largely unknown to the general public.
Jason Kint@jason_kint·
Just found some eye-popping stuff in Google discovery under the examples of substantive chats (not deleted). You ready? It's between two key VPs (33+ yrs collectively working at Google) and strikes at the intersection of privacy and antitrust as they began cookie deprecation. /1
Andrew Tate@Cobratate·
I will give away 100,000 dollars tonight. Simply retweet and comment below.
Troy Hunt@troyhunt·
Quick favour: I've been working on the issue of @haveibeenpwned emails sometimes going to spam for Office 365 or Outlook[.]com (or Hotmail) accounts. If you have one of these, could you try this feature and tweet a reply saying whether it went to junk or not: haveibeenpwned.com/NotifyMe
Laurent F. retweeted
Astropierre@astropierre·
A nanoparticle made up of barely 100 million atoms, trapped by lasers, has been cooled to the lowest temperature permitted by quantum mechanics: 12 millionths of a degree above absolute zero. #PhysiqueMésoscopique bit.ly/37I4T8I
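The "lowest temperature permitted by quantum mechanics" here is the temperature scale set by the trap's zero-point motion. A back-of-the-envelope check (the ~300 kHz trap frequency below is an assumed typical optical-tweezer value, not a figure from the tweet):

```latex
% Order-of-magnitude sketch; \Omega/2\pi \approx 300\,\mathrm{kHz} is an
% assumed typical tweezer frequency, not stated in the tweet.
\[
  T_{\min} \sim \frac{\hbar\,\Omega}{k_B}
  = \frac{1.05\times10^{-34}\,\mathrm{J\,s}\;\times\;2\pi\cdot 3\times10^{5}\,\mathrm{s^{-1}}}
         {1.38\times10^{-23}\,\mathrm{J/K}}
  \approx 1.4\times10^{-5}\,\mathrm{K},
\]
% i.e. on the order of ten microkelvin, consistent with the quoted
% 12 millionths of a degree above absolute zero.
```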
Laurent F. retweeted
Prof. Feynman@ProfFeynman·
"Man is the most insane species. He worships an invisible God and destroys a visible Nature. Unaware that this Nature he's destroying is this God he's worshipping." -- Hubert Reeves
Laurent F. retweeted
Massimo@Rainmaker1973·
It's possible to play a violin with a prosthetic arm: in addition to being a nurse and Paralympian swimmer, Manami Ito is also an accomplished musician and uses her skills together with the mechanical functions of her arm to play the violin buff.ly/2Qj0NM2
Laurent F. retweeted
Prof. Feynman@ProfFeynman·
There's a big difference between knowing the name of something and knowing something.
Laurent F. retweeted
Kevin Mitnick@kevinmitnick·
This article is about the grandmaster hacker of roulette. Very interesting! What do you think about it? Check it out: thehustle.co/professor-who-…
Laurent F. retweeted
Patrice RoBERT@skullpat·
🎂🍾🎉 Happy birthday to the Amstrad CPC 464: 35 years old today 🎉🍾🎂