Latent

817 posts

@RepresenterTh

PhD student in ML, inverse problems.

Essonne, Île-de-France · Joined February 2023
516 Following · 74 Followers
Latent @RepresenterTh ·
@francoisfleuret They'll have a comeback if nested learning makes it.
François Fleuret @francoisfleuret ·
BTW are hyper-networks a thing of the past?
Latent @RepresenterTh ·
@jeankaddour Stronger penalty for the model in blue (larger weight decay), or the model in black is just much bigger, explaining the rapid descent after warmup.
Jean Kaddour @jeankaddour ·
ML interview question: What is happening here?
[image]
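Latent's reading above — that a larger weight-decay coefficient acts as a stronger penalty pulling the weights toward zero — can be illustrated with a toy sketch. This is a minimal numpy illustration assuming a quadratic loss and decoupled (AdamW-style) decay; the function names and setup are illustrative, not from the thread:

```python
import numpy as np

def decayed_gd(wd, steps=200, lr=0.1, dim=10):
    # Gradient descent with decoupled weight decay (AdamW-style) on the
    # toy quadratic loss 0.5 * ||w - w_star||^2.
    rng = np.random.default_rng(0)
    w_star = rng.normal(size=dim)
    w = np.zeros(dim)
    for _ in range(steps):
        grad = w - w_star                # gradient of the quadratic term
        w = w - lr * grad - lr * wd * w  # decay pulls w toward the origin
    return np.linalg.norm(w)

# A larger decay coefficient biases the fixed point toward zero:
# here w converges to w_star / (1 + wd), so its norm shrinks as wd grows.
print(decayed_gd(0.0), decayed_gd(0.3))
```

With `wd = 0.3` the final weight norm is smaller by a factor of roughly 1.3 than the undecayed run, which is the "stronger penalty" effect Latent is pointing at.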
Latent @RepresenterTh ·
@johannesack Replace the KL with maximum mean discrepancy (Gretton et al.) using an RBF kernel.
Johannes Ackermann @johannesack ·
Tired of KL penalties constraining your model? But don't want your policy to just hack the reward? Try Gradient Regularization! We show it beats a KL penalty in RLHF, RLVR and LLM-as-a-Judge! 🧵1/7
[image]
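The substitution Latent suggests — an RBF-kernel maximum mean discrepancy (Gretton et al.) in place of a KL term — can be sketched with the biased two-sample MMD estimator. This is a minimal numpy illustration of the estimator itself, not the thread's method:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gram matrix of the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    sq_dists = ((X**2).sum(1)[:, None]
                + (Y**2).sum(1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased (V-statistic) estimate of squared MMD between samples X ~ p, Y ~ q.
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
shifted = mmd2(rng.normal(size=(500, 2)), rng.normal(loc=2.0, size=(500, 2)))
print(f"same dist: {same:.4f}, shifted: {shifted:.4f}")
```

Unlike a KL penalty, this estimate needs only samples from the two policies (no density ratios), which is presumably why it is attractive as a drop-in constraint.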
Latent @RepresenterTh ·
@fchollet Being a decimal and a limit of a series are not mutually exclusive.
François Chollet @fchollet ·
"0.999..." is bad notation. It's deliberately made to look like a decimal number, when in fact the "..." expresses the limit of a series. It should simply be noted as the limit of a series, in which case no one would question it's equal to 1. The nature of the trick is, "can you see through my misleading notation?" -- not particularly profound IMO
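The limit-of-a-series reading can be written out explicitly: the "..." denotes a geometric series with ratio 1/10, which sums exactly to 1:

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; \frac{9}{10}\cdot\frac{1}{1 - \tfrac{1}{10}} \;=\; 1 .
```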
Latent @RepresenterTh ·
@suchnerve Equality no, convergence yes. In maths two quantities cannot be equal if they don't share the same properties. 0.999 is not an integer.
Jasper Dekoninck @j_dekoninck ·
MathArena is 1 year old 🎉 A year ago we started out by publishing an evaluation of AIME 2025 I. Today, we evaluated AIME 2026 I, showing near 100% scores for the top models on this benchmark. A short thread about the past year 🧵
[image]
Latent @RepresenterTh ·
@emollick "Chinese open weight model remain 9 months behind" only of you trust useless and biased benchmarks funded by close source.
Ethan Mollick @emollick ·
Chinese open weights models remain the same 7-9 months behind, but o3 level is an achievement. I still don't get the business model of open weights companies as costs go up, however. You can't make money from services or selling ancillary products like traditional open source.
Epoch AI @EpochAIResearch

Kimi K2.5 set a new record among open-weight models on the Epoch Capabilities Index (ECI), which combines multiple benchmarks onto a single scale. Its score of 147 is about on par with o3, Grok 4, and Sonnet 4.5. It still lags the overall frontier.

Jason Lee @jasondeanlee ·
All frontier labs should be distilling gpt pro, same perf at gemini speeds would be amazing
Gaurav Venkataraman @gaurav_ven

@jasondeanlee It’s slow though. I usually accelerate by using Claude to basically help me write a detailed prompt / some responses and then have GPT pro fix it.

Latent @RepresenterTh ·
Intelligent systems are not rule followers (capacity). A system whose competence is entirely explained by reward maximization is not intelligent. An intelligent system does not need low level instructions to perform a task. An intelligent system cannot be goalless.
We Live to Serve @WeLivetoServe ·
you're missing the basic fact that *intelligence* refers, and can only refer, to *human intelligence.* No matter how specialized the human brain is, it IS the benchmark and is normalized out. "General" is relative to that, period. There are no extraterrestrial intelligences we can instead compare to.
Demis Hassabis @demishassabis ·
Yann is just plain incorrect here, he's confusing general intelligence with universal intelligence. Brains are the most exquisite and complex phenomena we know of in the universe (so far), and they are in fact extremely general. Obviously one can't circumvent the no free lunch theorem, so in a practical and finite system there always has to be some degree of specialisation around the target distribution that is being learnt. But the point about generality is that in theory, in the Turing Machine sense, the architecture of such a general system is capable of learning anything computable given enough time and memory (and data), and the human brain (and AI foundation models) are approximate Turing Machines. Finally, with regards to Yann's comments about chess players, it's amazing that humans could have invented chess in the first place (and all the other aspects of modern civilization, from science to 747s!), let alone get as brilliant at it as someone like Magnus. He may not be strictly optimal (after all he has finite memory and limited time to make a decision), but it's incredible what he and we can do with our brains given they were evolved for hunter gathering.
Haider. @slow_developer

Yann LeCun says there is no such thing as general intelligence. Human intelligence is super-specialized for the physical world, and our feeling of generality is an illusion. We only seem general because we can't imagine the problems we're blind to. "The concept is complete BS."

Latent @RepresenterTh ·
@demishassabis There may be layers of this world and the universe that we do not know about because our intelligence is not general enough. Again this isn't worth debating because there's not a general consensus on these terms.
Latent @RepresenterTh ·
@demishassabis In any case human intelligence is the most general form of intelligence we know of. Yann may see general intelligence at a lower level. Something more fundamental that transcends domains. After all humans are highly optimized (by design) to operate in the physical world.
leo 🐾 @synthwavedd ·
nobody is ready for veo 4
Latent @RepresenterTh ·
@jasondeanlee Do you genuinely think it's slightly worse 🤔.
Jason Lee @jasondeanlee ·
I would cancel my GPT Pro subscription if Gemini Deep Think became 100 queries a day. Or even 50. It's 10x faster, and only slightly worse quality. If I prompt 10x more, I'll get to what I want faster.
Latent @RepresenterTh ·
@chatgpt21 GPT Image is objectively worse than NB. Either too yellowish or too grayish, plus pictures that clearly look AI-generated.
Chris @chatgpt21 ·
And just like that OpenAI takes the 👑 to end off the year
[image]
Artificial Analysis @ArtificialAnlys ·
GPT Image 1.5 achieves both #1 in Text to Image and Image Editing in the Artificial Analysis Image Arena, surpassing Nano Banana Pro. GPT Image 1.5 is OpenAI's newest flagship image generation model, demonstrating improved image quality and prompt fidelity relative to earlier OpenAI image models. GPT Image 1.5 is priced per token, dependent on resolution and quality setting: at high quality, a 1MP image costs ~$133 per 1k images, and ~$9 per 1k images at low quality. See below for comparisons between GPT Image 1.5 and other leading models in the Artificial Analysis Image Arena 🧵
[image]
Cristian Garcia @cgarciae88 ·
just got accused of getting paid by google