Laurent Ach @ach3d
877 posts
CTO, leveraging artificial and human intelligence @ https://t.co/S7FTM8wYNA - https://t.co/x8nrqGm8Lw

Paris, France · Joined May 2009
580 Following · 411 Followers

Pinned Tweet
Laurent Ach @ach3d ·
There is nothing it is like to be GPT-3
Laurent Ach @ach3d ·
@FigaroVox @Aurelie_JEAN One shouldn't listen too much to Dario Amodei, who, among the heads of AI companies, is the one who leaves the most room for the possibility of machine consciousness.
Laurent Ach retweeted
François Chollet @fchollet ·
If you ever feel like you're late to the game, consider that in the 1890s many scientists thought physics as a field was completely solved (quote below is from Albert Michelson in 1894). On the front of intelligence science, it feels more like the 1870s. For the first time we have something that is starting to really work (however primitive it may be), which we can use as a springboard for the next few decades of discoveries.
[image: the Albert Michelson quote]
Laurent Ach @ach3d ·
@babgi This work also brings a very interesting perspective on the emergence of emotions and consciousness from the evolution of living organisms, rather than from information processing. It's so obvious once you read Antonio Damasio.
Gilles Babinet @babgi ·
I've just finished the book "Feeling and Knowing" by Antonio Damasio (the other one, not the French Damasio), which I had set out to read to try to better understand the debate between the defenders of LLMs and those who favor world models or neurosymbolic approaches.

Damasio defends a vision very close to that of @ylecun, explaining that in order to develop complex thought, the senses are essential complements to reasoning. He even goes further, arguing that the senses are not, as modern philosophy teaches us, factors that divert us from an objective understanding of reality, but rather the primary factors in understanding it.

I noted in particular this paragraph, which I find quite striking: "The exaggerated distinction between human and nonhuman coping abilities, generated by an exceptionalist approach to human faculties, is deeply flawed. It is grandiose when it comes to humans; it unjustifiably diminishes nonhumans; and it fails to acknowledge the interdependence and cooperativity of living creatures, from the microscopic level to the human. Ultimately, it fails to acknowledge the presence of powerful motifs, designs, and mechanisms manifest in nature since life began—and even in the physics and chemistry that preceded it—and, in all likelihood, at least partly responsible for the blueprint of cultural developments usually attributed to humans"

A book that is not always easy to read (a few paragraphs on brain organization, in particular, went largely over my head), but for anyone interested in how we reason, and beyond that in what makes us singular among living beings, it is truly fascinating.
Laurent Ach @ach3d ·
Stéphane Mallat is advancing our theoretical understanding of neural network models, with rigor and modesty, and with a fascinating, intelligent vision of AI that stands apart from what the tech world often settles for.
CNRS 🌍 @CNRS

By combining theoretical abstraction with concrete applications, Stéphane Mallat, winner of the 2025 CNRS gold medal, has left his mark on mathematics applied to computer science. From the JPEG 2000 image compression format to the foundations ... lejournal.cnrs.fr/articles/steph…
Laurent Ach @ach3d ·
This corresponds to the limits of combining symbolic and connectionist approaches: it is impossible to create an ontology of the whole world, and impossible to automatically generate all the explicit concepts we need for reasoning.
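A toy sketch of the tension this tweet points at; all names and data below are invented for illustration, not any real system. The symbolic side is explicit and auditable but only covers what someone wrote down; the connectionist side generalizes to unseen inputs, but hands back similarity rather than an inspectable concept.

```python
# Hand-built ontology: explicit, human-readable, and inevitably incomplete.
ontology = {
    "dog": {"is_a": "mammal", "can": ["bark", "run"]},
    "cat": {"is_a": "mammal", "can": ["meow", "run"]},
}

def symbolic_lookup(concept: str):
    # Fails closed: anything nobody wrote down is simply unknown.
    return ontology.get(concept, "UNKNOWN: not in the ontology")

# Stand-ins for learned embeddings (made-up 3-d vectors).
embeddings = {
    "dog": (0.9, 0.1, 0.2),
    "cat": (0.8, 0.2, 0.1),
    "wolf": (0.95, 0.05, 0.3),  # covered by the vector space, absent from the ontology
}

def nearest_neighbor(query: str) -> str:
    # Generalizes by similarity, but returns no explicit concept to reason over.
    qv = embeddings[query]
    candidates = [k for k in embeddings if k != query]
    return min(candidates, key=lambda k: sum((a - b) ** 2 for a, b in zip(embeddings[k], qv)))

print(symbolic_lookup("wolf"))   # UNKNOWN: not in the ontology
print(nearest_neighbor("wolf"))  # 'dog' -- useful, but not an explanation
```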
Laurent Ach @ach3d ·
World models are clearly missing in current LLMs, as @ylecun says, but it's unclear how future model architectures will balance useful emergent representations of the world with explicit concepts that are understandable to humans. techcrunch.com/2025/01/23/met…
Laurent Ach @ach3d ·
I suspect there is a Heisenberg-like law that limits how explicit concepts can be and how many can be used.
Laurent Ach @ach3d ·
Try out Qwant's integration of generative AI into its search results: answer summaries and detailed answers, always citing the sources of information.
Qwant @Qwant_FR

✨ QWANT'S AI IS IN OPEN WEEK ✨ Our AI is available to everyone for one week! You no longer need an account (even though it's free 👀) to use it. It answers all your questions and queries in the blink of an eye. No more excuses not to give it a try ;)
Laurent Ach @ach3d ·
"Information is a matter of questions and answers, it's not an objective thing, information doesn't just sit there." ... "it entails a relationship between a subject and an object", brilliant remarks by @Mark_Solms youtu.be/meUtWtY00H8?fe…
Laurent Ach @ach3d ·
@rheimann Good analysis, but you ignore an argument I’ve sometimes heard, that our mind also does something like predicting the next word in a sentence, and so we could expect LLMs to have some intelligence similar to ours. The toughest arguments are often the most reductionist.
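For what the "predicting the next word" framing means at its most literal, here is a minimal sketch (the corpus and all names are invented): a bigram model, which reduces next-word prediction to counting successors. The open question raised in the tweet is whether scaling this kind of predictor up yields anything like our intelligence.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: the entire "model" is a successor table.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # Return the most frequently observed successor of `word`.
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # 'cat' (seen twice after "the", vs. 'mat' once)
```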
Laurent Ach @ach3d ·
I had the pleasure of participating in an interesting talk with @jonsvt and @brucel about developing technologies like web search engines and web browsers in Europe
Qwant @Qwant_FR

🎙️ We recorded a short podcast with our friends at @vivaldibrowser! Our CTO, @ach3d, and Jon von Tetzchner, CEO of Vivaldi, discuss how it is possible to build technologies that respect online privacy 🔐 youtu.be/EGaNjCLIH-k?si…
Laurent Ach @ach3d ·
Interesting thoughts on the Turing test by @MelMitchell1 science.org/doi/10.1126/sc… It was once believed that beating a human at chess required general intelligence. The story goes on, with AI mastering one task after another without any intelligence.
Laurent Ach @ach3d ·
"Attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it.” Great thoughts by @ShannonVallor on the usual comment "You don’t think that your brain is a machine?" this time by Yoshua Bengio noemamag.com/the-danger-of-…
Laurent Ach @ach3d ·
@ylecun information and computation are related to models of the world that can only exist at some particular scales in particular scientific and philosophical contexts. It’s naive to think that we can come up with a model that would explain everything.
Yann LeCun @ylecun ·
There are only 3 great scientific questions:
1. What's the universe made of?
2. What's life all about?
3. What is intelligence?

There are interesting sub-questions:
1.1 What's dark matter and dark energy?
1.2 How do you get "it from bit", to paraphrase John Wheeler?
1.3 What is the nature of time?
2.1 Is the emergence of life an intrinsic property of the universe?
2.2 How does complexity spontaneously form?
3.1 Is the emergence of intelligent behavior an intrinsic property of the universe?
3.2 How does intelligence spontaneously form? (which is why learning is so fascinating)

At the core of all of these questions is the nature of information and computation.
Andrew Côté @Andercot

@examachine @gargantuandwarf @ylecun What would be the greatest possible scientific discovery of our era, do you think?

Laurent Ach @ach3d ·
As usual, @fchollet gives the clearest and most concise explanation of the capabilities of LLMs. Everything is said, really!
François Chollet @fchollet

The question of whether LLMs can reason is, in many ways, the wrong question. The more interesting question is whether they are limited to memorization / interpolative retrieval, or whether they can adapt to novelty beyond what they know. (They can't, at least until you start doing active inference, or using them in a search loop, etc.)

There are two distinct things you can call "reasoning", and no benchmark aside from ARC-AGI makes any attempt to distinguish between the two.

First, there is memorizing & retrieving program templates to tackle known tasks, such as "solve ax+b=c" -- you probably memorized the "algorithm" for finding x when you were in school. LLMs *can* do this! In fact, this is *most* of what they do. However, they are notoriously bad at it, because their memorized programs are vector functions fitted to training data that generalize via interpolation. This is a very suboptimal approach for representing any kind of discrete symbolic program. This is why LLMs on their own still struggle with digit addition, for instance -- they need to be trained on millions of examples of digit addition, but they only achieve ~70% accuracy on new numbers. This way of doing "reasoning" is not fundamentally different from purely memorizing the answers to a set of questions (e.g. 3x+5=2, 2x+3=6, etc.) -- it's just a higher order version of the same. It's still memorization and retrieval -- applied to templates rather than pointwise answers.

The other way you can define reasoning is as the ability to *synthesize* new programs (from existing parts) in order to solve tasks you've never seen before. Like, solving ax+b=c without having ever learned to do it, while only knowing about addition, subtraction, multiplication and division. That's how you can adapt to novelty. LLMs *cannot* do this, at least not on their own. They can however be incorporated into a program search process capable of this kind of reasoning.

This second definition is by far the more valuable form of reasoning. This is the difference between the smart kids in the back of the class that aren't paying attention but ace tests by improvisation, and the studious kids that spend their time doing homework and get medium-good grades, but are actually complete idiots that can't deviate one bit from what they've memorized. Which one would you hire?

LLMs cannot do this because they are very much limited to retrieval of memorized programs. They're static program stores. However, they can display some amount of adaptability, because not only are the stored programs capable of generalization via interpolation, the *program store itself* is interpolative: you can interpolate between programs, or otherwise "move around" in continuous program space. But this only yields local generalization, not any real ability to make sense of new situations.

This is why LLMs need to be trained on enormous amounts of data: the only way to make them somewhat useful is to expose them to a *dense sampling* of absolutely everything there is to know and everything there is to do. Humans don't work like this -- even the really dumb ones are still vastly more intelligent than LLMs, despite having far less knowledge.

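To make the second kind of reasoning concrete, here is a minimal program-search sketch in the spirit of Chollet's own example (the example set and all names are invented here): it synthesizes a solver for a*x + b = c out of nothing but the four arithmetic primitives, by enumerating small expressions over the inputs and keeping one that fits a handful of worked instances. This is the kind of search process he says LLMs can be embedded in, not something their forward pass does on its own.

```python
from itertools import product

# The four primitives the synthesized program is allowed to use.
OPS = {
    "+": lambda u, v: u + v,
    "-": lambda u, v: u - v,
    "*": lambda u, v: u * v,
    "/": lambda u, v: u / v if v != 0 else float("nan"),
}

# Worked instances of the unseen task: (a, b, c, x) with a*x + b == c.
EXAMPLES = [(2, 3, 7, 2.0), (5, 1, 11, 2.0), (3, -3, 6, 3.0), (4, 2, 4, 0.5)]

def candidate_programs():
    # Enumerate depth-2 expressions over the inputs a, b, c:
    # each term is a leaf or one primitive applied to two leaves.
    leaves = ["a", "b", "c"]
    depth1 = [(op, x, y) for op, x, y in product(OPS, leaves, leaves)]
    for op, t1, t2 in product(OPS, leaves + depth1, leaves + depth1):
        yield (op, t1, t2)

def evaluate(term, env):
    if isinstance(term, str):  # a leaf: one of the inputs
        return env[term]
    op, t1, t2 = term          # an application of a primitive
    return OPS[op](evaluate(t1, env), evaluate(t2, env))

def synthesize():
    # Keep the first expression that reproduces x on every worked instance.
    for prog in candidate_programs():
        if all(abs(evaluate(prog, {"a": a, "b": b, "c": c}) - x) < 1e-9
               for a, b, c, x in EXAMPLES):
            return prog
    return None

print(synthesize())  # prints an expression that fits all examples, e.g. one equivalent to x = (c - b) / a
```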