Emmanuel Sciara (@esciara)
611 posts · Joined October 2007 · 93 Following · 141 Followers

Pinned Tweet
Emmanuel Sciara @esciara:
#GitHub #Copilot is a revolution! Yes, and let's not forget that it is a... *tool*, and that like any tool, what matters is knowing how to use it... Starting tomorrow and all this week, we'll be talking about how to master Copilot and get the most out of it.
[image]
French · 1 reply · 1 retweet · 4 likes · 133 views
jimmah @jamesdouma:
Improving FSD

I've been asked a few times about how and whether FSD can continue to get better without improvements to the hardware. This question is often asked in the context of a misunderstanding of how neural network optimization and improvement generally works, so I'm going to take a minute here to explain it.

Anyone who has worked on software will have some familiarity with the fact that if you put more effort into programming, you can achieve greater performance within the same end resource footprint. Modern software systems are very complicated and include a lot of layers of functionality which are present to provide isolation from the hardware limitations so that programming can be done with a set of abstractions that minimize the burden on the programmer and result in better software with less programmer labor. These software layers add significant overhead to the running of a program. So once a program has been made, if it turns out to be too resource intensive for the target application there are a lot of places where the design can be adjusted to reduce the end resource requirement at the cost of additional labor on the part of a programmer.

This process of getting a program to provide the same or better performance within a given set of hardware limitations is generally referred to as optimization. Optimization can include choosing faster low level libraries, tuning compiler optimizations, changing to a tighter development stack, bypassing some layers of the stack, switching to a language that compiles more efficiently, changing the high level abstractions used at the top level, and many other options. There are a large number of advanced programming tools that are used to assist programmers in optimizing their software, but all these tools take time and effort to employ in order to get a good end result and avoid compromising the final functionality.
Because modern hardware is so highly performant compared to the fundamental requirements of the tasks we program for, most software is written without the need for aggressive optimization. As a result almost all software could be dramatically sped up (10x, 100x, 1000x) if sufficient programmer labor is employed. The question of how much to optimize a system comes down to the tradeoff of the cost of faster hardware versus this programmer labor. For one-off applications better hardware is almost always cheaper. For programs that will run for millions or billions of CPU-hours more optimization is in order.

Neural networks sit upon a foundation of conventional software. The conventional software foundation is used to implement the abstractions that the neural network itself requires. Any program that includes a large neural network component can be optimized via the methods described above, but it can also avail itself of various improvements specific to neural networks. Much like the first draft of a conventional program, the initial implementation of a neural network is always going to be very far away from optimal in terms of its hardware requirements. This is true not just because any first draft is necessarily suboptimal but also because the field of neural network training is quite immature, and every month new methods are found for getting the same result with less computation.

The rate of methodological improvement is so fast it can be hard to believe. To give a sense of it: for a given level of performance we are seeing more than a 10x reduction in the hardware requirement for each year that passes. Better libraries, compilers, frameworks, and automated optimizers are part of this story, but additionally new methods of quantization and of distillation, new architectural innovations, new and better data curation methods, larger datasets of higher quality data, and new methods for automating hyper-parameter search and even gradient descent itself are discovered regularly.
So many powerful new methods have been uncovered in the last 12 months that everything running today will be 10x faster in a year even if we don't find anything new. But we will find new improvements because we have every…single…year for the last decade. The FSD running on HW3 today is almost certainly orders of magnitude better (faster / smaller / more performant) than the first versions that came out when the platform debuted. And it will continue to get dramatically better for as long as Tesla cares to continue investing developer resources in making it better. There is certainly a point at which it's cheaper to upgrade the millions of cars on the road than to invest the development cost needed to 10x the platform performance. But that day is not today and will probably not come for some time.

So why does HW4 exist if HW3 is adequate? Because silicon continues to get better and cheaper. After a few years it's actually cheaper to move to a newer, better device than to continue using the old one. Redesigning for a new IC process node (say 7nm rather than 14nm) requires substantial changes to an IC architecture because different IC elements shrink in different ways, so you can't just do a simple scale-down of a device. And if you're making a new device anyway you might as well include new learnings in the NN space as well, because eventually you will take advantage of them even if you don't need to exploit them immediately. With Tesla's continuous improvement process they would naturally want to periodically upgrade to a newer platform as better capabilities become cheaper. It's dumb to not take advantage of silicon foundry improvements. So we have HW4.

Eventually there will be versions of FSD that offer better functionality on HW4 than on HW3, but the additional capabilities of HW4 aren't needed yet and probably won't be for some time. There will be HW5, 6, 7. Just as we're on iPhone 15 now and there's no end in sight - FSD will be no different.
Because we are very, very far away from *done* with making AI, computers, or cars better. As for AI - we haven’t finished picking up the fruit that’s lying on the ground to say nothing of the vast amount of stuff that is “low hanging”.
English · 142 replies · 259 retweets · 1.8K likes · 322.3K views
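One of the NN-specific optimizations the thread names is quantization. A minimal sketch of the idea, using a hypothetical NumPy example (not from the thread): storing float32 weights as int8 plus a per-tensor scale cuts the memory footprint roughly 4x, at the cost of a bounded rounding error.

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a network.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

# Quantize: map the observed float range onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize when the weights are needed at inference time.
restored = q_weights.astype(np.float32) * scale

print(weights.nbytes // q_weights.nbytes)               # 4 (4x smaller)
print(float(np.abs(weights - restored).max()) < scale)  # True: error under one step
```

Real systems go further (per-channel scales, quantization-aware training), but the storage/accuracy tradeoff is the same one-line idea.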
Emmanuel Sciara @esciara:
It seems that before considering learning Rust 🦀, every dev should seriously hold off and take a look at Mojo 🔥, especially if you're already doing Python and are/want to be in AI. In all these cases, a must-read article.
Modular @Modular:

Inspired by @ThePrimeagen's epic video discussing Mojo 🔥 and Rust 🦀, and fueled by an electrifying community discussion ⚡️ - we have a new post up, and you won't want to miss this one! 😱 ⬇️ Mojo vs. Rust: is Mojo 🔥 faster than Rust 🦀 ? 🤔 ❤️‍🔥 modular.com/blog/mojo-vs-r…

French · 0 replies · 0 retweets · 0 likes · 55 views
Emmanuel Sciara retweeted
Martin Fowler @martinfowler:
NEW POST David Tan and Jessie Wang reflect on how regular engineering practices such as testing and refactoring helped them deliver a prototype LLM application rapidly and reliably. martinfowler.com/articles/engin…
English · 1 reply · 51 retweets · 193 likes · 55.5K views
Emmanuel Sciara @esciara:
3/ "Inline" mode: gives you a bit of both worlds, more practical for small questions about a specific portion of code, and provides diffs 2/2
French · 0 replies · 0 retweets · 0 likes · 2 views
Emmanuel Sciara @esciara:
Summarizing the excellent youtu.be/GPLUGJsVx0s?si…, which of these modes to use #GitHubCopilot in: 1/ Via comments: to give context, to create code (from scratch) 2/ Via chat: to ask a question, to use the commands (/fix, /explain, etc.) 1/2
French · 1 reply · 0 retweets · 1 like · 22 views
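What the comment-driven mode looks like in practice, as an illustrative sketch (hypothetical code, not actual Copilot output): you write a descriptive comment, and Copilot proposes an implementation along these lines underneath it.

```python
# Parse an ISO 8601 date string (YYYY-MM-DD) and return (year, month, day)
# as integers; raise ValueError on malformed input.
def parse_iso_date(text):
    parts = text.split("-")
    if len(parts) != 3:
        raise ValueError(f"not an ISO date: {text!r}")
    year, month, day = (int(p) for p in parts)
    return (year, month, day)

print(parse_iso_date("2023-10-09"))  # (2023, 10, 9)
```

The same request could go through chat ("write a function that parses an ISO date"), but the comment stays next to the code and keeps steering later suggestions in the file.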
Emmanuel Sciara @esciara:
3/ Clarity: use phrasing that is easy to understand 4/ Specificity: provide a high level of detail and precision 2/2
French · 0 replies · 0 retweets · 0 likes · 3 views
Emmanuel Sciara @esciara:
Summarizing the excellent youtu.be/GPLUGJsVx0s?si…, how to help #GitHubCopilot help you: 1/ Context: help it understand the big picture 2/ Intent: describe the scenario you want to run and your goal 1/2
French · 1 reply · 0 retweets · 0 likes · 15 views
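A sketch of what the four qualities from this thread (context, intent, clarity, specificity) look like in a prompt comment. Everything below is hypothetical example code, not actual Copilot output; the function name is made up for the illustration.

```python
# Context: prices in this codebase are stored as non-negative integer cents
#          to avoid floating-point rounding.
# Intent:  format a cart total for display on a French invoice.
# Clarity + specificity: output like "1 234,56 €" - thousands separated by
#          spaces, comma as decimal separator, " €" suffix, always 2 decimals.
def format_cents_eur(cents):
    euros, rest = divmod(cents, 100)
    return f"{euros:,}".replace(",", " ") + f",{rest:02d} €"

print(format_cents_eur(123456))  # 1 234,56 €
print(format_cents_eur(50))      # 0,50 €
```

A vague prompt like "# format the price" leaves the separator, currency, and rounding rules for the model to guess; spelling them out pins down exactly one correct implementation.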
Emmanuel Sciara @esciara:
Summarizing the excellent youtu.be/GPLUGJsVx0s?si…, what #GitHubCopilot excels at (pardon the anglicisms): 1/ Boilerplate code and scaffolding 2/ Writing unit tests 3/ Pattern matching (regex and the like) 4/ Explaining impenetrable code.
French · 0 replies · 0 retweets · 1 like · 27 views
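Points 2/ and 3/ combine naturally: a regex plus the unit tests that pin it down. An illustrative, hypothetical example of the kind of output these strengths produce (not actual Copilot output):

```python
import re
import unittest

# Match a semantic version like "1.2.3", capturing major/minor/patch.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_semver(text):
    m = SEMVER.match(text)
    if m is None:
        raise ValueError(f"not a semver: {text!r}")
    return tuple(int(g) for g in m.groups())

class TestParseSemver(unittest.TestCase):
    def test_valid(self):
        self.assertEqual(parse_semver("1.2.3"), (1, 2, 3))

    def test_invalid(self):
        with self.assertRaises(ValueError):
            parse_semver("1.2")

# Run the tests programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParseSemver)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Regexes and test scaffolding are tedious to write and easy to check, which is exactly the profile where a completion tool shines.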
Emmanuel Sciara @esciara:
Another excellent video, fresh out of the oven, from @geektrainer (English, 17 min), with a guest, covering 1/ What #GitHubCopilot excels at 2/ How to help it help you 3/ The advantages of the usage modes (comments, chat, and "inline") youtu.be/GPLUGJsVx0s?si…
French · 0 replies · 0 retweets · 1 like · 25 views
Emmanuel Sciara @esciara:
Your opinion? What should become of the prompting comments used to guide #GitHubCopilot? Some at GitHub recommend keeping them, since they leave a trace of what was used to generate the code. My usual approach: nyet! The code should stand on its own!
French · 0 replies · 0 retweets · 0 likes · 27 views
Emmanuel Sciara @esciara:
@toddanglin - Write basic code faster. - Help remember things you've forgotten. - Extend and refactor existing code. - Explain (and document) hard-to-read code. - Understand (and fix) error messages. - Add tests. 2/2
French · 0 replies · 0 retweets · 0 likes · 15 views
Emmanuel Sciara @esciara:
About #GitHubCopilot, a brilliant comment from a colleague: "if I have to write 10 lines of comments to get the 20 lines of code I want, what good does it do me??!?" What do you think??!?
French · 2 replies · 0 retweets · 1 like · 56 views
Emmanuel Sciara @esciara:
As a follow-up to my series on mastering #GithubCopilot, a great video (English, 30 min) by @toddanglin on LLMs and the basics of prompt engineering for developers: history, how they work. Plenty of insights to understand and use them better. youtu.be/YLwTMsNpOPA?si…
French · 0 replies · 0 retweets · 1 like · 21 views
Emmanuel Sciara @esciara:
6/ Write clean code (remember: "Garbage in, garbage out"): A/ name things properly, giving them an intelligible meaning; B/ use good practices in your code, which will then be picked up in the suggestions.
French · 0 replies · 0 retweets · 0 likes · 6 views
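A before/after sketch of point A/ (hypothetical code, not from the tweet): the clearer the names, the more usable context the tool has to work from.

```python
# Opaque version: d, t, and the magic number 86400 give the model
# (and human readers) almost nothing to go on.
def f(d, t):
    return (t - d) / 86400

# Clear version: the intent is readable from the signature alone,
# so completions around this code inherit the same vocabulary.
SECONDS_PER_DAY = 86400

def days_between(start_timestamp, end_timestamp):
    """Return the number of days between two Unix timestamps."""
    return (end_timestamp - start_timestamp) / SECONDS_PER_DAY

print(days_between(0, 172800))  # 2.0
```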
Emmanuel Sciara @esciara:
Iterate, iterate, iterate: give instructions in a comment; look at what Copilot proposes; delete its proposal (or the parts that need improving); add details and examples to your comments; start over.
French · 1 reply · 0 retweets · 0 likes · 8 views
Emmanuel Sciara @esciara:
Mastering #GitHubCopilot 8/8. In summary. And next week, I'll share a few extra goodies 1/ Copilot is a tool. The better you master a tool, the more it will give you what you're looking to get from it. Copilot can, in that respect, become a powerful tool.
French · 1 reply · 1 retweet · 1 like · 42 views