Sebastien Campion
@sebcampion
2.4K posts

Research Engineer in Computer Science at INRIA
From BzH to BxL 🇧🇪 🇪🇺 · Joined July 2009
1.5K Following · 406 Followers
Sebastien Campion@sebcampion·
@lemire I can confirm that: as far back as 15 years ago, I was recompiling the Debian BLAS and LAPACK packages from source so that I could also use the grid-search optimiser on the L1/2/3 levels. And when you deploy your application on a 2,000-node cluster, it makes a real difference.
Daniel Lemire@lemire·
Almost all x64 processors today (Intel/AMD) support advanced instructions such as BMI2 and AVX2. Unfortunately, most Linux distributions give you code compiled to run on systems without these instructions, which means you may get inferior performance. Many distributions, like Red Hat, have begun defaulting to processors that have these advanced instructions. Recent versions of the popular Ubuntu distribution offer versions of your favourite software compiled with these advanced instructions (x86-64-v3 packages). Unfortunately, as far as I can tell, they are disabled by default and require some work to enable. I invite you to experiment.
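As a quick way to see whether a machine could benefit from such packages, here is a minimal sketch (my own illustration, not from the thread) that parses a /proc/cpuinfo-style dump for a representative subset of the flags the x86-64-v3 level requires:

```python
# A subset of the flags the x86-64-v3 microarchitecture level requires,
# spelled as they appear in the "flags" line of /proc/cpuinfo on Linux.
# The full v3 definition includes a few more (e.g. LZCNT), omitted here.
V3_FLAGS = {"avx", "avx2", "bmi1", "bmi2", "fma", "f16c", "movbe", "xsave"}

def supports_x86_64_v3(cpuinfo_text: str) -> bool:
    """Parse a /proc/cpuinfo-style dump and test the v3 flag subset."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return V3_FLAGS <= flags
    return False
```

On a Linux box you could call `supports_x86_64_v3(open("/proc/cpuinfo").read())` to see whether v3-compiled packages would even run on your hardware.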
Sebastien Campion@sebcampion·
@morganlinton @Brooooook_lyn It's never fast enough :) For async tasks like a claw agent (with a limited 8k context) etc., yes. For software dev, not really. Try it, but keep in mind that almost all of your RAM gets used.
Morgan@morganlinton·
Okay, this is insane. If I'm reading this right, I can now run Qwen3.6 27B on my four-year-old M1 Mac Studio with 32GB of RAM. While it won't run fast, likely only about 20 tok/s, that's still insane. Hats off to @Brooooook_lyn, he's doing some absolutely amazing stuff.
Eric ⚡️ Building...@outsource_

🚨GUYS QWEN3.6 27B/35B @unslothai MLX QUANTS @Brooooook_lyn cooked 🔥 Apple Silicon users, the exact models y'all have been asking for LANDED. Unsloth mixed-precision quants in native MLX format. >Fast. >Clean. >Ready to run. Model list + sizes + expected unified memory 🧠
Q2_K_XL → 15GB | 18.6 tok/s (3.32x) → on 24GB 🔥
Q3_K_XL → 18GB | 15.5 tok/s → perfect on 32GB 🚀
Q4_K_XL → 21GB | 13.9 tok/s → sweet 32-36GB 🤖
Q5_K_XL → 25GB | 12.0 tok/s → 36GB+ 🏆
Q8_K_XL → 27GB | 10.8 tok/s → 48-64GB+
(27B models shown — collection also has 35B-A3B variants) Drop this straight into MLX and watch it fly on your M4/M3/M2 Mac. 👇🏻 huggingface.co/collections/Br…

Sebastien Campion@sebcampion·
@julien_c 15 tokens/sec on my "old" 32GB MacBook M1 Pro. Quality-wise, it's starting to get serious for real-world use. Connected to my lexsocket.ai services, the first results are very good.
LDLC@LDLC·
Who wants a FREE graphics card? (happy Monday)
Pappers@get_pappers·
Pappers launches its MCP! 3 reasons to try it: 1️⃣ Unprecedented depth: every field has been fine-tuned so the AI can "reason" over reliable data 2️⃣ Winning combo: AI reasoning + Pappers' certified data 3️⃣ Free, unlimited access for 2 weeks!
Benjamin Bayart@bayartb·
@gchampeau That's the whole difference between open source backed by a company and open source backed by a community. The notion of a community of contributors is very important, and too often neglected. It's a governance issue that CIOs easily overlook.
Guillaume Champeau@gchampeau·
This is an explosive topic for the reputation of open source. Restrictive license changes as soon as a project reaches massive adoption also kill all trust in the other open source projects:
LeMagIT@LeMagIT

Ingress Nginx: end of support on March 13. ~50% of Kubernetes clusters exposed. How many critical components does your infrastructure unknowingly rely on under this model? lemagit.fr/actualites/366…

Sebastien Campion@sebcampion·
@remilouf +1000 :) We really need to build the OSS resources to improve our tools and coding agents
Sebastien Campion@sebcampion·
@sc_cath It's a paradigm shift that had already begun before AI, I think. A century ago, access to information and content was costly; now you have to pay to filter information.
Sylvain Catherine@sc_cath·
AI slop is going to be very expensive because many subtle parts of economic and social systems are organized around small frictions that allow people to convey meaningful signals by overcoming them. You read letters because someone thought them worth the time to write. AI removes these frictions. It will render many social norms totally ineffective and will saturate many important channels of communication.
Gergely Orosz@GergelyOrosz

The death of inbound applications is upon us: and yes, in large part because AI makes it dead simple to apply. And so inbound applications become noisy, with increasingly more unqualified people. And so companies rely on referrals and recruiters to source instead.

Mathieu Acher@acherm·
The last challenge (building a Brainfuck interpreter in MNM Lang) meant not only learning MNM Lang from scratch, but also correctly reproducing Brainfuck itself: its 8 instructions (><+-.,[]), tape semantics, pointer updates, I/O, and nested bracket matching.
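The semantics in that list are small enough to sketch; here is a minimal Brainfuck interpreter (an illustration of the language the agent had to reproduce, not Acher's or the agent's code):

```python
def brainfuck(program: str, stdin: str = "") -> str:
    """Interpret Brainfuck: 8 ops (><+-.,[]), a zero-initialized tape,
    byte-wrapping cells, and nested-bracket jump matching."""
    # Precompute matching-bracket positions with a stack.
    jumps, stack = {}, []
    for pos, op in enumerate(program):
        if op == "[":
            stack.append(pos)
        elif op == "]":
            start = stack.pop()
            jumps[start], jumps[pos] = pos, start

    tape = [0] * 30000          # classic fixed-size tape
    ptr = pc = inp = 0
    out = []
    while pc < len(program):
        op = program[pc]
        if op == ">":
            ptr += 1            # move the data pointer right
        elif op == "<":
            ptr -= 1            # move the data pointer left
        elif op == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ".":
            out.append(chr(tape[ptr]))
        elif op == ",":
            if inp < len(stdin):
                tape[ptr] = ord(stdin[inp])
                inp += 1
            else:
                tape[ptr] = 0   # EOF convention: read as zero
        elif op == "[" and tape[ptr] == 0:
            pc = jumps[pc]      # skip past the matching ]
        elif op == "]" and tape[ptr] != 0:
            pc = jumps[pc]      # loop back to the matching [
        pc += 1
    return "".join(out)
```

For example, `brainfuck("++++++++[>++++++++<-]>+.")` builds 8·8+1 = 65 in the second cell and prints "A".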
Mathieu Acher@acherm·
I asked a coding agent to program in an esoteric language freshly invented in March 2026. Source code is literally colored M&Ms on a table. No docs beyond an opcode table. No Stack Overflow. No worked solutions. It solved all 26 challenges. Including a Brainfuck interpreter.
Sebastien Campion@sebcampion·
@cortisquared Yes, it's a very good idea when you know how these tools are deployed and used in companies. It's the result/token ratio that counts.
Corti (Cortiste)@cortisquared·
It's probably a good idea, but what I don't understand is that Mistral is building all this on its foundational models, which are now clearly below the competition, and not just below the American proprietary models, also below
Frid 🇪🇺🦌@Frid45

The French startup, Mistral AI unveils Forge, a platform enabling enterprises to build AI models grounded in their own data, workflows, and systems. Not generic AI, but tailored intelligence. ASML, ESA, Ericsson already onboard. Europe is stepping up 🇪🇺

Mathieu Acher@acherm·
@Fabien_Mikol So I completely disagree with Chollet (and others!) on this case, because I have very recent experiments that show incredible/new capabilities... the M&Ms, the POP, the chess engine in Brainfuck, etc. I'll give the details tomorrow ;-)
Fabien@Fabien_Mikol·
Very interesting, since it does seem that Mathieu Acher's work goes against Chollet's interpretation. I asked Claude for its opinion, and it's interesting. I'd welcome @acherm's expertise!
François Chollet@fchollet

This is more evidence that current frontier models remain completely reliant on content-level memorization, as opposed to higher-level generalizable knowledge (such as metalearning knowledge, problem-solving strategies...)

Peer Richelsen@peer_rich·
mistral is proof you can be a foundational model company with 0 consumer adoption and a really bad model and still make 400M ARR
Sebastien Campion@sebcampion·
@babgi Let's not forget that show business was invented by the US. Re-read Éric Vuillard, Tristesse de la terre: Une histoire de Buffalo Bill Cody
Gilles Babinet@babgi·
This is starting to become an interesting saga. 1) Two weeks ago, Dario Amodei confirmed that he does not want his technologies used to spy on Americans or to build autonomous weapons. 2) He is not bowing to Peter Hegseth's ultimatum, which expired Friday at 10pm French time. 3) Sam Altman just said he shares Amodei's concerns. A poker game in sight axios.com/2026/02/27/alt…
Rémi@remilouf·
It is fairly simple. I go on walks every morning and usually record my rambling. It's synced to my server automatically, transcribed, and copied to an inbox in my @obsdmd vault (although I could use any frontend, it's just text files).

A first agent scans the transcription and does a few things:
0. Creates a new daily note
1. Identifies topics
2. Identifies (pending) decisions
3. Identifies tasks -> pushed to the Things Inbox
4. Identifies potential evergreen notes (or additions to one) and suggests them, linking to the daily note. I need to tick a box to approve.

If I record other voice notes during the day, they're appended to the file. Every evening an agent reads the notes along with the notes of the previous days and:
1. Promotes everything I approved to evergreen notes and runs another agent to find relevant links.
2. Checks Things and summarizes my day.
3. Surfaces issues that have been unresolved for a while, and topics I keep bringing up.
4. Asks 3 hard questions.

If there have been changes in my evergreen notes, it scans the full vault for links. Then every week, month and quarter, different agents analyze my notes and connect to other systems of record via CLIs I wrote. I have an agent that surfaces potential blog post topics (but doesn't write the blog posts). I also have an agent connected to a Telegram bot with read-only access; very nice for asking questions. I'm hoping to get rid of this at some point, once I can run decent agents on my phone.

It's been very helpful for me, a natural rambler who processes things by talking rather than by writing. The prompts are really hard to get right at the beginning, mostly because you're discovering your needs as you go. I would recommend starting with a very simple agent (like daily notes) and iterating for a couple of weeks before moving on. Don't build cathedrals from the get-go, however tempting it is.

Even though it's been running for a while, I still find myself checking the transcript to make sure it didn't forget anything. Sometimes it does; maybe a prompting skill issue. I consider the daily notes a DRAFT, albeit a very helpful one. Identify interfaces and places where a human is needed to prevent the whole thing from becoming an unholy mess; it can be as simple as requiring a box to be ticked for approval. The link suggestions are life-changing. I plan on progressively moving everything to open source models.
Rémi@remilouf

Ok Time to write about my setup I guess 🙂
