Bertrand Duflos
@BertrandDuflos

12K posts

AI, platforms, data. Digital transformation. IP, licensing, competition law, copyright.

Paris · Joined December 2012
3.6K Following · 4.8K Followers
Bertrand Duflos reposted
Alexandre Quiec
Alexandre Quiec@AlexQuiec·
"Who were Russia's allies as recently as two years ago? Venezuela, Syria and Iran. Today, their leaders have been completely abandoned by the Russian regime. The least one can say is that it does not pay to be under Russia's protection." @EmmanuelMacron
224 replies · 459 reposts · 1.8K likes · 120.5K views
Bertrand Duflos reposted
Christiane Amanpour
Christiane Amanpour@amanpour·
"I've never seen an American president, not just in our lifetime, in the last two centuries of American history, be that critical of a British prime minister." @RNicholasBurns tells me "a fundamental mistake is: disparage your allies, make life difficult for them at home; you really can't expect them to be with you in a fight that they didn't start."
498 replies · 1.5K reposts · 6.2K likes · 405.4K views
Bertrand Duflos
Bertrand Duflos@BertrandDuflos·
Arnault Chatel@ArnaultChatel

🇫🇷 @MistralAI just made 4 titanic announcements. And nobody in France is talking about them (as usual). The Americans, meanwhile, are reeling. So allow me to fix that.

1/ Small 3 → Small 4. A model that brings together ALL of Mistral's know-how. Open source. Free. Mixture of Experts. Reasoning + multimodal + code. An XXL context window. Apache 2.0 license = ultra-permissive. This is the new champion of open-source AI worldwide.

2/ Mistral joins the Nemotron coalition (NVIDIA), alongside Black Forest Labs and the best open-source AI companies on the planet. A single French seat in this elite coalition. That seat belongs to Mistral.

3/ LeanMistral. A model dedicated to formal proofs: math, science, rigorous reasoning. An AI that doesn't get things wrong, and that can prove it. For the credibility of enterprise AI, this is a game changer.

4/ Mistral Forge. No more artisanal fine-tuning or separate databases. Any company can now build its own model, trained on its own data and specialized for its business. Hundreds of hyper-specialized AIs will emerge, and they will all have Mistral in their veins.

The future of AI is not necessarily the biggest proprietary model behind a paywall. It may well be an open-source AI that is free and everywhere, in every piece of software and every service: a true technological commodity. And the champion shaping that future? It is French. Its name is Mistral. What do you think? #IA #AI #IAGen #LLMs #MBADMB #OpenSource #FrenchTech

0 replies · 0 reposts · 1 like · 130 views
Bertrand Duflos reposted
fabienne billat
fabienne billat@fadouce·
Yann Le Cun is assembling a worldwide commando team to reinvent AI from France #WorldModel. AMI Labs' headquarters will be located in the 2nd arrondissement of Paris. By @GGrallet buff.ly/yp6gViN
[image]
7 replies · 41 reposts · 118 likes · 4.8K views
Bertrand Duflos
Bertrand Duflos@BertrandDuflos·
Mr Trump bullied his former European allies by brutally imposing tariffs, and more importantly by suggesting he would take Greenland by force. No surprise that the Europeans now take their US "allies" with a grain of salt. Why support a US Navy that will later just be used to pressure us?
0 replies · 0 reposts · 1 like · 43 views
Bertrand Duflos reposted
Andrej Karpathy
Andrej Karpathy@karpathy·
oh yeah i should have linked autoresearch probably github.com/karpathy/autor… (you don't "use it" directly, it's just a recipe/idea - give it to your agent and apply to what you care about.) and the tweet about it that went mini-viral over the weekend with more context x.com/karpathy/statu…
92 replies · 209 reposts · 2.5K likes · 310.9K views
Bertrand Duflos
Bertrand Duflos@BertrandDuflos·
One more step towards self-improving AIs.
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily, for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experiment results and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
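The workflow described above (propose a tweak, run the training, keep the tweak only if validation loss improves, then plan the next experiment from the results) is, at its core, a greedy search over configurations. A minimal Python sketch of that loop, using a toy quadratic objective as a stand-in for an actual training run; the knob names and helper functions here are hypothetical illustrations, not Karpathy's actual autoresearch code:

```python
import random

def autoresearch_loop(evaluate, propose, baseline, rounds=200, seed=0):
    """Greedy hill-climbing sketch: propose a config change, score it
    (lower = better, e.g. validation loss), and keep it only if it
    beats the best result seen so far."""
    rng = random.Random(seed)
    best_cfg = dict(baseline)
    best_loss = evaluate(best_cfg)
    history = [(dict(best_cfg), best_loss)]  # experiment log an agent could read
    for _ in range(rounds):
        candidate = propose(best_cfg, rng)
        loss = evaluate(candidate)
        history.append((candidate, loss))
        if loss < best_loss:  # accept only real improvements
            best_cfg, best_loss = candidate, loss
    return best_cfg, best_loss, history

# Toy stand-in for "train the model and report validation loss":
# a smooth bowl over two hypothetical knobs, with its optimum at (0.1, 1.5).
def toy_val_loss(cfg):
    return (cfg["weight_decay"] - 0.1) ** 2 + (cfg["attn_scale"] - 1.5) ** 2

def toy_propose(cfg, rng):
    """Tweak one randomly chosen knob by a small random amount."""
    new = dict(cfg)
    knob = rng.choice(sorted(new))
    new[knob] += rng.uniform(-0.2, 0.2)
    return new

baseline = {"weight_decay": 0.5, "attn_scale": 1.0}
best_cfg, best_loss, history = autoresearch_loop(toy_val_loss, toy_propose, baseline)
```

A real system would replace `toy_val_loss` with an actual training run (or a cheaper proxy, such as a smaller model) and `toy_propose` with an LLM agent that reads the experiment history before choosing the next change, which is what makes the agent-driven version more than random search.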

0 replies · 0 reposts · 0 likes · 84 views
Bertrand Duflos reposted
Wide Awake Media
Wide Awake Media@wideawake_media·
Investigative journalist Whitney Webb: "They don't really need blackmail anymore... [Palantir is] the new Jeffrey Epstein." "If they want to blackmail you... they just access what Palantir sucked up about you—your search history, your communications, your finances, tweets you've liked in the past, all sorts of things." "You don't really need Epstein in the surveillance era." "The disturbing thing about Palantir is that it's really about pre-crime. They were the pioneers of predictive policing." "Palantir is the resurrection of [a] DARPA program called Total Information Awareness—that was so scandalous when it was announced it was defunded by Congress." "But then they realised that if they turned it into a private company, no one would complain. And that's how we got Palantir."
293 replies · 9.5K reposts · 27K likes · 756.2K views
Bertrand Duflos reposted
Arnaud Bertrand
Arnaud Bertrand@RnaudBertrand·
I don't know if people understand just how insanely egregious this is.

First of all, 1) not only are NATO spending targets NOT legally binding (nothing in any NATO-related legal text mandates a specific GDP-based threshold for defense spending), but on top of this 2) Spain requested AND RECEIVED an exemption from the 5% target at the 2025 Hague Summit. NATO changed the declaration's language specifically to allow Spain to sign while publicly declaring it would not comply (jurist.org/news/2025/06/n…)

This means that, legally speaking and according to NATO's own rules, Spain is doubly within its rights: there is no binding obligation to begin with, and Spain was excused from even this non-binding obligation.

That's the first point: Germany's chancellor just endorsed, from the Oval Office, the U.S. punishing a fellow EU nation for refusing to comply with an obligation that doesn't exist in law, under a political pledge Spain was excused from at a NATO summit.

The second point is that this 5% target has nothing to do with "defense", quite the contrary in fact: it is pretty explicitly an imperial tribute to the U.S. that will actually **weaken** European defense. That was Spain's main argument for refusing to comply: Prime Minister Pedro Sánchez said that "committing to 5% would not make us any safer" because it "would only reinforce our dependence" on the U.S. (tiempoar.com.ar/ta_article/ped…)

That's the insane thing about EU defense spending: in recent years, the more it has spent on defense, the more that spending has flowed to American contractors as opposed to European ones, making the EU defense industry weaker (x.com/adam_tooze/sta…). Increasing spending to 5% doesn't strengthen European defense: it accelerates exactly this transfer.

All the more insane given the well-documented production backlogs in the U.S. defense industry and its inability to produce at scale: US defense analysts, including from Trump-adjacent think tanks like AEI (aei.org/research-produ…), openly acknowledge that European customers would be deprioritized behind U.S. ones in any real conflict. AND, critically, this is a defense industry from a country that is increasingly hostile to Europe, explicitly so in its National Security Strategy, and whose weaponry has "kill switches" that allow for remote disabling.

I mean, the sheer madness of it: anyone with an ounce of common sense can see that DOUBLING your defense spending to enrich a foreign arms industry that has kill switches on your weapons, can't meet its own military's needs, and increasingly treats you as an adversary, is not even remotely a defense strategy. It's suicide.

That's why having Merz, in the Oval Office, sitting next to Trump, endorse economic coercion against the one EU country that's still sane enough to see through this madness is so egregious, and frankly straight-up traitorous.

For those who know Asterix and Obelix, Spain is the "one small village still holding out against the invaders" and Merz is Cassius Ceramix, the self-described "gallo-roman" Gaul village chief who is the incarnation of all sycophants after his tribe was conquered by the Romans. I'm with Asterix, and all Europeans should be too.
Disclose.tv@disclosetv

NOW - Germany's Merz supports U.S. embargoing Spain, claims it's to "convince" them to increase NATO spending.

216 replies · 2.6K reposts · 9K likes · 577K views
Bertrand Duflos reposted
Wes Roth
Wes Roth@WesRoth·
Anthropic has launched a new feature for Claude called Import Memory, designed to make it completely frictionless for users to migrate away from competitors like ChatGPT and Gemini. Right as thousands of users are protesting OpenAI's military contract and migrating to Claude, Anthropic launched a simple two-step process to transfer all your ChatGPT or Gemini memories directly into Claude. You just paste a specific prompt into ChatGPT to extract your data, drop it into Claude's settings, and your new AI instantly knows all your preferences, coding styles, and project context.
Greg@GregFeingold

Ready to make the switch? claude.com/import-memory

27 replies · 44 reposts · 282 likes · 41.4K views
Bertrand Duflos reposted
Judd Legum
Judd Legum@JuddLegum·
[image]
138 replies · 3.5K reposts · 14.8K likes · 446.4K views