Thibault


@Zapidroid @OpenAI because it's easier to make AI on Macs; they have the silicon chips that Windows machines don't. It's as easy as that

@Stabillo75 @YellOwStaRL0L @ClemLeLfe @ENC_EN If you run a poll tomorrow, the majority of French fans will vote for Caliste, no doubt about it my friend

🇫🇷 France will be there!
It's official: I have the immense honor of sharing that I've been named Manager of the French national team for the Esports Nations Cup @ENC_EN 🏆
The French esports scene is immense, the community behind us is incredible, and together
we're going to take on the challenge against the best nations in the world.
We hope you'll be there with us!
#ENC2026 #Esports #France

@imjszhang @rryssf Don't worry bro, every frontier lab is already doing this in-house. There's a reason people don't use the available RL frameworks and just build from scratch

BREAKING: NVIDIA just proved that the AI agent training bottleneck everyone blamed on model capability was actually an infrastructure design error.
Every framework (SkyRL, VeRL-Tool, Agent Lightning, rLLM, GEM) embeds rollout inside the training loop: I/O-intensive execution fighting GPU-intensive optimization. Nobody separated them.
> Treat rollout as a service. Qwen 8B nearly doubles on SWE-Bench. The compute was always there. It was just fighting itself.
> Training an AI agent with reinforcement learning requires two fundamentally different workloads running simultaneously. Rollout is I/O-intensive: spinning up sandboxed environments, executing tool calls, waiting on shell commands, scoring outcomes. Training is GPU-intensive: forward passes, backward passes, gradient synchronization. Every existing framework runs both inside the same process. The result is constant resource contention: rollout workers block on disk I/O while GPU compute sits idle, and gradient updates stall while environments finish executing.
> NVIDIA audited every major agentic RL framework and found the same architectural decision in all of them. SkyRL keeps rollout control inside the training driver. Agent Lightning embeds rollout workers as child processes of the trainer: if training stops, rollout stops. VeRL-Tool, rLLM, and GEM all keep environment management and trajectory collection inside the training stack. Not because it's the right design, but because it was easier to build that way and nobody had fixed it yet.
> ProRL Agent separates them completely. The rollout system runs as a standalone HTTP service. The trainer sends a task instance. The rollout server handles environment initialization, multi-turn agent execution, tool calls, reward computation, and returns a completed trajectory. The trainer never touches the execution environment. The two systems communicate through one interface and run on separate machines optimized for their respective workloads.
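The trainer/rollout split described above can be sketched as a minimal HTTP service. This is an illustrative sketch only: the `/rollout` route, the trajectory fields, and the stubbed environment loop are all assumptions, not the actual ProRL Agent interface. The point it demonstrates is the interface boundary: the trainer sends a task and gets back a completed trajectory, never touching the execution environment.

```python
# Sketch of "rollout as a service": the rollout system is a standalone HTTP
# server; the trainer talks to it through one interface. All names here
# (RolloutHandler, /rollout, the trajectory fields) are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class RolloutHandler(BaseHTTPRequestHandler):
    """Standalone rollout service: receives a task instance, runs the
    (stubbed) environment loop, and returns a completed trajectory."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        task = json.loads(self.rfile.read(length))
        # In a real system this step would initialize a sandboxed environment,
        # run multi-turn agent execution with tool calls, and compute a reward.
        trajectory = {
            "task_id": task["task_id"],
            "actions": ["open file", "edit", "run tests"],  # stubbed rollout
            "reward": 1.0,
        }
        body = json.dumps(trajectory).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


def request_rollout(port, task):
    """Trainer side: send a task instance and block only on the HTTP call.
    The trainer never touches the execution environment."""
    req = Request(
        f"http://127.0.0.1:{port}/rollout",
        data=json.dumps(task).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Run the rollout service on an ephemeral port, in a separate thread here
    # for demonstration; in the decoupled design it lives on separate machines.
    server = HTTPServer(("127.0.0.1", 0), RolloutHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    traj = request_rollout(server.server_address[1], {"task_id": "swe-001"})
    print(traj["task_id"], traj["reward"])
    server.shutdown()
```

Because the only coupling is the HTTP interface, either side can be scaled or restarted independently, which is what lets rollout run on I/O-optimized machines while the trainer keeps the GPUs busy.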
> The results are not subtle. Qwen 8B: 9.6% on SWE-Bench Verified under standard training. ProRL Agent: 18.0%. That's close to 2x on the benchmark the entire software engineering AI field uses to measure progress. Qwen 14B goes from 15.4% to 23.6%. Qwen 4B goes from 14.8% to 21.2%. Same models. Same data. Different infrastructure.
→ Qwen 8B on SWE-Bench Verified: 9.6% baseline → 18.0% with ProRL Agent
→ Qwen 14B: 15.4% → 23.6%
→ Qwen 4B: 14.8% → 21.2%
→ Throughput scales near-linearly with compute nodes added
→ Efficient bash optimization alone: shell command latency drops from 0.78s to 0.42s per action
→ GPU utilization with full system: 78% vs 42% without load balancing
→ Every existing framework audited: zero had decoupled rollout from training
The engineering insight that gets buried in the results: the problem compounds at every tool call. A typical software engineering rollout spans dozens of sequential environment interactions. Each one blocks. Each one accumulates latency. At scale, with hundreds of parallel rollouts, tool execution becomes the dominant bottleneck, not model inference, not gradient computation. The entire field was measuring model capability while the infrastructure was quietly eating half the compute budget.
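A back-of-envelope calculation makes the compounding concrete. The 0.78s and 0.42s per-action shell latencies are the figures reported above; the 50-call rollout length and 200 parallel rollouts are assumed purely for illustration.

```python
# Back-of-envelope: sequential tool calls dominate rollout wall-clock time.
# Per-action latencies (0.78s baseline, 0.42s optimized) are from the thread;
# rollout length and parallelism are illustrative assumptions.
baseline_s, optimized_s = 0.78, 0.42
calls_per_rollout = 50   # assumed: "dozens of sequential interactions"
rollouts = 200           # assumed: "hundreds of parallel rollouts"

# Total seconds of blocking tool-call latency accumulated across the batch.
baseline_total = baseline_s * calls_per_rollout * rollouts
optimized_total = optimized_s * calls_per_rollout * rollouts

print(f"baseline:  {baseline_total:.0f}s of accumulated tool-call latency")
print(f"optimized: {optimized_total:.0f}s "
      f"({1 - optimized_s / baseline_s:.0%} reduction per action)")
```

Since each call blocks its rollout, shaving latency per action (or overlapping rollouts against training, as the decoupled design does) recovers time that scales with every additional tool call.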


@marieoclock @GoogleDeepMind World models, basically.
This is particularly useful for training the models themselves. We're still far from where it needs to be for it to be useful, but with a bit more progress it will get there

@GoogleDeepMind That's great, but what's the point? You have such an amazing tool and decide to spend the time and money on this useless show-off feature? Nobody needs to generate a web page in real time.

Watch how fast Gemini 3.1 Flash-Lite can generate websites. ⚡
This browser creates each page in real-time as you click, search, and navigate. Give it a try → goo.gle/4t9In1R

@DexterSka @SpencerHakimian @grok That's not relevant. You should instead check the difference in net income

@GannahLol @MovistarKOI_en You're going too far, you have to admit when someone makes good content… we're not going to hate just for the sake of hating

@Shyguyy22 @KUKKI_ALL Between his handling of Vladi, the drafts, and the BO situation, sure, we may have a bad prior impression, but it's hard to judge without being inside the system. There's good feedback about him in the French community, so personally I'll give him the benefit of the doubt

and where's my guy Reha Ramanana?
Aron@KC_AR0N
[RUMORS] Wadi 🇫🇷 will be the Head Coach 👔 of Karmine Corp Blue Stars 🇫🇷 YOU DO YOU IS BACK

@MoreauMore484 @_IDVL Yes, this video was filmed in northern France. It shows police officers drifting in the snow in a parking lot at night.

@LoL_France I hope you'll expose in the API whether a player uses WASD or not, so the community can monitor for itself whether it's broken…

I thought about it for 3 minutes and, call me crazy, but I think it's a good thing
Aron@KC_AR0N
[RUMORS] Wadi 🇫🇷 will be the Head Coach 👔 of Karmine Corp Blue Stars 🇫🇷 YOU DO YOU IS BACK

@Shyguyy22 @02Xebec If you'd watched his retrospective stream, you'd know it's worse than that: he thinks the impact of drafts is pretty low (anyway, he said he didn't really like drafting and that he wasn't the one handling it)
Thibault retweeted

Finally back…
TraYtoN@TraYt0N
French Flair - Winter LFL 2026. ADAM🇫🇷 - BWIPO🇧🇪 - SAKEN🇫🇷 - 3XA🇫🇷 - TARGAMAS🇧🇪
Thibault retweeted

@Triisteetemps @LazyG292149 @Pafanadahollow @3XA_LoL Individually, maybe, but 3XA doesn't have Upset's skill to make teams collapse onto him

Not a bad start to the roster for the "Trayton" team.
Two good picks at Top/ADC, with two players who deserved a spot in the LEC!!
Can't wait to see the rest of the roster, it has potential 👀
Sheep Esports@Sheep_Esports
[SOURCES] Adam 🇫🇷 and 3XA 🇫🇷 will join TraYtoN's 🇫🇷 team for the LFL Invitational Report by @LEC_Wooloo ⬇️ sheepesports.com/en/articles/so…