Maxime Batandeo
@BATANDEOM
4K posts
Price beliefs. Simulate markets. Let the best arguments win. Building an epistemic engine to provide you with a differentiated worldview

Paris · Joined February 2012
1.6K Following · 459 Followers
Benjamin Code @benjamincode ·
Originally, the point of my tweet was mostly to draw attention to the fact that devs and the French aren't really the best wired for marketing... It went a bit sideways. 😄

On the point that came up most: my YouTube audience, made up mostly of devs, has nothing to do with Meet Sponsors' target market. The vast majority of my sales go to non-French-speaking customers who have never heard of me or my SaaS.

One thing that is true is that I got to this market first (now there are several of us), and my subscribers clicked the links in my descriptions a lot, so I imagine that boosted my search-engine ranking. I normally rank #1. LLMs also recommend me a lot.

When I say I've never done marketing, that's factually false. I worked with @Skyzox, who helped me with an acquisition campaign that got good results, but we lost touch, and there hasn't been any acquisition work in almost a year.

What I meant by "I've never done marketing" is that I personally have never used all the many weapons available for acquisition: cold emailing, influencer marketing, SEO content creation, social media ads, YouTube ads, and so many others... I've done none of that. Those are the weapons I was keeping in a box until I judged my product ready. Maybe I should have opened the box long ago, maybe my gestation time was necessary; time will tell. In the meantime, I'm happy with the speed I've been moving at since the start. I document almost every step and share as much as I can with you on YouTube, which necessarily slows down development of the platform. Making those videos takes time! In any case, it's a crazy adventure.

I learn thousands of things every day, and every day I receive many comments and DMs that make me think or teach me something. That's what I get from building in public: all the people it draws to me. That alone is enormous value. I'm fine with things being this way and having nothing to sell them (for now 😈). Thanks to everyone who supports me, and see you soon for the results of this new Marketing arc!
French · 28 · 3 · 187 · 20K
Maxime Batandeo @BATANDEOM ·
Perplexity Computer catching a $14k tax error your CPA missed is impressive execution. But that's not what you're paying McKinsey, the Big 4, or your CPA for. You're paying them to hold the bag when it blows up.

AI agents are crushing the "doing" part. The real product professionals sell is liability transfer + third-party validation + reputational skin in the game. When something goes wrong (audit failure, bad tax advice, strategic disaster), the firm eats the lawsuit, the settlement, the reputation hit. You can't sue Claude the same way. Not yet. They don't underwrite outcomes.

The numbers make this concrete:
- Consulting: McKinsey advised Purdue Pharma on "turbocharging" OxyContin sales. Total settlements: $650M to the DOJ, $573M across 49 states, $125M to the Purdue bankruptcy estate, $900M+ in prior deals. $2B+ total. No AI agent has a balance sheet that absorbs that.
- Audit: Big 4 failures triggered $1.5B in securities class-action settlements in 2025 alone, 51% of all securities class-action dollars that year. When an auditor gets sued, their clients' loan spreads jump 7.6% from the reputational damage, lasting up to 2 years. That's why clients pay for the validation.
- Tax malpractice: one documented CPA screw-up left a client owing $2.25M to the IRS in taxes, penalties, and interest → an $800k malpractice settlement. Small CPAs carry E&O insurance at $500–2k/year precisely because clients sue. The Big 4 pay far more.
- Legal malpractice: average claims run $160k–$237k. High-stakes cases hit millions. The premium exists because the risk is real and quantifiable.

The list goes on and on. The actual dynamic: AI does the work faster and cheaper. Great. But until Perplexity or any agent underwrites the risk (real indemnity, guaranteed outcomes, a balance sheet behind mistakes), clients will keep paying humans to hold the bag. If anything, agent errors will make third-party risk-bearers more valuable.

Every AI mistake that costs someone real money is a reminder of what that CPA premium was actually buying. McKinsey, the Big 4, CPAs, and lawyers aren't going anywhere. The product was never the deliverable. It was always the liability transfer.
Aravind Srinivas@AravSrinivas

Perplexity Computer is more reliable than a CPA for filing taxes.

English · 0 · 0 · 0 · 51
Maxime Batandeo @BATANDEOM ·
Anthropic is running the Vercel playbook and I believe it is smart as fuck. It is no secret Vercel is a wrapper of AWS; their functions literally run on Lambda, confirmed by Vercel's own engineering blog. They turned raw infrastructure into a DX layer, captured the developer, and now charge margin on top of AWS. Anthropic is doing the exact same thing, only one abstraction layer higher.

The model size race tells you why the infrastructure layer matters so much:
- Meta Llama 4 Behemoth: ~2 trillion total parameters (still in training)
- Grok 3: ~3 trillion parameters, trained on 200,000 H100s
- Grok 5 incoming: reportedly 6 trillion parameters

No one self-hosts this. No one can. The compute required is physically beyond what any enterprise team manages on their own. So the stack becomes:
- AWS/Google/Azure → chips + power
- Anthropic → model + managed agents layer
- You → you ship features

Vercel abstracted Lambda. Anthropic abstracts the entire training and inference stack across three hyperscalers simultaneously… and uses them as leverage against each other while doing it.

The lock-in isn't the model. It's the agent infrastructure, the harness, the tooling that your prod code depends on. Exactly like no one migrates off Vercel once their CI/CD, preview envs, and edge config are wired in. Token price goes to zero. Outcome per token is the new pricing model. And Anthropic is positioning to own that layer.
Aakash Gupta@aakashgupta

The real story here is why Anthropic agreed to train on a chip that's objectively slower than Nvidia's best. Trainium2 loses to GB200 on raw TFLOPS. Sounds disqualifying until you understand what matters for training modern reasoning models: memory bandwidth per dollar. Reinforcement learning is memory-bound, not compute-bound. Amazon wins that comparison.

Anthropic's engineers didn't just accept the chip. They co-designed it. Wrote low-level kernels interfacing directly with the silicon. Helped shape the Neuron software stack. In exchange, Amazon built 1.3 gigawatts of dedicated capacity and committed $8 billion. Then Anthropic went to Google and got 1 million TPUs. Then kept running Nvidia GPUs too. Three chip ecosystems. Three hyperscalers competing for the same workloads. Each one spending billions to be one of three training partners.

Anthropic's run-rate revenue just crossed $30 billion, up from $9 billion six months ago. Over 1,000 enterprise customers spending $1M+ annually. The company that chose the "slower" chip is now the fastest-growing AI lab on the planet.

Garman wants this to be a Trainium victory lap. It is one. But the deeper read is that Anthropic turned three competing silicon roadmaps into leverage against each other, and each hyperscaler is spending billions for the privilege of being one of three options.

English · 0 · 0 · 0 · 39
Maxime Batandeo @BATANDEOM ·
The real issue is whether you can algorithmize your own thinking. That's exactly why LLMs shine as soon as you have computational thinking. But if you can't clearly explain, in logical, step-by-step, instruction-based terms, why you did what you did and why you thought what you thought, then you can't algorithmize it, and your LLM won't help you much.

It's funny because you see people saying: "Yeah but actually, it enables discovery, it lets you explore new ways to solve complex problems." All of that is true… but it actually depends on the person's level of abstraction and, more importantly, on their ability to understand themselves: their "power" over how they generate their own thoughts. That part is extremely hard. Almost nobody does this in real life. Or at least, I don't know many who do.

I talk to myself every day with a massive inner monologue. It's very real. I constantly ask myself: "Where do my thoughts actually come from?" So in a way, I'm a kind of philologist of my own mind, a genealogist of my thoughts. I do intellectual archaeology on my own reasoning. Even that was incredibly difficult to reach.

I realized that what I call an "argument" is really just the way I function. And even that isn't accurate. I don't actually think in terms of "here's the key point, here's the proof, here's the logic." Not at all. The way I really think is: first I have an intuition. Then I ask myself: "What is this intuition based on?" And very often it's based on sensations or feelings. Only afterwards do I rationalize it and construct the logic a posteriori.

Most of the time we're using heuristics, not actual proofs. In reality, nobody truly demonstrates anything in this world. Even the things you think you "know," you haven't proven them yourself; you just take them on faith because smarter people or authorities have supposedly done the work. So when it comes to LLMs, it's like having an army of people smarter than you who have already demonstrated things. You use their heuristics and theoretical frameworks to validate or invalidate your own intuition.

The output I generate? Honestly, a lot of it is "I feel it, it's vibes." The reward signal is the feeling I get when I look at the quality of the result. That's why sometimes I don't even need to re-read the whole thing. Once you deeply understand your own generation function, you don't always need to check the final output anymore.

So yeah, algorithmic thinking is not up for grabs for everyone.
a16z@a16z

Steven Sinofsky on why it's hard for AI to diffuse through firms: "Algorithmic thinking is really, really, really hard for the vast majority of people who have jobs… If you were to go into any person and ask them to create a flow chart for a particular thing that they have to go do, they would probably fail at producing that flow chart." "So within any organization, say doing a marketing plan… one person probably understands and could document the flow chart. So if you put one of these agents or this coworking tool in front of people… their ability to explain to it what to do is really, really limited." "You're basically just developing the next abstraction layer for how people interact… at each level of the abstraction layer, [it's] been a highly skilled, very specific individual within an organization… and then the little parts they build become little toollets… and some people can stitch together and some can't." @stevesi

English · 0 · 0 · 0 · 27
Maxime Batandeo @BATANDEOM ·
Claude Managed Agents is infra that scales plus a harness calibrated for their model, with zero maintenance. Exactly like Vercel with Next.js: you take the all-in-one, you get locked in by vendor dependency, and once the use case performs and you're shipping code that actually works (safe, deliverable, scalable), you never switch. Given that there's no money left in raw tokens (they're going to be commoditized), they sell outcome per token: real business results versus (training + inference + infra cost). If we move into a fully agentic world, Anthropic is positioning itself as an AI Cloud Provider and will directly challenge Google / Amazon / Microsoft on cloud. One to watch.
Claude@claudeai

Introducing Claude Managed Agents: everything you need to build and deploy agents at scale. It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days. Now in public beta on the Claude Platform.

French · 0 · 0 · 0 · 41
Maxime Batandeo @BATANDEOM ·
I think a lot about productivity, especially now that AI is effectively turning teams into collections of highly amplified individuals.

At the individual level, it's hard to deny the gains. People clearly produce more. In many cases, dramatically more. But more output doesn't automatically translate into more value or more revenue. If it doesn't move outcomes, it's just inventory, or worse, waste. That distinction matters even more in an AI context, because the real constraint was never pure output. It was coordination.

As a former PM, this is the part that feels unchanged: getting multiple stakeholders to align around a shared direction is still the hardest problem. You can put six highly capable people in a room now, each with AI "superpowers," and still fail to agree on what to do. Or worse, agree superficially while each executes in a slightly different direction.

If you think of individuals as vectors, AI mostly increases the magnitude: the ability to move the needle. But it does nothing, by default, for direction. And two strong vectors pointing in opposite directions still cancel each other out. In fact, the stronger they are, the more damage they can do collectively while each feels individually productive.

So collective productivity is not the sum of individual productivities. It's a function of alignment. And alignment is not just a soft constraint; it's a scaling one. It's also not simply the inverse of individual strength. You can have highly capable individuals who are also highly alignable: call it coachability, or alignment capacity. But that's a separate dimension.

The real risk is the highly talented, highly empowered individual who is not aligned with the collective direction. AI makes that risk sharper, because now someone can be 10x or 100x more productive locally while optimizing for the wrong objective. They can get stuck in local maxima, build convincing artifacts, and create momentum around directions that don't serve the broader goal. From their perspective, they're highly effective. From the system's perspective, they're creating divergence.

It's not obvious that this nets out positively. In fact, it wouldn't be surprising if, in some organizations, collective productivity decreases with AI. Not because people are less capable, but because misalignment scales faster than coordination. We didn't solve alignment at scale before AI. If anything, we worked around it with slower execution and higher coordination overhead. Now we've removed some of that friction on the production side without solving the coordination side.

So the bottleneck becomes clearer. PMs were never just there to increase output. They were there to manage direction: to align stakeholders, mediate tradeoffs, and maintain coherence across parallel efforts. That problem doesn't go away with AI; if anything, it becomes more central. Until we can align amplified individuals as effectively as we amplify them, it's not clear we'll see true, system-level productivity gains.
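The vector framing can be made literal with a toy sketch (illustrative only; `net_progress` is a made-up helper, not anything from the thread): amplification scales each individual's magnitude, but alignment decides how much of it survives the sum.

```python
import math

def net_progress(vectors):
    """Sum individual effort vectors and return the magnitude of the
    collective result: what the team actually moves, taken together."""
    dims = len(vectors[0])
    total = [sum(v[i] for v in vectors) for i in range(dims)]
    return math.sqrt(sum(x * x for x in total))

# Two strongly amplified people pulling in opposite directions net to zero...
print(net_progress([(10, 0), (-10, 0)]))  # → 0.0
# ...while two modest but aligned contributors compound.
print(net_progress([(3, 0), (3, 0)]))     # → 6.0
```

Scaling magnitude (the AI part) leaves the direction term untouched, which is exactly the sense in which alignment is a separate dimension from individual capability.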
Lenny Rachitsky@lennysan

My biggest takeaways from @AnthropicAI's Head of Growth Amol Avasare:

1. Engineering is getting the most AI leverage, and it's squeezing PMs and designers. With Claude Code, a five-engineer team now produces the output of 15 to 20 engineers. But PM and design productivity haven't scaled proportionally. The result is a compressed ratio where one PM is effectively managing the output of a much larger engineering team. Anthropic's growth team is responding in two ways: hiring even more PMs (!), and formally deputizing product-minded engineers to act as mini-PMs for any project with less than two weeks of engineering time.

2. Anthropic is using Claude to automate its own growth. The internal initiative is called CASH (Claude Accelerates Sustainable Hypergrowth). It works across four stages: identifying opportunities, building features, testing quality, and analyzing results. Right now it handles copy changes and minor UI tweaks. The win rate is comparable to a junior PM with two to three years of experience, and improving rapidly.

3. The one part of PM work that AI can't automate yet: getting six people in a room to agree. Amol and his head of design joke that even with AGI, it'll still be impossible to align six stakeholders. Cross-functional coordination (managing opinions, navigating politics, mediating tradeoffs) remains the bottleneck that AI doesn't touch for larger projects. This is why Amol believes PM roles aren't going away, and may actually grow.

4. 60–80% of Anthropic's growth team's projects have no PRD. For smaller work, kickoffs happen on Slack: messages back and forth with product-minded engineers who can push back and ask the right questions. For larger projects, Amol believes in a proper 30-minute cross-functional kickoff (legal, safeguards, stakeholders) to surface concerns early.

5. Adding friction to onboarding drives growth, if the friction helps users understand why the product is for them. In his work at Mercury, MasterClass, Calm, and now Anthropic, adding steps to onboarding flows consistently improved conversion. The key: cut annoying friction that doesn't add value, but add friction that helps users understand why the product is for them.

6. AI companies need to focus on bigger bets, not better A/B tests. Amol's argument: if your core product value is driven by AI, then the future value is orders of magnitude higher than today's value, because model capabilities grow exponentially. In that world, micro-optimizations capture a shrinking share of a growing pie. Traditional growth teams do 60% to 70% small optimizations and 20% to 30% big swings. At Anthropic, they flip this ratio.

7. Amol built a weekly AI agent that scans Slack for cross-functional misalignment. Using Cowork with the Slack MCP, he has a scheduled task that looks across his projects and conversations and surfaces areas where teams are about to do overlapping work or pull in different directions. A colleague on the enterprise team already caught major misalignment that would have caused weeks of wasted effort.

8. A traumatic brain injury taught Amol the principle that now drives his work: freedom through constraints. In early 2022, a kick to the head during a Muay Thai sparring session caused a traumatic brain injury. Amol spent nine months off work and months relearning to walk, unable to look at screens or listen to music for more than 20 seconds. He was re-injured a month after joining Mercury and had to take two more months off. He's still not fully healed. But the constraints (no alcohol, no caffeine, mandatory breaks, daily meditation) have become the habits that let him operate at the intensity Anthropic demands. "The true freedom in life is learning how to be content when you don't get what you want."

English · 0 · 0 · 0 · 43
Maxime Batandeo @BATANDEOM ·
In an industry built on determinism, I feel we might be underestimating the amount of work required to actually integrate LLMs, precisely because they are nondeterministic.

There's this assumption that iteration is part of the value: that by prompting, refining, and looping, we improve not only the output but also our own understanding of the task. And that's true to some extent; often the initial specification isn't precise enough, and iteration helps surface what we really meant. But this only works if what the model produces is a faithful (even if imperfect) realization of the idea. In practice, it's often something else: a rough approximation of the intent, mixed with noise, defaults, and patterns that weren't asked for.

So the iteration doesn't strictly converge toward the original idea; it drifts around it. You end up refining the model's interpretation as much as your own. And that creates a real risk: over time, you lose track of the initial intent. You can spend dozens of iterations shaping something that looks coherent, only to realize much later that it's not what you had in mind at all, that you already expressed that idea early on, but it got diluted through the loop.

This is where the mismatch becomes critical. In most software systems, especially automation and workflows, determinism isn't a "nice to have." It's the baseline. The expectation is not "usually correct" but "does not make mistakes." LLMs flip that expectation. They are inherently probabilistic, and integrating them means accepting variability, ambiguity, and the cost of constant verification and correction.

At the same time, they do provide something fundamentally new: they make ideas concrete quickly. Even if the output is only 70–80% aligned, it exists; it gives you something to react to, critique, and refine. That energy matters.

But I think we're still underestimating the tradeoff: the shift from executing well-specified tasks to continuously managing approximation, drift, and error inside systems that were never designed for it.
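The "cost of constant verification" has a concrete shape in code. A minimal sketch under stated assumptions: `call_llm` is a hypothetical stand-in simulated with `random`, and `validate` is supplied by the caller. A deterministic component is called once; a probabilistic one needs a validate-and-retry loop and still has to handle exhausting its budget.

```python
import random

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a nondeterministic model call:
    # ~70% of the time it does what we meant, otherwise something adjacent.
    return prompt.upper() if random.random() < 0.7 else prompt[::-1]

def run_with_verification(prompt: str, validate, max_attempts: int = 5):
    """Wrap a probabilistic step in the verification loop that a
    deterministic system would never need."""
    for attempt in range(1, max_attempts + 1):
        out = call_llm(prompt)
        if validate(out):
            return out, attempt  # success, plus how much retrying it cost
    raise RuntimeError("no valid output within the retry budget")
```

Every extra nine of reliability is bought with more attempts, and the failure branch still has to exist; that overhead is the integration work the tweet is pointing at.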
Gergely Orosz@GergelyOrosz

In an industry built on determinism, I feel we might be underestimating the work we all will need to do with LLMs exactly because they are nondeterministic. But for so much of automation/workflows, determinism (aka "make sure it doesn't make a mistake") is a baseline expectation

English · 0 · 0 · 0 · 38
Maxime Batandeo @BATANDEOM ·
This is exactly why the Trumpists are fans of the great Don. They want to purge society of the bearded, the degrowthers, the lefties, the woke.

Spending time with people on both the left and the right, it seems fairly obvious to me that people no longer want to live together, are incapable of discussing without fighting, and that there are separatist temptations. And that elections have become the moment when one camp can take power to impose a unilateral will on the other.

When I say the left is watching Trump so it can do the same thing once in power, it's because I know there are people like him who campaign for that and who find an echo among sympathizers. On the right as on the left, more and more people think democracy and the rule of law are a brake on addressing the problems of our time. And that is a signal of clear moral regression in our societies.
BLAST, Le souffle de l'info@blast_france

Committing sedition: a project for the left ➡️ youtu.be/mJWgSI7UWQI In this video, @gdelagasnerie questions what might seem most obviously positive in the democratic ideal: the notions of the city, citizenship, and discussions meant to arbitrate conflicts within a community. He asks this question: why do we cohabit at all?

French · 0 · 0 · 0 · 18
Andrew Farah @andrewfarah ·
sharing my first open source project: a CLI for downloading and syncing your X bookmarks locally so your agent can access them. it's free
› npm install -g fieldtheory
› login to your X account in a chrome tab
› ft sync (done!)
bonus:
› ft viz
› ft classify
English · 284 · 271 · 4.3K · 500.1K
Maxime Batandeo @BATANDEOM ·
Refactoring with LLMs is a nightmare
English · 0 · 0 · 0 · 156
Maxime Batandeo @BATANDEOM ·
At this stage it's too early to say. The only true liberal in the French public sphere is Alain Madelin. Today no one reaches his level; they have neither the ethos nor the genuine will. I single out Lisnard because I think he's an intellectual fraud who, if he were a true liberal, would already have started implementing a roadmap in that direction as president of the Association des maires de France.
French · 1 · 0 · 2 · 2.6K
Sardoche 🇫🇷 @Sardoche_Lol ·
Marx opened my eyes and... you were right. The billionaires have bought the French state; they profit from subsidies and write the laws that suit them. The system is rigged in their favor, that's indisputable. Marx was right in his diagnosis. 😭

You fight for more social rights, more protection, more redistribution. Noble causes that made France great. But... (🙃) every social right creates a scheme. Every scheme creates an administration. Every administration creates a market for private providers. Every provider lobbies to keep it going. And the ones who profit most are the Bernard Arnaults... 🥲

A funny little example: the meal voucher. €20 you'll never see but that you paid for with your work. In its place: a card, usable at lunch, on weekdays, no alcohol. The private company managing it: Edenred, €40 billion on the stock market. Business model: preventing us from buying a beer with our own salary. (I don't drink, but it still works.)

Every "social conquest" follows this pattern.
- CPF training schemes: €30 billion a year, siphoned off by sham providers for decades.
- APL housing aid: we don't give money to the poor, we inflate rents by €300 and fatten a control bureaucracy.
- RSA welfare: so much paperwork that eligible people give up. Regulation by discouragement.

Out of €100 "for the poor," how much reaches the poor? French socialism is routing the money of the poor through the rich and hoping some is left over at the end.

When I say on stream that France has been in socialism for 45 years, I get insulted. Naturally: the right has been in power most of the time. But the state has never been bigger, the debt never higher, mandatory levies never heavier.
- 2008: Sarkozy guarantees €360 billion to the banks. We pay the losses, we don't touch the gains. 🙃
- 2020: Macron pours €240 billion onto companies during Covid. Billions in public money redistributed as dividends right afterwards. 🙃🙃

The French right has never abolished a tax loophole, never closed an agency, never cut a budget. It just changes the name on the checks. The right in power: the state for capital. The left in power: the state for the state. The state grows either way. That's French socialism: not a question of who governs, but of how much there is to hand out. Bernard Arnault adapts very easily to both. He has the lawyers, the lobbyists, the dinners. We have nothing.

The only one who challenges these forces: Mélenchon, and he's entirely right. He's the only one pointing at the problems the way Marx would have. The capture of politics by capital. The rigged system. Collusion at the top. On the diagnosis, he's lucid; on the solutions... 😭 His program needs the state more than anyone's.
- Ecological planning: €100 billion over the five-year term. Who drafts the tenders? Senior civil servants in post for 15 years, who'll sign with the firms that have the best lobbyists.
- Job guarantee: millions of contracts to create and manage... by private providers. The ones who know how to talk to the administration, the same as with the CPF, with a little stamp for inclusivity and social good on top.

No matter the candidate or the purity of his convictions. He'll appoint his ministers, not the 200,000 civil servants below who concretely decide where the money goes. They and their networks cannot be dislodged. Total has 150 lobbyists in Paris alone. And they'll almost be happier talking to the left, because a left-wing government spends more, regulates more, and subsidizes more. The big fortunes and big corporations aren't going to watch those hundreds of billions circulate without adapting.

They survived Mitterrand, prospered under Chirac, exploded under Macron. Left, right, it doesn't matter. They adjust the vocabulary and collect the money. If we really want to hurt the billionaires getting rich off the backs of the French, we have to take away their favorite weapon. Certainly not the market: the state.

As long as enormous amounts of money flow through the state, the state will belong to the richest. The solution has a name. A word that has become dirty in France: liberalism. Liberalism is not "letting the rich do whatever they want." It's taking away from the state the power to pick winners. And no, being a liberal doesn't mean being against social benefits; it means wanting them to reach people directly instead of fattening ten middlemen along the way.

A true liberal pisses everyone off: the billionaires who live off the state, the bureaucrats who run it, the politicians who use it. And paradoxically, he doesn't seduce the very people he truly defends either: the precarious, who would finally see the money arrive without anyone helping themselves along the way. And that's precisely why we never elect one. The liberal has no passionate base. He doesn't promise to protect "the workers" against "the bosses." He doesn't promise to defend "the nation" against "the foreigners." He gives no culprit to hate and no hero to worship. He doesn't rack up tens of thousands of RTs on a Bompard ratio. He just says: "We're going to stop handing out money you don't have to people who don't deserve it through systems nobody understands." It's the most pragmatic solution and also the least exciting pitch. 😭

Result: we haven't had a single one in 50 years.
- Giscard: a statist technocrat, the embodiment of anti-liberalism.
- Chirac: the only right-wing president the left cites as an example; zero privatization, zero simplification.
- Sarkozy: his first act as president was a tax shield for the rich. His last, a check for the banks.
- Macron: the worst... because he lied. He didn't betray liberalism. He just stole the word to get elected. (Public spending: record. Debt: record. Number of civil servants: record.)

You don't recognize a true liberal by his program. You recognize him by what he's willing to abolish, and by the enemies he makes by saying it. It's uncomfortable for everyone. It doesn't sell. That's why it's rare, and that's why we've never had one in power. But they exist. Elected officials who have held this line for years without deviating, without calculating. Who run their towns the way they'd want to run the country. You may know them. You may have ignored them because they don't generate buzz, because they don't point at scapegoats, because they don't promise the moon.

It's up to us to find them. To push them. To make them unavoidable. Because the system won't change on its own. Those who profit from it, right and left alike, have no interest in change. They'll keep tearing into each other on TV sets while the money flows exactly where it has always flowed. The only lever is us. If this post made you think, share it. Not for me, but so this idea finally exists in the debate. So that one day the word "liberal" recovers its meaning and can serve public debate.
[image attached]
French · 298 · 201 · 1.6K · 1.1M
Maxime Batandeo @BATANDEOM ·
And of course, some people will argue that through iteration, you actually get better at defining the task itself. Maybe the frustration comes from the fact that the initial requirement isn't precise enough, not specific enough, and that's fair. Iterating does help clarify and refine the idea you had in mind, to make it more concrete.

But that only really holds if what the AI produces is a faithful realization of your initial idea. And most of the time, it isn't. It's more like a rough approximation of your intent, mixed with noise. It captures something adjacent to what you meant, not the thing itself. So the iteration ends up orbiting that approximation rather than converging on your original idea.

And that's the trap. With each iteration, there's a real risk you drift further away from what you initially had in mind. You can come back days, or even weeks, later and realize: this isn't it. This is not what I meant at all. I actually said it clearly at the beginning. And now you're at iteration 99, and in a way, 98 of those iterations were just… lost.

At the same time, it would be wrong to dismiss it entirely. There's something undeniably powerful in the fact that it does things. Because even a perfect idea, without implementation, is nothing. It doesn't exist. And AI can often get you to 70–80%: not exactly what you envisioned, but something tangible, something real. And that has value. It brings ideas into existence. It gives you something concrete to react to, to critique, to refine. So yes, there's frustration, and yes, there's drift, but we shouldn't throw the baby out with the bathwater.
English
0
0
0
146
Maxime Batandeo
Maxime Batandeo@BATANDEOM·
You know, there’s a contradiction here. We all think AI will boost productivity—and maybe that’s true—but I’m honestly pretty doubtful. Take coding as an example. If you’re writing something basic, sure, AI can often do it better than you. But the moment you need something very specific—something off the usual path—you end up having to specify everything in detail, because it’s not the obvious way to build it. And then you spend even more time reviewing it. From my experience, it’s often just wrong. It doesn’t meet the bar you’d expect for genuinely hard, modular components, so you end up iterating over and over again. The real issue is that it introduces a lot of bad patterns—wrong ways of doing things. There’s a lot of noise. You try to steer the model to get the full solution, but at some point, it’s faster to just do it yourself. If you don’t, you end up stuck in loops of iteration that take even longer. That’s where I’m unsure. If AI is mainly good at generic work, then yes, generic work will get much faster. But real value doesn’t live in the generic—it hides in the edges. You need those edge cases, those non-obvious solutions. What I’m seeing right now is that we might end up disappointed. For generic tasks, AI is great—it’s helped me a lot. But as soon as you try to do something truly differentiated, it falls short. And you end up wasting time trying to make it right. That’s the frustrating part: you expect it to get there, and then you realize… it just doesn’t.
English
1
0
0
171
Maxime Batandeo
Maxime Batandeo@BATANDEOM·
Most people building agents are doing it wrong. You sit down with every thought you’ve ever had, upload it all, and try to architect some perfect system from day one. Plan. Search. Analyze. Generate. Assess. Iterate. You think that’s how you build something great. It’s not. OpenAI didn’t do that with Codex. They started from scratch, set the first goal, ran the first eval, and let reality show them where it broke. Then they adjusted. The eval became a moving target. The system evolved through friction, not grand theory. That’s the uncomfortable truth. Building a real agent demands brutal discipline. You have to hold back. You can’t feed it everything. You give it just enough direction, "do this to achieve that," and let it develop its own model empathy. Let it analyze its own traces. Let it improve through its own misconceptions. But here’s the trade-off nobody talks about: models get stuck in local minima fast. You’ll watch them loop, spin, and fail. At some point, you have to make the call: let it keep improving on its own path, or step in and change direction entirely? That judgment call is the entire game. Start simple. Stay disciplined. Let reality guide the evolution. That’s how real intelligence gets built, in agents and in life.
Kevin Gu@kevingu

x.com/i/article/2039…

English
0
0
0
106
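The tweet above describes an eval-driven loop: set a goal, run an eval, adjust where reality shows breakage, and step in when the agent stalls in a local minimum. Here is a minimal sketch of that loop, with all names (`run_agent`, `score`, `iterate`) and the plateau heuristic being hypothetical illustrations, not any real OpenAI or Codex API:

```python
def run_agent(task: str) -> str:
    # Placeholder agent: a real one would call a model here.
    return task.upper()

def score(output: str, expected: str) -> float:
    # Crude eval: 1.0 on exact match, else 0.0.
    return 1.0 if output == expected else 0.0

def iterate(eval_cases, max_rounds=5, patience=2):
    """Run evals each round; let failures guide adjustment, and
    flag a local minimum when the score stops improving."""
    best, stalls = 0.0, 0
    for _ in range(max_rounds):
        results = [score(run_agent(t), e) for t, e in eval_cases]
        current = sum(results) / len(results)
        if current <= best:
            stalls += 1               # looping/spinning: no progress
            if stalls >= patience:
                return "intervene"    # the judgment call: change direction
        else:
            best, stalls = current, 0
        if best == 1.0:
            return "done"
    return "keep iterating"
```

The key design choice echoed from the tweet is that the loop never tries to pre-architect everything: it only reacts to what the eval actually reports, and the `patience` threshold encodes the "step in or let it run" call the author describes.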
Maxime Batandeo
Maxime Batandeo@BATANDEOM·
@thsottiaux I think it is because of my love letter. I was about to go back to Cursor when I saw my rate limit reset this morning. To be honest, I think I got a stroke of luck. x.com/batandeom/stat…
Maxime Batandeo@BATANDEOM

I cheated on @cursor_ai with @OpenAI Codex and I liked it. I am a poor guy hungry for tokens, so when I don’t get what I need from Cursor, I’m sometimes tempted to mess around with OpenCode, Windsurf (they’ve got Opus, which I like, and Gemini, which is useful for frontend). Never Claude Code — it’s high-end shit, really expensive, way out of my league. So when I met Codex, they had me with a free gift offer: double your usage the first month. And it was exactly when I had to ship new stuff. Codex — right place, right time — we hooked up. One day in, I lost my user data after two hours. To be honest, it feels like having an STD while fucking with a rubber — it feels wrong. But since I’m really hungry for tokens and I really have to ship, I gave it another try. And when GPT-5.4 dropped… wow, it was another level of coding agent experience (CAE). I still had Cursor for terminal, browser, and file work, but it’s been 3 days and I feel great with Codex. Luckily for Cursor, my credits refill today, and I’m out of tokens in Codex — I burned through my weekly tokens in 3 intense days. I don’t regret it. Why did I like Codex over Cursor? I think it’s the way they handle chat and messages — almost frictionless. It feels smooth to use. I feel like Cursor uses a lot of resources, and my computer is always at its max to make it work, like it’s doing a Barry’s HIIT session in a marathon.

English
0
0
0
269
Tibo
Tibo@thsottiaux·
Our Codex dashboards are showing an increased rate of users hitting rate limits, and since we don't fully understand why, I have made the cautious decision of resetting the usage limits for all plans. Enjoy. I also wanted to celebrate us finding a pocket of fraudulent accounts that we banned, which has helped us regain some compute. The fight against abuse never stops, but it's important to mark the moment and make it a little shared victory.
English
1.1K
571
12.1K
1.4M
Maxime Batandeo
Maxime Batandeo@BATANDEOM·
Everyone tells you to iterate fast. They’re right. But they miss the real point: you have to become the kind of person who can iterate fast. That’s the hard part. Speed isn’t a tactic. It’s a byproduct of identity. First, you need the capacity to:
• Make a lot of mistakes
• Learn from them
• Recognize what “great” actually looks like
And in the beginning? You don’t know what great is. So you hesitate. You stay safe. You avoid looking stupid. Because learning in public exposes your weaknesses. And most people can’t handle that. It triggers something deep:
• Defend your ego
• Or disappear to protect it
That’s why “build in public” sounds simple… but feels impossible. Iteration requires vulnerability. And vulnerability takes time to build. So no, you don’t need to ship every day yet. You’re not that person yet. Focus on becoming them. Because once you do: speed becomes natural. Learning compounds. Progress accelerates. Before you become a winner, you spend a long time being a loser. Not passively losing, but learning how to lose well. That’s the difference. That’s the path.
English
0
0
0
44