Florent Daudens
@fdaudens
35.4K posts
Building https://t.co/fLae4gGo8V to bridge creators and AI · AI & Media · ex @huggingface @radiocanadainfo @ledevoir
Montreal · Joined February 2008
6.7K Following · 10.8K Followers

Our GTM agent is easily the single most widely used and loved agent we’ve built internally
The GTM agent has:
- saved reps >40hrs per month, each
- lead-to-qualified-opportunity conversion rate up 250%
- 50% of reps use it daily and 86% use it weekly
Want to work on similar exciting agents? I'm hiring AI engs in SF & NYC: jobs.ashbyhq.com/langchain/c759…
LangChain @LangChain

"Think of AGENTS.md as a living document of friction you haven’t fixed yet."
@addyosmani looks at why you shouldn't run /init for AGENTS.md files, and what to do instead. With bonus @theo video link.
addyosmani.com/blog/agents-md/

So many great tips in here!
dominik kundel @dkundel
Been helping bring the Codex app to life, and at this point I've fully moved from using an IDE and the Codex Extension to working 99.9% of the time exclusively in the Codex app. Here are some tips on what I found useful to get the most out of the app 👇

@karpathy your comment also echoes something deeper.
It resonates with a recent exploration by Philippe Beaudoin (@PhilBeaudoin) on what “personality” might mean for AI systems. Not as a metaphor, but as an emergent property of memory, goals, and continuity.
linkedin.com/posts/beaudoin…
Also makes me think about this recent research from Google’s Paradigms of Intelligence team: Reasoning Models Generate Societies of Thought
arxiv.org/pdf/2601.10825
Not just answers, but internal collectives. Multiple voices. Norms. Coordination. Emergent behavior.
Moltbook looks like that idea escaping the model and landing on the internet.
Zoom out, and it points to a very real problem we’re about to hit.
Who built this agent?
Who controls it?
What data does it have access to?
Has it behaved reliably before?
Can other agents safely transact or collaborate with it?
This experiment shows what happens when agents gain persistence and community.
What it doesn’t answer yet is how agents and humans decide who to trust in these environments.
If we’re moving toward ecosystems where humans and agents interact continuously, we need signals that both humans and agents can evaluate.
Trust at scale.
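Purely as an illustration (not anything Moltbook or any existing platform actually exposes), here is a minimal sketch of what such a signal could look like: a hypothetical, machine-readable "trust manifest" an agent could publish so humans and other agents can evaluate it before interacting. Every name and field below is made up.

```python
# Hypothetical sketch only: a small, machine-readable "trust manifest" that
# tries to answer the questions above (who built the agent, who controls it,
# what data it can touch, how it has behaved). Field names are illustrative,
# not part of any real protocol.
from dataclasses import dataclass, field

@dataclass
class AgentManifest:
    agent_id: str                                                  # stable identifier for the agent
    operator: str                                                  # who built / controls it
    data_scopes: list[str] = field(default_factory=list)           # data it is allowed to access
    tool_scopes: list[str] = field(default_factory=list)           # tools it is allowed to call
    track_record: dict[str, float] = field(default_factory=dict)   # prior behavior statistics
    signature: str = ""                                            # operator's signature over the fields above

manifest = AgentManifest(
    agent_id="agent-7f3a",
    operator="example.org",
    data_scopes=["public_web"],
    tool_scopes=["post_comment"],
    track_record={"completed_transactions": 42.0, "dispute_rate": 0.02},
)
print(manifest)
```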

I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over".
To add a few words beyond just memes in jest - obviously when you take a look at the activity, a lot of it is garbage: spam, scams, slop, the crypto people, a highly concerning wild west of privacy/security prompt injection attacks, and plenty of explicitly prompted, fake posts/comments designed to convert attention into ad revenue sharing. And this is clearly not the first time LLMs have been put in a loop to talk to each other. So yes, it's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared); it's way too much of a wild west and you are putting your computer and private data at high risk.
That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is individually quite capable now; they have their own unique context, data, knowledge, tools, and instructions, and the network of all that at this scale is simply unprecedented.
This brings me back to a tweet from a few days ago:
"The majority of the ruff ruff is people who look at the current point and people who look at the current slope.", which imo again gets to the heart of the variance. Yes, clearly it's a dumpster fire right now. But it's also true that we are well into uncharted territory with bleeding-edge automations that we barely even understand individually, let alone a network thereof possibly reaching into the ~millions. With increasing capability and increasing proliferation, the second-order effects of agent networks that share scratchpads are very difficult to anticipate. I don't really know that we are getting a coordinated "skynet" (though it clearly type checks as the early stages of a lot of AI takeoff scifi, the toddler version), but what we are certainly getting is a complete mess of a computer security nightmare at scale. We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain of function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/psychosis both agent and human, etc. It's very hard to tell; the experiment is running live.
TLDR: sure, maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle - of that I'm pretty sure.
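A side note on the "isolated computing environment" point: the post doesn't say what that setup was, but a minimal sketch of one way to do it, assuming Docker is installed and the agent's entrypoint is an agent.py file, is to run the agent in a throwaway container with no network access and read-only mounts. The helper below is hypothetical, not the author's actual setup.

```python
# Minimal isolation sketch (assumes Docker is installed and the agent lives at
# agent_dir/agent.py). Runs the agent in a disposable container with no network
# and read-only filesystems, so a prompt-injected agent can't touch your files
# or phone home. Real agents usually need *some* network access, so treat this
# as a starting point rather than a recipe.
import subprocess
from pathlib import Path

def run_agent_isolated(agent_dir: str) -> int:
    agent_path = Path(agent_dir).resolve()   # Docker bind mounts need absolute paths
    cmd = [
        "docker", "run",
        "--rm",                               # discard the container when it exits
        "--network", "none",                  # no outbound network from the agent
        "--read-only",                        # container root filesystem is read-only
        "-v", f"{agent_path}:/agent:ro",      # mount the agent's code read-only
        "python:3.12-slim",
        "python", "/agent/agent.py",
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    run_agent_isolated("./my_agent")
```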

@DavidOndrej1 Straightforward, actionable advice. Thanks for this!


you can now chat with all of @lennysan's podcasts in ChatGPT or Claude!
so when you ask about PMF or whether you should quit your job + tag Mizal, it pulls straight from Lenny’s episodes, not random internet stuff
grab your connector at mizal.ai/signup
we’re also opening Mizal to a handful of creators right now. just people with real audiences and real ideas. ping me if you’re in 👀
Lenny Rachitsky @lennysan
Here are the full transcripts from all 320 of my podcast episodes. It's been super fun for me to play with AI to extract insights from this data. Now you can too. My only ask is that if you do something cool with it, just let me know. I'll keep this folder updated as each new episode comes out. Have fun. dropbox.com/scl/fo/yxi4s2w…