Philippe Hebrard
@philippehebrard

Head of Product @Ledger | prev. co-founder @ana_health (acquired)

Paris, France · Joined July 2009
907 Following · 168 Followers
98 posts
Philippe Hebrard@philippehebrard·
@nkhandel The entanglement gets worse when AI agents are in the loop. They don't know which decisions were meant to be temporary, so they reinforce them. What was reversible becomes load-bearing before anyone notices. That's what we built as a solution at @uselockai
Nilesh Khandelwal@nkhandel·
The most dangerous product bets aren't the ones that fail. They're the ones that succeed, or mildly succeed, and entangle you in the wrong direction.

One of the more interesting patterns in AI right now is the push toward "instant checkout" inside AI experiences. On the surface, it looks obvious:
- reduce friction
- increase conversion
- capture transaction value

But structurally, this creates tension. Most large retailers today aren't just commerce platforms. They're retail media businesses. Their highest-margin revenue doesn't come from checkout. It comes from:
- ads
- placement
- controlled discovery

A third-party "frictionless checkout" layer works against that model. This is where the Entanglement Trap shows up.

To build something like this, you don't just ship a feature. You:
- build partnerships
- align incentives
- integrate workflows
- create expectations across teams and partners

It becomes a company-level commitment, not a product experiment.

The real risk isn't just failure. If it fails, you shut it down. If it succeeds, you create:
- partner conflict
- misaligned incentives
- structural dependency

You've built something that's hard to unwind. And in AI, time compounds faster. In most industries, six months is recoverable. In AI, six months is a full cycle:
- models evolve
- distribution shifts
- competitors reposition

The cost of servicing the wrong bet isn't just wasted effort. It's lost momentum.

The takeaway: the Entanglement Trap isn't about avoiding bold bets. It's about recognizing when a "feature" is actually a company decision in disguise. And asking:
- Who gets entangled if this works?
- What incentives are we changing?
- Can we unwind this if we're wrong?

Because in fast-moving systems, the cost of the wrong bet isn't failure. It's the time you spend paying interest on it.

If you missed my post on how to spot these "vaults" before they lock your roadmap, see below 👇
Philippe Hebrard@philippehebrard·
@brianwut Exactly this. And the implication is underrated: if decisions are the source of truth and code is just their projection, then tracking decisions is more fundamental than tracking code. That's what we built @uselockai to be: the infrastructure layer that sits beneath the code.
Brian 🔰@brianwut·
code is a materialized view of the decision log
Philippe Hebrard@philippehebrard·
Context debt is the right name for it. The thing that makes it worse: most of the "why" behind decisions lives in Slack threads, in people's heads, and in closed AI sessions, nowhere the next developer (or AI) can find it. We built @uselockai to catch it at the source: record the decision the moment it's made, detect conflicts automatically, and let agents query the log before they ship.
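The capture-at-source idea can be sketched as a toy decision log: record each decision as it is made, surface conflicts at write time, and let an agent query the latest decision (and its "why") before shipping. Everything here — the class name, the fields, the conflict rule — is illustrative, not @uselockai's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionLog:
    """Toy append-only log of product decisions (illustrative only)."""
    entries: list = field(default_factory=list)

    def record(self, topic, decision, why):
        # Detect a conflict at write time: same topic, different decision.
        conflict = next((e for e in self.entries
                         if e["topic"] == topic and e["decision"] != decision), None)
        self.entries.append({"topic": topic, "decision": decision, "why": why})
        return conflict  # surfaced immediately, not six weeks later

    def query(self, topic):
        # What an agent would ask before it ships: latest decision plus the why.
        matches = [e for e in self.entries if e["topic"] == topic]
        return matches[-1] if matches else None

log = DecisionLog()
log.record("auth", "use sessions", "simpler to revoke")
clash = log.record("auth", "use JWTs", "stateless scaling")  # flags the earlier call
print(clash["decision"])          # → use sessions
print(log.query("auth")["why"])   # → stateless scaling
```

The point of the sketch: conflict detection happens at write time, so the contradiction is visible to whoever records the second decision, rather than being discovered later by the next developer or agent.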
Machina@EXM7777·
AI made writing code 10x faster, but it also made understanding your own codebase 10x harder. Every AI-generated pull request adds code nobody fully understands.

I call this context debt: it works like technical debt, but for knowledge, and it compounds quietly until something breaks in production and nobody knows where to look.

Your monitoring team checks dashboards, support checks tickets, QA says tests passed, engineering says nothing changed. You have four teams staring at the same problem through four different screens, yet nobody has the full picture. So someone spends 4 hours digging through logs, Slack messages, and old tickets trying to piece together what happened.

Most engineers spend 80% of their time doing this. They're not busy building, they're busy recovering context.

VCs poured billions into generating code faster, but the actual cost center was always comprehending it. And the acceleration of AI made that gap even worse: more code per day means more code nobody fully understands per day.

The fix is a context graph: a living record of every bug, fix, and decision your team has ever made, like a shared memory that compounds instead of walking out the door when someone quits. Company knowledge is your business's number one asset; you want to treat it right.
Animesh Koratana@akoratana

Introducing: PlayerZero. The world's first Engineering World Model that puts debugging, fixing, and testing your code on autopilot.

We've raised $20M from Foundation Capital, @matei_zaharia (Databricks), @pbailis (Workday), @rauchg (Vercel), @zoink (Figma), @drewhouston (Dropbox), and more.

PlayerZero frees up 30% of your engineering bandwidth by:
1. Finding the root cause of bugs & incidents in minutes that engineering teams take days to identify.
2. Predicting, in minutes, edge-case issues that a 300-person QA team would take weeks to find.

Here's why this matters: no one in your org has a complete picture of how your production software actually behaves. Support sees tickets. SRE sees infra. Dev sees code. Each team builds its own fragmented view, and none of these systems talk to each other. When something breaks, everyone scrambles to stitch the picture together by hand.

PlayerZero connects all of it into a single context graph:
→ The Slack thread where your lead said "we went with X because Y fell apart in prod last time"
→ The PR review where an engineer explained the tradeoff
→ The lifetime history of your CI/CD pipeline, observability stack, incidents, and support tickets

So you can trace any problem to its root cause across every silo.

And it compounds. Every incident diagnosed teaches the model something new. The longer it runs, the deeper it understands: which code paths are high-risk, which configurations are fragile, which changes tend to break which customer flows. So when you sit down to debug a live issue, you have your entire org's collective reasoning and production memory behind you, instantly.

Zuora, Georgia-Pacific, and Nylas have reduced resolution time by 90%, caught 95% of breaking changes, and freed an average of $30M in engineering bandwidth.

Our guarantee: if we can't increase your engineering bandwidth by at least 20% within one week, we'll donate $10,000 to an open-source project of your choice.
Book a demo - bit.ly/3NlLMeN

Philippe Hebrard@philippehebrard·
The logging problem is real, but logging tells you what. The gap is why. And "why" mostly lives outside the agent, in the decisions the team made before the agent ever ran. Architecture choices, scope constraints, tradeoffs decided in a Slack thread six weeks ago. We built @uselockai to capture that layer: product decisions recorded where they happen, queryable by agents at runtime via CLI/MCP. Debug by checking what context the agent had access to, not just what it did.
Dinesh@dinesh_rwp·
Debugging AI agents is way harder than I expected. You can log everything and still not understand why the agent made a decision. Feels like we’re missing basic debugging tools here. #AI #Agents #LLM #AIEngineering #GenAI
Philippe Hebrard@philippehebrard·
100%, and the audit trail is the hardest one to retrofit. By the time you need it, the decisions that explain the code are already gone. We built @uselockai to catch them at the source — record decisions where they happen (Slack, terminal, agent sessions), so the audit trail builds itself. When Devin ships something, the WHY is already on record.
swyx@swyx·
example of the kind of Details that matter - sweating the enterprise needs to safely deploy agents in ways that don't make compliance and IT officers break out in cold sweats at night. Twitter may be happy with --dangerously-skip-permissions but let's get real here about what's needed to deploy this stuff across tens of thousands of engineers per org
swyx@swyx·
Reupping the @devinai explainer now that everyone is suddenly loving kloud koding because @ryancarson said so (btw devin usage has grown >50% MoM every month this year, it has shocked even scott)
swyx@swyx

@cognition new post on joining Cognition at its $10b Series C: The Devin is in the Details swyx.io/cognition

Philippe Hebrard@philippehebrard·
Decisions and knowledge are going to be even more important as humans and agents make more of them. I know @EntireHQ ties context to the commit, but we've been building @uselockai as a central place for all decision logs for product-building decisions (especially for large teams).
volarian@W44TA·
@karpathy forking the architecture is easy. the real unlock is that agentic orgs accumulate legible decision logs — every choice inspectable, every calibration transferable. human orgs had institutional knowledge; agentic orgs will have institutional memory you can actually diff.
Andrej Karpathy@karpathy·
Expectation: the age of the IDE is over Reality: we’re going to need a bigger IDE (imo). It just looks very different because humans now move upwards and program at a higher level - the basic unit of interest is not one file but one agent. It’s still programming.
Andrej Karpathy@karpathy

@nummanali tmux grids are awesome, but i feel a need to have a proper "agent command center" IDE for teams of them, which I could maximize per monitor. E.g. I want to see/hide toggle them, see if any are idle, pop open related tools (e.g. terminal), stats (usage), etc.

Philippe Hebrard@philippehebrard·
We've seen it these past few months: software is becoming cheaper and cheaper to produce. Two things emerge:
- Attacks become largely autonomous and smart. Fake is becoming the norm.
- Agents are moving to the forefront of human action, with interactions slowly shifting from human-to-agent to agent-to-agent.

Securing this new world means building the right infrastructure of trust. Have a read of the post below on how the agent economy will run on @Ledger.
Pascal Gauthier @Ledger@_pgauthier

Software is no longer "eating the world." AI is eating software. The old moats are collapsing overnight.

We are entering the Economy of Action, where the primary actors aren't humans clicking buttons, but autonomous AI agents transacting in milliseconds.

But there is a massive problem: software cannot secure an AI-driven world. Software security is probabilistic. In a war of code vs. code, the attacker only needs to be right once. AI has turned "good enough" into "extinct."

At @Ledger, we've spent 12 years preparing for this collision of Blockchain and AI. We are moving the "root of trust" out of the reach of code and into the Physics of Trust. By anchoring identity and governance in immutable silicon (Secure Elements logically isolated from the OS), we create a physical barrier that AI simply cannot cross.

Ledger is the weapon for the Agentic Economy.

Philippe Hebrard@philippehebrard·
@bencera Makes sense. You only need a simple supervisor that could act as a group CEO: high-level resource consumption, revenue, margin. And you can hand each CEO guidelines, as a board would. Just high-level figures.
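The supervisor idea fits in a few lines: a top-level loop that sees only each CEO agent's high-level figures and flags whoever drifts outside a board-style guideline. A toy sketch; the names, report shape, and the 20% margin threshold are all made up for illustration.

```python
def margin(report):
    """Margin from the only two figures the supervisor sees: revenue and spend."""
    return (report["revenue"] - report["spend"]) / report["revenue"]

def supervise(reports, min_margin=0.2):
    """Return board-style directives: which CEO agents are off-guideline."""
    directives = {}
    for name, report in reports.items():
        m = margin(report)
        directives[name] = "ok" if m >= min_margin else f"margin {m:.0%} below target"
    return directives

# Hypothetical child-agent reports, aggregated upward.
reports = {
    "ceo_alpha": {"revenue": 100.0, "spend": 70.0},   # 30% margin
    "ceo_beta":  {"revenue": 100.0, "spend": 95.0},   # 5% margin
}
print(supervise(reports))
```

The design choice the tweet points at: the supervisor never reads the child agents' transcripts, only their summary figures, which is what keeps it from becoming overwhelming.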
Ben Cera@Bencera·
@philippehebrard I’m the supervisor. Although it’s getting overwhelming so i’m having another instance summarize at a higher level
Ben Cera@Bencera·
Day 11 of letting my AI run my fundraise. 90 investors. 279 emails. 18 want in. My AI replied to every single one. ben at polsia dot com if you're interested :)
Philippe Hebrard@philippehebrard·
@doodlestein Did you measure any drift? Do you see any lower quality being delivered on long runs vs. human-in-the-loop semi-long runs?
Jeffrey Emanuel@doodlestein·
I've seen a lot of skepticism recently from people about the claims around how long agents like Claude Code w/ Opus 4.6 can go autonomously without some kind of automated loop feeding them new instructions. People think it's BS because they haven't observed it. It's real. 17+hrs
Philippe Hebrard@philippehebrard·
@arscontexta That's the biggest challenge. Today we still have employees with agents that leverage different pieces of content. Some have blind spots that others don't, because they don't know about the right source. Visualising it as a context graph is helpful
Philippe Hebrard@philippehebrard·
3/ Earn yield via Kiln on Morpho, Aave. Put your treasury to work through Morpho or Aave vaults, with Ledger Clear Signing at the center. What we’re building at @Ledger : security that scales with organizations—verifiable intent, strong governance, and execution you can trust. multisig.ledger.com
Philippe Hebrard@philippehebrard·
2/ Nested Safes. Enterprise governance needs hierarchy. Nested Safes let you manage parent → child Safe structures to model real controls, especially for smart contract administration and high-risk treasury ops.
Philippe Hebrard@philippehebrard·
Multisig security isn’t just “use a Safe.” It’s: can real teams govern assets + contracts without blind approvals and brittle processes? We just shipped 3 upgrades to Ledger Multisig that push this forward. 🧵
Philippe Hebrard@philippehebrard·
@iancr @gm4thi4s @circle @coinbase More than a great hackathon project. Agentic commerce is coming fast. Agents will negotiate, pay, and transact on our behalf. But the trust layer is missing.
Philippe Hebrard@philippehebrard·
We integrated the x402 protocol from @coinbase: HTTP 402 Payment Required, but for AI agents. Agent hits a paid API → gets a 402 → creates a USDC payment intent → human signs an EIP-3009 TransferWithAuthorization on their Ledger → agent retries with the payment proof. Pay-per-call APIs, settled onchain with @circle, approved by a human.
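The request → 402 → sign → retry loop can be sketched end to end. This is a hedged toy simulation of the handshake, not the real x402 wire format: the function names, payload shapes, and placeholder signature are all illustrative, and `human_sign` stands in for the human approving an EIP-3009 TransferWithAuthorization on the hardware wallet.

```python
def paid_api(request):
    """A pay-per-call endpoint: returns 402 with payment terms until proof is attached."""
    if "payment_proof" not in request.get("headers", {}):
        return {"status": 402, "accepts": {"asset": "USDC", "amount": "0.10"}}
    return {"status": 200, "body": "result"}

def create_payment_intent(terms):
    """Agent turns the 402 terms into an unsigned USDC payment intent."""
    return {"asset": terms["asset"], "amount": terms["amount"], "signature": None}

def human_sign(intent):
    """Stand-in for the human signing on a Ledger; attaches a placeholder signature."""
    return {**intent, "signature": "0xSIGNED"}

def agent_call(request):
    """Full loop: call, catch the 402, get a human-approved payment, retry with proof."""
    response = paid_api(request)
    if response["status"] == 402:
        intent = create_payment_intent(response["accepts"])
        signed = human_sign(intent)  # human-in-the-loop: agent never holds the keys
        request = {**request, "headers": {"payment_proof": signed}}
        response = paid_api(request)  # retry with the payment proof attached
    return response

print(agent_call({"headers": {}})["status"])  # → 200 after exactly one retry
```

Note the split the tweet describes: the agent drives the loop, but the signing step sits outside it, so the wallet keys never reach the agent.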
Philippe Hebrard@philippehebrard·
Amazing building this with @iancr and @gm4thi4s during the @circle USDC hackathon. We tackled a problem that's only getting bigger: agents are spending money onchain, but how do you give an AI a wallet without giving it the keys to your savings? A thread on what we built and why it matters. 🧵
Ledger@Ledger

An AI agent with a wallet sounds powerful, right? Until it over-optimizes, hits limits, and starts failing in public. New Ledger Podcast: why agentic commerce needs human-in-the-loop guardrails, secure screens, and keys that never leave the secure element 🔒 Built during the Circle USDC Hackathon by @philippehebrard, @gm4thi4s, @iancr and @claudeai Watch the full episode 👇
