Valentin

88 posts

@VLemort

🏴‍☠️ – building mcpresso (turn your API into a compliant MCP server ready to deploy) and granular software (sandboxes for creating safe and powerful AI agents)

Paris, France · Joined January 2012
176 Following · 108 Followers
Pinned Tweet
Valentin@VLemort·
AI agents are not workflows or chatbots. They should improve over time, produce good surprises, and never break anything. We are building sandboxes where agents can act, learn, and collaborate with humans, but only within limits defined by the developer. granular.software
Valentin@VLemort·
@paulg The problem isn’t OpenClaw. The problem is giving probabilistic systems tools and letting them act in the real world. Noise is inevitable.
Paul Graham@paulg·
I got a pointless email from someone. When I asked why he'd emailed me, he apologized and said that OpenClaw had sent it. That's a first. Wish it was the last, but it will presumably only become more common. Who knows how many other pointless emails I've already gotten this way?
Valentin@VLemort·
Less is more. Never been more true. When producing 1,000 words takes 30 seconds, the scarce thing is not content. It’s clarity. Content abundance creates attention scarcity.
Valentin@VLemort·
‘SaaS is dead’ is an intellectually lazy take. Historically, very few models die. They mutate. What’s under pressure isn’t SaaS, but SaaS that sells potential instead of outcomes. Declaring a death is easy. Building what comes next is the work.
Benjamin De Kraker@BenjaminDEKR·
What is the moat of *any* software company right now?
Valentin@VLemort·
@levie Great piece. One addition: Context only compounds if agents can build on previous work. Most reset between sessions. granular.software gives agents runtime environments where their work persists and builds on itself.
Valentin@VLemort·
@StBelkins @ivanburazin The hard part is the safely part. And “OS” here doesn’t mean a classical, human-designed OS, but one designed for AI. Giving LLMs access to regular computers hasn’t produced truly autonomous agents, because human OSes lack the right primitives.
Vlad Podoliako@StBelkins·
@ivanburazin Sandboxes are becoming the real substrate, once agents can safely touch the OS, everything else becomes an interface detail.
Valentin@VLemort·
@zby Spent most of 2025 on this. The issue is that sandboxing solves containment, not agency. The real primitive feels closer to a runtime and a programmable environment for agents.
Valentin@VLemort·
@lukestanley I agree. Prompt chaining and agentic harnesses aren’t the missing piece. Even with MCP, adaptability doesn’t magically emerge. You need a programmable runtime environment the agent can reason about and act within. That’s the layer I’m working on. Still early, but promising.
Hiten Shah@hnshah·
Every startup eventually hits the same wall. The company can only move at the speed the founder is willing to make irreversible decisions. AI exposes this because everything else can move instantly.
Valentin@VLemort·
@Austen Fully agree: when average quality collapses, anything genuinely good becomes disproportionately valuable.
Austen Allred@Austen·
So many are predicting that AI will replace engineers. And it is doing a lot of work for them. But all I’m seeing is great engineers become more and more valuable.
Valentin@VLemort·
If OpenAI announces something like that, the market will probably hit Google and Meta’s stock immediately, which would force them to shift focus whether they want to or not. In reacting, they might create more openings.
Valentin@VLemort·
@OpenAI why chase model leaderboards when you could drop the first AI-native ad platform? Integrated inside ChatGPT. Priced per sale, not per click. A product Google and Meta can't copy without breaking their revenue model. One move. Full market shift.
Valentin@VLemort·
@kwharrison13 ‘Make no money’ is meaningless. Amazon didn’t ‘make money’ for a decade; neither has OpenAI. What mattered was cash-flow growth and market capture. Profit isn’t the lens you use for venture.
Kyle Harrison@kwharrison13·
Venture investing in a curve:
Year 1-5:
> “I will make no money.”
> You understand the game
Year 6-8:
> “I will make no money.”
> You’ve made contrarian bets that take longer to materialize
Year 9-10:
> “I will make no money.”
> You’re bad at this job and should stop.
Valentin@VLemort·
@ThomasSowell Friedman is right. But today the issue is different. Most Western politics is purely reactive, and in that environment measuring “results” barely means anything.
Thomas Sowell Quotes@ThomasSowell·
“One of the great mistakes is to judge policies and programs by their intentions rather than their results.” — Milton Friedman
Valentin@VLemort·
@kevg1412 If you’d posted that same line as “Bob Doe”, it would have done 10 views. Same idea, same quality, zero reach. We don’t reward insight, we reward signatures, and it’s absurd.
Kevin Gee@kevg1412·
Larry Page: Even if you fail at your ambitious thing, it's very hard to fail completely. That's the thing that people don't get.
Valentin@VLemort·
@paulg @mattyglesias Agree, a large part of today’s ‘AI revenues’ is still investor money going in circles: VC → LLM APIs → burned compute → customers paying only a fraction of the real cost.
Paul Graham@paulg·
@mattyglesias The AI boom is definitely real, but this may not be the best example to prove it. A lot of that increase in revenue has come directly from the pockets of investors. paulgraham.com/bubble.html
elvis@omarsar0·
Google just published a banger guide on effective context engineering for multi-agent systems. Pay attention to this one, AI devs! (bookmark it)

Here are my key takeaways:

Context windows aren't the bottleneck. Context engineering is. For more complex and long-horizon problems, context management cannot be treated as a simple "string manipulation" problem.

The default approach to handling context in agent systems today remains stuffing everything into the prompt. More history, more tokens, more confusion. Most teams treat context as a string concatenation problem. But raw context dumps create three critical failures:
> cost explosion from repetitive information
> performance degradation from "lost in the middle" effects
> increase in hallucination rates when agents misattribute actions across a system

Context management becomes an architectural concern alongside storage and compute. This means that explicit transformations replace ad-hoc string concatenation. Agents receive the minimum required context by default and explicitly request additional information via tools.

It seems that Google's Agent Development Kit is really thinking deeply about context management. It introduces a tiered architecture that treats context as "a compiled view over a stateful system" rather than a prompt-stuffing activity. What does this look like?

1) Structure: The Tiered Model

The framework separates storage from presentation across four distinct layers:
1) Working Context handles ephemeral per-invocation views.
2) Session maintains the durable event log, capturing every message, tool call, and control signal.
3) Memory provides searchable, long-lived knowledge outliving single sessions.
4) Artifacts manage large binary data through versioned references rather than inline embedding.

How does context compilation actually work? It works through ordered LLM Flows with explicit processors.
A contents processor performs three operations: selection filters irrelevant events, transformation flattens events into properly-roled Content objects, and injection writes formatted history into the LLM request. The contents processor is essentially the bridge between a session and the working context.

The architecture implements prefix caching by dividing context into stable prefixes (instructions, identity, summaries) and variable suffixes (latest turns, tool outputs). On top of that, a static_instruction primitive guarantees immutability for system prompts, preserving cache validity across invocations.

2) Agentic Management of What Matters Now

Once you figure out the structure, the core challenge becomes relevance. You need to figure out what belongs in the active window right now. ADK answers this through collaboration between human-defined architecture and agentic decision-making. Engineers define where data lives and how it's summarized. Agents decide dynamically when to "reach" for specific memory blocks or artifacts.

For large payloads, ADK applies a handle pattern. A 5MB CSV or massive JSON response lives in artifact storage, not the prompt. Agents see only lightweight references by default. When raw data is needed, they call LoadArtifactsTool for temporary expansion. Once the task completes, the artifact offloads. This turns a permanent context tax into precise, on-demand access.

For long-term knowledge, the MemoryService provides two retrieval patterns:
1) Reactive recall: agents recognize knowledge gaps and explicitly search the corpus.
2) Proactive recall: pre-processors run similarity search on user input, injecting relevant snippets before model invocation.

Agents recall exactly the snippets needed for the current step rather than carrying every conversation they've ever had.

All of this reminds me of the tiered approach to Claude Skills, which does improve the efficient use of context in Claude Code.
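The select → transform → inject pipeline described above can be sketched in a few lines of plain Python. This is a minimal illustration of the idea, not ADK's real API: `Event`, `Session`, and `compile_working_context` are all invented names, and a real contents processor does far more than slice a list.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One entry in the durable session log."""
    role: str          # "user", "assistant", or "tool"
    text: str
    relevant: bool = True

@dataclass
class Session:
    """Durable event log: every message, tool call, and control signal."""
    events: list = field(default_factory=list)

def compile_working_context(session, instructions, max_events=4):
    """Compile an ephemeral per-invocation view from the durable log.

    Selection drops irrelevant events, transformation flattens the rest
    into role-tagged messages, and injection prepends the stable
    instruction prefix (cache-friendly) before the variable suffix of
    recent turns.
    """
    selected = [e for e in session.events if e.relevant][-max_events:]
    prefix = [{"role": "system", "content": instructions}]             # stable prefix
    suffix = [{"role": e.role, "content": e.text} for e in selected]   # variable suffix
    return prefix + suffix

s = Session()
s.events += [
    Event("user", "hi"),
    Event("assistant", "noise", relevant=False),   # filtered out by selection
    Event("user", "summarize the report"),
]
ctx = compile_working_context(s, "Be concise.")
```

Keeping the system prefix byte-identical across invocations is what makes the stable/variable split cache-friendly; only the suffix changes turn to turn.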
3) Multi-agent Context

Single-agent systems suffer from context bloat. When building multi-agent systems, this problem amplifies further, which easily leads to "context explosion" as you incorporate more sub-agents.

For multi-agent coordination to work effectively, ADK provides two patterns. Agents-as-tools treats specialized agents as callables receiving focused prompts without an ancestral history. Agent Transfer enables full control handoffs where sub-agents inherit session views. The include_contents parameter controls context flow, defaulting to full working context or providing only the new prompt.

What prevents hallucination during agent handoffs? The solution is conversation translation. Prior Assistant messages convert to narrative context with attribution tags. Tool calls from other agents are explicitly marked. Each agent assumes the Assistant role without misattributing the broader system's history to itself.

Lastly, you don't need to use Google ADK to apply these insights. I think these could apply across the board when building multi-agent systems.

(image courtesy of nano banana pro)
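The conversation-translation idea above can be sketched generically: before a handoff, another agent's assistant turns are rewritten as attributed narrative so the receiving agent cannot claim them as its own. `translate_for_handoff` and the message shape are assumptions for illustration, not ADK's actual implementation.

```python
def translate_for_handoff(history, receiving_agent):
    """Rewrite prior turns for an agent handoff.

    Assistant messages produced by *other* agents are demoted to
    narrative context with an attribution tag, so the receiving agent
    does not misattribute the broader system's history to itself.
    """
    translated = []
    for msg in history:
        if msg["role"] == "assistant" and msg["agent"] != receiving_agent:
            translated.append({
                "role": "user",  # demoted to narrative context
                "content": f"[{msg['agent']}] said: {msg['content']}",
            })
        else:
            translated.append({"role": msg["role"], "content": msg["content"]})
    return translated

history = [
    {"role": "user", "agent": "user", "content": "book a flight"},
    {"role": "assistant", "agent": "planner", "content": "Found three options."},
]
handoff = translate_for_handoff(history, "booker")
```

After translation, only messages the receiving agent itself produced would keep the assistant role; everything else reads as reported context.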
Valentin@VLemort·
@elonmusk True only if people actually have the tools, the time, and the desire to think. Most of the time, they just react emotionally and call it an opinion.
Elon Musk@elonmusk·
Freedom of speech is the bedrock of democracy. The only way to know what you are voting for.
Valentin@VLemort·
@levie Oracle is a good example. Many newer tech companies tried to replace them. They’re still here.
Valentin@VLemort·
@levie You’re right for builders. In companies, the model will change 10 times before anything actually gets replaced.
Aaron Levie@levie·
Building AI agents right now is a process of:
1. Build scaffolding to address a limitation of the AI model so your agent works
2. The AI model gets upgraded and solves the very problem you were trying to mitigate, rendering your scaffolding obsolete
3. Identify a new, harder use case you want to solve and go back to step 1
There’s basically no way to avoid this process, because if you don’t mitigate the model’s limitations then you’re dead on arrival, *and* you don’t know which of your mitigations will be surpassed and when. So the reality is that you just have to accept that you are going to be writing a lot of throwaway code for the next few years, and you have to be very unsentimental about the path you’ve pursued. Just do whatever it takes to make the agents work.
Erik Meijer@headinthebox

The bitter lesson of building LLM apps: models are getting smarter faster than you can hack around their current limitations.
