Edouard Godfrey
@EdouardGodfrey

6 posts
Building Fintool. Ex-Apple (9 yrs). Harvard CS, École Polytechnique.

San Francisco · Joined November 2012
26 Following · 135 Followers
Edouard Godfrey @EdouardGodfrey
@embeddingshapes Fair point. "Local" means the harness, not the LLM. But you could swap in a local model and it would work the same.
embedding-shapes @embeddingshapes
@EdouardGodfrey But wait, the diagram says "Claude", and it later references an MCP package from/for Anthropic/Claude Code. Is this ultimately just running CC in that VM? I'm not sure this is "local" as typically understood. Are you calling it local because the agent harness doesn't run "in the cloud"?
U.H @0xUniHorse
@EdouardGodfrey Hi Edouard, thanks for sharing the experience and insights.
Soumya Sharma @SomTheBuilder
For people who work on Windows (yes, we exist), the simplest way to talk to agents is to create a Google Drive/OneDrive account and just drop files there. The agent can also write files in the same folder. Local VMs are too hard and WSL gives too much access; a VPS is the only option.
Nicolas Bustamante @nicbstme
@EdouardGodfrey is @fintool’s CTO, and we’ve been discussing why local AI agents on your computer might win in the future. Keep in mind that Edouard spent 9 years at @Apple, so he knows a lot about AI running locally. He built a prototype similar to Claude Cowork: a sandboxed AI agent with browser automation, where the agent runs in a Lima VM and controls Chrome via Playwright MCP. His take: local agents will win. Context wins, and context lives locally.
Edouard Godfrey@EdouardGodfrey

x.com/i/article/2016…

Edouard Godfrey @EdouardGodfrey
@trq212 Love it! We've been wrestling with how to explain the distinction between commands and skills to customers. "Save a prompt to reuse" is intuitive; it's a good way to introduce them to the magic of skills.
Edouard Godfrey @EdouardGodfrey
Our skills live in S3. The system prompt tells the LLM what skills exist and when to use them. When it decides to invoke one, we fetch it and paste it into the context window. That's the whole trick - just text injected into the LLM's short-term memory. Nothing written to disk, nothing persists. Next conversation starts with a blank slate.
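The catalog-then-inject flow described above can be sketched as follows. This is a minimal illustration of the pattern only: an in-memory dict stands in for the S3 bucket, the message format is illustrative, and the skill name is made up. Only the flow (advertise in the system prompt, fetch on demand, paste into context, persist nothing) follows the tweet.

```python
# Sketch of skill injection into the context window. SKILL_STORE stands in
# for S3; names and message shapes are illustrative assumptions.
SKILL_STORE = {
    "summarize-10k": "When summarizing a 10-K, extract revenue, risks, ...",
}

def system_prompt() -> str:
    """The system prompt only advertises which skills exist and when to use them."""
    catalog = ", ".join(SKILL_STORE)
    return f"You can invoke these skills when relevant: {catalog}."

def inject_skill(messages: list[dict], skill_name: str) -> list[dict]:
    """Fetch the skill text and paste it into the context window.

    Nothing is written to disk; the returned message list is the only place
    the skill lives, so the next conversation starts with a blank slate.
    """
    skill_text = SKILL_STORE[skill_name]  # "fetch from S3"
    return messages + [{"role": "system", "content": skill_text}]

# One conversation: start from the base prompt, inject on demand.
messages = [{"role": "system", "content": system_prompt()}]
messages = inject_skill(messages, "summarize-10k")
```

The design choice here is that a skill is just text in the model's short-term memory, not installed software: unloading is simply not injecting it next time.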
Jamie Quint @jamiequint
@nicbstme How do you handle skill loading/unloading once the agent has decided to use the skill? Do you just load it directly into the context from SQL and never write it to disk? Do you load it onto disk for the session it is used in? Load it for that same user perpetually once used?