Zef Hemel
13.9K posts

Zef Hemel
@zef
Head of AI Engineering at Jimdo. Nice as a person. Formerly of @Mattermost, @OLXGroupTech, @Egnyte, and @Cloud9IDE. Creator of https://t.co/7LfmwZqGEn
Poznań, Poland · Joined March 2007
665 Following · 3.3K Followers
Zef Hemel reposted

April was my last month at Vendr, so I'm on the lookout for a new role!
LinkedIn in the reply 👇 but here is a quick summary:
• 13 yrs in dev (TypeScript/React, Go, AWS, Cloudflare)
• 5 yrs leading teams across the US & Europe — I know how to build high-performing teams and a strong engineering culture
• Currently mostly interested in LLMs — last year I spent a fair amount of time exploring LLM/RAG topics (Assistants API, Vercel AI SDK/LangChain, Turbopuffer/LanceDB)
Open to staff+, engineering manager, or site lead roles — I'm driven by product impact, strong teams, and long-term growth, rather than the exact title.

There’s only one little issue.
You have to write tests. And nobody likes writing tests.
Nobody.
Except AI agents. They just want to impress you, their overlord. That’s all they want. A smile on your face.
zef.plus/agent-feedback…
Zef Hemel reposted

I'm a little upset with @Steve_Yegge right now. I love you, man, but you exited the CHOP train too early youtube.com/watch?v=vl9smw…

Zef Hemel reposted

There is a lot of (justified) excitement about MCP, but just like most function calling approaches, I think it has a severe limitation at the moment, and I am kind of flabbergasted that I seem to be the only one bothered by it.
The problem is that when you define a tool using MCP [0] you can only specify the *inputSchema* but not the *outputSchema* (the same is the case for OpenAI [1], Claude [2], Mistral [3], Gemini [4], ...). And the optional description of a tool is just a "hint" to the model.
That means that when the model needs to decide to call a tool, it can only rely on the structure of the inputs (the description being only a hint). But since it does not know the structure of the output of the call, it cannot reason about the composition of multiple tool calls to solve a complex problem. Instead it needs to take a guess, call some first tool, and then interpret the output before it can reason about how to continue.
The fact that tools are dynamically typed with respect to their results also makes it impossible for other tools, and humans, to reason about them (and that is a lot of tools :-). Imagine I explained MCP to you by only showing the types you send to an MCP server, but none of the types you receive back.
Since tools are, for all intents and purposes, really just fancy REST APIs, tool calling and MCP tools should specify their outputs, just like OpenAPI definitions [5] do.
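A minimal sketch of the asymmetry described above, in Python for brevity. The "inputSchema" field follows the MCP tool shape; the commented-out "outputSchema" and the `call_get_weather` helper are hypothetical, purely for illustration:

```python
# A tool definition as the model sees it: input structure is declared,
# output structure is not.
get_weather = {
    "name": "get_weather",
    "description": "Get the current weather for a city",  # only a hint
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    # What current specs lack (hypothetical field): without this, the
    # model cannot plan how to feed this tool's result into another tool.
    # "outputSchema": {
    #     "type": "object",
    #     "properties": {
    #         "temperature_c": {"type": "number"},
    #         "conditions": {"type": "string"},
    #     },
    # },
}

def call_get_weather(args: dict) -> dict:
    """Stand-in implementation; a real server would do an API call."""
    return {"temperature_c": 21.5, "conditions": "sunny"}

# The result's shape is only discoverable at runtime, by calling the tool
# and inspecting what comes back:
result = call_get_weather({"city": "Poznań"})
print(sorted(result.keys()))  # ['conditions', 'temperature_c']
```

With a declared output schema, the model (or another tool, or a human) could reason about compositions like "pass `temperature_c` into a unit-conversion tool" before making a single call, instead of guess-call-inspect loops.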

This is amazing. I need to get myself some of these vibe coder glasses. youtube.com/watch?v=JeNS1Z…


This is the source of the "big daddy rule". Sometimes Reddit is just the best. reddit.com/r/cursor/comme…

In which I'm vibe coding/CHOPing a YAML patching library using Cursor.ai and Big Daddy youtu.be/hmSsFVPmHKA


Spending some of my Saturday pair-programming with @GeminiApp 's new gemini-2.5-pro model in @cursor_ai. So far so good! Funny though how each model has a different style (e.g. in the comments it generates). Gemini really seems to like putting *** in comments for some reason.



