Olivier Chafik

877 posts

Olivier Chafik
@ochafik

Work @ Anthropic on MCP (views expressed = my own), ex-Google, past contrib. to OpenSCAD & llama.cpp; he/him 🏳️‍🌈 @ochafik.bsky.social @[email protected]

London, UK · Joined June 2009
610 Following · 752 Followers
Olivier Chafik reposted
Claude
Claude@claudeai·
Claude can now build interactive charts and diagrams, directly in the chat. Available today in beta on all plans, including free. Try it out: claude.ai
Olivier Chafik
Olivier Chafik@ochafik·
@bendersej @AnthropicAI @bcherny Hey, can you make sure to put the CSP settings in the resource's _meta? We have a few examples (github.com/search?q=repo%…); if you use the registerAppTool / registerAppResource helpers in TS it should all type-check. And please feel free to file a bug if it still doesn't work 🙏
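A hedged sketch of what attaching CSP settings to a resource's `_meta` might look like. The field names (`connectDomains`, `resourceDomains`, `frameDomains`) come from this thread, but the `ui/csp` key, the helper, and the types below are assumptions for illustration, not the actual SDK API:

```typescript
// Hypothetical sketch: CSP settings nested in an MCP app resource's _meta.
// Field names come from the thread; everything else here is an assumption.
interface CspSettings {
  connectDomains?: string[];  // origins allowed for fetch/XHR (connect-src)
  resourceDomains?: string[]; // origins for images, fonts, etc.
  frameDomains?: string[];    // origins for nested iframes (frame-src)
}

// Build the _meta object the host would read the CSP block from.
// "ui/csp" is a made-up namespaced key, not the real one.
function buildResourceMeta(csp: CspSettings): Record<string, unknown> {
  return { "ui/csp": csp };
}

const meta = buildResourceMeta({
  connectDomains: ["https://api.example.com"],
  frameDomains: ["https://embed.example.com"],
});
```

The idea is that the host app reads these origins from `_meta` and folds them into the Content-Security-Policy it serves the embedded UI; the thread suggests Claude Desktop did not yet honor them at the time.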
Benjamin André-Micolon
Benjamin André-Micolon@bendersej·
@ochafik @AnthropicAI @bcherny Unfortunately this doesn't seem to work either in Claude Desktop, the CSPs are still hardcoded to assets.claude.ai and Claude Desktop does not honor connectDomains and resourceDomains. Thanks again for your help, I'll pause the work there as it's a fundamental limitation
Benjamin André-Micolon
Benjamin André-Micolon@bendersej·
Building an MCP app that needs to embed an iframe. The @AnthropicAI ext-apps SDK defines frameDomains in its schema ("Origins for nested iframes"), but Claude Desktop enforces frame-src 'self' blob: data: regardless. Is this supported yet? @ochafik, @bcherny
Olivier Chafik reposted
Sean Strong
Sean Strong@sean_t_strong·
You can now run apps within Claude.ai, powered by MCP Apps. Analyze data, edit tickets, draft messages, generate diagrams and more with Claude's interactive connectors. Grateful to our team, the open source community, and our partners for this launch 🚀
Olivier Chafik
Olivier Chafik@ochafik·
@ericcurtin17 @ollama @ggml_org @parthsareen I think there's room for a variety of approaches to OSS. I (and my past employer) have been happy to contribute code under a permissive license ((almost) no strings attached ✌️). And I'm thankful that Ollama's codebase is open-source 🤗 (even if I'm not a fan of Go 🤪)
Eric Curtin
Eric Curtin@ericcurtin17·
@ollama @ochafik @ggml_org @parthsareen Not sure tbh... I can run a comparison with the cpp code in llama.cpp next time I grab my laptop... That's not how a community-friendly project should work...
ollama
ollama@ollama·
Ollama v0.8 is here! Now it can stream responses with tool calling! Example of Ollama doing web search:
Olivier Chafik
Olivier Chafik@ochafik·
@ericcurtin17 @ollama @ggml_org Ideally Ollama would use the Jinja support and constrained tool calls from llama.cpp, removing the need for their bespoke templating engine and improving their output quality. Probably would just need to wrap the relevant APIs as C?
Olivier Chafik
Olivier Chafik@ochafik·
@ericcurtin17 @ollama @ggml_org The major differences AFAICT: Ollama's tool calls are not grammar-constrained (many models have high failure rates unless temperature is kept low, see github.com/ggml-org/llama…), and they don't support streaming tool calls in their OpenAI-compatible endpoint (and no streaming of arguments)
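To illustrate why grammar-constrained decoding matters: without it, a client has to validate the model's emitted tool call after the fact and retry on failure, which is exactly where higher temperatures hurt. A minimal sketch (hypothetical helper, not Ollama's or llama.cpp's actual code):

```typescript
// Illustrative only: the fragility of unconstrained tool-call output.
// A grammar-constrained decoder guarantees the emitted tokens form valid
// JSON matching the tool schema; without one, the client must validate
// and retry. The ToolCall shape below is a simplified assumption.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Returns the parsed call, or null if the emission is malformed.
function parseToolCall(raw: string): ToolCall | null {
  try {
    const obj = JSON.parse(raw);
    if (
      typeof obj.name !== "string" ||
      typeof obj.arguments !== "object" ||
      obj.arguments === null
    ) {
      return null; // valid JSON, but not a well-formed tool call
    }
    return obj as ToolCall;
  } catch {
    return null; // malformed JSON: the common failure at higher temperatures
  }
}

const ok = parseToolCall('{"name":"web_search","arguments":{"q":"llama.cpp"}}'); // parsed ToolCall
const bad = parseToolCall('{"name":"web_search","arguments":{"q": "llama'); // null: truncated JSON
```

With grammar constraints the `null` branches become unreachable by construction, which is the quality difference the tweet is pointing at.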
Olivier Chafik
Olivier Chafik@ochafik·
Really slick integration! MCP all the (cool) things!
Vaibhav (VB) Srivastav@reach_vb

You really can just do things! Use *any* Hugging Face space as an MCP server along with your local models! 🔥

Here we use Qwen 3 30B A3B with @ggml_org llama.cpp and @huggingface tiny agents to create images via FLUX powered by ZeroGPU ⚡

It's quite crazy to see local models be capable of so much, and just be able to understand/infer from tool descriptions! There's a lot of potential here in automating video generation workflows, content curation, and a lot more.

Bonus: you can plug in any other Inference Provider if you don't want to run locally!

npx @huggingface/tiny-agents run [TASK]

oh, and we provide both TypeScript and Python clients! 🐐

Olivier Chafik
Olivier Chafik@ochafik·
@profcelsofontes Support for disabling thinking is now available w/ `--reasoning-budget 0` across thinking models: x.com/ochafik/status… (the pending generic mechanism will be useful for other things) And my pleasure!
Olivier Chafik@ochafik

Wanna disable thinking in llama.cpp? Try the new `--reasoning-budget 0` flag github.com/ggml-org/llama… Should work w/ Qwen3, QwQ, DeepSeek R1 distills, Command R7B; please report any issues! (Upcoming per-request behaviour discussed on github.com/ggml-org/llama… @ngxson) #llamacpp

Prof Celso Fontes
Prof Celso Fontes@profcelsofontes·
@ochafik thanks for your explanation and your PR! I hope the PR for enable_thinking = off for Qwen 3 gets merged soon too!
Olivier Chafik
Olivier Chafik@ochafik·
@profcelsofontes Yeah it may sound like overkill, but it allows issuing parallel calls as early as they're returned, plus some more advanced scenarios (e.g. streaming diff arguments for file-patching tools, think Cline / Roo - initially I wanted to integrate with this: github.com/cline/cline/pu…)
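The streamed-arguments point above can be sketched as a small assembler for OpenAI-style tool-call deltas: argument text arrives in fragments, and a client can start acting (e.g. previewing a file diff) before the full call lands. The delta shape here is a simplified assumption, not llama.cpp's exact wire format:

```typescript
// Sketch: assembling streamed tool calls from chunked deltas.
// In OpenAI-style streaming, each chunk carries an index plus fragments of
// the call's name and JSON arguments; clients concatenate them. This shape
// is a simplified assumption for illustration.
interface ToolCallDelta {
  index: number;          // which parallel tool call this fragment belongs to
  name?: string;          // fragment of the tool name, if present
  argumentsDelta?: string; // fragment of the JSON arguments string
}

// Concatenate fragments per call index, preserving arrival order.
function assemble(deltas: ToolCallDelta[]): { name: string; arguments: string }[] {
  const calls: { name: string; arguments: string }[] = [];
  for (const d of deltas) {
    calls[d.index] ??= { name: "", arguments: "" };
    if (d.name) calls[d.index].name += d.name;
    if (d.argumentsDelta) calls[d.index].arguments += d.argumentsDelta;
  }
  return calls;
}
```

Because each call has its own index, a client can dispatch call 0 as soon as its arguments parse, while call 1 is still streaming, which is the early-parallel-dispatch scenario described above.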