

Bryan Berger
@bryanberger
design @netflix | prev @discord | building https://t.co/AKxiiq5Nk5





I have been testing the new Obsidian CLI with Claude Code on my research vault (4,663 files, 16 GB)... I know, too many notes!! Early results are significant. It's going to change the way Claude Code can interact with Obsidian.

The way I see it, there are three ways Claude can connect to your vault:

1. Filesystem (MCP or bash): reads and writes markdown files. Covers maybe 40% of what Obsidian actually knows. No awareness of backlinks, tags, properties, or the graph. To search content, it has to open every file individually.

2. REST API MCP: talks to Obsidian via a plugin. Gets you to about 55%. Better search, some metadata, but a fragile setup and limited.

3. Obsidian CLI... Yay @obsdmd and @kepano!! It queries Obsidian's actual indexes. That covers about 85% of Obsidian's capabilities. The missing 15% is purely visual: canvas layout, graph view rendering, live preview. Everything else is there: search, backlinks, orphan detection, properties, tags with hierarchy and counts.

The speed difference is real:
- Finding orphan notes: bash grep 15.6s vs CLI 0.26s (54× faster)
- Searching the vault: bash grep 1.95s vs CLI 0.32s (6× faster)
- Token cost for orphan detection via MCP: about 7 million tokens. Via CLI: 100 tokens. That's 70,000× cheaper.

The CLI uses Obsidian's pre-built search index, the same thing that makes Obsidian's own search instant. Grep scans every file from scratch every time.

The catch: right now this only works via Claude Code (which can run CLI commands through bash). Claude Desktop and claude.ai can't access it directly. There is an early CLI MCP server (obsidian-ts-mcp) that would bridge this gap, but I haven't tested it yet. (I think that if you ask nicely, Claude Code could create a version for you!)

I'm using this as part of a research assistant stack connecting Claude to Obsidian, Zotero, PubMed, and more. Posts on each piece coming.
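For context on why the filesystem approach is so much slower: without Obsidian's index, orphan detection means reading every file and rebuilding the link graph from scratch on every query. A minimal Python sketch of that brute-force scan (the vault path and link handling are simplified assumptions; real Obsidian also resolves links through aliases, paths, and the metadata cache, which this skips):

```python
import re
from pathlib import Path

# Capture the [[wikilink]] target, stopping before any | alias or # heading.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def find_orphans(vault: Path) -> set[str]:
    """Return note names that no other note links to.

    Brute force: reads every markdown file on every call -- the
    O(all files) scan that the CLI's pre-built index avoids.
    """
    notes = {p.stem for p in vault.rglob("*.md")}
    linked: set[str] = set()
    for p in vault.rglob("*.md"):
        for target in WIKILINK.findall(p.read_text(encoding="utf-8")):
            linked.add(target.strip())
    return notes - linked
```

On a 4,663-file vault this touches every file per query, which is roughly where the grep timings and the multi-million-token MCP cost come from: the content itself has to flow through the tool (or the model) instead of a lookup against an index that already exists.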




👋 Say hello to Max! Max is Arena’s intelligent router, powered by 5+ million real-world community votes. Max routes each prompt to the most capable model with latency in mind. AI models excel at different things (code, math, speed, reasoning). Max orchestrates across model strengths to deliver reliable performance across real-world use cases. Available today in Direct chat!
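As a sketch of what capability-plus-latency routing can look like in general (all model names, scores, and weights below are made up for illustration; Arena has not published Max's actual scoring, which is derived from community votes):

```python
# Toy capability/latency table -- hypothetical values, not Arena's data.
MODELS = {
    "fast-model":  {"code": 0.70, "math": 0.60, "latency_ms": 300},
    "smart-model": {"code": 0.90, "math": 0.92, "latency_ms": 2500},
}

def route(category: str, latency_weight: float = 0.00005) -> str:
    """Pick the model with the best capability score for the prompt's
    category, discounted by its expected latency."""
    def score(model: str) -> float:
        stats = MODELS[model]
        return stats[category] - latency_weight * stats["latency_ms"]
    return max(MODELS, key=score)
```

The design tension the post gestures at is exactly this trade-off: with a low latency weight the strongest model wins every category, while a latency-sensitive weight lets a faster model take prompts where the capability gap is small.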









My Ralph Wiggum breakdown went viral. It's a keep-it-simple-stupid approach to AI coding that lets you ship while you sleep. So here's a full explanation, example code, and demo.












