chenrui
724 posts

chenrui
@chenrui
@Meetup Engineer @MacHomebrew, @runatlantis, tflint maintainer, k8s, terraformer contributor
New York, NY · Joined February 2009
618 Following · 246 Followers

We're platinum sponsors of the MCP Dev Summit North America (the biggest summit on MCP). We have 2 spare tickets to give away (valued at $800 each). You can't miss this one.
If you're building MCP servers or MCP apps for ChatGPT, or looking to get started, comment "MCP" below this post.
I'll pick winners at random and DM you the tickets.

Security alert: litellm 1.82.8 on PyPI appears compromised via a malicious .pth startup hook; if you installed it, investigate immediately and rotate secrets — github.com/BerriAI/litell… #litellm #security
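Some background on the attack vector: at interpreter startup, Python's site module executes any line in a site-packages .pth file that begins with "import", which is the hook such attacks abuse. A minimal triage sketch (the SUSPICIOUS token list and function name are my own illustration, not an official check):

```python
# Hypothetical triage helper: lists .pth lines that run code at startup
# and flags ones containing tokens often seen in malicious payloads.
import site
from pathlib import Path

SUSPICIOUS = ("exec(", "eval(", "compile(", "base64", "urllib", "socket")

def scan_pth_files(directories=None):
    """Return (path, lineno, line, flagged) for executable .pth lines."""
    if directories is None:
        directories = site.getsitepackages() + [site.getusersitepackages()]
    findings = []
    for d in directories:
        for pth in Path(d).glob("*.pth"):
            text = pth.read_text(errors="replace")
            for lineno, line in enumerate(text.splitlines(), 1):
                # Only lines starting with "import " (or "import\t") execute.
                if line.startswith(("import ", "import\t")):
                    flagged = any(tok in line for tok in SUSPICIOUS)
                    findings.append((str(pth), lineno, line.strip(), flagged))
    return findings

if __name__ == "__main__":
    for path, lineno, line, flagged in scan_pth_files():
        marker = "!! SUSPICIOUS" if flagged else "   (executes at startup)"
        print(f"{marker} {path}:{lineno}: {line}")
```

This only surfaces candidates; any flagged line still needs manual review, and if the compromised version was installed, rotating secrets is warranted regardless of what the scan shows.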
chenrui retweeted

We're doing a short user survey to inform future Homebrew development. Please fill in as many or as few questions as you can: docs.google.com/forms/d/e/1FAI…
chenrui retweeted

Custom MCP vs Official One – @linear
I’m a big fan of Linear, and for months I had been using it through my AI agent via a custom integration. When the official MCP was released, I installed it and found it good, but it had some issues, so I decided to build my own and share it as open source (link in the comments).
— the main rule here is:
API/GraphQL is for programmers and code. MCP tools are for LLMs, AI workflows, and agents.
— what I did differently, and why:
- exposing many tools feels flexible, but it's noise. I merged lookups like TeamID, AssigneeID, and LabelID into a single "workspace_metadata" tool; the LLM decides which to fetch.
- instructions, tool descriptions, and schema props are rich but concise, so there’s less guessing.
- responses aren’t plain JSON. They state exactly what happened, list the changes, offer soft suggestions, and help recover from errors. They're readable for humans too.
- instead of "add_issue" I expose "add_issues" for batch requests: fewer calls, less context noise. For issue listing, the default range is -7 / +7 days.
- I prefetch related values, like returning both a status ID and its name, so the LLM can match natural text.
- I hide tools when they’re not relevant, like skipping "cycles_list" if cycles are disabled in all teams.
- "assigneeId" defaults to the current user unless told otherwise.
- schemas match workspace settings, such as the format of the "priority" property.
^ all these points focus on reducing context noise, minimizing the number of steps, lowering token usage, increasing speed, and reducing cost.
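To make a couple of the points above concrete, here is a minimal Python sketch in the same spirit (function names, data shapes, and response wording are my own assumptions for illustration, not the actual server or Linear's API): one merged metadata tool, and a batch creation tool that answers in prose rather than bare JSON, with the assignee defaulting to the current user:

```python
# Illustrative sketch only: names and shapes are assumptions,
# not the actual open-source server or Linear's API.

# One merged "workspace_metadata" tool instead of separate team/user/label
# lookups: the LLM passes only the categories it actually needs.
def workspace_metadata(kinds, source):
    return {kind: source.get(kind, []) for kind in kinds}

# Batch creation ("add_issues") returning a prose summary the model
# (and a human) can read, instead of a bare JSON array.
def add_issues(issues, current_user_id):
    created = []
    for n, issue in enumerate(issues, 1):
        created.append({
            "id": f"ISS-{n}",  # placeholder IDs for the sketch
            "title": issue["title"],
            # default the assignee to the current user unless told otherwise
            "assignee_id": issue.get("assignee_id", current_user_id),
        })
    lines = "\n".join(
        f'- {c["id"]}: "{c["title"]}" (assignee: {c["assignee_id"]})'
        for c in created
    )
    return f"Created {len(created)} issue(s):\n{lines}"
```

The design choice both functions share is that the tool surface is shaped for the model's decision loop (fetch only what you need, act in one call, read the result as text) rather than mirroring the underlying API one endpoint per tool.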
below, you can see the same task performed both ways, so you can compare the number of steps each takes to complete it.
and to be clear, I'm not saying the Official Linear MCP is bad; it's still under development and at a very early stage. I'm sharing this so you can look at the source code and discuss it with Cursor or Claude Code.
There’s a chance you might find something valuable there that could help you design your own MCPs.
Note: You can easily run this server locally with a Linear API key, or via OAuth on @Cloudflare Workers.
Feedback is welcome, as I clearly do not know everything about LLMs and MCP. Thank you in advance for sharing.