LangChain JS

202 posts


LangChain JS

@LangChain_JS

Ship great agents fast with our open source JS frameworks – LangChain, LangGraph, and Deep Agents. Maintained by @LangChain.

Joined January 2026
14 Following · 2K Followers
LangChain JS retweeted
Mason Daugherty @masondrxy
your daily reminder that open models are plenty capable for a lot of coding work. easiest place to feel that out is deepagents! swap the model and go. i've been enjoying GLM-5.1, Kimi K2.6, MiniMax M2.7, DeepSeek V4 Pro. here are some examples using our CLI agent in headless mode
LangChain JS retweeted
Mason Daugherty @masondrxy
small workflow note that adds up. /staged-pr is a skill (via slash command) i run when wrapping up a PR. it takes my staged code changes and drafts a concise PR title & description, based on my preferences and our repo conventions. it's routine, formulaic, and well-scoped. i hit it 20+ times a day.

in the Deep Agents CLI i use /model to swap mid-session: heavier frontier model for the actual coding, then over to glm 5.1 (via @OpenRouter) or kimi 2.6 (via @baseten) for the skill. the quality is indistinguishable, it runs faster, and it's ~5x cheaper than leading LLMs.

the broader point: matching the model to the task beats picking one model for everything. open models are extremely good at the long tail of routine agent work, even if frontier still wins the hard stuff. a lot of what we point frontier models at isn't actually that hard!

linking skill below for those interested in trying it out
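The "match the model to the task" idea above can be sketched as a tiny routing table. Everything here is illustrative: the task names and model IDs are placeholders for whatever your provider exposes, not a deepagents API.

```typescript
// Illustrative sketch: route routine, well-scoped tasks to cheaper open
// models and reserve a frontier model for hard, open-ended coding work.
// Task names and model IDs are placeholders, not a real deepagents API.
type Task = "coding" | "pr-description" | "commit-message";

const MODEL_FOR_TASK: Record<Task, string> = {
  "coding": "frontier-large",     // hard, open-ended work
  "pr-description": "glm-5.1",    // routine, formulaic, well-scoped
  "commit-message": "kimi-k2.6",  // same: cheap and fast is fine here
};

function pickModel(task: Task): string {
  return MODEL_FOR_TASK[task];
}
```

The point of the static table is that the routing decision is made per task type, not per conversation, which mirrors swapping models mid-session with /model.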
Mason Daugherty @masondrxy

deepagents-cli is quietly becoming the best place to start coding with open weight models. we've been investing heavily in making it a harness that's truly model-agnostic, without compromising performance!

different models perform best with different harnesses -- prompts, middleware, settings. our recent profiles API (below) lets you bundle all of that per model, so Kimi, Qwen, GLM, etc. can drive the agent loop just as well as the closed frontier. more info on profiles x.com/Vtrivedy10/sta…

other recent wins worth highlighting:
- /agents - swap agent profiles mid-session (coding agent/content writer/custom)
- /model - fuzzy switcher w/ live status; OpenRouter, LiteLLM, Baseten, hosted Ollama all built-in
- headless mode w/ --json + --max-turns for scripting
- --acp to run as an ACP server
- /skill:name skills
- MCP w/ OAuth

full docs and quickstart ⬇️

LangChain JS retweeted
Mason Daugherty @masondrxy
we're continuing to see clear examples where a model's harness is a major determinant of overall performance. with the same model, running on the same task, it's easy to observe very different scores depending on (system) prompts, tools (& their descriptions), and middleware (steering hooks).

this is exactly why we built a harness profiles abstraction in Deep Agents: per-provider or per-model overrides for base system prompts, tool names + implementations, etc., so swapping models doesn't mean losing the work that made the last one good! 10–20pt jumps on tau2-bench in our own testing.

currently cooking up built-in profiles for popular open weight models 🧑‍🍳 langchain.com/blog/tuning-de…
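The per-model override idea described above can be sketched as a base profile plus a shallow merge. The `Profile` shape, field names, and model IDs below are assumptions for illustration only, not the real Deep Agents profiles API.

```typescript
// Illustrative sketch of per-model "harness profile" overrides, in the
// spirit of the Deep Agents profiles described above. The Profile shape
// is an assumption for illustration, not the real API.
interface Profile {
  systemPrompt: string;
  toolNames: Record<string, string>;
}

const baseProfile: Profile = {
  systemPrompt: "You are a careful coding agent.",
  toolNames: { readFile: "read_file", shell: "execute" },
};

// Per-model overrides are shallow-merged onto the base, so swapping
// models keeps every tuning choice that isn't explicitly overridden.
const overrides: Record<string, Partial<Profile>> = {
  "glm-5.1": {
    systemPrompt: "You are a careful coding agent. Think step by step.",
  },
};

function resolveProfile(model: string): Profile {
  return { ...baseProfile, ...(overrides[model] ?? {}) };
}
```

The merge direction is the important design choice: the base harness carries the shared work, and each model only states what it needs to differ.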
mteam.eth @mteamisloading

you've heard that models are highly trained in their harnesses, but... it appears that pi is about 7-10% better than codex with gpt-5.4 on a ProgramBench task. Same exact prompt, same environment. It's a good harness.

LangChain JS retweeted
Mason Daugherty @masondrxy
what model are you choosing for coding tasks?
LangChain JS retweeted
Mason Daugherty @masondrxy
open-weight LLMs have come a long way on agent tasks! but the harness you wrap them in matters just as much as the model itself, and arguably the interface you use to drive that harness matters even more.

dev workflows are deeply personal. what works well for one developer may hinder another, so it's difficult to converge on a single UX that isn't either compromising or too generalized (e.g. CLI vs. TUI vs. GUI vs. IDE extension)

while it doesn't come without drawbacks, ACP is a solid stopgap for running the same harness across multiple interfaces. pick your frontend, keep your agent. deepagents ships with this out of the box -- two ways to plug it in:
- deepagents-acp is our standalone ACP server to serve *any* agent
- `deepagents-cli --acp` to use our existing CLI agent over ACP

point any ACP-compatible client at it and you've got the same deepagents harness, your choice of open-weight model & provider, and your choice of interface. some popular exemplars:
- `toad` is an agent-agnostic TUI that ships deepagents support built-in, made possible via ACP github.com/batrachianai/t… (@willmcgugan @textualizeio)
- you can use deepagents directly in any modern IDE, see this blog post from @jetbrains coauthored by our very own @Hacubu: blog.jetbrains.com/ai/2026/04/usi…

the model is yours to pick. the interface is yours to pick. the harness shouldn't be the thing that locks you in.
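For flavor, ACP-style client/server pairs exchange JSON-RPC-shaped messages over a transport such as stdio, which is what lets any frontend drive the same agent process. The method name and params below are placeholders, not the real ACP schema.

```typescript
// Illustrative sketch only: an ACP-style client builds JSON-RPC-shaped
// requests and writes them to the agent's transport (e.g. stdio).
// "session/prompt" and its params are placeholders, not the ACP schema.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

function makeRequest(
  id: number,
  method: string,
  params?: Record<string, unknown>,
): string {
  const req: JsonRpcRequest = { jsonrpc: "2.0", id, method, params };
  // One JSON message per line is a common framing for stdio transports.
  return JSON.stringify(req);
}
```

Because the wire format is frontend-agnostic, a TUI, an IDE extension, and a CLI can all speak to the same server without the harness changing underneath them.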
LangChain JS retweeted
Mason Daugherty @masondrxy
agentmaxxing
LangChain JS @LangChain_JS
Check out these features in deepagentsjs 1.9.0 🎉

Async subagents allow you to fan out parallel background tasks across multiple subagents while you keep the conversation going. Equip your agent with completion callback middleware to get results as soon as they are available.

BackendProtocolV2 ensures your agent treats images, videos, audio, and PDFs as first-class inputs using the read_file tool. Your agent can now reason about more than just text files.

Prompt caching optimizations ensure Claude models are now more token efficient.
LangChain @LangChain

🚀 deepagents 0.5 release

👉 Async subagents - kick off background tasks on any Agent Protocol backed server while you continue to interact with the main agent. Start multiple background tasks in parallel, keep the conversation going, and collect results as they come in. Tasks are stateful and maintain their own thread so you can send follow-up instructions mid-task without losing context or restarting from zero. Any Agent Protocol compliant server is a valid target. This means you have the flexibility of using LangSmith deployments or hosting async subagents using your own custom infra.

👉 Expanded multimodal support - your agent can now see images, listen to audio, watch video, and read PDFs. The read_file tool returns native content blocks, so your agent can reason across all these formats out of the box, unlocking a whole new set of workflows for your agents.

👉 Improved prompt caching - better token efficiency and lower costs for Claude models.

Try it out in deepagents v0.5, deepagentsjs v1.9.0. Learn more in the Deep Agents v0.5 blog. blog.langchain.com/deep-agents-v0…

LangChain JS retweeted
Mason Daugherty @masondrxy
GLM-5.1 from @Zai_org is available for use today in Deep Agents! as we wrote earlier this week, open weight models are becoming an increasingly attractive option for a wide spectrum of use cases. Read more ⬇️
Z.ai @Zai_org

Introducing GLM-5.1: The Next Level of Open Source

- Top-Tier Performance: #1 in open source and #3 globally across SWE-Bench Pro, Terminal-Bench, and NL2Repo.
- Built for Long-Horizon Tasks: Runs autonomously for 8 hours, refining strategies through thousands of iterations.

Blog: z.ai/blog/glm-5.1
Weights: huggingface.co/zai-org/GLM-5.1
API: docs.z.ai/guides/llm/glm…
Coding Plan: z.ai/subscribe

Coming to chat.z.ai in the next few days.

LangChain JS @LangChain_JS
The read_file tool now returns native content blocks for images, audio, videos, and PDFs, giving your agent the ability to reason about more than just text files. This functionality is implemented via the new BackendProtocolV2 interface, which handles multimodal content blocks across all backends with backwards compatibility. Custom backend implementations with BackendProtocol can migrate when ready. 🧵3/3 docs.langchain.com/oss/javascript…
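A "native content block" result can be pictured as a discriminated union that agent code branches on. The block shapes below are assumptions for illustration, not the actual BackendProtocolV2 types.

```typescript
// Illustrative sketch: a read_file-style result as a union of content
// blocks, so agent logic can branch on more than text. These shapes
// are assumptions, not the real BackendProtocolV2 definitions.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image"; mimeType: string; data: string }
  | { type: "audio"; mimeType: string; data: string }
  | { type: "file"; mimeType: string; data: string }; // e.g. a PDF

// A consumer narrows on the `type` tag to handle each modality.
function describe(block: ContentBlock): string {
  switch (block.type) {
    case "text":
      return `text (${block.text.length} chars)`;
    default:
      return `${block.type} (${block.mimeType})`;
  }
}
```

The discriminated-union shape is what makes backwards compatibility tractable: a text-only backend keeps emitting `text` blocks and older consumers never see the new variants.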
LangChain JS @LangChain_JS
Async subagents give the main agent five tools to manage background tasks: start_async_task, check_async_task, update_async_task, cancel_async_task, and list_async_tasks.

Unlike inline subagents, which block the main agent until complete, async subagents are fire-and-forget - they return a task ID immediately and execute independently. The main agent can start multiple async subagents in parallel, keep the conversation going, and collect results as they become available.

Tasks are stateful and each one maintains its own thread. This allows you to send follow-up instructions mid-task without losing context.

Any Agent Protocol compliant server is a valid target. This gives you the flexibility to use LangSmith deployments or to host an async subagent on your own infra! 🧵2/3 github.com/langchain-ai/d…
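The fire-and-forget lifecycle behind those tools can be sketched as a toy in-memory task manager: start returns an ID immediately, and status is polled later. This is an illustration of the pattern only, not the deepagents implementation.

```typescript
// Illustrative sketch of the fire-and-forget async-subagent pattern:
// start returns a task ID synchronously; results are collected later.
// A toy in-memory manager, not the deepagents implementation.
type TaskStatus = "running" | "done" | "cancelled";

interface AsyncTask {
  id: string;
  status: TaskStatus;
  result?: string;
}

class TaskManager {
  private tasks = new Map<string, AsyncTask>();
  private nextId = 1;

  startAsyncTask(work: () => Promise<string>): string {
    const id = `task-${this.nextId++}`;
    const task: AsyncTask = { id, status: "running" };
    this.tasks.set(id, task);
    work().then((result) => {
      if (task.status === "running") {
        task.status = "done";
        task.result = result;
      }
    });
    return id; // returned immediately; the caller keeps conversing
  }

  checkAsyncTask(id: string): AsyncTask | undefined {
    return this.tasks.get(id);
  }

  cancelAsyncTask(id: string): void {
    const task = this.tasks.get(id);
    if (task && task.status === "running") task.status = "cancelled";
  }

  listAsyncTasks(): AsyncTask[] {
    return [...this.tasks.values()];
  }
}
```

The key property is that starting and checking are decoupled, which is what lets the main agent fan out several tasks and keep the conversation going while they run.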
LangChain JS @LangChain_JS
🚀 deepagentsjs v1.9.0 is here! Here's what's new:
👉 Async subagents - Kick off background tasks on any Agent Protocol backed server
👉 Expanded multimodal support - allow your agent to understand PDFs, images, audio, and video files
👉 Improved prompt caching performance for Claude models
🧵1/3
LangChain JS retweeted
LangChain @LangChain
🚀 Interrupt sold out last year, so don't wait: May 13-14 in San Francisco. interrupt.langchain.com

Last year, hundreds of builders packed a room to share what's actually working in agent production. This year, we're going bigger. @hwchase17, @AndrewYNg, and @cj_mongodb are headlining. Alongside them: real-world sessions from teams shipping agents today, time with LangChain engineers, and a pioneering AI builder community.

If you're working on agents or thinking about it, this is two days with the people furthest along.