

Sam Crowder
@samecrowder
Head of Product, LangSmith at @LangChain 🚀 | prev: @Harvard MS/MBA, @RocksetCloud (acq. OpenAI), @BainCapVC, @ContraryCapital


Polly, our AI assistant built directly into LangSmith to help you debug, analyze, and improve your agents, is now generally available. Polly lives on every page of LangSmith, remembers your full session as you navigate, and can take action to update prompts, compare experiments, write evaluators, and more.

Read the blog: blog.langchain.com/polly-langsmit…
See the docs: docs.langchain.com/langsmith/polly
Try Polly in LangSmith: smith.langchain.com

🚀 Today we're launching LangSmith Sandboxes.

Agents get a lot more useful when they can run code: analyze data, call APIs, build entire applications. Sandboxes give them a safe place to do it, with ephemeral, locked-down environments you control.

Now in Private Preview.
Learn more: blog.langchain.com/introducing-la…
Join the waitlist: langchain.com/langsmith-sand…




Join us Wednesday, March 18th at 12:30pm at GTC for “Open Models: Where We Are and Where We’re Headed”, a panel featuring Harrison, Jensen, and the CEOs of Cursor, Thinking Machines Lab, Perplexity, and more. Add it to your schedule ➡️ nvidia.com/gtc/session-ca…

🚀 LangSmith for Startups Spotlight: @cogent_security

Cogent is building AI agents that protect the world's largest organizations from cyberattacks. One of the hardest problems in cybersecurity is going from finding a vulnerability to actually fixing it. Cogent is automating that entire process end-to-end.

Cogent is already working with dozens of Fortune 1000 and Global 2000 enterprise customers, including major universities, hospitality brands, and consumer retailers.

Cogent uses LangSmith for production tracing and monitoring of their agents. Their team leverages execution traces for usage insight and use-case categorization, self-refinement loops to diagnose eval failures, and online evaluators to flag undesired behavior.

Join their team if you want to build frontier AI for mission-critical problems 🤝 cogent.com/careers
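An online evaluator like the ones mentioned above is, at its core, a function scored against each production run. Here's a minimal stdlib-only sketch of the pattern; the function name, the dict shapes, and the blocklist are all illustrative assumptions, not the actual LangSmith evaluator API (see the LangSmith docs for the real interface):

```python
# Sketch of an online-evaluator pattern: score each production run
# and flag undesired behavior. All shapes here are illustrative.

BLOCKLIST = ("rm -rf", "DROP TABLE")  # hypothetical undesired outputs

def flag_undesired_behavior(run: dict) -> dict:
    """Return an evaluation result for one agent run.

    `run` is a simplified stand-in for a traced run:
    {"inputs": {...}, "outputs": {"text": "..."}}.
    """
    text = run.get("outputs", {}).get("text", "")
    flagged = any(bad in text for bad in BLOCKLIST)
    return {"key": "undesired_behavior", "score": 0.0 if flagged else 1.0}

# Applying it to a stream of runs:
runs = [
    {"inputs": {"q": "clean up"}, "outputs": {"text": "Removed temp files."}},
    {"inputs": {"q": "clean up"}, "outputs": {"text": "Ran rm -rf / ..."}},
]
scores = [flag_undesired_behavior(r)["score"] for r in runs]
print(scores)  # [1.0, 0.0]
```

In a real deployment the evaluator would be attached to a trace stream rather than a list, but the shape — run in, keyed score out — is the same idea.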






🚀 Announcing LangSmith Skills + CLI 🚀

Agent improvements are increasingly driven by coding agents themselves. We're releasing LangSmith Skills alongside the LangSmith CLI to make coding agents experts at the agent engineering lifecycle.

LangSmith Skills enable agents to debug traces, create datasets, and run experiments, and thanks to the CLI, agents can do it all natively through the terminal, where they're most comfortable.

Try out LangSmith Skills and the CLI with your own coding agents!
➡️ Skills: github.com/langchain-ai/l…
➡️ CLI: github.com/langchain-ai/l…

We've released changes polishing up the run details page in LangSmith 💅
- Snappier navigation between sections
- Collapsible sections
- Markdown formatting support

@coinbase and @Rippling are at Interrupt. Evan Kormos on how Coinbase built a multi-agent system to scale AI-handled support from 20% to 80%. Ankur Bhatt on how Rippling built deep agents to diagnose payroll tax notices across 50 states. May 13-14 · San Francisco ➡️ interrupt.langchain.com

we're building ai into langsmith not just to be a generic assistant, but to actually help debug agents

🧵 here's a real example where it helped me over the weekend:

context: I'm building an agent on deepagents (github.com/langchain-ai/d…). It has a bunch of tools for interacting with files.

issue: I noticed, thanks to langsmith monitoring (docs.langchain.com/langsmith/dash…), that ~1% of calls to `ls` were failing. sidenote - this is the value of ai-native monitoring: we automatically track failing tool calls.

I clicked into an example run and saw that the model was generating the wrong parameter for `ls` - it was passing `file_path`, not `path`.

at this point, I knew what the issue was, but had no idea WHY it was occurring. the trace here was very long, and the prompt was long as well. I suspected that something was wrong in the prompt - maybe a bad example?

I asked polly (docs.langchain.com/langsmith/polly), our in-app assistant, to help me debug. she investigated and found that the other file tools in deepagents use `file_path`, and `ls` is the only one that uses `path`. see screenshot below.

I don't know how long it would have taken me to figure this out otherwise.

everyone is adding assistants into apps for basic question answering. imo, really valuable assistants go beyond that - they are purposefully placed in situations where they can augment human intelligence nicely. in this case, reading long traces and prompts is something llms are great at!
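the failure mode in this thread is easy to reproduce in a few lines. a hypothetical sketch (the tool names mirror the story, but none of this is the actual deepagents source): when every other tool takes `file_path`, the model pattern-matches and passes `file_path` to `ls` too, which fails argument validation.

```python
import inspect

# Hypothetical file tools; only `ls` breaks the naming convention.
def read_file(file_path: str) -> str:
    return f"contents of {file_path}"

def write_file(file_path: str, content: str) -> str:
    return "ok"

def ls(path: str) -> list:
    return []

def call_tool(fn, args: dict) -> str:
    """Simulate tool-call validation: bind the model-generated args
    to the tool's signature, as an agent framework would."""
    try:
        inspect.signature(fn).bind(**args)
        return "ok"
    except TypeError as e:
        return f"invalid tool call: {e}"

print(call_tool(read_file, {"file_path": "notes.txt"}))  # ok
# The model generalizes the `file_path` pattern to `ls` and fails:
print(call_tool(ls, {"file_path": "src/"}))
```

the fix in a real codebase would be to make the schemas consistent (or alias the parameter), so the model's pattern-matching works for it instead of against it.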

What? LangChain is evolving! Meet our final form ➡️ langchain.com



LangChain has been named to The Agentic List 2026, recognizing the top trending agentic AI companies most admired by industry executives. The list is selected by enterprise leaders and supported by in-depth research, and we're honored to be recognized by the people actually building successful businesses with agentic AI. 35% of the Fortune 500 are actively using LangChain products, and half of the Fortune 10 use LangSmith for observing, evaluating, and deploying production agents. We're excited to continue powering large enterprises like Workday, ServiceNow, Cisco, Cloudflare, and more.



🔎 We shipped native tracing for Google ADK! See how easy it is to start observing your ADK agents in LangSmith with just a few clicks. LangSmith works natively with over 25 frameworks and providers, not to mention OpenTelemetry! 🔥 Docs 👉 docs.langchain.com/langsmith/trac…
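For context, LangSmith tracing is generally switched on via environment variables. A sketch of the common pattern, assuming the standard `LANGSMITH_*` variables; the ADK-specific wiring may differ, so check the linked docs for the exact integration steps:

```shell
# Generic LangSmith tracing setup via environment variables
# (common pattern only; see the docs for ADK-specific steps).
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-api-key>"
export LANGSMITH_PROJECT="my-adk-agent"   # optional: choose a project name
```

With these set, traced agent runs show up under the chosen project in LangSmith.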