nuanced.dev

29 posts


@nuanced_dev

Making AI tools smarter with semantic understanding.

San Francisco, CA · Joined October 2023
24 Following · 178 Followers
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
Today we’re open-sourcing Nuanced LSP and also releasing it as a Claude Code plugin. Nuanced LSP exposes real language servers behind a containerized API, giving agents precise, cross-file code navigation across languages. After evaluating its impact in production, we realized this work fits best as a building block others can build on. Repo & details: github.com/nuanced-dev/lsp
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
Does LSP actually make coding agents better? Last week, we published a deep dive on this exact question. Then, almost on cue, @AnthropicAI announced native LSP support for Claude Code. The timing made us smile. :) If you’re interested in the actual data, eval design, and what this taught us about where real leverage in agentic coding lives, our full write-up is here in the comments👇
nuanced.dev
nuanced.dev@nuanced_dev·
🏎️🏎️🏎️
ayman nadeem@aymannadeem

large monorepos break most language servers. we hit the same wall, dug into what was going on, and ended up with a 10x speedup from an embarrassingly small change. full write-up is here: nuanced.dev/blog/how-we-ma…

anyone who has worked in a huge monorepo knows how critical reliable code intelligence is. you can't exactly build fast, safe tooling for agents or devs if your language server is constantly reindexing or timing out. as the repo grows, every slowdown compounds.

we saw this while profiling nuanced lsp on large codebases. cold starts dragged on, CPU kept spiking, and even a simple "find references" call felt slow. we used the kubernetes repo as a stress test because it's a classic example of a giant go codebase. and sure enough, gopls struggled there too.

at first we assumed gopls just hates big repos. but it turned out gopls was doing exactly what we told it to. lsproxy, which nuanced lsp builds on, was walking the entire repo and collecting every `go.mod`, which made gopls try to initialize a workspace for each module at the same time.

we fixed it by changing how root files are discovered, following what the gopls team actually recommends:
- use a root `go.work` if it exists
- otherwise only use the top-level `go.mod`
- no recursion
- fall back to the repo root

one workspace instead of hundreds competing. the impact was immediate. on the kubernetes test repo, indexing dropped from 6ish minutes to 36 seconds. cpu and memory stabilized, and cold starts became predictable.

it looks like a tiny detail, but in large monorepos, these details matter! faster indexing means faster agents, faster dev tools, and a lot less wasted compute.
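The discovery rule described above fits in a few lines. This is an illustrative Python sketch of the selection logic only, not the actual lsproxy implementation; the function name and return shape are hypothetical:

```python
from pathlib import Path

def discover_go_root_files(repo_root: str) -> list[str]:
    """Pick the workspace root file(s) to hand gopls, following the
    gopls team's guidance: prefer a root go.work, else the top-level
    go.mod, and never recurse into nested modules.
    (Hypothetical sketch; not the real lsproxy code.)"""
    root = Path(repo_root)
    go_work = root / "go.work"
    if go_work.is_file():
        return [str(go_work)]   # one multi-module workspace
    go_mod = root / "go.mod"
    if go_mod.is_file():
        return [str(go_mod)]    # a single top-level module
    return [str(root)]          # fall back to the repo root itself
```

The old behavior amounted to collecting every `go.mod` in the tree, which spawned one gopls workspace per module; returning a single root file is what collapses hundreds of competing workspaces into one.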

nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
today we launched something to fix context rot. as coding agents take on bigger tasks, the real challenge isn't token limits, it's that context gets noisy and outdated fast, leading to wasted time and inaccurate edits.

Nuanced MCP gives agents the most relevant context as your code evolves: no brittle docs, artifacts, or overly manual context engineering required when you have up-to-date code intelligence delivered right to your agent.

We do this by giving agents two types of data:
- call graphs that supply architectural reasoning across the whole project
- LSP symbol data that fills in precision when the agent drills into a function

That means you get architecture when searching or figuring out how a change ripples across your codebase, and detail when editing.
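To make "call graphs for architectural reasoning" concrete, here is a toy single-module sketch using Python's `ast` module. The real Nuanced MCP works across whole projects and languages; `call_graph` is a hypothetical name for illustration:

```python
import ast

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each function defined in one module to the names it calls.
    A toy, single-file illustration of call-graph extraction; a real
    tool resolves calls across files and through imports."""
    graph: dict[str, set[str]] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            # collect direct calls to plain names inside this function
            graph[node.name] = {
                call.func.id
                for call in ast.walk(node)
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name)
            }
    return graph
```

Even this toy structure shows the idea: before editing `b`, an agent can look up which callers (here, `a`) a change might ripple into, then use LSP symbol data for the precise signatures once it drills in.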
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
token efficiency isn’t just about speed, it’s also about accuracy. less noise in context == fewer hallucinations == better results. at nuanced.dev, we’ve been using call graphs + static analysis to give LLMs the best possible code context. big win not just for codegen, but for generating docs that stay true to the code itself.
Nick Khami@skeptrune

this is a big deal

tokens in context window = time

i use ai to save time, so reducing how many tokens go into the window by 30x makes all of my tasks faster and ai drastically more valuable

not having to set up features like this yourself from scratch is a huge benefit of using Mintlify

proud to see us ship this 💚💚!

nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
it's wild but I've hand-typed almost zero code this year. I sketch intent, describe constraints, and let AI take the wheel. The downside is babysitting confident mistakes and coaxing it back with re-prompts.

We all agree that context is the critical lever for steering the agent, but the hard part is choosing the right context and getting the agent to actually use it throughout its workflow.

We've found that agents work best with structural context: who calls what, what depends on what, which invariants must hold. They also need a disciplined way of using that context at each stage: before editing, while propagating changes, and when verifying results.

Here's what we found worked well in our experience:
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
if you're vibe coding refactors, you need @nuanced_dev. I recorded a demo (link ⬇️) showing a refactor in Claude Code with and without Nuanced MCP. Same refactor, same prompt, run twice.

The results with Nuanced:
- more accurate on the first go
- 35% faster
- uses 32% fewer tokens
- edits half the lines of code
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
Today we're launching Nuanced MCP for TypeScript, available to individual developers. 🚀 Developers kept asking for an MCP server they could drop into Claude Code/Cursor--so we built it. It gives AI a compiler-grade call graph for grounded edits and fewer tokens. Links to our blog, launch video, and docs below.
nuanced.dev
nuanced.dev@nuanced_dev·
Launch: introducing Nuanced MCP for TypeScript 🚀 Developers kept asking for an MCP server they could drop into Claude Code/Cursor—so we built it. It gives AI a compiler-grade call graph for grounded edits and fewer tokens. Links to our blog, launch video, and docs below.
nuanced.dev
nuanced.dev@nuanced_dev·
Nuanced provides compiler-grade context for AI coding. We just launched our explainer video: 📽️ youtu.be/674AwhMVoSE
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
I prompted Claude Code to center a div and it had a meltdown. why? because your LLM is guessing with grep, not reasoning. I wrote about how programming languages are precise, but prompts aren’t. And how we fix that at Nuanced. (link below) 👇
nuanced.dev
nuanced.dev@nuanced_dev·
New updates to nuanced-python:
- v0.1.9 - cleaner CLI output
- v0.1.8 - custom timeouts + better help text
- v0.1.7 - version, deeper call graphs, and builtins toggle

Less noise. More structure. Better for LLMs and agentic tools.
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
We just launched Nuanced for TypeScript 🚀 A static analysis tool that gives LLMs real code understanding. LLMs hallucinate because they don’t understand code--they just see tokens. That’s why your copilot invents functions, imports the wrong libraries, or silently breaks something two files away. We fixed that. Blog post + docs below 👇
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
Read our full analysis on why we chose call graphs over LSPs: nuanced.dev/blog/why-we-ch… TL;DR: LSPs are built for editors, not AI. They prioritize responsiveness over completeness, while AI needs deterministic, semantic understanding of how code actually behaves.
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
people often ask us: "are you building an LSP for AI?" or "why not just use LSPs for code understanding?" we've made a deliberate technical choice to ground our AI code understanding in call graphs instead. Here's our deep dive on this fundamental architecture decision 🧵
nuanced.dev reposted
ayman nadeem
ayman nadeem@aymannadeem·
exactly right. after 7 years building + scaling static analysis at GitHub, I'm now building Nuanced.dev to focus on semantic understanding for AI coding.

while AI may abstract away syntax, it actually increases the need to understand how code truly interrelates. we've shipped call graph tools that map these dependencies so teams can predict change impacts and make safer decisions.

our main thesis is that even with growing context windows, the output of LLMs hinges on the quality of the context we provide. and as AI increases the pace of generation and the resulting complexity, the more we need formal methods to understand their behavior and provide these probabilistic systems with more deterministic guarantees.
martin_casado@martin_casado

Re: Programming. We've been moving to a post-language world for a while now with all the libraries, runtime systems, frameworks, etc. So syntax has been waning in importance. And AI will of course accelerate that.

However, semantics remain as important as ever. And natural language doesn't accurately describe them. As long as people need to describe systems, they'll need formal methods to do so.
