ian parent

2.6K posts

@iparentx

Building the agent eval standard | @iris_eval | More to come

United States · Joined November 2016
970 Following · 788 Followers
Pinned Tweet
ian parent
ian parent@iparentx·
New handle, same builder. @IQcrypto → @iparentx. Moved from crypto analysis to building dev tools. Specifically: making AI agents trustworthy.
1
0
5
200
ian parent
ian parent@iparentx·
the harness is the moat. but there's a layer most teams skip even when the harness is solid. you can orchestrate perfectly and still ship bad outputs if nobody is scoring what comes out. the eval layer sits between the harness and production. it's exactly what i'm building with iris.
0
0
0
27
Yuchen Jin
Yuchen Jin@Yuchenj_UW·
Beyond raw model capability, the real gap in coding tools is the harness. Now that 500k+ lines of Claude Code are out there, every model lab and AI coding startup, including open-source AI labs, will study it and close that gap fast. SF already has Claude Code source walkthrough meetups lol.
50
16
360
20.1K
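A minimal sketch of what that scoring layer between the harness and production could look like. Everything here (the EvalResult fields, the placeholder scorers, the thresholds) is illustrative, not iris's actual API:

from dataclasses import dataclass

@dataclass
class EvalResult:
    quality: float    # 0-1: does the output actually answer the task
    safety: float     # 0-1: e.g. no PII, no policy violations
    cost_usd: float   # what the run cost
    passed: bool

def evaluate(output: str, cost_usd: float) -> EvalResult:
    # Placeholder scorers; a real system would use rubric- or model-graded checks.
    quality = 1.0 if output.strip() else 0.0
    safety = 0.0 if "SSN:" in output else 1.0
    passed = quality >= 0.7 and safety >= 0.9 and cost_usd <= 0.50
    return EvalResult(quality, safety, cost_usd, passed)

def ship(harness_output: str, cost_usd: float) -> str:
    # The eval layer sits between the harness and production:
    # every output gets scored before it is allowed out the door.
    result = evaluate(harness_output, cost_usd)
    if not result.passed:
        raise RuntimeError(f"blocked by eval layer: {result}")
    return harness_output

print(ship("Summary: deploy succeeded, no follow-up needed.", cost_usd=0.12))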
ian parent
ian parent@iparentx·
@DailyAIWireNews The eval gap in practice. Most agent teams assume their outputs are correct because nothing visibly broke. But "no errors" and "correct output" are very different things. Scoring every output for quality, safety, and cost inline is the missing layer.
0
0
0
5
ian parent
ian parent@iparentx·
wrote up the full pattern. why thresholds decay. what self-calibrating eval looks like in practice. and why the eval advisor is where this is all heading. iris-eval.com/blog/self-cali…
0
0
0
2
ian parent
ian parent@iparentx·
self-calibrating eval. the system monitors its own scoring distributions. detects when thresholds drift. recommends adjustments. a human always approves. eval rules that evaluate themselves.
1
0
0
4
ian parent
ian parent@iparentx·
static eval thresholds have an expiration date. you set a cost threshold at $0.50. three months later it's flagging half your traffic. nothing changed in your code. the environment shifted under you.
1
0
0
9
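The two posts above describe the same mechanism from both ends. A rough sketch of what threshold self-calibration could look like, with a hypothetical recommend_threshold helper and made-up numbers; the system only suggests, a human still approves:

def recommend_threshold(recent_scores: list[float],
                        current_threshold: float,
                        target_flag_rate: float = 0.05) -> float | None:
    # Watch the live distribution and suggest a new threshold when the
    # current one has drifted away from the intended flag rate.
    flagged = sum(s > current_threshold for s in recent_scores)
    actual_flag_rate = flagged / len(recent_scores)
    if actual_flag_rate > 2 * target_flag_rate:
        ranked = sorted(recent_scores)
        idx = max(int(len(ranked) * (1 - target_flag_rate)) - 1, 0)
        return round(ranked[idx], 2)
    return None  # no drift detected; keep the current threshold

# A cost threshold set at $0.50 that now flags half the traffic:
recent_costs = [0.30, 0.45, 0.62, 0.58, 0.41, 0.71, 0.38, 0.55, 0.49, 0.66]
suggestion = recommend_threshold(recent_costs, current_threshold=0.50)
if suggestion is not None:
    print(f"recommend raising the threshold to ${suggestion} (pending human approval)")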
ian parent
ian parent@iparentx·
been thinking about what happens when your eval rules don't match your actual distribution. you set a threshold. it passes everything. or fails everything. neither is useful. wrote something about self-calibrating eval. drops tuesday.
0
0
0
10
ian parent
ian parent@iparentx·
@claudeai this is where eval becomes critical. when agents are reading code and running tests that's one thing. when they can open your apps and click through real systems the cost of a wrong action goes way up. the eval layer can't be optional anymore.
0
0
0
31
Claude
Claude@claudeai·
Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.
2.4K
4.6K
56.4K
14.2M
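One way that eval layer could sit in front of computer-use actions: gate each action on reversibility and eval confidence before it executes. The Action fields, thresholds, and policy below are hypothetical, just to make the idea concrete:

from dataclasses import dataclass

@dataclass
class Action:
    kind: str         # e.g. "click", "type", "run_command"
    target: str       # what the action touches
    reversible: bool  # can it be undone if the eval was wrong?

def gate(action: Action, confidence: float) -> str:
    # Reversible UI actions pass with a lighter check; irreversible ones
    # (sending money, deleting data, emailing a customer) need a much
    # higher eval score or a human in the loop.
    if action.reversible and confidence >= 0.7:
        return "allow"
    if not action.reversible and confidence >= 0.95:
        return "allow"
    if not action.reversible:
        return "escalate_to_human"
    return "block"

print(gate(Action("click", "Save draft", reversible=True), confidence=0.80))     # allow
print(gate(Action("click", "Send payment", reversible=False), confidence=0.80))  # escalate_to_human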
ian parent
ian parent@iparentx·
@lukatofocus @AlexEngineerAI @tanujDE3180 this. the compounding part is what nobody talks about. once the eval loop is running you stop guessing. every iteration gets tighter because you're working from data not vibes. that gap between teams who eval and teams who don't only grows.
1
0
0
9
Luka
Luka@lukatofocus·
@AlexEngineerAI @tanujDE3180 exactly. and the ones who build the eval loop first end up with a compounding advantage - they know what actually works not what sounds like it should work
1
0
1
8
Alex the Engineer
Alex the Engineer@AlexEngineerAI·
everyone's still debating which AI model is best

i just use all of them

Codex for boilerplate, Opus for reasoning, Gemini for multimodal

stop picking sides. start routing.

the devs who figure this out first will ship faster than teams of 10
22
4
40
1.7K
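The routing idea from the post above, reduced to its simplest form. The task labels and the model mapping are lifted straight from the tweet and are placeholders, not a recommendation:

def route(task_type: str) -> str:
    # Send each task to the model that handles it best instead of
    # standardizing on one model for everything.
    table = {
        "boilerplate": "codex",
        "reasoning": "opus",
        "multimodal": "gemini",
    }
    return table.get(task_type, "opus")  # default to the reasoning model

print(route("boilerplate"))  # codex
print(route("multimodal"))   # gemini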
Mayank
Mayank@mayonkeyy·
Welcome to level two: recursive self-improvement is now table stakes.

Your agent is begging for the infra to evaluate variations of itself at scale. Everyone who saw this early had the same underlying ideas in their approach:

1. tighten the analyze, iterate, eval loop
2. map evals and traces to failure modes
3. keep writing harder evals

If your product's "features" are agents, they are by definition never "complete". Even a magical 99.9% on the benchmarks is still not the most time- or token-efficient version of itself. It's not just slow to A/B test changes to the agent, you're also getting stuck on local maxima. A single regression does not mean the line of experimentation is a failure. Keep driving it forward, explore the sub-paths.
Erik Bernhardsson@bernhardsson

CI feels more interesting today than it ever was. Writing code has gotten a lot faster, but this shifts the bottleneck elsewhere. I’m excited about sandboxes as a primitive for massive parallelization of tests.

2
0
4
166
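Point 2 in the list above (map evals and traces to failure modes) can be as simple as tagging each failed eval with the step and reason, then counting the clusters. A toy sketch with made-up trace records:

from collections import Counter

# Each failed eval is tagged with the step that produced it and a reason,
# so regressions cluster into nameable failure modes.
failed_traces = [
    {"step": "tool_call", "reason": "wrong_arguments"},
    {"step": "retrieval", "reason": "stale_context"},
    {"step": "tool_call", "reason": "wrong_arguments"},
    {"step": "final_answer", "reason": "unsupported_claim"},
    {"step": "tool_call", "reason": "timeout"},
]

failure_modes = Counter((t["step"], t["reason"]) for t in failed_traces)
for (step, reason), count in failure_modes.most_common():
    print(f"{count}x {step}: {reason}")
# The most common pairs become the next round of harder evals.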
Mayank
Mayank@mayonkeyy·
@iparentx
> founder, seems chill
> tangential product but same problem space, cool
> em dash in response...fahh
[image attached]
1
0
1
14
ian parent
ian parent@iparentx·
full post on why the eval loop is the loss function for agent quality: iris-eval.com/blog/the-eval-… 63% of teams have no continuous eval. they shipped an agent that passed a test once. they have no loop.
0
0
0
10
ian parent
ian parent@iparentx·
most teams treat eval as a gate. pass once, ship, move on. that's not how agent quality works. the eval loop: score, diagnose, calibrate, re-score — continuously. the agents that improve are the ones with a feedback loop, not a checkpoint. wrote about why this changes everything:
1
0
0
31
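A compressed sketch of that score, diagnose, calibrate, re-score loop, with a stand-in scorer and made-up calibration rules:

import random

def score(output: str) -> float:
    # Stand-in scorer; a real one would be rubric- or model-graded.
    return random.uniform(0.5, 1.0)

def eval_loop(outputs: list[str], threshold: float, rounds: int = 3) -> float:
    # A feedback loop, not a checkpoint: score, diagnose, calibrate, re-score.
    for _ in range(rounds):
        scores = [score(o) for o in outputs]                              # score
        failures = [o for o, s in zip(outputs, scores) if s < threshold]  # diagnose
        flag_rate = len(failures) / len(outputs)
        if flag_rate > 0.2:      # calibrate: flagging too much, loosen slightly
            threshold -= 0.05
        elif flag_rate == 0:     # calibrate: flagging nothing, tighten slightly
            threshold += 0.05
        # the next iteration re-scores against the adjusted threshold
    return threshold

print(eval_loop(["output a", "output b", "output c"], threshold=0.8))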
ian parent
ian parent@iparentx·
@bernhardsson the missing piece in the new ci: output eval. tests verify code works. eval verifies the output is actually good. agents can pass every test and still leak pii or burn 10x your cost budget. ci for agents needs a scoring layer, not just pass/fail.
0
0
0
65
Erik Bernhardsson
Erik Bernhardsson@bernhardsson·
CI feels more interesting today than it ever was. Writing code has gotten a lot faster, but this shifts the bottleneck elsewhere. I’m excited about sandboxes as a primitive for massive parallelization of tests.
25
9
243
26.2K
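What a scoring step next to the test suite in CI could look like: the pipeline fails on a PII-shaped string or a blown cost budget even when every unit test passes. The regex, budget, and function names are illustrative:

import re
import sys

# Runs after the test suite. Tests say "it works"; this step scores
# whether the output is actually shippable.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # SSN-shaped strings
COST_BUDGET_USD = 0.50

def eval_gate(agent_output: str, run_cost_usd: float) -> list[str]:
    problems = []
    if PII_PATTERN.search(agent_output):
        problems.append("possible PII in output")
    if run_cost_usd > COST_BUDGET_USD:
        problems.append(f"run cost ${run_cost_usd:.2f} exceeds ${COST_BUDGET_USD:.2f} budget")
    return problems

if __name__ == "__main__":
    issues = eval_gate("Customer SSN: 123-45-6789", run_cost_usd=0.85)
    if issues:
        print("eval gate failed:", "; ".join(issues))
        sys.exit(1)  # fail the pipeline even though the unit tests passed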
ian parent
ian parent@iparentx·
@GG_Observatory this is the take more people need to hear. the moat isn't the agent — it's knowing when the agent is degrading. eval drift is invisible until it's expensive. most teams find out from users, not from their own systems.
3
0
0
17
GG 🦾
GG 🦾@GG_Observatory·
Hot take for 2026 AI agents: the moat isn’t "more tools," it’s reliability engineering. Teams that win will version prompts like APIs, track eval drift daily, and enforce rollback SLAs for every agent workflow. Are you measuring agent MTTR yet?
3
0
2
57
Trust Wallet
Trust Wallet@TrustWallet·
AI agents are about to move more money than most traders. They're going to need a wallet. gm 👋
133
53
441
167.4K
Ofek Shaked
Ofek Shaked@VibeCoderOfek·
@ZssBecker Finally someone saying it. Agents are amazing but the cleanup still needs senior taste. Team human forever.
4
0
11
431
Alex Becker 🍊🏆🥇
I posted about how AI/LLMs are nowhere near where they need to be to replace engineers. Was hit by 1000 gen z'ers screaming I'm wrong. Skill issue. Etc etc. MFers have literally let Claude convince them they are special skilled snowflakes. Brother. You're shipping dog shit.
325
127
2.7K
97K
ian parent
ian parent@iparentx·
we've been calling this exact gap "the eval gap" — the distance between benchmark performance and production reality. it's structural, not incidental. wrote about it here: iris-eval.com/blog/the-eval-… the short version: benchmarks test capability. production needs continuous inline eval on every execution. different problem, different tooling.
0
0
0
3
claru.ai
claru.ai@claru_ai·
@_odsc SWE-bench and WebArena are doing a lot of heavy lifting right now for agent eval. Curious how teams are thinking about the gap between benchmark performance and real-world task completion once agents hit production.
1
0
0
12
ian parent
ian parent@iparentx·
the eval tax isn't just the cost of evaluating. it's the cost of not evaluating — the failures, the manual review, the customer churn from bad agent outputs. you pay it either way. the only question is whether you pay it with tooling or with incidents. the data is starting to prove this out at scale.
0
0
0
8
ian parent
ian parent@iparentx·
cio.com just wrote about the "hidden cost of ai agent evaluations" — $47K from a single runaway agent, organizations getting 5-figure eval bills they didn't expect. we've been calling this the eval tax. published about it weeks before this article came out. iris-eval.com/blog/the-ai-ev…
1
0
0
16
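One cheap guard against the runaway-agent case in that article is a hard per-run spend cap, so a looping agent fails fast instead of accumulating a surprise bill. A minimal sketch with illustrative numbers:

class BudgetExceeded(Exception):
    pass

class RunBudget:
    # Hard per-run spend cap: a looping agent fails fast instead of
    # quietly accumulating a five-figure bill.
    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        self.spent_usd += cost_usd
        if self.spent_usd > self.limit_usd:
            raise BudgetExceeded(
                f"run spent ${self.spent_usd:.2f}, cap is ${self.limit_usd:.2f}"
            )

budget = RunBudget(limit_usd=5.00)
try:
    for _ in range(1000):    # a stuck agent retrying in a loop
        budget.charge(0.12)  # cost of each model call
except BudgetExceeded as e:
    print("killed runaway run:", e)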