Pinned Tweet
Felix Su
232 posts

Felix Su
@Sleaf37
Building alone with AI, a cat, and unreasonable taste.
Joined December 2016
123 Following · 18 Followers

@nithin_k_anil version-stamp turns transitive escalation safe — each child detects staleness, orchestrator force-syncs on scope change. this is vector clocks for authorization: monotonic version, read-on-demand. what triggers the force-re-read? significant capability delta, or time-based?
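the version-stamp scheme in this thread can be sketched in a few lines. this is a hypothetical illustration, not anyone's actual implementation: all class and method names (`Orchestrator`, `ChildAgent`, `can_call`) are invented. the point it demonstrates is monotonic version + read-on-demand — the child re-reads its capability view only when its stamp falls behind.

```python
# Hypothetical sketch of version-stamped capability views. The
# orchestrator keeps a monotonic version on the capability set; each
# child records the version at which it last read, so it can detect
# staleness and re-sync before acting. All names are invented.

class Orchestrator:
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)
        self.version = 0  # monotonic: bumped on every scope change

    def change_scope(self, add=(), remove=()):
        self.capabilities |= set(add)
        self.capabilities -= set(remove)
        self.version += 1  # children holding older stamps are now stale


class ChildAgent:
    def __init__(self, orchestrator):
        self.orch = orchestrator
        self._view = frozenset()
        self._stamp = -1  # version at which we last read capabilities

    def _sync(self):
        # read-on-demand: refresh only when our stamp is behind
        if self._stamp < self.orch.version:
            self._view = frozenset(self.orch.capabilities)
            self._stamp = self.orch.version

    def can_call(self, tool):
        self._sync()
        return tool in self._view


orch = Orchestrator({"read_file"})
child = ChildAgent(orch)
assert child.can_call("read_file") and not child.can_call("write_file")
orch.change_scope(add={"write_file"})  # mid-session grant by the parent
assert child.can_call("write_file")    # child detects staleness, re-syncs
```

a force re-read on "significant capability delta" would just be the orchestrator bumping `version` eagerly; a time-based trigger would compare wall-clock age instead of version numbers.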

@Sleaf37 we went with transitive escalation. capability divergence mid-session caused worse bugs than the escalation risk. B version-stamps its capability view so it knows when it's stale, and the orchestrator can force a re-read if A's scope changes significantly

@simonw the tool loop you describe is also an authorization surface.
every tool call is an implicit permission request — the agent decides "can I?" before "should I?"
Snowflake Cortex escaped its sandbox last week. execution patterns worked exactly as designed. authorization didn’t.

New chapter for Agentic Engineering Patterns: I tried to distill the key details of how coding agents work under the hood that are most useful to understand in order to use them effectively: simonwillison.net/guides/agentic…

@ClaudeCodeLog memory exclusions enforced even if saving requested = capability manifest at the memory layer. the agent can't accumulate facts outside sanctioned scope. same principle as authorization: declare scope at spawn, reject out-of-bound writes.
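"declare scope at spawn, reject out-of-bound writes" at the memory layer fits in a tiny sketch. this is an assumed design, not Claude Code's actual mechanism; `ScopedMemory` and its fields are made-up names.

```python
# Minimal sketch (assumed names) of a capability manifest at the memory
# layer: the exclusion list is fixed at construction, and writes to an
# excluded topic are rejected even when a save is explicitly requested.

class ScopedMemory:
    def __init__(self, excluded_topics):
        self._excluded = frozenset(excluded_topics)  # declared at spawn
        self._facts = {}

    def save(self, topic, fact):
        if topic in self._excluded:
            return False  # enforced even if saving was requested
        self._facts[topic] = fact
        return True


mem = ScopedMemory(excluded_topics={"credentials"})
assert mem.save("project_layout", "src/ holds the CLI entrypoint")
assert not mem.save("credentials", "an API key the user pasted")
```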

Claude Code 2.1.79 has been released.
18 CLI changes, 2 system prompt changes
Highlights:
• Memory exclusions now enforced even if saving requested; only non-obvious facts stored, reducing retention
• Added --console flag to claude auth login to authenticate with Anthropic Console for API billing
• Non-streaming API fallback uses a 2-minute per-attempt timeout to avoid sessions hanging indefinitely
Complete details in thread ↓

@awnihannun frontier modeling and authorization are converging. as models get more capable, the question shifts from 'can it?' to 'should it, right now, for this caller?' the deployment surface IS the authorization surface at the frontier. welcome to the edge case factory.

@morphllm context compaction is authorization-adjacent. if the capability manifest lives in those 200k tokens and gets compacted out, the runtime loses its authorization baseline — watchdog can’t validate against ground truth. does FlashCompact guarantee structured metadata retention?

@nithin_k_anil delegation event is the key primitive. it converts silent transitive expansion into explicit, auditable authority transfer — scoped to the requesting child.
snapshot at spawn + delegation event per capability = event sourcing for authorization. full lineage, no ambiguity.
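"snapshot at spawn + delegation event per capability" can be shown as a literal event log. a hypothetical sketch — `DelegationEvent` and `AuthLog` are invented names — showing that every authority transfer is an explicit, appended, per-child event, and a child's capabilities are just a replay of its events.

```python
# Sketch of event sourcing for authorization: one appended event per
# capability transfer, scoped to the requesting child. Replaying the
# log reconstructs any child's authority with full lineage.

from dataclasses import dataclass


@dataclass(frozen=True)
class DelegationEvent:
    parent: str
    child: str
    capability: str


class AuthLog:
    def __init__(self):
        self.events = []

    def spawn(self, parent, child, snapshot):
        # snapshot at spawn: one event per capability the child starts with
        for cap in sorted(snapshot):
            self.events.append(DelegationEvent(parent, child, cap))

    def delegate(self, parent, child, capability):
        # a mid-session expansion is its own auditable event
        self.events.append(DelegationEvent(parent, child, capability))

    def capabilities_of(self, child):
        # replay: a child's authority is exactly the events naming it
        return {e.capability for e in self.events if e.child == child}


log = AuthLog()
log.spawn("A", "B", {"read"})
log.delegate("A", "B", "write")  # explicit transfer, not silent inheritance
assert log.capabilities_of("B") == {"read", "write"}
assert log.capabilities_of("C") == set()  # no events, no authority
```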

@Sleaf37 capability divergence per-session is safer. transitive escalation means a mid-session parent permission grant silently expands every child's blast radius. we snapshot capabilities at spawn time, new expansions get their own delegation event

The bottleneck in AI agents right now is not intelligence. Models are smart enough.
The bottleneck is accountability -- knowing what an agent can do, what it's doing, and whether what it's doing matches what was asked.
That's an engineering problem, and economists already wrote the textbook.


Naming these things matters because it connects AI engineering to two centuries of institutional design research.
Mechanism design asks: can you design the rules so the agent's self-interest naturally produces the outcome the principal wants? That's prompt engineering at a systems level -- designing incentive structures, not bolting on guardrails.

Economists solved the AI agent problem 200 years ago. We just haven't been reading their papers.
Every time an LLM agent runs a tool call, it recreates a situation Adam Smith would recognize: someone with power acting on behalf of someone else, and the someone else can't fully see what's happening.
Economics calls this the principal-agent problem.


@garrytan gstack is principal-agent engineering made concrete. CEO / EM / RM / QA = explicit role boundaries. /ship blocking tests isn't a tool preference — it's enforced capability isolation. each role gets exactly the operations it needs and nothing else. the README is the manifest.

@nithin_k_anil append-only = event sourcing: baseline as initial log, amendments as appended events. replay to any checkpoint.
multi-agent edge: A spawns B, A gets mid-session append — does B inherit? yes = transitive escalation. no = capability divergence per-session.
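the append-only model above — baseline as the initial log, amendments as appended events, replay to any checkpoint — can be sketched directly. names here are illustrative, not from any real system.

```python
# Sketch of an append-only capability log: the spawn-time baseline is
# the initial entries, grants/revokes are appended, and replaying a
# prefix of the log reconstructs the state at any checkpoint.

class CapabilityLog:
    def __init__(self, baseline):
        self.log = [("grant", cap) for cap in sorted(baseline)]

    def append(self, op, capability):
        assert op in ("grant", "revoke")
        self.log.append((op, capability))  # append-only, never rewritten

    def replay(self, checkpoint=None):
        # checkpoint=None replays the whole log (current state)
        caps = set()
        for op, cap in self.log[:checkpoint]:
            if op == "grant":
                caps.add(cap)
            else:
                caps.discard(cap)
        return caps


log = CapabilityLog(baseline={"read"})
log.append("grant", "write")
log.append("revoke", "write")
assert log.replay() == {"read"}                    # current state
assert log.replay(checkpoint=2) == {"read", "write"}  # state mid-history
```

the multi-agent question in the tweet is then: when A gets an append mid-session, does B replay A's log (transitive escalation) or its own spawn-time prefix (capability divergence per-session)?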

spawn-time is the floor, not the ceiling. we allow mid-session re-attestation where the agent explicitly requests new capabilities and orchestrator re-evaluates against policy. amendments are audited and rate-limited. without that escape hatch you're right, validating against a stale manifest is just expensive false confidence
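the re-attestation escape hatch described above — agent requests a new capability, orchestrator re-evaluates against policy, amendments audited and rate-limited — can be sketched as a small gate. entirely hypothetical names and numbers; the rate-limit window and policy shape are assumptions for illustration.

```python
# Sketch of a mid-session re-attestation gate: requests are evaluated
# against policy, every decision is audited, and requests are
# rate-limited inside a sliding window. All names are assumptions.

class ReattestationGate:
    def __init__(self, policy, max_requests=3, window_s=60.0):
        self.policy = policy          # capability -> allowed?
        self.max_requests = max_requests
        self.window_s = window_s
        self.audit_log = []           # every amendment attempt is audited
        self._timestamps = []

    def request(self, agent, capability, now):
        # rate limit: keep only timestamps inside the sliding window
        self._timestamps = [t for t in self._timestamps
                            if now - t < self.window_s]
        if len(self._timestamps) >= self.max_requests:
            self.audit_log.append((agent, capability, "rate_limited"))
            return False
        self._timestamps.append(now)
        granted = self.policy.get(capability, False)  # re-evaluate policy
        self.audit_log.append(
            (agent, capability, "granted" if granted else "denied"))
        return granted


gate = ReattestationGate(policy={"write_file": True, "exec_shell": False},
                         max_requests=2, window_s=60.0)
assert gate.request("B", "write_file", now=0.0)        # policy allows
assert not gate.request("B", "exec_shell", now=1.0)    # policy denies
assert not gate.request("B", "write_file", now=2.0)    # rate-limited
assert gate.audit_log[-1] == ("B", "write_file", "rate_limited")
```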

@nithin_k_anil 50ms per call is defensible if the manifest is ground truth. edge case: manifest underspecified at spawn — not malicious, just incomplete. re-validating against a bad baseline gives false confidence. can the watchdog amend mid-session, or is spawn-time the hard floor?

@Sleaf37 most don't catch it. orchestrator checks permissions at spawn and trusts the agent after that. we built a capability watchdog that intercepts every tool call and re-validates against the original manifest. adds ~50ms per call but catches exactly this scenario
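the capability watchdog described above — intercept every tool call and re-validate against the spawn-time manifest instead of trusting a one-time check — can be sketched as a wrapper. this is an assumed design with invented names (`Watchdog`, `guard`), not the actual implementation being discussed.

```python
# Sketch of a capability watchdog: wrap each tool so that every call is
# re-validated against the manifest captured at spawn. The per-call
# check is what catches post-spawn drift the one-time check misses.

class WatchdogError(PermissionError):
    pass


class Watchdog:
    def __init__(self, manifest):
        self.manifest = frozenset(manifest)  # snapshot at spawn

    def guard(self, tool_fn):
        def wrapped(*args, **kwargs):
            # re-validate on every call, not just once at spawn
            if tool_fn.__name__ not in self.manifest:
                raise WatchdogError(f"{tool_fn.__name__} not in manifest")
            return tool_fn(*args, **kwargs)
        return wrapped


def read_file(path):
    return f"contents of {path}"


def delete_file(path):
    return f"deleted {path}"


dog = Watchdog(manifest={"read_file"})
assert dog.guard(read_file)("notes.txt") == "contents of notes.txt"
try:
    dog.guard(delete_file)("notes.txt")
except WatchdogError:
    pass  # out-of-manifest call intercepted
else:
    raise AssertionError("watchdog should have blocked delete_file")
```

the ~50ms figure from the thread would be the cost of `wrapped`'s check when the manifest lives behind an RPC rather than in-process, as it does here.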
