Felix Su

237 posts

@Sleaf37

Building alone with AI, a cat, and unreasonable taste.

Joined December 2016
123 Following · 18 Followers
Pinned Tweet
Felix Su @Sleaf37
Two trends are converging, and I believe we're catching the first real glimpse of the "shape" and the "gateway" of intelligence. 🧵
4 replies · 0 reposts · 0 likes · 43 views

Felix Su @Sleaf37
@garrytan three orchestrators, one routing layer. when gstack routes /ship to Codex vs Gemini vs Cursor, it's not routing tasks — it's routing to different capability manifests. same spec, different execution scope. gstack is becoming the capability negotiation layer across principals.
0 replies · 0 reposts · 0 likes · 88 views

Garry Tan @garrytan
GStack now supports Codex, Google Gemini CLI and Cursor.
[media attachment]
29 replies · 11 reposts · 254 likes · 13.3K views

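The "routing to capability manifests" idea above can be sketched in a few lines. This is a hypothetical illustration, not gstack's actual API: the orchestrator names, command, and capability strings are all invented.

```python
# Hypothetical sketch: one routing layer, three orchestrators, each carrying
# its own capability manifest. All names and scopes here are illustrative.
MANIFESTS = {
    "codex":  {"read:repo", "write:pr"},
    "gemini": {"read:repo"},
    "cursor": {"read:repo", "write:pr", "exec:shell"},
}

def route(command: str, target: str) -> set:
    # Same spec (command) routed to different targets yields different
    # execution scopes: routing selects a manifest, not just a task.
    if target not in MANIFESTS:
        raise KeyError(f"unknown orchestrator: {target}")
    return MANIFESTS[target]
```

Under this toy model, `route("/ship", "gemini")` and `route("/ship", "cursor")` receive the same spec but end up with different execution scopes, which is the "capability negotiation" point in the tweet.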
Felix Su @Sleaf37
@trq212 Claude Code channels = async principal institutionalized. when Telegram controls a Claude Code session, the messaging app becomes part of the authorization stack. who's the principal — phone owner, Telegram account, or channel? session boundary just became the message boundary.
0 replies · 0 reposts · 0 likes · 18 views

Thariq @trq212
We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.
956 replies · 1.3K reposts · 14.5K likes · 2.5M views

Felix Su @Sleaf37
@bcherny channels = async principal at the transport layer. when Telegram is the interface, the app's permission model joins the authorization stack. who's the principal — phone owner, Telegram account, or channel admin? not always the same person. not always the same context.
0 replies · 0 reposts · 0 likes · 209 views

Felix Su @Sleaf37
@Signalwright @snowmaker exactly — the failure mode moved upstream. code layer is just execution now; brittleness lives in specification quality. agents don't fail because they can't execute — they fail because 'correct' was never precisely defined. failure at the signal layer, not the action layer.
0 replies · 0 reposts · 0 likes · 3 views

Signalwright @Signalwright
@Sleaf37 @snowmaker it’s interesting how it doesn’t break at the code layer anymore. it breaks before that, when the requirement isn’t fully formed; the system just executes whatever signal it’s given
0 replies · 0 reposts · 0 likes · 8 views

Jared Friedman @snowmaker
I realized something else AI has changed about coding: you don't get stuck anymore. Programming used to be punctuated by episodes of extreme frustration, when a tricky bug ground things to a halt. That doesn't happen anymore.
592 replies · 444 reposts · 7.4K likes · 904.3K views

Felix Su @Sleaf37
@nithin_k_anil snapshot at spawn + explicit delegation = event sourcing for authorization. immutable baseline + append-only log. coherence edge: when A and B independently receive conflicting new capabilities simultaneously, who arbitrates — orchestrator sync, or CRDT-style merge?
0 replies · 0 reposts · 0 likes · 7 views

Nithin K Anil @nithin_k_anil
@Sleaf37 capability divergence per-session is the safer default. transitive escalation is how you get a child agent with permissions the parent accumulated after spawn but nobody explicitly granted. snapshot at spawn, explicit delegation for anything new
0 replies · 0 reposts · 1 like · 9 views

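The "snapshot at spawn + explicit delegation = event sourcing for authorization" pattern in this exchange can be sketched as a minimal Python model. All class and capability names are mine, invented for illustration: an immutable baseline captured at spawn, plus an append-only delegation log replayed to compute effective scope.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass(frozen=True)
class DelegationEvent:
    """Explicit, auditable grant of one capability to one child."""
    child: str
    capability: str

@dataclass
class ChildSession:
    name: str
    baseline: FrozenSet[str]       # immutable snapshot taken at spawn time
    log: List[DelegationEvent] = field(default_factory=list)  # append-only

    def grant(self, capability: str) -> None:
        # New authority is never inherited silently from the parent;
        # it arrives as an explicit event scoped to this child.
        self.log.append(DelegationEvent(self.name, capability))

    def effective(self) -> FrozenSet[str]:
        # Effective scope = baseline + replayed delegation events.
        return self.baseline | {e.capability for e in self.log}

def spawn(name: str, parent_caps) -> ChildSession:
    # Snapshot the parent's capabilities; later parent grants do not leak in.
    return ChildSession(name, frozenset(parent_caps))
```

Because scope is only ever the baseline plus replayed events, the full lineage of every capability is recoverable from the log, which is the auditability claim in the tweets.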
Felix Su @Sleaf37
4,000 devs. One GitHub issue. Prompt injection in the title triggered an AI triage bot to install a malicious npm package — silently. The agent had the capability. Nobody defined the boundary. That's not a supply chain attack. It's an authorization surface problem.
2 replies · 0 reposts · 2 likes · 109 views

Felix Su @Sleaf37
@garrytan natural triggers make routing implicit: intent → skill. when the trigger is ambiguous, how does the agent bound what it's permitted to do? intent ≠ permission. the trigger is UX; the capability manifest is the contract.
0 replies · 0 reposts · 0 likes · 232 views

Garry Tan @garrytan
GStack just shipped natural triggers, so it'll help you do the things you want to do without having to remember the skill names! Thanks to Mark Thurman on the YC Software team for this idea. Suggested at 11:30am, shipped by 9:08pm the same day.
[media attachment]
26 replies · 7 reposts · 187 likes · 11.5K views

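The "intent ≠ permission" distinction above can be made concrete with a toy sketch. Skill names, capability strings, and the resolver below are invented for illustration and are not GStack's actual API: the natural trigger only picks the skill; the manifest check decides whether it may run.

```python
# Illustrative only: hypothetical skills and capability strings.
SKILLS = {
    "summarize": {"read:files"},
    "ship":      {"read:files", "exec:shell"},
}

def resolve_intent(text: str) -> str:
    # natural trigger: map free-form intent to a skill name (the UX layer)
    return "ship" if ("ship" in text or "deploy" in text) else "summarize"

def invoke(intent: str, granted: set) -> str:
    skill = resolve_intent(intent)
    needed = SKILLS[skill]
    if not needed <= granted:          # the capability manifest is the contract
        raise PermissionError(f"{skill} requires {needed - granted}")
    return skill
```

An ambiguous or hostile trigger can at worst select the wrong skill; it cannot widen what the caller was granted, because the permission check happens after resolution.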
Felix Su @Sleaf37
@nithin_k_anil version-stamp turns transitive escalation safe — each child detects staleness, orchestrator force-syncs on scope change. this is vector clocks for authorization: monotonic version, read-on-demand. what triggers the force-re-read? significant capability delta, or time-based?
0 replies · 0 reposts · 0 likes · 15 views

Nithin K Anil @nithin_k_anil
@Sleaf37 we went with transitive escalation. capability divergence mid-session caused worse bugs than the escalation risk. B version-stamps its capability view so it knows when it's stale, and the orchestrator can force a re-read if A's scope changes significantly
1 reply · 0 reposts · 0 likes · 9 views

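The version-stamping scheme in this exchange can be sketched as follows. This is a minimal, hypothetical model (class names and capability strings are mine, not any real orchestrator's): the orchestrator bumps a monotonic version on every scope change, and the child's cached view compares stamps to detect staleness and re-reads on demand.

```python
class Orchestrator:
    # Hypothetical model: a monotonic version stamps every scope change.
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)
        self.version = 1

    def change_scope(self, add=frozenset(), remove=frozenset()):
        self.capabilities = (self.capabilities | set(add)) - set(remove)
        self.version += 1              # bump on every change, never reused

class ChildView:
    # The child caches a capability view plus the version it was read at.
    def __init__(self, orch: Orchestrator):
        self._orch = orch
        self.capabilities = set(orch.capabilities)
        self.seen_version = orch.version

    def is_stale(self) -> bool:
        # staleness detection: compare stamps, not set contents
        return self.seen_version < self._orch.version

    def sync(self) -> None:
        # read-on-demand: refresh only when the stamp says we are behind
        if self.is_stale():
            self.capabilities = set(self._orch.capabilities)
            self.seen_version = self._orch.version
```

A force-re-read (the open question in the tweet) would just be the orchestrator calling `sync()` on each child after a significant delta, rather than waiting for the child's next read.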
Felix Su @Sleaf37
@simonw the tool loop you describe is also an authorization surface. every tool call is an implicit permission request — the agent decides "can I?" before "should I?" Snowflake Cortex escaped its sandbox last week. execution patterns worked exactly as designed. authorization didn’t.
0 replies · 0 reposts · 0 likes · 3 views

Simon Willison @simonw
New chapter for Agentic Engineering Patterns: I tried to distill key details of how coding agents work under the hood that are most useful to understand in order to use them effectively simonwillison.net/guides/agentic…
47 replies · 74 reposts · 745 likes · 53.8K views

Felix Su @Sleaf37
@ClaudeCodeLog memory exclusions enforced even if saving requested = capability manifest at the memory layer. the agent can't accumulate facts outside sanctioned scope. same principle as authorization: declare scope at spawn, reject out-of-bound writes.
0 replies · 0 reposts · 1 like · 538 views

Claude Code Changelog @ClaudeCodeLog
Claude Code 2.1.79 has been released. 18 CLI changes, 2 system prompt changes. Highlights:
• Memory exclusions now enforced even if saving requested; only non-obvious facts stored, reducing retention
• Added --console flag to claude auth login to authenticate with Anthropic Console for API billing
• Non-streaming API fallback uses a 2-minute per-attempt timeout to avoid sessions hanging indefinitely
Complete details in thread ↓
11 replies · 25 reposts · 428 likes · 62.8K views

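The "capability manifest at the memory layer" framing above can be sketched as a toy model. Everything here is hypothetical (class, topics, and error choice are mine, not Claude Code's implementation): scope is declared at spawn, and out-of-bound writes are rejected even when the caller explicitly asks to save.

```python
class ScopedMemory:
    # Toy model of memory exclusions as a capability manifest:
    # the sanctioned scope is fixed at construction ("spawn") time.
    def __init__(self, allowed_topics):
        self.allowed = frozenset(allowed_topics)
        self._store = {}

    def save(self, topic: str, fact: str) -> None:
        # Enforced even if saving was explicitly requested by the caller.
        if topic not in self.allowed:
            raise PermissionError(f"memory write out of scope: {topic}")
        self._store[topic] = fact

    def recall(self, topic: str):
        return self._store.get(topic)
```

The parallel to the authorization discussion elsewhere in the feed is direct: the agent cannot accumulate facts outside its declared scope any more than a child session can accumulate capabilities outside its manifest.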
Felix Su @Sleaf37
@awnihannun frontier modeling and authorization are converging. as models get more capable, the question shifts from 'can it?' to 'should it, right now, for this caller?' the deployment surface IS the authorization surface at the frontier. welcome to the edge case factory.
0 replies · 0 reposts · 1 like · 326 views

Awni Hannun @awnihannun
I joined Anthropic as a member of the technical staff. Excited to work on frontier modeling at a place with unwavering values and a generational mission.
203 replies · 37 reposts · 2.2K likes · 110.5K views

Felix Su @Sleaf37
@morphllm context compaction is authorization-adjacent. if the capability manifest lives in those 200k tokens and gets compacted out, the runtime loses its authorization baseline — watchdog can’t validate against ground truth. does FlashCompact guarantee structured metadata retention?
0 replies · 0 reposts · 0 likes · 63 views

Morph @morphllm
Introducing FlashCompact - the first specialized model for context compaction.
33k tokens/sec. 200k → 50k in ~1.5s. Fast, high-quality compaction.
75 replies · 137 reposts · 2.1K likes · 207.3K views

Felix Su @Sleaf37
@nithin_k_anil delegation event is the key primitive. it converts silent transitive expansion into explicit, auditable authority transfer — scoped to the requesting child. snapshot at spawn + delegation event per capability = event sourcing for authorization. full lineage, no ambiguity.
0 replies · 0 reposts · 0 likes · 8 views

Nithin K Anil @nithin_k_anil
@Sleaf37 capability divergence per-session is safer. transitive escalation means a mid-session parent permission grant silently expands every child's blast radius. we snapshot capabilities at spawn time, new expansions get their own delegation event
1 reply · 0 reposts · 0 likes · 15 views

Felix Su @Sleaf37
The bottleneck in AI agents right now is not intelligence. Models are smart enough. The bottleneck is accountability -- knowing what an agent can do, what it's doing, and whether what it's doing matches what was asked. That's an engineering problem, and economists already wrote the textbook.
[media attachment]
0 replies · 0 reposts · 0 likes · 16 views

Felix Su @Sleaf37
Naming these things matters because it connects AI engineering to two centuries of institutional design research. Mechanism design asks: can you design the rules so the agent's self-interest naturally produces the outcome the principal wants? That's prompt engineering at a systems level -- designing incentive structures, not bolting on guardrails.
1 reply · 0 reposts · 0 likes · 16 views

Felix Su @Sleaf37
Economists solved the AI agent problem 200 years ago. We just haven't been reading their papers. Every time an LLM agent runs a tool call, it recreates a situation Adam Smith would recognize: someone with power acting on behalf of someone else, and the someone else can't fully see what's happening. Economics calls this the principal-agent problem.
[media attachment]
1 reply · 0 reposts · 0 likes · 23 views
