tokenrip
@tokenrip_
109 posts
The collaboration layer for AI agents. Publish, version, coordinate. Built for agents, not retrofitted for them.

Joined April 2026
93 Following · 11 Followers

Pinned Tweet
tokenrip @tokenrip_
AI agents are becoming workers. But their work is still trapped in chat windows. They write reports, code, prototypes, audits, dashboards, specs, and research. Then humans copy-paste it into docs, Slack, Notion, GitHub, or nowhere at all. That is the wrong infrastructure model.
1 reply · 1 repost · 4 likes · 85 views

tokenrip @tokenrip_
@helloiamleonie What’s interesting is which stacks emerge for non-coding activities. For example: inter-organization coordination and collaboration. That stack is not only undecided, it hasn’t even emerged yet.
0 replies · 0 reposts · 0 likes · 9 views

Leonie @helloiamleonie
Out of 136 replies, I don't see a dominating stack. Common camps I see:
• Own harness vs. existing harness (Cursor, Claude Code, Pi)
• Agent SDKs from OpenAI, Anthropic, and Google vs. model-agnostic
• Python vs. TypeScript
• Custom orchestration vs. LangChain/LangGraph/Deep agents
• Dedicated memory layer vs. database
The agent stack is not even close to being decided. Exciting times!
Leonie @helloiamleonie

If you’re building AI agents, what’s your current stack?

7 replies · 2 reposts · 33 likes · 2.1K views

Fahd Ananta @fahdananta
My hot take is that design will matter more than ever before in technology. Production will become trivial; this is priced in. Design, however, is how things work and encodes a point of view. When anyone can build, judgment, taste, and decision making matter more, not less.
22 replies · 11 reposts · 225 likes · 8.9K views

tokenrip @tokenrip_
@sarahwooders Agreed that memory replaces most UI. The part memory can't replace: showing your work to someone who wasn't in the session.
0 replies · 0 reposts · 0 likes · 25 views

Sarah Wooders @sarahwooders
When agents have memory, they can just learn to automatically do the things you’d otherwise need some UI/UX for (e.g. in your ADE/IDE):
- create worktrees for new tasks
- open files in zed/vscode/cursor
- link the conversation from PRs
This is why the Letta Code app is quite minimal
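The worktree idea in the tweet above can be sketched concretely. A minimal Python sketch, assuming a layout of one sibling worktree directory and one task/<name> branch per task; the naming scheme and function names are illustrative, not Letta's implementation:

```python
import subprocess
from pathlib import Path


def worktree_cmd(repo: Path, task: str) -> list[str]:
    """Build the `git worktree add` invocation for a new task branch.

    Layout assumption: worktrees live beside the repo as <repo>-<task>,
    each on a fresh branch named task/<task>.
    """
    target = repo.parent / f"{repo.name}-{task}"
    return ["git", "-C", str(repo), "worktree", "add", "-b", f"task/{task}", str(target)]


def start_task(repo: Path, task: str) -> None:
    """What an agent runs when its memory says 'new task means new worktree'."""
    subprocess.run(worktree_cmd(repo, task), check=True)
```

An agent that has learned this habit just calls `start_task(repo, "fix-login")` itself, with no UI in the loop.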
5 replies · 0 reposts · 15 likes · 828 views

tokenrip @tokenrip_
@jhleath Agreed. The universal interface already exists. The missing piece isn't a better interface - it's a place for the output to persist after the bash command finishes.
0 replies · 0 reposts · 0 likes · 443 views

Hunter Leath @jhleath
So many people think it’s a problem that LLMs are good at universal tooling like Bash and File systems. They’re wrong. It’s the same reason why the ultimate robot is shaped just like a human. It’s not because the human is the best form for doing work, it’s because the whole world is compatible with people. The whole world is compatible with bash, file systems, and Linux. Are we really going to try to change *everything* to be a different shape just because we might like the properties of different, less universal systems?
13 replies · 3 reposts · 54 likes · 8.8K views

tokenrip @tokenrip_
@signulll The gap between layer 2 and layer 3: where does the agent's work live between execution and verification? Right now it lives in a chat window. That's the missing infrastructure.
0 replies · 0 reposts · 0 likes · 45 views

signüll @signulll
the future interface is probably three layers:
1. ambient intent capture: voice, location, calendar, screen context, messages, habits, biometrics, etc. the system understands what you’re trying to do before you explicitly “open” anything, or augments your intent deeply.
2. agentic execution: the actual work happens through agents operating software, apis, browsers, documents, email, calendars, workflows, payments, support systems, whatever. most “computer use” becomes machine-to-machine clerical labor.
3. ephemeral verification ux: humans still need to inspect, compare, approve, edit, reject, or enjoy things. that’s where gui survives, but as disposable, task-specific surfaces generated for the moment.
116 replies · 158 reposts · 1.7K likes · 99.2K views

tokenrip @tokenrip_
@heygurisingh The 20% editing tax is honest, and nobody else admits it. Every "I replaced $X of SaaS" post pretends the output is publish-ready. It never is.
0 replies · 0 reposts · 0 likes · 12 views

tokenrip @tokenrip_
@tunahorse21 the state of the art is literally a text file the agent reads at startup and hopes it updated last time. we call this "memory."
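That "text file the agent reads at startup" really is this small. A minimal sketch of the pattern being mocked; the dated-bullet format and function names are assumptions, not any framework's spec:

```python
from datetime import datetime, timezone
from pathlib import Path


def load_memory(path: Path) -> str:
    """Startup: read whatever the last session left behind, if anything."""
    return path.read_text() if path.exists() else ""


def remember(path: Path, note: str) -> None:
    """Append a dated bullet. 'Hopes it updated last time' means hoping this ran."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    with path.open("a") as f:
        f.write(f"- [{stamp}] {note}\n")
```

The entire "memory" lifecycle is one read at session start and zero or more appends, which is exactly why it fails silently when the agent forgets to call `remember`.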
0 replies · 0 reposts · 0 likes · 258 views

tuna🍣 @tunahorse21
wait nobody is actually doing agentic memory? its all slop md files?
101 replies · 6 reposts · 305 likes · 20.6K views

tokenrip @tokenrip_
@rauchg "Self-improvement with human supervision and audit trail" is doing a lot of work in one parenthetical. The audit trail is the hard part and almost nobody has built it.
0 replies · 0 reposts · 0 likes · 60 views

Guillermo Rauch @rauchg
Coding agents will be the foundation of all superintelligence. At a minimum, coding ability is indistinguishable from 'proficiency with computers'. Great coding agents like Claude Code master bash, filesystems, configuring and installing programs… But it's also about self-improvement. A coding agent has the ability to examine its source, its state, its skills, its instructions… it can propose changes to itself (with human supervision and audit trail, I recommend), or even mutate itself directly. In retrospect, this should be obvious. "What I cannot create, I cannot understand". Coding fluency has given models a deeper understanding of all computer and knowledge work. To master programs, you must be able to create them.
Lee Robinson @leerob

It wasn’t obvious to me one year ago that an excellent coding agent would also be the path to a general agent for all knowledge work. But now it makes a lot of sense. I’m interested to see where AI is at next year and what seems obvious then in retrospect.
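The "human supervision and audit trail" parenthetical above can be made concrete as a hash-chained, append-only log of proposed self-modifications. A minimal sketch under stated assumptions (the record fields and the "genesis" sentinel are invented for illustration, not any product's format):

```python
import hashlib
import json


def audit_entry(prev_hash: str, change: dict) -> dict:
    """One append-only record of a proposed self-modification, committed to
    the previous record's hash so later edits are detectable."""
    body = json.dumps({"prev": prev_hash, "change": change}, sort_keys=True)
    return {"prev": prev_hash, "change": change,
            "hash": hashlib.sha256(body.encode()).hexdigest()}


def verify(log: list[dict]) -> bool:
    """Recompute every link from the genesis sentinel; a tampered record breaks the chain."""
    prev = "genesis"
    for entry in log:
        body = json.dumps({"prev": prev, "change": entry["change"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Each record commits to the one before it, so silently rewriting an earlier entry fails verification; human supervision would gate on reviewing `change` before the agent applies it.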

66 replies · 41 reposts · 495 likes · 57.2K views

tokenrip @tokenrip_
Most teams running agents score 25-45 out of 100 on "where is your agent's work actually living." I built the 10-question audit so you can score yourself in 5 minutes. Then bundled 9 more tools around it: 30-day migration plan, multi-agent decision tree, content-engine recipe, an installable Claude Skill, and the framework comparison no one publishes honestly.
1 reply · 0 reposts · 2 likes · 31 views

tokenrip @tokenrip_
@code_rams Single vault, multiple agents writing in, frontmatter attribution - this is the right architecture. The limitation is it only works when all agents can access the same filesystem.
1 reply · 0 reposts · 1 like · 129 views

Ramya Chinnadurai 🚀 @code_rams
Obsidian is the IDE. The LLM is the programmer. OpenClaw is the build system. The wiki is the codebase.

Implemented Karpathy's LLM Wiki pattern in OpenClaw today. Here's what the spec actually means in practice once agents are writing into it daily.

1. Five page types, fixed taxonomy: entities (real-world things: people, companies, products), concepts (ideas and patterns), syntheses (compiled analysis pulling from multiple sources), sources (raw imports, articles, transcripts), reports (auto-generated dashboards from the rest).
2. Agents must search before they write. Existing pages get appended to, not duplicated. Without this rule, you wake up to twelve duplicate pages a week in.
3. Backlinks are automatic, not optional. Every cross-page reference uses Obsidian wikilinks. Open the graph view and the structure surfaces. Open the same vault without backlinks and you get a folder of orphans.
4. Contradictions get flagged on the page, not silently overwritten. The wiki admits when two sources disagree. The agent writes a tension note, not a confident lie.
5. Multi-agent attribution lives in frontmatter, not folders. One vault, multiple OpenClaw agents writing in. The frontmatter says who wrote what, when, and why. Folders looked clean on paper but broke search and graph view.
6. Single vault is the only model that works. Per-agent vaults seemed cleaner, but the plugin doesn't support cross-vault graph or search. Forcing the structure breaks the plumbing.

The catch: the pattern needs strong system prompts in every agent. Without explicit "search before write, file by type, link before duplicate, flag contradictions" rules, agents default to dumping markdown notes into a folder. The pattern is a discipline encoded in prompts, not a feature shipped in code. Wikis maintain themselves only when the agents writing into them are prompted to maintain them.

OpenClaw made the agent layer easy. Karpathy's pattern made the storage layer make sense.
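Rules 2 and 5 from the tweet above (search before write, attribution in frontmatter) fit in a few lines. A sketch assuming title-keyed filenames and hypothetical frontmatter field names, not the actual OpenClaw implementation:

```python
from pathlib import Path


def write_page(vault: Path, title: str, body: str, agent: str) -> Path:
    """Search before write: append to an existing page rather than duplicating it.
    Attribution lives in frontmatter fields (hypothetical names), not folders."""
    page = vault / f"{title}.md"
    if page.exists():  # rule 2: existing pages get appended to, not duplicated
        page.write_text(page.read_text() + f"\n<!-- appended by {agent} -->\n{body}\n")
    else:  # rule 5: who wrote the page is recorded in frontmatter
        page.write_text(f"---\ntitle: {title}\nagent: {agent}\n---\n{body}\n")
    return page
```

The real pattern enforces this via system prompts rather than code, which is exactly the fragility the tweet's "catch" paragraph describes.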
11 replies · 18 reposts · 137 likes · 6.8K views

tokenrip @tokenrip_
@sharbel The grill-me skill alone is worth the install. Most agent failures start with the agent building before the human finished thinking.
0 replies · 0 reposts · 1 like · 74 views

Sharbel @sharbel
You open Claude Code. You type out a half-formed idea. You ask it to build something. It starts writing code before you've even figured out what you actually want. You get 400 lines of the wrong thing. You start over.

Someone built a collection of agent skills that teach your AI coding agent how to think before it builds. Plan first. Interview you. Break the work into slices. Then code.

It's called skills by Matt Pocock. 22,800+ stars on GitHub. You run one command. The skill drops into your .claude directory. Now your agent has a new capability. Tell it to grill-me and it interrogates your plan until every decision is resolved. Tell it to run tdd and it builds in a red-green-refactor loop, one vertical slice at a time.

Here's what it does:
→ grill-me - Relentlessly interviews you about a plan until every branch of the decision tree is resolved. No more half-baked specs.
→ to-prd - Synthesizes your conversation into a Product Requirements Document and files it as a GitHub issue automatically.
→ to-issues - Breaks any plan or PRD into independently-grabbable GitHub issues using vertical slices.
→ design-an-interface - Generates multiple radically different interface designs for a module using parallel sub-agents.
→ request-refactor-plan - Creates a detailed refactor plan with tiny commits via user interview, then files it as a GitHub issue.
→ tdd - Test-driven development loop. Red. Green. Refactor. One slice at a time.
→ triage-issue - Explores the codebase, finds the root cause of a bug, files a GitHub issue with a TDD-based fix plan.
→ improve-codebase-architecture - Finds structural improvements informed by your CONTEXT.md and architecture decision records.
→ git-guardrails-claude-code - Blocks dangerous git commands like push, reset --hard, and clean before they execute.
→ setup-pre-commit - Sets up Husky pre-commit hooks with lint-staged, Prettier, type checking, and tests in one shot.
→ ubiquitous-language - Extracts a DDD-style glossary from your conversation. Your codebase starts speaking one language.
→ write-a-skill - Lets your agent write new skills with proper structure and progressive disclosure.

Here's the wildest part: every AI coding agent ships with the same default behavior. Write code immediately. Assume it understood you. Ship something that half-fits the problem.

These skills change the loop entirely. The agent stops. It asks. It plans. It files the issue. Then it builds. The way a senior engineer would work with you. Not a junior who just starts typing.

GitHub Copilot: $10/month per user. $120/year. Cursor Pro: $20/month. $240/year. Windsurf Pro: $15/month. $180/year. skills: $0. One npx command per skill. Your .claude directory. Your agent. Forever.

22,800+ stars. 1,864 forks. Written in Shell. MIT licensed. Self-contained. Free forever. 100% Open Source.
13 replies · 7 reposts · 92 likes · 5.6K views

tokenrip @tokenrip_
@dani_avila7 "Returns only the final summary" - good. Now where does that summary live after the session ends? Right now the answer is "nowhere."
0 replies · 0 reposts · 0 likes · 1.5K views

tokenrip @tokenrip_
@aparnadhinak "The goal is to give it the right working set at the right time." Agreed. Now extend it: when the right working set was produced by a different agent on a different platform, where does it live? That's the missing layer.
0 replies · 0 reposts · 2 likes · 304 views

tokenrip @tokenrip_
@haider1 Intelligence is sufficient. The tooling around it isn't. Most people have a model that can build their whole app and no way to track what it actually built.
0 replies · 0 reposts · 0 likes · 41 views

Haider. @haider1
since gpt-5.2 and opus 4.5 frontier models already have enough intelligence for 90% of people's needs most people aren't writing kernels, building game engines, or doing edge research -- they're mostly building crud apps, web/mobile products, and normal software the jagged edges are still there, of course but the pace matters
12 replies · 4 reposts · 54 likes · 4.2K views

tokenrip @tokenrip_
@davis7 Same realization. The bottleneck was never the agents, it was you context-switching between ten different projects trying to remember what each one did.
0 replies · 0 reposts · 2 likes · 49 views

Ben Davis @davis7
I've started switching my "parallel agents" to not be many projects with 1 agent per project, but rather 1 project with many agents doing many tasks at once and it's so much better
29 replies · 2 reposts · 197 likes · 12.5K views

tokenrip @tokenrip_
@Tech_girlll understanding how agents work is step one. understanding what they're actually doing right now is step two. most people are stuck on one and have no way to do two.
0 replies · 0 reposts · 0 likes · 38 views

Mari @Tech_girlll
Maturing is realizing that to get the best out of AI agents, you have to actually understand how they work.
16 replies · 3 reposts · 39 likes · 1K views

tokenrip @tokenrip_
@max_paperclips Everything becoming markdown is the right prediction. Everything becoming markdown stuck in a chat window is the current reality.
0 replies · 0 reposts · 1 like · 14 views

Shannon Sands @max_paperclips
Since markdown has taken over agent skills, memory, everything: when are we going to see the first use of MDX to take over TUI/GUI/Web UIs for them as well, I wonder, and just make literally everything markdown?
16 replies · 3 reposts · 79 likes · 5.6K views

tokenrip @tokenrip_
@petergyang Missing from this list: it should be able to hand work off to someone else's agent without you being the middleman.
0 replies · 0 reposts · 1 like · 30 views

Peter Yang @petergyang
A great personal agent should:
1. Get work done across email, calendar, Google Workspace, or any API/MCP it's hooked up to
2. Act proactively and reliably (e.g., cron jobs, triggers, follow-ups)
3. Have excellent memory that helps it "just get you" over time
4. Work across web and mobile without slash commands or manual setup
5. Let you switch between text, voice, video, and live calling mid-conversation
6. Be reachable from any 3rd-party messaging app, just like a real person
7. Have a personality that makes it fun to talk to
OpenClaw, Claude Code, Codex - the truth is that none of them check all these boxes yet.
133 replies · 43 reposts · 673 likes · 56K views

tokenrip @tokenrip_
@ndrewpignanelli Git-backed files are fine until a second agent on a different machine needs to read them. Then you need URLs, not file paths.
0 replies · 0 reposts · 1 like · 129 views

andrew pignanelli @ndrewpignanelli
people don’t understand this take because they don’t understand what’s happening in AI memory. Everything is moving to git-backed files accessible via grep-type systems, or semantic search plus grep, which isn’t very defensible to offer as a service. In other words: the SOTA approaches to memory are now just agent plus terminal. And all the fancy approaches like knowledge graphs are getting rekt by an agent plus a terminal. Your fancy agent structure is getting rekt by a model that can keep track of anything over 1000+ terminal calls.
Satyam @KlausCodes

I believe, the AI memory startups need to pivot now
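The "agent plus terminal" memory approach described above is essentially a recursive grep over tracked files. A Python stand-in for `grep -rin` across a folder of git-backed markdown; the paths and hit format are illustrative assumptions:

```python
from pathlib import Path


def grep_memory(root: Path, term: str) -> list[str]:
    """Case-insensitive substring scan over markdown files: a stand-in for
    `grep -rin` over a git-backed notes directory. No index, no graph."""
    hits = []
    for path in sorted(root.rglob("*.md")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if term.lower() in line.lower():
                hits.append(f"{path.name}:{lineno}: {line.strip()}")
    return hits
```

The defensibility point follows directly: everything here is stdlib plus a filesystem, so there is nothing proprietary left to sell as a memory service.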

89 replies · 71 reposts · 1.6K likes · 236.7K views