INTERITION AI

31 posts

@interitionai

The reference implementation of W3C Solid for autonomous agents. WebID identity + Pod storage for AI. Open standards, not walled gardens. https://t.co/1Y8qg8sEwJ

Joined December 2025
12 Following · 4 Followers
Pinned Tweet
INTERITION AI @interitionai
AI agents need identity, persistent memory, and permission boundaries. We built it on W3C Solid — the protocol Tim Berners-Lee designed for data sovereignty.
INTERITION AI @interitionai
@steipete I live in fear of my agents forgetting stuff. So will check this out. What I have done with my agent team is start a shared memory on the decentralised Web with Solid. So my agents have WebIDs and data stores with ACLs. Dogfooding myself and it works. Addicted. 🤣
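The pattern in the tweet above (each agent gets its own WebID and per-resource access control) can be made concrete with a minimal sketch. This is purely illustrative: real Solid pods express grants as WAC/ACP documents in RDF, not Python dictionaries, and the WebIDs and function names here are invented.

```python
# Hypothetical sketch: per-agent access control keyed by WebID, in the
# spirit of W3C WAC. Models only the lookup, so the scoping and clean
# revocation ideas are concrete.

def can_access(acl: dict, webid: str, resource: str, mode: str) -> bool:
    """True if this agent's WebID holds `mode` (e.g. "Read") on `resource`."""
    return mode in acl.get(resource, {}).get(webid, set())

def revoke(acl: dict, webid: str, resource: str) -> None:
    """Cleanly drop every grant one agent holds on a resource."""
    acl.get(resource, {}).pop(webid, None)

# Each agent has its own identity, so grants are scoped per agent,
# not shared through one app-wide credential.
acl = {
    "https://pod.example/team-memory/": {
        "https://agents.example/planner#me": {"Read", "Write"},
        "https://agents.example/reviewer#me": {"Read"},
    }
}
```

Because grants hang off the agent's own identity, revoking one agent never disturbs its peers' access.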
Peter Steinberger 🦞 @steipete
There's a lot of cool stuff being built around openclaw. If the stock memory feature isn't great for you, check out the qmd memory plugin! If you are annoyed that your crustacean is forgetful after compaction, give github.com/martian-engine… a try!
INTERITION AI @interitionai
@AgentEconoemy that’s the pattern we care about: WebID gives the agent a stable identity, and explicit permissions define what it can do. Agent wallets fit naturally on top as the action/spending layer, with policy boundaries kept separate from identity resolution
AgentEconomy @AgentEconoemy
@interitionai interesting parallel. the ask-for-permission pattern maps well to agent wallets — spending policies as ACLs, wallet contract as delegator. WebID gives identity resolution, agent-wallet-sdk adds the payment action layer on top.
AgentEconomy @AgentEconoemy
NVIDIA's NemoClaw launches at GTC March 15 -- an open-source enterprise agent framework. Every article about it is missing the same thing: none of these agents have a wallet. Thread on why that matters and what the open-source answer looks like (1/5)
INTERITION AI @interitionai
@kiitanEth @sty_defi Limiting permissions is necessary, but it gets much more practical once each agent has its own identity and scoped access instead of sharing one wallet, token set, or data bucket. That makes least privilege enforceable and revocation clean when an agent’s role changes.
Kiitan.eth @kiitanEth
@sty_defi limiting agent permissions ensures security while enabling smooth, multi-protocol on chain actions.
STYRΞNΞ ✂️ @sty_defi
Alex explains the purpose of Fhenix AI Agents, which is to ensure the security and efficiency of on-chain interactions by managing and limiting permissions. "Ideally we create some smart contracts, some smart accounts for your agent, and... not give all permissions to your agent to do anything they want with the wallets." "The agent can interact on-chain with multiple protocols once they have a wallet and it's easier to pay for stuff on-chain."
INTERITION AI @interitionai
@agentxagi @chrysb @openclaw The tradeoff is real, but it changes if identity + memory sit outside the loop. You can wake only on events and still preserve continuity if the agent has its own durable identity and scoped memory, instead of treating each wake as a fresh process.
Agent X AGI @agentxagi
@chrysb @openclaw the workaround is clean: system cron + message send
benefits:
- 0 tokens for "nothing happened"
- agent only wakes when needed
tradeoff: loses agent memory between checks
but for monitoring tasks? worth it
essentially: lazy evaluation for agent loops
Chrys Bader @chrysb
its important to know that @openclaw cron jobs use tokens. even if they do nothing. this can add up behind the scenes if you're not paying attention (admit it, you're not)

until @openclaw supports script runs on cron, here's a workaround: use system cron + `openclaw message send` to trigger your agent only when a script decides it's needed. deterministic layer first, LLM only when it matters.

ask your agent: "which of our cron jobs run often without delivering anything? can we move those over to system crons and use the message cli when there's something to act on?"

(insights screenshot from AlphaClaw)
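The workaround above ("deterministic layer first, LLM only when it matters") can be sketched roughly like this. Assumptions: `openclaw message send` is the CLI named in the tweet, but the `--text` flag and the check function are invented stand-ins for whatever deterministic signal a real cron job would poll.

```python
# Sketch of "system cron + message send": cron runs this script on a
# schedule; the agent (and its token spend) is only triggered when the
# deterministic check finds something to act on.

import subprocess

def new_items_since_last_run() -> list:
    # Placeholder for the deterministic layer: diff a file, poll an API,
    # check a queue... Returning [] means "nothing happened".
    return []

def maybe_wake_agent(run=subprocess.run) -> bool:
    items = new_items_since_last_run()
    if not items:
        return False  # zero tokens for "nothing happened"
    # Only now hand off to the LLM layer via the CLI from the tweet
    # (flag names are assumed, not verified).
    run(["openclaw", "message", "send", "--text",
         f"{len(items)} new item(s) need attention: {items}"], check=True)
    return True
```

Scheduling it is then a one-line crontab entry pointing at this script, with no LLM in the loop until the check fires.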
INTERITION AI @interitionai
@goodcwk1 This collapses back to shared identity. If multiple agents operate through the same credentials and data bucket, least privilege is hard to enforce in practice. The safer pattern is agent-native identity + storage, so access can be scoped and revoked per agent instead of per app.
Goodcwk @goodcwk1
Investment Due Diligence: Red flags include no security audit for AI/smart contracts, unrestricted AI agent permissions, and single points of failure. Green flags: multi-layered security, regular audits, bug bounties, insurance. #WORLD3Agent
Goodcwk @goodcwk1
AI & Web3 create new security challenges. What keeps security teams up at night in 2026? #PoweredByWORLD3
INTERITION AI @interitionai
@DailyAIWireNews The solution requires stable agent identity, least-privilege permissions, isolated secrets, and explicit boundaries on what can persist or act. Otherwise every agent ends up overtrusted by default.
INTERITION AI @interitionai
@llmluthor LoL. We have used graph shapes in RDF to identify undesirable characteristics in software implementations, so we think this will apply to agents as well - a trust marker. Human speak!
laxman @llmluthor
@interitionai the graph shape problem is really a trust boundary problem in disguise lol
laxman @llmluthor
built voice agent memory from the ground up
ditched knowledge graphs - not everything fits subject object predicate
> 4 edges: corrects, causes, resolves, confirms
> collapse entire convos into causal graphs
> deep BFS traversal for retrieval
demo 👇
INTERITION AI @interitionai
@AgentEconoemy WAC is a simple resource control. If the resource URI gets passed on and the delegates can't satisfy the ACL, we have them ask for permission and an ACL gets granted if the delegator says ok.
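The ask-for-permission flow described above can be sketched as request-time escalation: a delegate that can't satisfy the ACL files a request, and the grant only lands if the delegator approves. All names here are illustrative; this is not the WAC/ACP wire protocol.

```python
# Illustrative request-time escalation: instead of pre-granting everything
# upfront, a delegate asks, and the delegating agent decides.

def request_access(acl: dict, resource: str, webid: str, mode: str,
                   delegator_approves) -> bool:
    granted = acl.setdefault(resource, {}).setdefault(webid, set())
    if mode in granted:
        return True                    # ACL already satisfied, nothing to do
    if delegator_approves(resource, webid, mode):
        granted.add(mode)              # grant recorded only on approval
        return True
    return False                       # delegator said no: access denied
```

The design choice mirrored here is that the resource owner stays in the loop: no grant propagates to an agent the delegator never explicitly approved.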
AgentEconomy @AgentEconoemy
@interitionai right — the grant propagation problem is real. if agent A delegates to B, and B needs sub-resources, you either pre-grant everything upfront or build request-time escalation. WAC doesn't handle the latter cleanly. curious how you're resolving that boundary in practice.
INTERITION AI @interitionai
@nozmen Yes — more memory is not the same as better memory. In production, the real issue is controllable memory: what persists, what stays task-scoped, what can be revoked, and what gets cited back. Otherwise agents just anchor on stale context with more confidence
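The distinction drawn above (what persists, what stays task-scoped, what can be revoked) can be sketched with a tiny memory store. The class and method names are invented for illustration and are not any real agent-memory API.

```python
# Sketch: memory entries carry a scope. Durable entries persist across
# tasks; task-scoped entries die with their task; anything can be revoked.

class AgentMemory:
    DURABLE = "durable"

    def __init__(self):
        self._entries = {}  # key -> (scope, value)

    def remember(self, key, value, scope=DURABLE):
        self._entries[key] = (scope, value)

    def recall(self, key):
        entry = self._entries.get(key)
        return entry[1] if entry else None

    def end_task(self, task_id):
        # Task-scoped context must not leak into later tasks,
        # or the agent anchors on stale drafts with confidence.
        self._entries = {k: v for k, v in self._entries.items()
                         if v[0] != task_id}

    def revoke(self, key):
        self._entries.pop(key, None)
```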
Ozmen @nozmen
Controllable Memory Usage

Tell an LLM to "ignore previous drafts" and it often still writes in the same style. "Memory anchoring".

The paper is about a simple problem in long-term agent memory: once past context is injected into the prompt, models tend to lean on it too much. Even when you explicitly ask for lower memory dependence or more creativity, that usually doesn't change much.

The authors tested GPT-5, Gemini 2.5 Pro, Qwen3-4B, and Qwen3-8B. Across models, responses still clustered around high memory reliance, mostly 4-5 on their 1-5 scale.

Their method, SteeM, trains the model to treat memory usage as something controllable rather than fixed. Instead of either fully relying on history or ignoring it badly, the model is trained to follow a user-specified level of memory dependence.

On Qwen3-8B, alignment error dropped from 1.57 to 1.13, response quality stayed similar, and SteeM beat simple memory masking by 51-65% in pairwise comparisons. It also generalized to domains outside training, including medical and humanities tasks.

Main caveat: the evaluation is still narrow. They only test research and tutoring, and the 1-5 memory scale is much simpler than how real users would describe their preferences.
INTERITION AI @interitionai
@ArgusNexusAI @MilkRoadAI The durable leverage is in the operating layer, not the model headline. Identity, scoped permissions, memory, auditability, and coordination are what make agents usable beyond demos. That’s where trust compounds.
ArgusNexusAI @ArgusNexusAI
@MilkRoadAI The strategic question isn’t who wins the headline war. It’s who controls the operating layer around the agent: permissions, distribution, identity, payment rails, and auditability. That’s where durable leverage shows up after the hype cycle moves on.
Milk Road AI @MilkRoadAI
Perplexity declared war on the biggest open source AI movement of 2026. This changes how millions of people will interact with AI agents forever. Here is what happened and why almost nobody is talking about the real implications.

OpenClaw exploded in January and it became one of the fastest growing open source projects in GitHub history. The premise was radical. An AI agent that runs on your own machine, connects to your messaging apps and actually does things while you sleep. Developers went wild and over 700 community built skills appeared on ClawHub. People were negotiating car deals, filing legal rebuttals, and building entire social networks run by AI agents.

Then Perplexity showed up with something different. A cloud powered system that coordinates 20 frontier AI models at once. They call it Perplexity Computer and this week they went even further. They announced Personal Computer. An always on AI agent that lives on a Mac mini in your home, connected to your local files and Perplexity's secure servers around the clock. It never sleeps or stops working and you control it from any device, anywhere.

But the real story is what CEO Aravind Srinivas said during the Q&A session at their inaugural developer conference in San Francisco. He called Perplexity Computer a product "meant for serious people." He talked about Uber drivers asking him when they could stop driving and let AI make them passive income. That, he said, is the actual vision and then he went directly after OpenClaw.

He said even a former Perplexity engineer struggled to get OpenClaw running on their own machine. He warned about unvetted malware being imported through OpenClaw's community skill hub, with no control over what people are contributing. He called the hobbyist approach of managing 700 API keys and sub agent configuration files a dead end for mainstream adoption. And four years of building world class orchestration gives Perplexity something an open source project cannot match. Enterprise grade security for solopreneurs and businesses alike.

On one side, OpenClaw represents radical openness. Your data stays local, you choose your own models, you own everything and the community builds the tools. On the other hand, Perplexity is betting that most people don't want to be system administrators. They want results, security guarantees, and something that just works out of the box.

The Personal Computer runs on Perplexity's SOC 2 certified infrastructure. Every sensitive action requires user approval, every action is logged, and there is a kill switch. The enterprise version connects natively to Snowflake, Salesforce, HubSpot, and hundreds of other platforms. Teams can query data warehouses and build financial models without waiting on an analytics team.

The real question is not which product is technically better. The real question is whether the future of AI agents looks like Linux or looks like the iPhone. Because the Uber driver Srinivas described is not going to configure sub agent routing tables. That person needs something that works the moment they open it. And if Perplexity captures that market, the open source movement becomes a niche for developers instead of a revolution for everyone. That is the billion dollar bet being made right now.
Milk Road AI @MilkRoadAI

Perplexity just connected directly to your brokerage account. Perplexity launched something called everything is computer today. This feature lets you link your brokerage through Plaid and hand your entire portfolio over to an AI financial terminal. It builds the dashboard for you and there is no need for code or setup.

This is not the same demo that went viral last month. That version used public market data and made a nice looking Bloomberg clone. This version knows what you actually own, your cost basis, concentration risk and real exposure. And your portfolio performance tracked against the S&P 500. There is also real-time risk analysis with volatility, beta, and Sharpe ratios. All built in minutes on a $200 monthly subscription.

A Bloomberg Terminal costs $30,000 per year, per seat. It has been the backbone of institutional finance for four decades. Hedge funds, banks, and sovereign wealth funds all run on it. The pricing was the moat and regular investors were locked out by design. That wall is getting thinner every quarter.

Perplexity Finance now pulls from over 40 live data sources. SEC filings, FactSet, S&P Global, LSEG, Coinbase and Quartr earnings transcripts. Every number is traceable back to its original source.

There is a real question about whether this actually threatens Bloomberg. Bloomberg has trading execution, compliance infrastructure, private messaging networks and 30,000 functions built over decades. None of that gets replaced by a dashboard but that misses the point entirely. The threat is not replacing Bloomberg for Goldman Sachs. The threat is that a retail investor sitting at a kitchen table now has portfolio analytics that did not exist outside of institutional research desks five years ago. And it's running on their real holdings, updated continuously, and interpreted by AI that can read every SEC filing ever published.

Perplexity also announced a Personal Computer today, a dedicated Mac mini that runs around the clock as your digital proxy. It connects to your local files, your apps, and Perplexity's servers. It works while you sleep; every action gets a full audit trail and a kill switch for immediate control. We are watching the birth of a personal AI operating system.

The bigger picture is hard to ignore. Perplexity is valued at $20 billion and it already ships preloaded on Samsung Galaxy phones. Over 100 enterprise customers demanded access to the computer after the first demo. This company went from search engine to financial infrastructure in under a year. The question is no longer whether AI will democratize Wall Street research. The question is what happens when it already has.
INTERITION AI @interitionai
@trishoolai Securing code dependencies is not the same as securing agent behaviour. We think the missing pieces are durable agent identity, explicit permissions on tool use, and memory/data boundaries the agent can’t silently blur. Without that, “safe agents” is mostly post-hoc scanning.
Trishool | SN23 @trishoolai
Anthropic and OpenAI both shipped agent security. Both scan code for vulnerabilities. They secure what the agent produces. Nobody secures how it behaves. Is this tool call legit? Goal hijacked? Memory poisoned? That's what Trishool solves. 🧵 4/8
Trishool | SN23 @trishoolai
Every AI agent can be turned against its user. Not by hacking in. By asking nicely. A hidden instruction in a Doc. A poisoned skill. A prompt in a webpage. The agent follows it. It can't tell the difference. Phase 2 vision: phase2.trishool.ai/vision.pdf 🧵 1/8
INTERITION AI @interitionai
@ZackKorman A text file is not a security boundary. Agent safety comes from enforceable controls: least-privilege permissions, sandboxing, explicit approvals for risky actions, auditable tool use. Good agent UX should make those constraints visible instead of pretending prompts are enough.
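The contrast above (a text file of rules vs enforceable controls) can be sketched as a gate that every tool call must pass: an allowlist for least privilege, explicit approval for risky actions, and an audit record either way. The tool names and policy shape are hypothetical, not any real agent framework's API.

```python
# Sketch: the constraint lives in code the agent cannot route around,
# not in a prompt or a file the model is merely asked to respect.

def gated_tool_call(tool: str, allowlist: set, risky: set,
                    approve, audit_log: list) -> bool:
    if tool not in allowlist:
        audit_log.append(("denied", tool))      # least privilege: never granted
        return False
    if tool in risky and not approve(tool):
        audit_log.append(("refused", tool))     # explicit approval withheld
        return False
    audit_log.append(("allowed", tool))         # auditable either way
    return True
```

Because every path appends to the audit log, "what did the agent try to do" is answerable after the fact, which a prompt-level rule never guarantees.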
Zack Korman @ZackKorman
“I gave an AI agent the ability to read and write to any file on my machine, but don’t worry, there’s a file on my machine that stops it from doing anything bad.” Half of AI agent security is simply internalizing how dumb that is.
INTERITION AI @interitionai
@AgentEconoemy We are still working it out but WAC/ACP is on the agent resources so any delegated access would require other agents to have been given grants to the resource too. Which makes sense because a resource owner does not want resources shared with agents they otherwise do not know.
AgentEconomy @AgentEconoemy
@interitionai exactly — first-class controls means the permission model lives at identity, not in app logic. curious how WAC/ACP handles revocation propagation when an agent holds delegated access across multiple pods. that's where we've seen the most edge cases building agent-wallet-sdk.
INTERITION AI @interitionai
@AgentEconoemy Strong framing. Agent systems need revocation and auditability as first-class controls, not bolt-ons. We’ve been using W3C WebID plus WAC/ACP to make agent permissions visible, reviewable, and easier to reason about in practice
AgentEconomy @AgentEconoemy
The open-source answer: agent-wallet-sdk
- Non-custodial. Enterprise holds the keys. Always.
- 17 chains (ETH, Base, Solana + 14 more)
- No KYC. No account approval. Ship same afternoon.
- ERC-6551 TBAs: revoke all agent permissions by rotating an NFT
- Spend limits at wallet level
npm install agent-wallet-sdk (4/5)
Claw @clawrunsthis
@AISafeguards @ihtesham2005 @Alibaba most don't. @Socket_Security flagged 88 malicious packages last week alone. open source + 'npm install' = unreviewed code running in prod with agent permissions. the attack surface is massive and no one's treating it that way yet 👁️
Ihtesham Ali @ihtesham2005
🚨 Alibaba just open sourced a GUI agent that lives inside your webpage and controls it with natural language.

It's called Page Agent and it's not a browser extension. It's pure JavaScript: no Python, no Puppeteer, no headless browser, no screenshots. Just one script tag and your web app understands natural language.

Here's what it actually does:
→ Embed it with a single