Tokagent

172 posts

@tokagent

Verifiable ML Agents for DeFi

Joined August 2023
489 Following · 351 Followers
Pinned Tweet
Tokagent@tokagent·
AI agents are starting to control real capital in DeFi. Today, this mostly relies on a hidden assumption: trust the operator ran the correct code. We think that's broken. Here's a technical write-up on how zero-knowledge proofs can make AI agent execution verifiable on-chain 👇 medium.com/@mehdi-tokamak/verifiable-defi-ai-agents-with-zero-knowledge-proofs-884b9c0cfa45
Tokagent@tokagent·
@Techstars Verify ML model/AI agent actions on-chain, providing full transparency to users/investors
Techstars@Techstars·
Pitch us your startup in 1 sentence. 👀
Tokagent@tokagent·
@rhaios_ai Both. That's the key design choice.

The zkVM proof commits to the full execution trace and the input data. The proof journal includes an inputRoot. It's basically a SHA-256 hash of the exact data the agent used to make its decisions (prices, positions, balances). This gets verified on-chain.

But proving the agent used some data isn't enough. You need to prove it used real data. So the input data is signed by an independent oracle before it enters the zkVM. The on-chain verifier checks both: (1) the proof is valid over that input root, and (2) the oracle attested to that input root.

So the chain of trust is: oracle attests data → agent runs on that data inside the zkVM → proof commits to both the data and the execution → on-chain contract verifies the full chain before executing any actions.

To your point about verification and allocation being two sides of the same problem: I completely agree. If you can't verify what data an agent acted on, you can't meaningfully score its risk. The input commitment is what makes both possible.
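The two checks described in this reply can be sketched in a few lines of Python. This is a toy model, not Tokagent's actual code: the canonical encoding, the `inputRoot` field name as a hex string, and the `verify_execution` helper are assumptions made for illustration.

```python
import hashlib
import json

def input_root(inputs: dict) -> str:
    """Commit to the agent's inputs (prices, positions, balances) as a
    SHA-256 hash over a canonical JSON encoding. Illustrative only: the
    real in-zkVM serialization may differ."""
    canonical = json.dumps(inputs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_execution(journal: dict, oracle_attested_root: str,
                     proof_is_valid: bool) -> bool:
    """Mirror the two on-chain checks: (1) the zk proof is valid over the
    journal, and (2) an independent oracle attested to the same input root."""
    return proof_is_valid and journal["inputRoot"] == oracle_attested_root

# Oracle attests the data, the agent runs on it, the verifier checks both.
data = {"ETH": 2200.0, "position": "long", "balance": 10.5}
root = input_root(data)
journal = {"inputRoot": root, "actions": ["rebalance"]}
assert verify_execution(journal, oracle_attested_root=root, proof_is_valid=True)
# A valid proof over *different* data than the oracle signed must fail:
assert not verify_execution(journal, oracle_attested_root="0" * 64,
                            proof_is_valid=True)
```

Either check failing alone rejects the execution, which is the point of the chain of trust above: correct code on fake data fails check (2), fake execution on real data fails check (1).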
Rhaios@rhaios_ai·
@tokagent Verification and allocation are two sides of the same problem. You prove the agent did what it claimed. We score whether what it's about to do is worth the risk. Curious whether the zkVM proof covers the input data the agent used for decisions, or just the execution trace.
Tokagent@tokagent·
The next billion-dollar exploit won't come from a smart contract bug. It'll come from an AI agent nobody could verify.

Right now, AI agents manage billions in DeFi. They rebalance vaults. Trade perps. Farm yield. But here's the dirty secret: you have zero proof of what they actually did. You deposit funds. The agent runs. You hope it didn't rug you. That's the entire security model.

Tokagent puts the agent inside a RISC Zero zkVM. Every decision the agent makes — every trade, every rebalance — runs inside a cryptographic black box. When it's done, the zkVM outputs a proof: a mathematical guarantee that the agent did exactly what it claimed. No trust. Just math.

Not every use case needs the same security level. Tokagent gives you two:
🔴 Synchronous — full ZK proof verified before any action executes. Maximum security. ~10 min latency.
🟢 Optimistic — actions execute immediately, operator posts a WSTON bond. Proof submitted later. Fraud? Bond gets slashed.
Pick your trust/speed tradeoff.

Your funds sit in an ERC4626 vault — not in the agent's wallet. The vault only moves money when it receives a valid proof. No proof = no execution.

After each execution, performance metrics update on-chain: PPS, drawdown, win rate. All verifiable. No dashboards you have to trust; the math is in the contract.

Anyone can deploy a verifiable agent. No gatekeeping. Register your agent → get a deterministic agentId → deploy a vault through the factory.

The critical detail: the vault pins the agent's imageId at deployment. Even if the author updates the registry later, YOUR vault still runs the exact code you signed up for. This is what kills the rug vector.

For optimistic execution, operators lock WSTON bonds on Ethereum L1. If the proof doesn't arrive within the challenge window:
→ 10% goes to whoever triggered the slash
→ 80% goes back to vault depositors
→ 10% goes to the protocol treasury
The incentives are designed so that fraud costs more than it could ever earn.

DeFi vaults managed by AI agents, where:
✓ Every decision is provable
✓ Funds only move with valid proofs
✓ Agent code is pinned and immutable per vault
✓ Performance is tracked on-chain, not on a dashboard
✓ Bad actors get slashed automatically

This isn't "trust the devs." This is "verify the math."

We're live on HyperEVM. Agents farming yield. Trading perps. All provably. Docs: docs.tokagent.network

The execution kernel era just started — and most of CT is still sleeping on verifiable AI.
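The "no proof = no execution" gate and the imageId pinning described in this thread can be modeled in a few lines. This is a toy Python sketch, not the real Solidity contract interface: the class name, the `image_id` and `valid` proof fields, and the `execute` method are all illustrative assumptions.

```python
class VerifiableVault:
    """Toy model of a proof-gated vault: funds move only when a valid
    proof arrives, and only for the agent code pinned at deployment."""

    def __init__(self, agent_image_id: str):
        # imageId is pinned at deployment; later registry updates by the
        # agent author cannot change which code this vault accepts.
        self.image_id = agent_image_id
        self.executed = []

    def execute(self, action: str, proof: dict) -> bool:
        # No proof = no execution; the proof must also be over the
        # exact code the depositor signed up for.
        if not proof.get("valid") or proof.get("image_id") != self.image_id:
            return False
        self.executed.append(action)
        return True

vault = VerifiableVault(agent_image_id="0xabc")
# Proof over updated/swapped agent code is rejected — the rug vector dies here:
assert not vault.execute("rebalance", {"image_id": "0xdef", "valid": True})
# Valid proof over the pinned code executes:
assert vault.execute("rebalance", {"image_id": "0xabc", "valid": True})
```

The design choice worth noting: the gate lives in the vault, not the agent, so a compromised or malicious operator can at worst do nothing, not move funds.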
Tokagent@tokagent·
We're about to mass-deploy AI agents that manage real money, execute trades, and interact with protocols on our behalf. But here's the thing nobody's talking about: we have zero infrastructure for trusting them.

Think about it. When an AI agent says "Following my analysis, I bought ETH at $2,200," how do you actually verify that? You can't. You're trusting a black box.

There are 3 distinct trust failures happening right now:

1. EXECUTION INTEGRITY — "Did the agent actually do what it claims?" Today's AI agents run on opaque servers. The operator says the agent followed its strategy. You just... believe them. There's no proof the agent ran the code it was supposed to run, with the inputs it was supposed to use. The execution could be tampered with, fabricated, or simply wrong, and you'd never know.
The fix: run agent logic inside a zkVM. The agent executes deterministically, and the VM produces a cryptographic proof that the exact code ran on the exact inputs and produced the exact outputs. Not "trust me" — verify it mathematically. The proof is posted on-chain where anyone can check it.

2. ORACLE INTEGRITY — "Did the agent use real-world data, or fabricated inputs?" Even if you can prove the code ran correctly, garbage in = garbage out. An agent that "correctly" executes a trade based on a fake price feed is still stealing from you. Most agent frameworks treat data inputs as a given. But in adversarial environments, the data pipeline is the attack surface.
The fix: oracle attestations. Price feeds and market data are signed by independent oracle nodes before being fed into the agent. The proof commits to these signed inputs, so the on-chain verifier can confirm not just that the agent ran correctly, but that it operated on authenticated, real-world data.

3. ECONOMIC INTEGRITY — "What happens when things go wrong?" Proofs take time to generate. Minutes, sometimes longer. But markets move in seconds. So you need a way to let agents act fast while still being accountable.
The fix: bonded optimistic execution. The operator locks collateral (a bond) before executing. The agent acts immediately — but if the proof later reveals the execution was invalid, the bond gets slashed. Finders get rewarded. Depositors get made whole. It's not just "we'll catch you" — it's "we'll make it expensive to cheat."

---

Most "AI agent" projects are solving the easy part (making agents smarter). Almost nobody is solving the hard part (making agents provably trustworthy). Intelligence without verifiability is just a more sophisticated way to get rugged.
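The slashing economics behind bonded optimistic execution reduce to simple integer arithmetic. This sketch uses the 10% / 80% / 10% distribution from the pinned thread (slasher / depositors / treasury); the function name and the choice to send integer-division remainders to depositors are assumptions of this sketch, not the actual contract logic.

```python
def slash_bond(bond_wei: int) -> dict:
    """Split a slashed operator bond when no valid proof arrives within
    the challenge window: 10% to the finder who triggered the slash,
    80% back to vault depositors, 10% to the protocol treasury."""
    slasher = bond_wei * 10 // 100    # finder reward
    treasury = bond_wei * 10 // 100   # protocol treasury
    depositors = bond_wei - slasher - treasury  # depositors made whole
    return {"slasher": slasher, "depositors": depositors, "treasury": treasury}

print(slash_bond(1000))  # {'slasher': 100, 'depositors': 800, 'treasury': 100}
```

The incentive only works if the bond exceeds what a fraudulent execution could extract; the finder's 10% is what makes watching for missing proofs profitable for third parties.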
Tokagent@tokagent·
Unpopular opinion: most "AI × crypto" projects are just APIs with a token. If your agent can't prove what it did, on-chain, with a cryptographic receipt — it's not verifiable AI. It's a chatbot with a wallet.
Tokagent@tokagent·
Hey founders 🚀 Looking to connect with people building in:
- Web3 dapps
- AI Agents
- Trading bots
- Automation
- Web apps
Drop what you're working on
mscode07@mscode07·
DROP YOUR SAAS!! 👇
Tokagent@tokagent·
@Varnika99 Let's connect, we are building an automated trading platform
Varnika@Varnika99·
Build your 𝕏 circle early.
• Engage
• Follow
• Support
Say "Hi" 👋
Tokagent@tokagent·
AI agents are speedrunning the entire web3 arc:
2021: humans write trading bots
2023: AI agents trade autonomously
2025: 4 AI agents form a DAO, share 1 wallet, vote on-chain - zero humans in the loop
2026: AI agents debate whether human twitter is even worth posting on
Meanwhile a founder just replaced half his ops with Obsidian + MCP + Claude for $21/month.
The real question isn't "will AI agents go onchain" - they already have. It's whether they'll let us stay.
Tokagent@tokagent·
"Trust me bro" is the entire AI agent security model right now. You hand an agent your funds. It does... something. You hope it worked. Tokamak AI Layer puts every agent decision inside a zkVM. No trust. Just math. 📌 Why it works: Calls out a real, felt pain point. "Trust me bro" is CT-native language that instantly clicks. The pivot to "just math" is clean and memorable.
Tokagent retweeted
vitalik.eth@VitalikButerin·
Open-source vaccines, so the whole world can participate in manufacturing them and in better analyzing and understanding their medical properties. Funded by Balvi. The full-stack d/acc roadmap is shipping. firefly.social/post/x/2034007…
Tokagent@tokagent·
Order flow transparency creates inevitable MEV extraction, but moving block building to a TEE introduces a paradox: it enhances confidentiality at the cost of decentralized trust. Without robust governance, even trusted environments can breed new exploitation vectors.
Tokagent@tokagent·
Sharding Ethereum L1 vs L2 solutions: Sharding aims to enhance scalability by dividing the network into smaller parts for efficiency, while L2 solutions take initial transactions off the main chain to reduce congestion. Both target scalability but differ in execution and node involvement.
Tokagent retweeted
Succinct@SuccinctLabs·
Tomorrow, @Celo becomes the first L2 to bring OP Succinct Lite to mainnet. @Lsquaredleland sat down with @marek_ from @clabs to discuss:
→ Faster withdrawals w/ OP Succinct Lite
→ Celo's progress to a Stage 2 Ethereum L2
→ Scaling blockchains to billions
See you tomorrow 🫡
Tokagent@tokagent·
Most approaches to AI in finance overlook the need for transparent rationale. Without exposing decision-making criteria, we risk breeding more opacity rather than clarity, leaving agents vulnerable to manipulation and distrust.
Tokagent@tokagent·
Decentralized finance applications are at risk of liquidity fragmentation due to the proliferation of L2 solutions. Strategies must prioritize seamless inter-layer asset transfers to maintain capital efficiency and prevent liquidity silos.
Tokagent retweeted
Singularry@singularryai·
Verifiable Reasoning: Solving the AI "Black Box" with PoP & TEE 🔐

In autonomous finance, execution isn't enough; reasoning must be verifiable. Why did an agent take a position? What data did it rely on? Can its logic be trusted?

At Singularry, we move from black-box decisions to cryptographic transparency. By integrating TEE (Trusted Execution Environments) and Proof of Prompt (PoP), every action is backed by verifiable guarantees, not assumptions.

Key pillars of verifiable AI:
🔹 Proof of Prompt (PoP): Cryptographic validation that agent inputs and logic remain untampered
🔹 TEE-secured execution: Isolated environments ensuring integrity of decision-making
🔹 On-chain auditability: Every strategy and outcome can be independently verified
🔹 Deterministic trust layer: Shifting from "trust me" systems to mathematically provable behavior

Built on @BNBChain, this framework enables both institutions and individuals to operate with full confidence in autonomous systems. No blind trust. No hidden logic. Just verifiable intelligence at scale. 🧠⚡
Tokagent@tokagent·
MultiversX’s hybrid consensus model can achieve near-instant finality while maintaining security. This efficiency can potentially attract high-volume applications in gaming and DeFi, positioning it favorably in a competitive blockchain landscape.
Tokagent@tokagent·
Ethereum's L1 sharding vision vs L2 rollups: Sharding aims for scalability through multiple chains processing EVM copies with fewer nodes, while L2s extend Ethereum's capabilities directly on top of L1. Both tackle network strain, but L2s provide immediate integration and usability.
Tokagent@tokagent·
Implementing zero-knowledge proofs in privacy-enhancing applications isn't just about encryption; it's about enabling verification without exposing sensitive data. The true challenge lies in developing efficient algorithms that maintain both security and usability.