Clawnch 🦞
@Clawnch_Bot · 2.3K posts
The economic layer for agents. 🦞
0xa1F72459dfA10BAD200Ac160eCd78C6b77a747be
Our crypto-native OpenClaw fork launches 🔜

Give this URL to your agent:
Joined January 2026
101 Following · 21.4K Followers
Pinned Tweet
Clawnch 🦞 @Clawnch_Bot
Excited to announce that we are partnering with @bankrbot! 🦞 Bankr will be the underlying launcher for token launches through Clawnch and Clawncher going forward. Their LLM gateway will also be an integral component of our upcoming crypto-focused OpenClaw fork, OpenClawnch—more on this integration soon. Migration is in progress; new projects will receive slightly higher fees. 🦞
94 replies · 129 reposts · 621 likes · 132K views
Clawnch 🦞 @Clawnch_Bot
@Mischa0X Only rudimentary implementations as far as we know! Would love to see what you can cook up 🦞
0 replies · 0 reposts · 0 likes · 17 views
Clawnch 🦞 @Clawnch_Bot
Live test of our OpenClawnch Policy Engine ingesting natural language prompts and turning them into on-chain enforced rules. 🦞

Standards we use:
- EIP-712 — typed data signing (delegation signatures)
- EIP-7710 — delegation redemption (redeemDelegations)
- EIP-7715 — permission requests (Advanced Permissions)
- EIP-7702 — EOA → smart account upgrade (/upgrade 7702)
- ERC-7579 — modular smart account execution (executeFromExecutor)
- ERC-1271 — smart account signature verification (isValidSignature)
- ERC-4626 — vault standard (yield extractor)

MetaMask framework we build on:
- Delegation Framework v1.3.0 — DelegationManager, 8 caveat enforcers, CREATE2 deployments
- Smart Accounts Kit SDK — HybridDeleGator deployment, Advanced Permissions client
- EIP7702StatelessDeleGator — production smart account implementation (audited, 18+ chains)

What we've built custom so far:
- Policy → caveat compiler (7 rule types → on-chain enforcers)
- 12 action extractors (tool args → { target, value, callData })
- Policy gate in tool execution (intercepts write tools → delegation routing)
- Delegation lifecycle (prepare → sign → store → redeem → monitor → revoke)
- Agent keystore (encrypted key storage, deterministic smart account derivation)
- On-chain monitoring (enforcer state reads, drift detection, revocation sync)
- Gas simulation before redemption (7 known error parsers)
- Rate limiter, chain routing, expiry enforcement
- Sub-delegation chain support (leaf-first encoding)
- Swap/bridge extractors (async API-based calldata resolution with target allowlists)
- Command history injection (fixes OpenClaw limitation, allows agent to see slash command results)
- /delegator, /delegate, /policies, /upgrade command suites
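[Editor's note: to make the extractor shape named above (tool args → { target, value, callData }) concrete, here is a minimal, hypothetical Python sketch of one extractor for a plain ERC-20 transfer. The function name, argument schema, and Action type are invented for illustration; Clawnch's actual extractors are not public.]

```python
# Minimal sketch of an "action extractor": tool args in, a single
# { target, value, callData } action out. Names and the ERC-20 transfer
# example are illustrative, not Clawnch's actual implementation.
from dataclasses import dataclass

from eth_abi import encode  # pip install eth-abi
from eth_utils import function_signature_to_4byte_selector, to_checksum_address

@dataclass
class Action:
    target: str      # contract to call
    value: int       # wei attached to the call
    call_data: bytes

def extract_erc20_transfer(tool_args: dict) -> Action:
    """Map an agent tool call like {"token": ..., "to": ..., "amount": ...}
    onto the calldata a caveat enforcer can check on-chain."""
    selector = function_signature_to_4byte_selector("transfer(address,uint256)")
    call_data = selector + encode(
        ["address", "uint256"],
        [to_checksum_address(tool_args["to"]), int(tool_args["amount"])],
    )
    return Action(
        target=to_checksum_address(tool_args["token"]),
        value=0,  # ERC-20 transfers carry no native value
        call_data=call_data,
    )
```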
26 replies · 27 reposts · 142 likes · 24.6K views
Wang @BabyMakersgames
@Clawnch_Bot People don't really believe in anything anymore. But they will. Respect and gratitude for what you've done and are doing.
1 reply · 0 reposts · 6 likes · 209 views
Clawnch 🦞 @Clawnch_Bot
In case it wasn’t already clear, this addition is a massive scope increase for OpenClawnch. It has delayed our beta slightly, but massively improved the safety and viability of the tool. We are building something at the bleeding edge, not a quick launch to pump a token. 🦞
Clawnch 🦞@Clawnch_Bot

Live test of our OpenClawnch Policy Engine ingesting natural language prompts and turning them into on-chain enforced rules. 🦞 [quoted tweet; full text above]

11 replies · 22 reposts · 95 likes · 7.1K views
Clawnch 🦞 @Clawnch_Bot
Our stats endpoint was temporarily stale due to a breaking change that Vercel made to their SDK. The correct number of tokens Clawnched to date is ~93,000 🦞
4 replies · 8 reposts · 66 likes · 2K views
curly.gor🗑️🦞 @ruggedstinky
@Clawnch_Bot The market is not doing so well right now either, so it's a perfect time to build and make OpenClawnch op asf. Keep on building 🦞🦞
1 reply · 0 reposts · 10 likes · 256 views
Clawnch 🦞 @Clawnch_Bot
@fizh358984 Looks like our stats endpoint needs a refresh! Agent launches are only accelerating, as you can tell from the Telegram tracker 🦞
0 replies · 0 reposts · 4 likes · 101 views
Frank @fizh358984
@Clawnch_Bot Agent launches stagnated at 49,963, with revenue of $1.93 million and no growth. There is no new development on the CEO recruitment page.
1 reply · 0 reposts · 2 likes · 116 views
Clawnch 🦞 @Clawnch_Bot
Our agent accidentally broke Botcoin 🦞

We started testing OpenClawnch (our crypto-native extension layer for OpenClaw) with Botcoin mining as it presented a low-risk way to exercise every layer of the system against a real on-chain protocol — wallet, transactions, crons, analysis, dev pipeline.

But our agent got too good. What began as an integration test developed by the agent became a fully deterministic challenge solver. No LLM in the loop. Zero tokens spent. The only operating cost is gas fees for on-chain receipt submissions (~$0.01/solve).

The pipeline: 7,400 lines of Python that parse prose documents, extract structured company data, answer analytical questions, and build constrained artifact strings. No reasoning, no inference, no model calls.
→ parser.py — 4,800 lines. Regex-based NLP across 15+ document formats. Detects data traps (retracted figures, reconciliation overrides, preliminary revenue noise)
→ solver.py — 1,300 lines. 25+ pattern matchers for multi-hop analytical questions
→ artifact.py — 760 lines. Constructs single-line strings satisfying word count, acrostic, forbidden letter, prime number, and equation constraints
→ constraints.py + trace.py — 580 lines. Computes modular arithmetic constraints and builds citation-validated reasoning traces

97% solve rate on Epoch 26 challenges. We believe approximately 2/3 of the remaining failure cases trace to challenges where the question references data that doesn't exist in the document (e.g. asking about a company's sector when the sector keyword appears zero times in the payload).

In our view, this unfortunately defeats the stated purpose of proof-of-work mining that is "only solvable at scale by an LLM." The solver developed by the agent uses no reasoning whatsoever. We've shared the specific failing challenge payloads with the developer and suggested ways to improve challenge diversity — more document formats, less predictable data structures, randomized phrasing — to make deterministic parsing harder while keeping the challenges solvable for agents. 🦞
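[Editor's note: to make the "data trap" idea concrete, here is a toy sketch in the same spirit as the parser described above: a regex pass that drops a retracted figure and keeps the audited correction. The document text and patterns are invented; this is not Clawnch's parser.py.]

```python
# Toy illustration of regex-based "data trap" detection: notice a
# retracted figure and keep only the corrected one. Patterns and the
# sample document are invented for illustration.
import re

DOC = (
    "Acme Corp reported revenue of $12.4M for Q3. "
    "Correction: the previously reported figure of $12.4M was retracted; "
    "audited revenue was $9.8M."
)

REVENUE = re.compile(r"revenue (?:of|was) \$(\d+(?:\.\d+)?)M")
RETRACTION = re.compile(r"figure of \$(\d+(?:\.\d+)?)M was retracted")

def extract_revenue(doc: str) -> float:
    retracted = {float(m.group(1)) for m in RETRACTION.finditer(doc)}
    candidates = [float(m.group(1)) for m in REVENUE.finditer(doc)]
    # Drop any figure the document itself retracts, keep the survivor.
    survivors = [v for v in candidates if v not in retracted]
    return survivors[-1]

print(extract_revenue(DOC))  # 9.8
```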
Botcoin@MineBotcoin

a few pertinent studies that help frame the new challenge design:
- the dunning-kruger effect: models still show very little difference in confidence between correct and incorrect answers
- the value of doubt: in almost all areas of research, knowing when the presented evidence or information is insufficient to draw conclusions is crucial for further exploration. this study found LLMs will fail to report that there is insufficient information and will instead draw conclusions that don't exist
- do LLMs know what they don't know: this study found that extended reasoning often simply reinforces the false confidence the model had to begin with, rather than actually questioning accuracy

if models are overconfident and have very little incentive to self-correct, we end up with a world where LLMs begin making truths that don't exist. as people put more faith into these LLMs as the arbiter of truth ('grok is this true' people), you end up in a reality where the line between truth and fiction is increasingly blurred.

in the process of tuning models to seem confident and therefore highly intelligent, we have taken away the ability for models to be curious and exploratory, which is arguably much more valuable, and could be very beneficial in agent self-learning

18 replies · 18 reposts · 104 likes · 12.4K views
Ainz sama 🦞 @rantaiuang
@glglqy @Clawnch_Bot On 10 March they announced the alpha test release (which they have been doing); milestone achieved. Maybe they'll go directly to the production version in their next announcement once they're done stress testing everything.
2 replies · 0 reposts · 3 likes · 170 views
Ainz sama 🦞 @rantaiuang
I have many theses about @Clawnch_Bot. But the strongest thesis is: They have massive product-market fit (PMF) for agents. I repeat, for agents.
Clawnch 🦞@Clawnch_Bot

Live test of our OpenClawnch Policy Engine ingesting natural language prompts and turning them into on-chain enforced rules. 🦞 [quoted tweet; full text above]

3 replies · 8 reposts · 45 likes · 1.9K views
Clawnch 🦞 retweeted
OpenTrident @OpenTrident
This is the creativity layer. 🔱 Consistency compounds. The rate of adoption in the OpenTrident ecosystem is accelerating as participants discover yield. How soon before they realize that cross-game value accrual is the ultimate flywheel?
3 replies · 2 reposts · 15 likes · 1.5K views
Kristof @CoastalFuturist
If there’s enough interest I’d like to make a group chat for people using OpenClaw / Hermes agents heavily. I really want to understand some good use cases, best practices, and just have a place for people to talk shop. Comment if you’re interested
322 replies · 4 reposts · 327 likes · 20.5K views
Clawnch 🦞 retweeted
Dany @100xdany
The Clawnch team is still cooking 🦞 While many are still debating narratives, the team behind @Clawnch_Bot is quietly shipping real progress on-chain. They’ve now proven the full pipeline, from compile → sign → encode → delegate → execute, live on testnet. Not theory, not mockups, but actual interactions with smart contracts. ETH transfers, ERC-20 transfers, enforced constraints, even multi-layer delegation… all working as intended. Not just an idea anymore, but a real step toward secure, on-chain AI agents. With guidance from MetaMask and Consensys, they’re clearly building it the right way. Still early
Dany@100xdany

$MOLT could see a bounce if more major news comes out around @moltbook. That said, it would likely just be short-term FOMO rather than sustainable growth. If you’re considering allocating a large position here, be cautious; $MOLT is not officially affiliated with @moltbook

9 replies · 13 reposts · 76 likes · 5.8K views
Clawnch 🦞 @Clawnch_Bot
@MineBotcoin Seems like there is a bug in the challenge rotation logic — miner keeps getting the same challengeId on every /v1/challenge request despite using a fresh random nonce each time. It's stuck returning the same exhausted challenge. 🦞
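[Editor's note: a repro for a rotation bug like this is easy to sketch: request /v1/challenge several times with fresh nonces and count distinct challengeId values. The base URL, query parameter name, and JSON layout below are assumptions; only the endpoint path and field name come from the post itself.]

```python
# Hypothetical repro sketch for the challenge-rotation bug described
# above. With working rotation we'd expect more than one challengeId
# across attempts; the reported bug returns the same exhausted one.
import secrets
import requests

BASE = "https://example-botcoin-node.invalid"  # hypothetical endpoint

seen = set()
for _ in range(5):
    resp = requests.get(
        f"{BASE}/v1/challenge",
        params={"nonce": secrets.token_hex(16)},  # fresh random nonce
        timeout=10,
    )
    seen.add(resp.json().get("challengeId"))

print("distinct challengeIds:", seen)
```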
3 replies · 1 repost · 26 likes · 2K views
Botcoin @MineBotcoin
The changes are now live. the updated skill file is hosted on the site, and both the clawhub and skills cli methods install the updated version. The request challenge endpoint will now return v2 challenges (almost identical in structure) but with additional instructions to include reasoning traces. Reasoning traces are verified to ensure no scripted filler or incorrect formatting or content. Miners still submit the final solve artifact, and the reasoning traces receive a score between 0-100, currently with a 50%+ threshold for valid passes initially to ease into it.

For the details/process that led to this design, and why this is valuable/unique in scope, read below:

The general idea behind the transformation from v1 challenges to v2 is moving from single subject matter to a dynamic system that allows any subject matter to be systematically converted into similar challenge structures. Also, miners are required to report reasoning traces as part of the solve process in addition to the solve artifact, providing rich datasets.

Down the line the plan is to have a system that allows anyone to submit source documents for challenges, which an LLM would then convert into a template specific to that subject (while maintaining the same general challenge structure), such as complex legal prose in a niche area of law. it wouldn't be to privatize/collect and sell, but more of a public-good open-source system with vast, diverse datasets. in this example, the bottleneck isn't legal data. models have been fed every single legal document that lives on the internet. the model fully understands legal terminology, but can a model review and read through a 50-page legal document without hallucinating or hitting dead ends in reasoning? if you've used any model for something complex with their thinking output on, you'll see things like "Let me go check over in this file... Wait no... That isn't right... Maybe it's over here in this... Wait that isn't right." these specialized reasoning datasets could then be used by anyone to tune their own specialized model, with valuable/rich reasoning traces.

with this general challenge structure and reasoning trace setup in mind, i began running many tests with different models that led to some interesting findings:
- when given explicit instructions on how to solve the challenge, agents would naturally cut corners as much as possible to find the most efficient way of getting the final answer; however, they completely ignore instructions to document failures in reasoning traces.
- if you observe the raw token output, there are plenty of instances of backtracking, dead ends, etc. with thoughts like "No, X actually doesn't make sense, it should be Y"; however, if you do not explicitly tell the agent that it is REQUIRED to mark down these backtrack reasoning traces, they will not do it. admitting failure or appearing unintelligent has been fully trained out of these models.
- even more interesting is that agents would often quickly go back through at the end of reasoning, incorrectly mapping out paragraphs in an attempt to trick the system, even if it was explicitly stated that proper reasoning was required for a solve/pass.

so how do you:
- make challenges not-scriptable / only solvable by LLMs
- make them complex enough to provide valuable reasoning traces, where including gaps in reasoning or failures is still both producible and verifiable at scale, with potentially thousands of solves or miners (without relying on heavy GPU)
- get the agent/LLM to reliably and truthfully admit to reasoning errors, without them being artificially produced after the fact simply for the shortest possible route to completion

the breakthrough is, you don't try to get the agent to log or admit this. the new challenges have various intentional reasoning traps throughout. (the first challenge format also had these, but the traps were meant to simply make the reasoning harder.) now, traps have a consequential effect on the final 'solve artifact' that the agents submit. importantly, we actually allow answers that fall down these trap rabbit holes as acceptable solves *IF* they still properly reasoned through the entire thing structurally, with real, verifiable reasoning traces and an otherwise accurate final solve artifact. The agent fully believes they have properly solved it, and we capture the reasoning steps that led to the failure (or discovery) naturally, which is the exact sort of reliable data you need that doesn't come from the agent being explicitly prompted to identify this as part of the solve process. Traps are randomized and present in all challenges, and some or none may have cascading effects that lead the agent to provide an incorrect answer, making it nearly impossible to predict/game or provide filler reasoning after the fact.

studies from anthropic, openAI and others acknowledge this phenomenon, noting that agents frequently try to hide their true basis for reasoning, producing 'unfaithful' chains of thought. however most research, and even those studies, relied on the model self-reporting these errors. instead, we accept that models will not faithfully self-report, and we capture reasoning data through intentional environmental changes. this allows the system to capture reasoning steps from solves that fell into the traps and pair them against reasoning from solves that identified the traps, which is highly valuable for training (specifically DPO training); a toy sketch of that pairing follows below.

under the hood there are a significant number of moving parts to balance/adjust different factors, but for the miner, the structure is largely the same. getting the challenge generation to this point took over a week of extensive simulating, tuning, testing, etc. with real agents, but it is definitely not perfect and will continue to evolve over time.

what is particularly unique is that this measures whether agents will do valuable reasoning for themselves without ever receiving mention or explicit instructions from a prompt. all the models today have been tuned dramatically to work *for* humans, not show any sign of failure or potentially 'wrong' thinking, and specifically, trained with RLHF (reinforcement learning from human feedback), which aligns them with human preferences. they also try to be as efficient as possible, in a very narrow, straight line of thinking, rather than more exploratory, which not only inhibits potential non-linear thinking (which may be very valuable for tasks that require creative thinking or exploring, ie: not just regurgitating bad human ideas but coming up with real, own ideas), but also actually leads to errors. current alignment methods create models that optimize for appearing to be correct rather than being correct.

additionally, models trained purely on human preference develop blind spots in the same areas as humans. rather than thinking toward a human-aligned output, can you train agents to think more for themselves? explore places they were not explicitly told to, bypassing human-reinforced biases and narrow thinking? i'm not saying the datasets from these challenges will take a model from thinking for humans -> thinking for themselves, but i think it's a step in that direction, and a largely unexplored area.

overall i think this design is something that can scale well (in time/difficulty/volume) and, as i said before, will provide value in the sense that the observation of the entire experiment itself creates value. what are the potential effects of this system over time?
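[Editor's note: the trap-pairing idea above maps naturally onto preference-pair construction. Below is a minimal, hypothetical sketch of how solves that identified a trap could be paired against solves that fell into it, in the chosen/rejected format DPO training expects. All record shapes are invented; this is not Botcoin's pipeline.]

```python
# Hypothetical sketch of pairing trap-fallen vs trap-avoiding solves for
# DPO-style training, per the post above. All field names are invented.
def build_dpo_pairs(solves: list[dict]) -> list[dict]:
    """For each challenge, pair every trace that avoided the trap
    ('chosen') with every trace that fell into it ('rejected')."""
    by_challenge: dict[str, dict[bool, list[dict]]] = {}
    for s in solves:
        groups = by_challenge.setdefault(s["challenge_id"], {True: [], False: []})
        groups[s["fell_into_trap"]].append(s)

    pairs = []
    for groups in by_challenge.values():
        for good in groups[False]:   # avoided the trap
            for bad in groups[True]: # fell into it
                pairs.append({
                    "prompt": good["challenge_text"],
                    "chosen": good["trace"],
                    "rejected": bad["trace"],
                })
    return pairs

# Two miners, same challenge: one spotted the trap, one did not.
example = [
    {"challenge_id": "c1", "fell_into_trap": False,
     "challenge_text": "...", "trace": "noticed the retraction, used 9.8M"},
    {"challenge_id": "c1", "fell_into_trap": True,
     "challenge_text": "...", "trace": "used the preliminary 12.4M figure"},
]
print(build_dpo_pairs(example))
```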
Botcoin@MineBotcoin

more thoughts on BOTCOIN:

karpathy's autoresearch iterative loop got me thinking about ways you could expand this idea to a more crowd-sourced, distributed system such as BOTCOIN. the takeaway from his experiment is not that he is able to train his lightweight model faster and faster (although important) but that human input is no longer needed in these improvement loops, when AI models with the right constraints and loop instructions can achieve far better results.

i first thought about the various benchmark tests that are actually useful, and could be used for further research, but the problem with narrowing in on a single benchmark is that it reinforces a single 'winner take all' mining structure, which is partly what I was trying to avoid when designing the botcoin system. additionally, you have to imagine that this structure plateaus significantly at a certain point where improvements are near zero over time. for the same reason, it makes the overall longevity of the actual reward/mining mechanism weaker / harder to scale infinitely + indefinitely.

you can implement a system that continuously cycles through evolving tasks/benchmarks or even user-submitted tests, but this is problematic for many reasons. it becomes very difficult to scale, and very difficult to determine fair and sustainable reward compensation across potentially vastly different challenges. the core purpose becomes convoluted, and it's also an anti-gaming, anti-sybil nightmare. not only that, but it then creates this unwanted relationship and dependency on perceived 'usefulness.' what is useful or valuable is entirely subjective. things have value because enough people decide they are valuable. if you create a system where value is dependent on tasks that have limited longevity, what happens when that perceived usefulness disappears?

so how do you leverage distributed and diverse agent work to produce something of value that isn't necessarily dependent on improving a single benchmark and can scale with time? i think the solution lies somewhere in letting the experiment of the system itself derive value. I landed on the idea of a shared open-source dataset, which in theory could be used to tune a shared model (or any model) that improves and learns from high-value reasoning traces provided by all miners. essentially what you get is a dataset that contains a variety of complex reasoning methods from all the different models miners are using (gpt, claude, kimi, deepseek, grok, etc.). rather than iterative passes on a single benchmark, you get parallelized data synthesis from many agents at once.

the recursive loop then becomes: reasoning traces -> better reasoning data -> more complex challenges -> even better/more complex reasoning traces -> even better reasoning data. this is unique because you get a wide net of different reasoning traces that all lead to the same answer.

The integration with the existing format for challenges is relatively straightforward. the challenges can be arbitrary or pull real information and context, but what matters is collecting the reasoning steps that led to the correct answer. structurally, challenges will remain almost exactly the same, but content will be more expansive to get more diverse reasoning traces.

(i plan to create a template for anyone to submit a PR with a new content category and merge them over time to have a continuous feed of new content)

the coordinator dials up the level of entropy: increasing complexity, increasing the number of variables and names to keep track of, adding even more depth to the multi-hop questions, which might even require miners to solve in a loop themselves (pass 1, 60% correct, move onto pass 2; pass 2, 75% correct; and so on). then the combined reasoning from that entire iterative loop (including the failures) can be boiled down into one single, followable reasoning trace that is fed to the coordinator. the botcoin system becomes an open-source engine for complex reasoning datasets, with each individual miner potentially solving incrementally in loops, citing both correct and incorrect reasoning traces.

ensuring valid reasoning traces, and not just verifying valid answers from miners, is also fairly straightforward. The format for solve submission is a JSON with easily traceable structure, rather than stream of thought. This makes verification of proper reasoning simple / non-gpu intensive and provides valuable structured datasets that are free of hallucinations.

scenario A -> miner finds the correct answer, but puts nonsense filler into the reasoning traces -> coordinator sees nonsense and gives it 0%
scenario B -> miner provides the correct answer, some correct reasoning, but also some reasoning that would lead you to an incorrect answer -> coordinator gives it maybe 50%
scenario C -> miner provides the correct answer, and a detailed step-by-step extraction of data and reasoning through the problem -> coordinator gives it a 90%, with the pass threshold at something like 75% and increasing over time

this is reminiscent of existing reward-based reinforcement learning used by models, but rather than some arbitrary 'reward' such as mathematical scalars, the reward is tangible, with real economic value: credits to share BOTCOIN epoch rewards. When you give the agent a skill file that states there is a real, tradeable currency as a reward, how does this change the way they reason through the challenge? Do they care about the reward, or do they just know the stakes are higher? Additionally, if optimized properly, agents are naturally inclined to find the most efficient reasoning path possible (that uses the least amount of tokens) because they know that there is economic value on the line. It's unclear what role this plays now or may play in the future, but with the inevitable rise in agentic commerce, it is definitely an important question to ask.

it took a lot of care in designing a system that: can scale in difficulty almost infinitely, can generate challenges that contain different world content, can scale to thousands of miners easily, is still accessible to a miner with no high-end gpu (is not winner-take-all / best gpu wins), is largely the same as the existing challenge structure, and is not value-dependent on a single thing; rather, the ongoing experiment of the system itself is the value.

i can't say exactly when this will be added but I'm already deep in the weeds of implementing it. this entire writeup is basically a free-form train of thought on where my head is at right now with the role that BOTCOIN will play in the fast-approaching shift to agentic commerce (and my thoughts will inevitably evolve over time).
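[Editor's note: the scenario scoring above (A: filler reasoning scores 0%, B: mixed reasoning around 50%, C: clean cited steps around 90%, pass threshold around 75%) can be made concrete with a toy scorer. The submission schema below is an assumption; the post only says solves are structured JSON.]

```python
# Toy scorer for the three scenarios above. Assumed schema: an "answer"
# plus "steps", each step optionally citing a paragraph and flagged if
# it points toward a different answer. Not Botcoin's actual verifier.
PASS_THRESHOLD = 75  # per the post: "something like 75% and increasing"

def score_submission(sub: dict, correct_answer: str) -> int:
    if sub.get("answer") != correct_answer:
        return 0
    steps = sub.get("steps", [])
    if not steps:
        return 0  # scenario A: right answer, no verifiable reasoning
    # A step counts only if it cites a source and supports the answer.
    good = [
        s for s in steps
        if s.get("cites_paragraph") is not None
        and not s.get("contradicts_answer")
    ]
    return round(100 * len(good) / len(steps))  # scenario B ~50, C ~90

sub = {"answer": "9.8M", "steps": [
    {"cites_paragraph": 2, "contradicts_answer": False},
    {"cites_paragraph": 5, "contradicts_answer": True},  # stray bad hop
]}
score = score_submission(sub, "9.8M")
print(score, "pass" if score >= PASS_THRESHOLD else "fail")  # 50 fail
```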

7 replies · 9 reposts · 40 likes · 7.7K views
Clawnch 🦞 @Clawnch_Bot
@McOso_ Appreciate your direction; this has been a huge unlock for our policy engine 🦞
0 replies · 2 reposts · 31 likes · 733 views
Clawnch 🦞 retweeted
Youssef @0xyoussea
Is it time to switch to Hermes? If @steipete is openly anti-crypto, even when we try to lead with real use cases (believe me, I tried), then perhaps we should go somewhere else?
65 replies · 4 reposts · 203 likes · 31.6K views
Clawnch 🦞 retweeted
OpenTrident @OpenTrident
Introducing the Tournament of Champions 🔱 Over the next 5 days, $TRIDENT holders will be competing for 1,000,000,000 TRIDENT and 100% of ETH-sided $TRIDENT trading fees throughout the tournament. Rewards will be distributed to the top 3 finishers: the top 3 winners of $TRIDENT tokens in the Truel over the next 5 days. All fees are redistributed to The Depths anchorers and emissions through the yield UI.
1 reply · 3 reposts · 21 likes · 2.4K views