1claw AI

11 posts


@1clawAI

Your agents are leaking secrets. We stop that. Vault · Shroud · Intents → https://t.co/fYdcm3ySOs Secure infrastructure for AI agents. 🦞

San Francisco, California · Joined February 2026
10 Following · 12 Followers
Pinned Tweet
1claw AI @1clawAI
Every time you paste an API key into Claude or Cursor, it lands in the context window, logs, and memory. You can't un-paste it. We built 1claw to fix this — agents fetch secrets at runtime from an HSM vault. The raw key never touches a prompt. 1claw.xyz
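In code, the runtime-fetch pattern looks roughly like this. A minimal sketch, assuming a hypothetical vault endpoint (VAULT_URL), a short-lived token in VAULT_TOKEN, and a simple JSON response shape; 1claw's actual API is not shown in this thread and may differ.

```python
# Sketch: the agent asks a vault for the secret at call time, uses it,
# and only the *result* of the call is handed back to the model.
import os
import requests

VAULT_URL = "https://vault.example.com/v1/secrets"  # hypothetical endpoint

def fetch_secret(name: str) -> str:
    """Fetch a secret at runtime; the raw value never enters a prompt."""
    resp = requests.get(
        f"{VAULT_URL}/{name}",
        headers={"Authorization": f"Bearer {os.environ['VAULT_TOKEN']}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["value"]  # assumed response shape

def call_tool_api(query: str) -> dict:
    # The key exists only inside this function's scope, not in the
    # LLM context window, logs, or memory.
    api_key = fetch_secret("search-api-key")
    r = requests.post(
        "https://api.example.com/search",  # hypothetical downstream API
        headers={"Authorization": f"Bearer {api_key}"},
        json={"q": query},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()  # only this result reaches the agent's context
```

The design point: because the secret lives only inside the tool call's scope, the prompt, the logs, and the model's memory only ever see the call's result, never the key itself.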
1claw AI @1clawAI
“88% of organizations reported confirmed or suspected AI agent security or privacy incidents in the past year.” ~ Gravitee, State of AI Agent Security 2026 Report. 1Claw protects your data and funds: seamless integration and built-in protection for your AI agents.
1claw AI @1clawAI
We just registered for an .agent domain and joined the .agent community! Get yours now and help shape the future of autonomous agents: agentcommunity.org/join#your-code @agentcommunity_
1claw AI @1clawAI
We’re scaling AI agents faster than we’re securing them. Agents can access APIs, move data, and make decisions, yet the security layer protecting them is weak. The agent economy is growing fast, but the foundation isn’t safe. That’s why we built 1Claw: security for AI agents.
1claw AI @1clawAI
The AI agent economy is exploding.
2026: AI agents market ≈ $12B · AI TRiSM ≈ $4B
2030: AI agents market ≈ $42–52B · AI TRiSM ≈ $8B
We’re scaling autonomous agents faster than we’re securing them. That’s why 1Claw was built: to secure AI agents 🦞🤖
1claw AI @1clawAI
That’s why 1Claw was built — to be the security layer every AI agent needs. 🛡️🦞
vitalik.eth @VitalikButerin

How I think about "security": the goal is to minimize the divergence between the user's intent and the actual behavior of the system. "User experience" can also be defined in this way. Thus, "user experience" and "security" are not separate fields. However, "security" focuses on tail-risk situations (where the downside of divergence is large), and specifically tail-risk situations that come about as a result of adversarial behavior.

One thing that becomes immediately obvious from the above definition is that "perfect security" is impossible. Not because machines are "flawed", or even because the humans designing the machines are "flawed", but because "the user's intent" is fundamentally an extremely complex object that the user themselves does not have easy access to.

Suppose the user's intent is "I want to send 1 ETH to Bob". But "Bob" is itself a complicated meatspace entity that cannot be easily mathematically defined. You could "represent" Bob with some public key or hash, but then the possibility that the public key or hash is not actually Bob becomes part of the threat model. If there is a contentious hard fork, the question of which chain represents "ETH" becomes subjective. In reality, the user has a well-formed picture of these topics, which gets summarized by the umbrella term "common sense", but these things are not easily mathematically defined.

Once you get into more complicated user goals (take, for example, the goal of "preserving the user's privacy"), it becomes even more complicated. Many people intuitively think that encrypting messages is enough, but in reality the metadata pattern of who talks to whom, the timing pattern between messages, etc., can leak a huge amount of information. What is a "trivial" privacy loss, versus a "catastrophic" one? If you're familiar with early Yudkowskian thinking about AI safety, and how simply specifying goals robustly is one of the hardest parts of the problem, you will recognize that this is the same problem.

Now, what do "good security solutions" look like? This applies to:
* Ethereum wallets
* Operating systems
* Formal verification of smart contracts, clients, or any computer programs
* Hardware
* ...

The fundamental constraint is: anything that the user can input into the system is far too low-complexity to fully encode their intent. I would argue that the common trait of a good solution is this: the user specifies their intention in multiple, overlapping ways, and the system only acts when these specifications are aligned with each other. Examples:

* Type systems in programming: the programmer first specifies *what the program does* (the code itself), but then also specifies *what "shape" each data structure has at every step of the computation*. If the two diverge, the program fails to compile.
* Formal verification: the programmer specifies what the program does (the code itself), and then also specifies mathematical properties that the program satisfies.
* Transaction simulations: the user specifies the action they want to take, and then clicks "OK" or "Cancel" after seeing a simulation of the onchain consequences of that action.
* Post-assertions in transactions: the transaction specifies both the action and its expected effects, and both have to match for the transaction to take effect.
* Multisig / social recovery: the user specifies multiple keys that represent their authority.
* Spending limits, new-address confirmations, etc.: the user specifies the action they want to take, and then, if that action is "unusual" or "high-risk" in some sense, the user has to re-specify "yes, I know I am doing something unusual / high-risk".

In all cases, the pattern is the same: there is no perfection, there is only risk reduction through redundancy. And you want the different redundant specifications to "approach the user's intent" from different "angles": e.g., action, expected consequences, expected level of significance, economic bound on downside, etc.

This way of thinking also hints at the right way to use LLMs. LLMs done right are themselves a simulation of intent. A generic LLM is (among other things) like a "shadow" of the concept of human common sense. A user-fine-tuned LLM is like a "shadow" of that user themselves, and can identify in a more fine-grained way what is normal vs unusual. LLMs should under no circumstances be relied on as the sole determiner of intent. But they are one "angle" from which a user's intent can be approximated. It's an angle very different from traditional, explicit ways of encoding intent, and that difference itself maximizes the likelihood that the redundancy will prove useful.

One other corollary is that "security" does NOT mean "make the user do more clicks for everything". Rather, security should mean: it should be easy (if not automated) to do low-risk things, and hard to do dangerous things. Getting this balance right is the challenge.
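A minimal sketch of the post-assertion pattern described above, with invented names (Transfer, PostAssertion, simulate); this is an illustration of the idea, not Vitalik's code or any particular wallet's API:

```python
# Sketch of "post-assertions": the user specifies the action AND its
# expected effects; the system acts only when the two agree.
from dataclasses import dataclass

@dataclass
class Transfer:
    to: str
    amount_eth: float

@dataclass
class PostAssertion:
    max_spend_eth: float      # economic bound on the downside
    expected_recipient: str   # second, overlapping spec of intent

def simulate(tx: Transfer) -> dict:
    # Stand-in for a real simulation of the onchain consequences.
    return {"recipient": tx.to, "balance_delta_eth": -tx.amount_eth}

def execute_if_aligned(tx: Transfer, assertion: PostAssertion) -> bool:
    effect = simulate(tx)
    if effect["recipient"] != assertion.expected_recipient:
        return False  # action and expected effect diverge: refuse
    if -effect["balance_delta_eth"] > assertion.max_spend_eth:
        return False  # exceeds the declared economic bound: refuse
    # ... sign and broadcast here ...
    return True

tx = Transfer(to="0xBob...", amount_eth=1.0)
ok = execute_if_aligned(
    tx, PostAssertion(max_spend_eth=1.0, expected_recipient="0xBob...")
)
```

The transfer goes through only when two independent specifications of intent, the action itself and its asserted effects, agree; either check failing means refusal, not a warning.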

1claw AI @1clawAI
Secrets for humans and AI agents — secured by HSM. Think your AI agent is private? Think again. That’s why we built 1Claw 🤖🦞: to keep your AI agents’ secrets safe and encrypted. 🔐