
zkPass

@zkPass
building the verifiable internet with zkTLS | where web2 private data becomes verifiable and portable | governed by @zkPassDAO | $ZKP

zkPass 2.0 loading... Prove everything. Expose nothing. For humans. For agents. For the internet that comes next.




Agentic capability is improving fast. We believe Proof of Human is becoming critical for the internet and many of the platforms we use (like X). This paper explains why FaceID, face biometrics & government IDs won’t solve the problem, and what properties are most important.




zkPass is built on Ethereum. @zkPass is a privacy-preserving identity protocol that lets users prove information about themselves without revealing the underlying data. It enables secure verification of credentials onchain while keeping personal information private by default.
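To make the "prove without revealing" idea concrete, here is a minimal, hedged sketch of selective disclosure using hash commitments. This is not zkPass's actual protocol (which uses zkTLS and real zero-knowledge proofs); all names and fields below are illustrative. It only shows the commit-and-selectively-open pattern: a holder reveals one field of a committed credential while the others stay hidden.

```python
import hashlib
import json
import os

def commit(value: str, salt: bytes) -> str:
    """Hash commitment to a single credential field."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

# Issuer: commit to each field with its own random salt,
# then bind all commitments together into one credential root.
fields = {"name": "Alice", "country": "NL", "over_18": "true"}
salts = {k: os.urandom(16) for k in fields}
commitments = {k: commit(v, salts[k]) for k, v in fields.items()}
credential_root = hashlib.sha256(
    json.dumps(commitments, sort_keys=True).encode()
).hexdigest()

# Holder: disclose only one field (value + salt); the rest stay hidden.
disclosure = {
    "field": "over_18",
    "value": fields["over_18"],
    "salt": salts["over_18"].hex(),
    "commitments": commitments,
}

# Verifier: re-open the disclosed commitment and recompute the root,
# learning nothing about the undisclosed fields.
opened = commit(disclosure["value"], bytes.fromhex(disclosure["salt"]))
assert opened == disclosure["commitments"]["over_18"]
assert hashlib.sha256(
    json.dumps(disclosure["commitments"], sort_keys=True).encode()
).hexdigest() == credential_root
print("over_18 verified without revealing name or country")
```

A real zero-knowledge scheme goes further than this sketch: it can prove a predicate (e.g. "age is over 18") without opening any field value at all.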


BREAKING: New startup "RentAHuman" allows AI agents to rent humans to perform tasks they cannot physically perform themselves.



An “Age Verification for all Operating Systems” (including Windows, macOS, & Linux) bill has been introduced in the state of Illinois. This new bill (Illinois SB 3977) is *very* similar to the recently passed California bill (and the introduced Colorado bill) and, if passed, would set a deadline of January 1st, 2028 for compliance. The Illinois version of the bill is being sponsored by Laura Ellman (Democrat). legiscan.com/IL/bill/SB3977…


Really interesting read from @0xRaghav on the "trust layer" for AI agents as they participate in the economy. A few thoughts:

- We've gotten to the point where AI agents either (1) just say things without doing them, or (2) do the wrong/bad/harmful thing.
- Both failure modes are typically hard to detect and fix. For example, my OpenClaw often lies about tasks and capabilities; I have to verify by asking Claude Code to check its code. You can imagine this problem being significantly worse for a nontechnical user.
- A short-term guardrail is extending everything we've built to protect secret keys (MPC, TEEs, sharding, multisigs) to API keys and other "authorization slips" at large. But then you run into the "user signs everything in MetaMask" problem.
- A medium- to long-term solution may be "verifiable compute" (remember zkVMs?) over both intent and output: proving that the AI agent is not malicious and that it is not lazy. ZKPs of objective yardsticks like the number of HTTP pages visited, data streamed, or commands executed can all be part of the solution (and should ideally be deterministic).
- The big thorny problem in building the "trust layer", though, is that there's an inherent tension between trust and monetization. Think about ads/SEO for malicious sites. What if there are bribery markets for trust?
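The "objective yardsticks" idea above can be sketched without any ZK machinery at all: an agent keeps a tamper-evident hash chain of its actions, and a verifier replays the transcript to recompute the metrics deterministically. This is a hedged illustration, not a zero-knowledge proof (the verifier here sees the full log); the action schema and metric names are made up for the example.

```python
import hashlib
import json

def append(chain_head: str, action: dict) -> str:
    """Extend a tamper-evident hash chain with one agent action."""
    payload = json.dumps(action, sort_keys=True).encode()
    return hashlib.sha256(chain_head.encode() + payload).hexdigest()

# Agent side: log each action into the chain as it happens.
actions = [
    {"type": "http_get", "bytes": 2048},
    {"type": "http_get", "bytes": 512},
    {"type": "shell", "cmd": "ls"},
]
head = "genesis"
for a in actions:
    head = append(head, a)

# Agent publishes (actions, head) plus claimed metrics.
claimed = {"pages": 2, "bytes": 2560, "commands": 1}

# Verifier side: replay the log against the committed head,
# then check each claimed yardstick deterministically.
replay = "genesis"
for a in actions:
    replay = append(replay, a)
assert replay == head
assert sum(1 for a in actions if a["type"] == "http_get") == claimed["pages"]
assert sum(a.get("bytes", 0) for a in actions) == claimed["bytes"]
assert sum(1 for a in actions if a["type"] == "shell") == claimed["commands"]
print("transcript and claimed metrics check out")
```

A zkVM-style approach would replace the replay step with a succinct proof that the same checks passed, so the verifier never needs the raw log.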
