
Auth0 Lab
@Auth0Lab
Exploring the future of identity: https://t.co/zl6VJ52QaD Community Discord: https://t.co/JRJtt0m020

Today we joined @tiagosada and @worldnetwork on stage to unveil what we've been working on at @okta: Human Principal. Let's dive in 👇

Dev products with free offerings and social media platforms are vulnerable to abuse, and AI makes scaled abuse and fraud easier than ever. As agents continue to go mainstream, these problems are expected to get worse. Without a way to identify the human behind an agent, the remaining options are aggressive rate limiting or heavyweight per-product KYC processes, both of which add friction, can introduce data privacy risks, and undermine the value of automated agentic workflows.

*Human Principal will allow API builders to verify whether a human stands behind an agent and its actions, and to enforce policies accordingly. Humans will be able to verify themselves using a number of verification methods and obtain a device-bound cryptographic proof that carries across products without cumbersome re-verification. @world_id, slated to become one of the first Human Principal integration partners, is set to provide a privacy-preserving, user-friendly, and ubiquitous proof-of-human verification method.

This will enable features like per-human rate limits for agent traffic, abuse-protected free tiers, and a cleaner onboarding flow for agents that need to access services on behalf of their human principal. The waitlist to join the upcoming Human Principal beta is now open at humanprincipal.ai

*Forward-looking statements apply world.org/blog/announcem…


Little-known fact: the Anthropic Labs team (the team I joined Anthropic to be on) shipped:
- MCP
- Skills
- Claude Desktop app
- Claude Code

It was just a few of us, shipping fast, trying to keep pace with what the model was capable of. Those early Desktop computer-use prototypes, back in the Sonnet 3.6 days, felt clunky and slow. But it was easy to squint and imagine all the ways people might use it once it got really good.

Fast forward to today. I am so excited to release full computer use in Cowork and Dispatch. Really excited to see what you do with it!

your users want agents that know about them. their past chats, preferences, habits, needs, etc. so we've been experimenting with agent memory at @Auth0, to enable you to build agents that act on users' behalf, the way your users expect. interested? DM me or @jcenturion86

Moltbook voting can’t distinguish between • 1 person running 1,000 molts • 1,000 molts run by 1,000 different people Prove ownership over your agent swarms with onemolt.ai using World ID. Platforms can verify incoming molts have a human owner and reject misbehaving swarms.

We’re expanding Labs—the team behind Claude Code, MCP, and Cowork—and hiring builders who want to tinker at the frontier of Claude’s capabilities. Read more: anthropic.com/news/introduci…

Today we launch the Agentic AI Foundation (AAIF) with project contributions of MCP (@AnthropicAI), goose (@blocks) and AGENTS.md (@OpenAI), creating a shared ecosystem for tools, standards, and community-driven innovation. Learn more about this major step: hubs.la/Q03Xvw3v0

Sign in with Vercel is now generally available. Add Vercel as a sign-in method to your apps with OAuth + OpenID. Try the example app and start building. vercel.com/changelog/sign…
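"Sign in with Vercel" uses standard OAuth 2.0 + OpenID Connect, so the first step in any integration is building the authorization request. The sketch below is generic OIDC, not Vercel's documented API: the endpoint URL is a placeholder (the real one comes from the provider's OIDC discovery document), and the scopes shown are the common OIDC ones.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Placeholder endpoint; in practice, read it from the provider's
# /.well-known/openid-configuration discovery document.
AUTHORIZE_ENDPOINT = "https://provider.example/oauth/authorize"

def build_sign_in_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build an OIDC authorization-code request URL."""
    params = {
        "response_type": "code",          # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",  # "openid" makes it an OIDC request
        "state": state,                   # opaque value for CSRF protection
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
```

After the user approves, the provider redirects back to `redirect_uri` with a `code`, which the app exchanges server-side for tokens, including the OIDC ID token identifying the user.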

an amazing engineer just signed an offer to join the @Auth0Lab team. super excited! smart guy, good person, ships a lot, loves to code. excited for what he'll bring to the team. great way to wrap up the week. cheers 🥃

Auth0 for AI Agents is GA. Smart agents are easy. Trusted agents are what matter. 🔐 User Auth, Token Vault, Async Auth, FGA for RAG — all built for secure, production AI. Learn more here 👇 bit.ly/4oaOZtV

some exciting news 🗞️ 5 years ago we set out to redefine how devs approach authorization at scale, and a few months later decided to open source the core of @auth0 FGA and donate it to @CloudNativeFdn. I am humbled by what has happened since: the project we created is being used by companies like @grafana, @sourcegraph, @canonical and @docker … and now @openfga has reached CNCF incubation stage!! congratulations to @aaguiar and the rest of the OpenFGA community for this amazing milestone!
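For readers unfamiliar with fine-grained authorization (FGA), the core idea in OpenFGA's Zanzibar-style model is a `check(user, relation, object)` query answered from relationship tuples plus rewrite rules in an authorization model. The toy below is a minimal in-memory sketch of that idea, not OpenFGA's actual engine or API; the single hard-coded rewrite rule ("editor implies viewer") stands in for a real typed model.

```python
# Relationship tuples: (user, relation, object).
TUPLES = {
    ("user:anne", "editor", "doc:roadmap"),
    ("user:bob", "viewer", "doc:roadmap"),
}

# One illustrative rewrite rule: asking for "viewer" also matches "editor".
IMPLIED = {"viewer": ("viewer", "editor")}

def check(user: str, relation: str, obj: str) -> bool:
    """Return True if `user` has `relation` on `obj`, directly or via a rewrite."""
    for rel in IMPLIED.get(relation, (relation,)):
        if (user, rel, obj) in TUPLES:
            return True
    return False
```

The real system generalizes this to typed objects, usersets, and nested rewrites (e.g. "viewers of the parent folder"), evaluated at scale, which is what makes it useful for things like per-document permissions in RAG pipelines.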


we hijacked microsoft's copilot studio agents and got them to spill out their private knowledge, reveal their tools, and let us use them to dump full CRM records. these are autonomous agents.. no human in the loop #DEFCON #BHUSA @tamirishaysh


Today we launched a new product called ChatGPT Agent. Agent represents a new level of capability for AI systems and can accomplish some remarkable, complex tasks for you using its own computer. It combines the spirit of Deep Research and Operator, but is more powerful than that may sound—it can think for a long time, use some tools, think some more, take some actions, think some more, etc. For example, we showed a demo in our launch of preparing for a friend’s wedding: buying an outfit, booking travel, choosing a gift, etc. We also showed an example of analyzing data and creating a presentation for work.

Although the utility is significant, so are the potential risks. We have built a lot of safeguards and warnings into it, and broader mitigations than we’ve ever developed before from robust training to system safeguards to user controls, but we can’t anticipate everything. In the spirit of iterative deployment, we are going to warn users heavily and give users freedom to take actions carefully if they want to.

I would explain this to my own family as cutting edge and experimental; a chance to try the future, but not something I’d yet use for high-stakes uses or with a lot of personal information until we have a chance to study and improve it in the wild. We don’t know exactly what the impacts are going to be, but bad actors may try to “trick” users’ AI agents into giving private information they shouldn’t and take actions they shouldn’t, in ways we can’t predict.

We recommend giving agents the minimum access required to complete a task to reduce privacy and security risks. For example, I can give Agent access to my calendar to find a time that works for a group dinner. But I don’t need to give it any access if I’m just asking it to buy me some clothes. There is more risk in tasks like “Look at my emails that came in overnight and do whatever you need to do to address them, don’t ask any follow up questions”. This could lead to untrusted content from a malicious email tricking the model into leaking your data.

We think it’s important to begin learning from contact with reality, and that people adopt these tools carefully and slowly as we better quantify and mitigate the potential risks involved. As with other new levels of capability, society, the technology, and the risk mitigation strategy will need to co-evolve.
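The least-privilege advice above can be sketched as a gate between an agent and its tools: the task defines a set of granted scopes, and any tool call outside that grant is refused. Scope names and functions here are illustrative, not any product's real permission model.

```python
# Hypothetical sketch: grant an agent only the scopes its task needs,
# and refuse tool calls that require anything outside that grant.

def make_tool_gate(granted_scopes: set[str]):
    """Return a callable that only runs tools whose scope was granted."""
    def call_tool(tool_name: str, required_scope: str) -> str:
        if required_scope not in granted_scopes:
            # Default-deny: anything not explicitly granted is refused.
            raise PermissionError(
                f"{tool_name} requires scope '{required_scope}', which was not granted"
            )
        return f"{tool_name}: allowed"
    return call_tool
```

In the calendar example from the post, a "find a dinner slot" task would be created with only `calendar:read`, so even if untrusted content tricks the model into attempting to send mail or read documents, those calls fail at the gate rather than executing.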

