Evan Sims
422 posts

@evansims
CTO & Co-Founder @InferaDB — the Authorization Database. Previously @Okta, @Auth0, @OpenFGA and @Ushahidi.
Champaign, Illinois · Joined January 2007
469 Following · 2.1K Followers

Your authorization system can't prove it worked correctly yesterday.
Try it. Pull your logs and database backups. I'll wait.
This is the question that breaks SOC 2 audits. You scramble to piece together a story: server logs here, auth service logs there, maybe some CloudTrail events. You're building a narrative from fragments.
Then they ask the follow-up:
How do you know these logs weren't modified?
Can you prove the timestamp is accurate?
Was this policy actually enforced, or just logged?
You can't answer. You're calling circumstantial evidence "proof."
Now imagine answering with a cryptographic proof. One query. The exact permission state at that moment, cryptographically signed and verifiable. Not a reconstruction — proof.
That's where authorization needs to go.
That's what we're building with InferaDB.
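The post doesn't publish InferaDB's internals, but the idea of a "cryptographically signed and verifiable" decision record can be sketched. Here is a minimal, hypothetical illustration in Python: each decision is hash-chained to its predecessor and HMAC-signed, so editing any past record breaks verification. All names and the key handling are invented for the sketch; a real system would hold the signing key in an HSM.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # hypothetical; never hard-code keys in practice

def record_decision(chain, subject, resource, allowed):
    """Append a tamper-evident record of one authorization decision."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "subject": subject,
        "resource": resource,
        "allowed": allowed,
        "ts": time.time(),
        "prev": prev_hash,  # links each record to its predecessor
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    body["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain):
    """Re-derive every hash and signature; any edit breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("subject", "resource", "allowed", "ts", "prev")}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["sig"], expected):
            return False
        prev = rec["hash"]
    return True

log = []
record_decision(log, "user:alice", "doc:42", True)
record_decision(log, "user:bob", "doc:42", False)
assert verify_chain(log)
log[0]["allowed"] = False      # tamper with history...
assert not verify_chain(log)   # ...and verification fails
```

The point of the sketch: the auditor's follow-ups ("were the logs modified?") become a single verification pass instead of a narrative.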

Really humbling to see how many projects have integrated it, directly or through other dependencies. You can learn more about PHP Discovery at github.com/psr-discovery — it really helps take the headaches out of supporting interoperable components in your libraries!

More than that, it's so amazing to see it integrated into the Tempest PHP framework (tempestphp.com — thanks @brendt_gd!) and powering one of my favorite sites, stitcher.io

"We deploy in eu-west-1" is not a data residency guarantee.
A deployment configuration is a policy. A consensus protocol that physically can't replicate data outside a jurisdiction is a guarantee.
One satisfies an auditor's question. The other satisfies the follow-up.
Here's what breaks: you configure your database to deploy only in eu-west-1. Passes the audit. Six months later, someone adds disaster recovery in us-east-1 for redundancy. Or a performance optimization triggers cross-region replication. Or a Terraform change accidentally updates the region config.
Your policy didn't change. Your *configuration* did.
The auditor comes back: how do you *know* customer data hasn't left the EU? You check your current config—looks good. Can you prove it never happened? That a misconfiguration didn't violate residency for 3 hours last month before someone caught it?
You can't. Configuration is mutable. It can change, drift, break. You're trusting your deployment pipeline, your IaC reviews, your monitoring alerts—hoping nobody made a mistake.
A consensus protocol works differently. If the protocol requires 3-of-5 nodes in the EU to commit a write, and only EU nodes exist in the quorum, the data physically cannot leave that jurisdiction. Not through misconfiguration. Not through a rogue deployment. Not at all.
It's not a policy you enforce. It's a mathematical constraint you can't violate.
That's the difference between checking a compliance box and building compliance into the architecture. That's InferaDB.
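The 3-of-5 quorum idea above can be made concrete. This is a hypothetical sketch (node names and regions invented, not InferaDB's actual protocol): the quorum membership contains only EU nodes, so acknowledgments from anything outside that membership simply don't count toward a commit.

```python
# Residency-constrained quorum: a write commits only when 3 of the 5
# member replicas acknowledge it, and every member is in the EU.
QUORUM_MEMBERS = {           # the only nodes allowed to vote
    "node-a": "eu-west-1",
    "node-b": "eu-west-1",
    "node-c": "eu-central-1",
    "node-d": "eu-west-3",
    "node-e": "eu-north-1",
}
QUORUM_SIZE = 3

def commit(acks):
    """Commit iff a quorum of *member* nodes acknowledged the write.

    Acks from nodes outside the membership (say, a stray us-east-1
    replica added by a bad deploy) are ignored, so a misconfiguration
    can stall writes but can never commit one outside the jurisdiction.
    """
    votes = {n for n in acks if n in QUORUM_MEMBERS}
    return len(votes) >= QUORUM_SIZE

assert commit(["node-a", "node-b", "node-c"])            # EU quorum: committed
assert not commit(["node-a", "rogue-us-1", "rogue-us-2"])  # rogue replicas can't vote
```

That's the distinction the post is drawing: the region list is part of the commit rule itself, not a config value that drift can silently change.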

Your authorization layer is the only part of your infrastructure you can't verify — and it's making every access decision.
You've got observability everywhere. APIs, databases, services — all instrumented. But auth? It's a black box. Outputs "allow" or "deny" and you trust it.
Traditional auth systems weren't built for proof. They're optimized for speed: in-memory policy engines, cached role checks, ephemeral decisions that vanish the moment they're made.
You log the outcome ("user X accessed resource Y"), but can you prove the decision was correct? That the policy was actually enforced, permissions hadn't drifted, nothing was tampered with?
SOC 2 and HIPAA demand evidence of continuous controls. If your auth layer can't cryptographically prove its own state, you're reconstructing narratives from scattered logs and hoping nothing's missing.
InferaDB was built with verifiability at the foundation. Every permission decision gets cryptographically recorded in an immutable structure. You don't trust it worked — you prove it.

The shift to AI agents isn't just an authorization *policy* problem. It's an authorization *proof* problem.
When an agent makes 10,000 access decisions per hour instead of a human making 10, the question isn't just "did the agent have the right permissions?" It's "can you prove it — after the fact, at scale, to a regulator?"
A regulator asks about a specific data access your AI agent made 90 days ago. You need to prove the agent had permission at that exact moment, that the policy was actually enforced (not just logged), that nobody altered the record since, and that the decision chain is complete.
You start pulling application logs, database snapshots, hoping nothing got rotated out. You're building a narrative from fragments. Not proof.
At 10 decisions per hour, you might get away with that. At 10,000? You're drowning in circumstantial evidence while regulators demand cryptographic certainty.
Verifiable authorization is the answer. Every permission check creates an immutable, cryptographically signed record. One query returns the complete proof — not a reconstruction.
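One way a single query can carry a complete, checkable proof is a Merkle inclusion proof: the verifier needs only the record, about log2(n) sibling hashes, and a previously published root. This is a generic illustration of that technique, not InferaDB's actual wire format.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling, leaf-on-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

decisions = [f"agent-7 allow doc:{i}".encode() for i in range(6)]
root = merkle_root(decisions)          # published when the decisions were made
proof = inclusion_proof(decisions, 3)  # compact: ~log2(n) hashes
assert verify(decisions[3], proof, root)
assert not verify(b"agent-7 allow doc:999", proof, root)
```

The proof size grows logarithmically, which is why this shape survives the jump from 10 decisions an hour to 10,000.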
AI agents aren't just faster than humans. They're exposing that our authorization systems were never built to prove their own correctness.
InferaDB is the solution.

Zero trust for identity? Check. For devices and networks? Yep. For workloads? Of course.
For the authorization layer that actually enforces access decisions? Crickets.
Instead, it's a patchwork of policy engines, role tables, and permission checks scattered across services. Constantly updated, rarely audited end-to-end. When it breaks or drifts out of sync, every system downstream inherits that failure.
No way to verify itself in real time.
We brought zero trust everywhere except the actual root of trust — where "allow" or "deny" becomes reality.
That's the problem we're solving with InferaDB.
