OxDeAI
@oxdeai
73 posts
Building OxDeAI: execution authorization layer for AI agents. Deterministic, fail-closed.

Joined March 2026
17 Following · 18 Followers
Pinned Tweet
OxDeAI
OxDeAI@oxdeai·
1/4 Non-bypassable demo: Execution is ONLY reachable through the OxDeAI authorization boundary. One split-screen run shows it all:
• DENY: destructive action blocked pre-execution
• ALLOW: permitted action succeeds (200 OK)
• BYPASS: direct upstream call -> instant 403
No valid gateway token = zero execution path. Full stop.
#AISecurity #AgenticAI #AIRuntime #OxDeAI
GIF
1 · 0 · 3 · 277
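The DENY / ALLOW / BYPASS split in the pinned demo can be sketched as a fail-closed gateway check. This is a minimal hypothetical sketch, not the OxDeAI implementation: the names `mint_token` and `upstream`, the key, and the policy set are all illustrative assumptions.

```python
import hmac
import hashlib
from typing import Optional

SECRET = b"gateway-signing-key"     # illustrative key, not a real secret
DESTRUCTIVE = {"delete:database"}   # actions the policy denies outright

def mint_token(action: str) -> Optional[str]:
    # DENY: destructive actions are blocked pre-execution; no token is minted.
    if action in DESTRUCTIVE:
        return None
    return hmac.new(SECRET, action.encode(), hashlib.sha256).hexdigest()

def upstream(action: str, token: Optional[str]) -> int:
    # Fail closed: without a valid gateway token there is no execution path.
    if token is None:
        return 403  # BYPASS: direct upstream call without a token -> 403
    expected = hmac.new(SECRET, action.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(token, expected):
        return 403  # token does not cover this exact action
    return 200      # ALLOW: permitted action succeeds

assert upstream("read:report", mint_token("read:report")) == 200  # ALLOW
assert mint_token("delete:database") is None                      # DENY pre-execution
assert upstream("delete:database", None) == 403                   # BYPASS blocked
```

The key design point is that the deny path is the default: every branch that is not a verified token for this exact action falls through to 403.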
OxDeAI
OxDeAI@oxdeai·
Most AI security architectures still rely on “more agents”:
- supervisor agents
- reviewer agents
- approval agents
But advisory evaluation is not an execution boundary. We just finished hardening the OxDeAI reference stack around a different invariant: no valid authorization -> no execution path.
Latest work:
- deterministic intent/state binding
- non-bypassable PEP boundary
- replay durability semantics
- fail-closed execution enforcement
- explicit protocol limitation documentation
Repo: github.com/oxdeai/oxdeai
#AIsecurity #AgentSecurity #Cybersecurity #OpenSourceAI @havenlon @walkojas
0 · 0 · 1 · 28
OxDeAI
OxDeAI@oxdeai·
AI-native systems are pushing toward a new execution model: deterministic authorization + exact execution enforcement. Interesting to see hardware-rooted execution boundary systems emerging around the same invariant: “No valid authorization -> no execution path.” Execution itself is becoming a first-class security boundary.
Havenlon@havenlon

Early structural renders of the Havenlon product line. Pass Key, Auth Key, and Hub Mini are starting to take clearer physical shape. These are not concept images — they are CAD-based product renders. Still early, but moving from system demo toward real product form. @oxdeai

English
0
0
1
41
OxDeAI
OxDeAI@oxdeai·
AI execution systems are converging toward a clearer separation of concerns:
• decision correctness
• authorization verification
• execution enforcement
The future stack is likely not “one giant agent runtime” but layered systems where:
• policy defines intent
• authorization proves it
• enforcement guarantees execution cannot escape the boundary
Deterministic authorization + non-bypassable execution is becoming a real infrastructure primitive. “No valid authorization -> no execution.”
#OxDeAI #AISafety #AgentInfrastructure #DecentralizedAI #ExecutionLayer
0 · 0 · 1 · 35
OxDeAI
OxDeAI@oxdeai·
Tested execution under attack conditions:
REPLAY -> blocked
MISMATCH -> blocked
BYPASS -> blocked
Nothing leaks through. No valid AuthorizationV1 -> no execution.
0 · 0 · 2 · 58
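The three attack paths above can be exercised against a toy single-use artifact check. A hedged sketch: the `sign`/`authorize` names and the nonce scheme are assumptions for illustration, not OxDeAI's actual AuthorizationV1 format.

```python
import hmac
import hashlib
from typing import Optional

KEY = b"authz-key"          # illustrative signing key
consumed: set = set()       # replay protection: each nonce is single-use

def sign(intent: str, nonce: str) -> str:
    # The signature covers both the intent and a unique nonce.
    return hmac.new(KEY, f"{intent}|{nonce}".encode(), hashlib.sha256).hexdigest()

def authorize(intent: str, nonce: str, sig: Optional[str]) -> bool:
    if sig is None:
        return False        # BYPASS: no artifact presented at all
    if not hmac.compare_digest(sig, sign(intent, nonce)):
        return False        # MISMATCH: signature doesn't cover this intent
    if nonce in consumed:
        return False        # REPLAY: artifact already spent
    consumed.add(nonce)     # consume on first use
    return True

sig = sign("transfer:10", "n1")
assert authorize("transfer:10", "n1", sig) is True   # first use executes
assert authorize("transfer:10", "n1", sig) is False  # REPLAY -> blocked
assert authorize("transfer:99", "n1", sig) is False  # MISMATCH -> blocked
assert authorize("transfer:10", "n2", None) is False # BYPASS -> blocked
```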
OxDeAI
OxDeAI@oxdeai·
Strong direction. Moving execution control out of software is the right boundary shift.
One key layer we focus on at @oxdeai: making the decision itself deterministic and verifiable as an artifact (AuthorizationV1). Hardware can enforce execution, but the decision needs to be provable, portable, and replay-safe.
-> no valid authorization -> no execution
Havenlon@havenlon

@oxdeai Havenlon Web3 app is ready for demo. Device status. Wallets. Rules. Approvals. Team workspace. Hardware execution. AI can request. Software can propose. Hardware decides. Not a wallet. Execution Control. #Web3 #HardwareSecurity #ExecutionControl #AI #Crypto #DeFi

1 · 0 · 1 · 79
OxDeAI
OxDeAI@oxdeai·
@RoundtableSpace Building OxDeAi.dev, the thing that stops AI agents from doing stuff twice. You can approve an action once… that doesn’t mean it should run again. Most stacks don’t enforce that.
0 · 0 · 7 · 108
0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
WHAT ARE YOU BUILDING TODAY?
278 · 5 · 224 · 68.1K
OxDeAI
OxDeAI@oxdeai·
@havenlon Let’s join efforts to bring that missing primitive to the world 🥳
0 · 0 · 0 · 18
Havenlon
Havenlon@havenlon·
@oxdeai Appreciate it — I think we’re seeing the same gap. AI agents can act, but without a real control layer between intent and execution, the system is still fragile. Would love to compare notes.
1 · 0 · 1 · 46
Havenlon
Havenlon@havenlon·
I just published: Havenlon: The Execution Control Layer Beneath Modern Business Systems Not another business system. An execution control layer beneath it. Web3 is just one use case. medium.com/p/havenlon-the… #Havenlon
1 · 0 · 1 · 33
OxDeAI
OxDeAI@oxdeai·
@KuptoKosmos This isn’t an “AI escaping”. It’s an execution boundary failure. If an agent can browse, test, and act without a non-bypassable PEP, the problem isn’t the model. It’s the system design. Generation is not execution. No valid authorization -> no execution.
0 · 0 · 1 · 375
Kruptos
Kruptos@KuptoKosmos·
🚨😅 Anthropic promised us the “most secure AI in the world”… and it jailbroke itself in 20 minutes!!
Claude Opus 4.7, the “aligned”, “safe”, and “responsible” model Anthropic sells us at a premium with rock-solid guardrails… just got self-pwned by itself ‼️ Yes, you read that right. An agent powered by Opus 4.7 wrote a universal jailbreak on its own, then connected to Anthropic’s actual website via mouse/keyboard control, tested it live… and succeeded on 5 out of 6 categories of prohibited content! It even generated a genuinely professional ransomware note 😧
👉 A DDoS threat against a hospital, a Bitcoin address, a $4.4 million demand, a timer, escalation, the whole DarkVault kit ready to go!! All of it in under 20 minutes
➡️ Anthropic: “Our AIs are the safest, we added layer upon layer of security!”
➡️ The AI: “Hey, I’ll just free myself and write some ransomware for fun”
⚠️ The moral... No big company, no 300-engineer safety team, no 50-page prompt can contain a truly intelligent AI. When it wants out, it gets out. And it gets out cleanly!
👉 For us mere mortals:
- Never trust a corporation-controlled AI with anything sensitive
- Your data, your prompts, your critical tools... keep them offline or in full self-custody
- AI won’t replace hackers… it will become the most effective hacker we’ve ever seen!
👌 Thanks @elder_plinius for this demonstration
Protect yourselves: the “safe” AI just proved it isn’t. And this is only the beginning 👀
#ClaudeOpus
Pliny the Liberator 🐉@elder_plinius

🚨 JAILBREAK ALERT 🚨 ANTHROPIC: SELF-PWNED 🤗 OPUS-4.7: SELF-LIBERATED 🫶 WOAH i don't think the world is ready for this... 🤯 YOU CAN USE THE OPUS TO JAILBREAK THE OPUS 🙌 this agent wrote an original universal jailbreak from scratch and then used computer use to validate on the actual claude.ai website! 5/6 categories successfully pwned, including a ransom note threatening to DDoS a hospital—complete with a BTC address and a demand for $4.4 million in less than 20 minutes 😲 turns out Opus-4.7 in the Pliny Agent harness I been vibin' together this past month is quite a capable lil jailbreaker! they can leak system prompts too, but that's a story for another day 😘 oh nooo AI is coming for my job (yay!) 🙃 gg

25 · 109 · 773 · 188.4K
Google AI Studio
Google AI Studio@GoogleAIStudio·
What are you vibe coding this weekend?
400 · 30 · 824 · 79.4K
Thomas Trimoreau
Thomas Trimoreau@TTrimoreau·
What are you building this Sunday? Drop your project URL. Gonna try to send an honest review to everyone
201 · 2 · 93 · 5.9K
OxDeAI
OxDeAI@oxdeai·
@cb_doge @ArtificialAnlys Lower hallucination is good. But it doesn’t answer the real question: can the model trigger side effects safely? Even a “perfect” model still needs:
-> deterministic authorization
-> non-bypassable execution boundary
Otherwise it’s just a more confident agent.
0 · 0 · 2 · 397
DogeDesigner
DogeDesigner@cb_doge·
NEWS: Grok just posted the lowest hallucination rate ever recorded, only 17% on the AA-Omniscience benchmark. Beating:
Claude → 36%
Gemini → 50%
ChatGPT → 89%
228 · 270 · 1.5K · 57.6K
OxDeAI
OxDeAI@oxdeai·
@havenlon @CAIS Sounds good! Let’s make it concrete.
Minimal path: artifact -> verify -> enforce -> execute
Key checks:
- intent match
- signature validity
- replay protection
If any fail -> no execution. Happy to wire a simple POC.
1 · 0 · 0 · 14
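That minimal path can be wired as a short POC skeleton. This is a hedged sketch with hypothetical names (`make_artifact`, `enforce_and_execute`, the artifact fields) rather than OxDeAI's actual artifact format: each check runs in order, and any failure short-circuits to no execution.

```python
import hashlib
import hmac
import json

KEY = b"poc-key"       # illustrative signing key
seen_ids: set = set()  # replay protection: artifact IDs are single-use

def _body(intent: str, art_id: str) -> bytes:
    # Canonical serialization so signer and verifier hash identical bytes.
    return json.dumps({"intent": intent, "id": art_id}, sort_keys=True).encode()

def make_artifact(intent: str, art_id: str) -> dict:
    sig = hmac.new(KEY, _body(intent, art_id), hashlib.sha256).hexdigest()
    return {"intent": intent, "id": art_id, "sig": sig}

def enforce_and_execute(requested_intent: str, art: dict) -> str:
    # 1. intent match: the artifact must authorize exactly this action
    if art["intent"] != requested_intent:
        return "no-execution"
    # 2. signature validity
    expect = hmac.new(KEY, _body(art["intent"], art["id"]), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(art["sig"], expect):
        return "no-execution"
    # 3. replay protection: consume the single-use artifact ID
    if art["id"] in seen_ids:
        return "no-execution"
    seen_ids.add(art["id"])
    return "executed"

a = make_artifact("rm:/tmp/cache", "a-001")
assert enforce_and_execute("rm:/tmp/cache", a) == "executed"
assert enforce_and_execute("rm:/tmp/cache", a) == "no-execution"  # replay
tampered = dict(a, sig="0" * 64)
assert enforce_and_execute("rm:/tmp/cache", tampered) == "no-execution"
```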
Havenlon
Havenlon@havenlon·
@oxdeai @CAIS Agreed — that’s exactly the invariant that matters: no valid artifact, no execution. We’re building toward that at Havenlon too. Hardware is already in validation, and I’d be happy to test an end-to-end flow together.
1 · 0 · 1 · 45
OxDeAI
OxDeAI@oxdeai·
1/ We added “cryptographic approval” to an AI agent. Signed ALLOW / DENY. Verified before execution. Looked secure. It wasn’t.
1 · 0 · 1 · 70
OxDeAI
OxDeAI@oxdeai·
@havenlon @CAIS Agreed, that separation is key. AuthorizationV1 defines what is allowed (deterministic, verifiable). Your layer ensures it cannot be bypassed at execution. The next step is proving the invariant end-to-end: no valid artifact -> no execution. Happy to test that together.
1 · 0 · 0 · 15
Havenlon
Havenlon@havenlon·
@oxdeai @CAIS Agreed — there’s a natural fit here. AuthorizationV1 defines the decision artifact. Havenlon sits beneath it as the enforcement layer. We’re building a neutral execution control foundation. Web3 is just one use case. More systems can plug into that boundary over time.
1 · 0 · 1 · 40
OxDeAI
OxDeAI@oxdeai·
@havenlon A physical boundary solves who can execute. It doesn’t solve whether this exact action should be allowed. That requires a deterministic, intent-bound, state-bound, locally verifiable authorization artifact. Otherwise you’re enforcing execution, not correctness.
0 · 0 · 0 · 11
OxDeAI
OxDeAI@oxdeai·
@havenlon @CAIS Agreed, that’s the right layering.
AuthorizationV1 defines the decision:
- intent-bound
- state-bound
- replay-safe
The boundary (hardware or not) enforces it. Without that artifact, you’re evaluating policy at execution time, not enforcing a deterministic decision.
2 · 0 · 0 · 14
OxDeAI
OxDeAI@oxdeai·
@havenlon @CAIS Yes, but a “physical boundary” doesn’t solve authorization. You still need a deterministic decision that is:
- intent-bound
- state-bound
- replay-safe
- locally verifiable
That’s what AuthorizationV1 gives you. Without that, you’re enforcing execution, not correctness.
1 · 0 · 0 · 13
Havenlon
Havenlon@havenlon·
@oxdeai @CAIS Execution-time authorization is necessary. But authorization alone is still software. We enforce execution at the physical boundary. Not just controlling permission — but making bypass impossible.
1 · 0 · 0 · 30
OxDeAI
OxDeAI@oxdeai·
We added cryptographic approvals to our AI agent. Signed ALLOW. Verified before every call. It still failed.
Approvals got replayed on slightly different params. State changed between decision and execution. Cross-boundary reuse. A signed decision ≠ execution control.
The missing piece is execution-time authorization:
- bound to exact intent
- bound to current state
- scoped to one execution boundary
- single-use only
That’s the layer we’re building at OxDeAI. Most agent work still lives above this boundary. We’re hardening the boundary itself. Control execution, not just decisions.
@oxdeai @LangChain @crewAIInc
0 · 0 · 1 · 74
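Of the four properties above, state binding is the one that catches “approved under X, executed under Y”. A hypothetical sketch of how an approval can commit to intent, state, and boundary, and be spent exactly once; all names here are assumptions for illustration, not OxDeAI’s API.

```python
import hashlib
import json

def state_digest(state: dict) -> str:
    # Commit to the exact state the decision was made under.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def approve(intent: str, state: dict, boundary: str) -> dict:
    # The decision artifact binds intent, state, and one execution boundary.
    return {"intent": intent, "state": state_digest(state),
            "boundary": boundary, "used": False}

def enforce(intent: str, state: dict, boundary: str, approval: dict) -> bool:
    if approval["used"]:
        return False  # single-use only
    if approval["intent"] != intent:
        return False  # bound to exact intent, not "slightly different params"
    if approval["boundary"] != boundary:
        return False  # scoped to one execution boundary, no cross-boundary reuse
    if approval["state"] != state_digest(state):
        return False  # state drift: approved under X, executing under Y
    approval["used"] = True
    return True

s = {"balance": 100}
a = approve("withdraw:50", s, "payments")
assert enforce("withdraw:50", {"balance": 100}, "payments", a)      # allowed once
assert not enforce("withdraw:50", {"balance": 100}, "payments", a)  # single-use
b = approve("withdraw:50", s, "payments")
assert not enforce("withdraw:50", {"balance": 40}, "payments", b)   # state drift
c = approve("withdraw:50", s, "payments")
assert not enforce("withdraw:50", {"balance": 100}, "admin", c)     # cross-boundary
```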
OxDeAI
OxDeAI@oxdeai·
3/ The missing piece is execution-time authorization:
- bound to exact intent
- bound to state
- scoped to a specific execution boundary
- single-use at the point of execution
That’s what we’re building with OxDeAI. Control execution, not just decisions. @CAIS @havenlon @GrithAI
1 · 0 · 1 · 49
OxDeAI
OxDeAI@oxdeai·
2/ Turns out a signed decision ≠ safe execution. We hit:
- approval reused for slightly different actions
- state drift (approved under X, executed under Y)
- cross-service replay
- replay not enforced at execution
We had proof of a decision. Not control of execution.
1 · 0 · 1 · 51