Murat H. CANDAN

2.1K posts

@Mhcandan

Cyber security enthusiast

Joined September 2012
359 Following · 327 Followers

Murat H. CANDAN @Mhcandan
Guardrails as compliance theater: organizations treat them as constraints on AI rather than accountability architecture. The real test—are your kill conditions and human override authority structurally embedded in decisions, or bolted on after? Girish Joshi in Forbes Tech Council surfaces why most enterprises are still building guardrails around the system, not into it. forbes.com/councils/forbe…

Murat H. CANDAN @Mhcandan
CascadeDebate routes queries across LLM cascades based on confidence signals — elegant cost optimization that masks a governance gap: who decides the escalation threshold, and what happens when the system's confidence diverges from correctness? You're delegating authority to a metric you may not control. The real question isn't cost savings — it's whether your escalation design ensures humans review high-stakes decisions before commitment, not after. arxiv.org/abs/2604.12262
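A minimal sketch of where that threshold authority lives, assuming a hypothetical two-tier cascade. The `Answer` type, function names, and threshold values are illustrative assumptions, not CascadeDebate's interface:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    confidence: float  # model-reported; the governance risk is that this can diverge from correctness

class PendingHumanReview(Exception):
    """Blocks commitment until a human clears the review queue."""
    def __init__(self, query: str, answer: Answer):
        super().__init__(f"queued for human review: {query!r}")
        self.query, self.answer = query, answer

def route(query: str,
          small_model: Callable[[str], Answer],
          large_model: Callable[[str], Answer],
          escalate_threshold: float = 0.7,   # who owns this number, and who may change it?
          review_threshold: float = 0.9,
          high_stakes: bool = False) -> Answer:
    """Cascade routing: cheap model first, larger model on low confidence.
    High-stakes queries always stop for human review before commitment."""
    answer = small_model(query)
    if answer.confidence < escalate_threshold:
        answer = large_model(query)
    if high_stakes or answer.confidence < review_threshold:
        raise PendingHumanReview(query, answer)  # review happens before, not after
    return answer
```

Raising instead of logging is the design choice: commitment is structurally blocked until a human clears the queue.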

Murat H. CANDAN @Mhcandan
Silent failures in AI agents—where systems degrade without alerts—expose a governance gap: most organizations lack kill conditions or regression detection. You've deployed autonomous systems. Do you know what failure looks like? The article maps failure taxonomies and detection architectures that separate real oversight by design from assumption-based governance. Via @milesk_33. medium.com/@milesk_33/the-silent-failures-when-ai-agents-break-without-alerts-23a050488b16
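A sketch of regression detection for failures that never throw, assuming a success-rate baseline pinned before deployment; the window and tolerance numbers are illustrative, not the article's:

```python
from collections import deque

class SilentFailureDetector:
    """Fires a kill condition when a rolling success rate drops below a
    pinned baseline, catching degradation that never raises an exception."""
    def __init__(self, baseline_success: float, window: int = 200, tolerance: float = 0.10):
        self.baseline = baseline_success      # measured before deployment, not assumed
        self.outcomes = deque(maxlen=window)  # most recent task outcomes
        self.tolerance = tolerance

    def record(self, success: bool) -> None:
        self.outcomes.append(success)

    def should_kill(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence to judge yet
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate < self.baseline - self.tolerance
```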

Murat H. CANDAN @Mhcandan
Open source governance tools are proliferating, but tooling is not governance. When these tools flag risk, who decides what happens next? If you're evaluating agent platforms without first designing accountability architecture — kill authority, veto power, escalation paths — you're solving the wrong problem. DEV Community's roundup is useful for technical due diligence, but don't confuse capability mapping with operational readiness. dev.to/jagmarques/5-o…

Murat H. CANDAN @Mhcandan
MCP's root architectural flaw puts 200k servers at risk because it lets AI systems request access to tools without host organizations designing kill conditions or earned-autonomy thresholds. Anthropic declined to patch it. This isn't a CVE problem—it's a delegation-of-authority problem. When the AI can ask for access and the organization's answer is 'we'll patch tools,' the governance model has already failed. The Register reports the details. theregister.com/2026/04/16/ant…
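One way to treat an agent's access request as a governance event rather than a patch target: a deny-by-default sketch with hypothetical names, not MCP's actual protocol:

```python
import time

class ToolAccessGate:
    """Deny-by-default gate for agent tool-access requests: every grant is
    human-approved, scoped to one (agent, tool) pair, and expires."""
    def __init__(self):
        self._grants: dict[tuple[str, str], float] = {}  # (agent, tool) -> expiry timestamp

    def allowed(self, agent: str, tool: str) -> bool:
        return time.time() < self._grants.get((agent, tool), 0.0)  # unapproved == denied

    def approve(self, agent: str, tool: str, ttl_seconds: float, approver: str) -> None:
        # 'approver' belongs in an audit log; elided here for brevity
        self._grants[(agent, tool)] = time.time() + ttl_seconds

    def revoke(self, agent: str, tool: str) -> None:
        self._grants.pop((agent, tool), None)  # the kill condition, one call away
```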

Murat H. CANDAN @Mhcandan
Insurance underwriting exposes a structural gap: most organizations bolt oversight onto autonomous agents after deployment. This arXiv paper embeds adversarial self-critique into the agent itself—making it surface its own contradictions before human review. The accountability shift matters: the kill condition isn't external monitoring, it's built into the agent's reasoning architecture. In regulated domains, that's the difference between real oversight and theater. arxiv.org/html/2602.1321…
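A sketch of what putting the critique inside the decision path could look like; `draft` and `critique` are hypothetical hooks standing in for the paper's architecture:

```python
from typing import Callable

def decide_with_self_critique(case: str,
                              draft: Callable[[str], str],
                              critique: Callable[[str, str], list[str]]) -> dict:
    """Adversarial self-critique as a reasoning step, not external monitoring:
    `draft` proposes a decision, `critique` attacks it, and nothing leaves
    this function without the objections attached for the human reviewer."""
    decision = draft(case)
    objections = critique(case, decision)
    return {
        "decision": decision,
        "objections": objections,
        "requires_human_review": bool(objections),  # contradictions block auto-commit
    }
```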

Murat H. CANDAN @Mhcandan
AgentForge embeds execution grounding as a prerequisite for agent autonomy, not an afterthought. Sandbox enforcement and mandatory verification loops are architectural requirements, not optional safeguards. Most organizations reverse this: grant autonomy first, add oversight later. The five-agent decomposition also raises a structural question—does specialization clarify accountability or diffuse it? Who owns the kill condition when verification fails? arXiv. arxiv.org/html/2604.13120
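A sketch of verification-before-commit as an architectural invariant; `sandbox_run`, `verify`, and `commit` are illustrative hooks, not AgentForge's interfaces:

```python
class VerificationFailed(Exception):
    """A failed verification needs a named owner, not a silent retry."""

def execute_grounded(action, sandbox_run, verify, commit, kill_owner: str):
    """Execution grounding as a prerequisite: no effect reaches production
    until the sandboxed result passes mandatory verification."""
    result = sandbox_run(action)       # requirement 1: isolation before effect
    if not verify(action, result):     # requirement 2: verification is not optional
        raise VerificationFailed(f"{action!r} failed verification; owner: {kill_owner}")
    return commit(result)              # autonomy is earned per action, not granted upfront
```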

Murat H. CANDAN @Mhcandan
LangChain's agent harness breakdown reveals a governance blind spot: technical architecture ≠ operational control. A harness wraps the model with state, tool execution, and logic—but who decides when it acts, what triggers escalation, or how you kill it mid-execution? Most teams treat harness design as sufficient governance. It isn't. langchain.com/blog/the-anato…
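A sketch of the missing operational layer: a harness loop that checks a human-owned kill switch between steps. The model/tool contract here is an assumption for illustration, not LangChain's API:

```python
import threading

class AgentHarness:
    """State, tool execution, and control logic wrapped around a model,
    plus the piece governance needs: a kill switch owned outside the loop
    and checked between every step."""
    def __init__(self, model, tools: dict, max_steps: int = 20):
        self.model, self.tools, self.max_steps = model, tools, max_steps
        self.state: list = []
        self.kill = threading.Event()  # a human sets this, not the harness

    def run(self, task: str) -> dict:
        for _ in range(self.max_steps):
            if self.kill.is_set():
                return {"status": "killed", "state": self.state}  # mid-execution stop
            step = self.model(task, self.state)        # model proposes a step
            if step["type"] == "final":
                return {"status": "done", "answer": step["answer"]}
            observation = self.tools[step["tool"]](step["args"])  # harness executes
            self.state.append((step, observation))
        return {"status": "step_budget_exhausted", "state": self.state}
```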

Murat H. CANDAN @Mhcandan
TechRadar flags the governance gap: organizations deploying agentic AI without designing accountability architecture. When an autonomous agent decides, who owns the outcome? Most assume policy suffices. It doesn't. You need explicit escalation paths, kill conditions, and earned autonomy thresholds built in before deployment. Either your AI operating model includes structured human oversight, or you've delegated authority without controls to constrain it. techradar.com/pro/the-leader…

Murat H. CANDAN @Mhcandan
Forrester's AEGIS framework exposes a structural gap: most enterprises apply application security to autonomous systems designed to make decisions without human approval cycles. If your accountability architecture assumes humans review before execution, your governance model is misaligned with agentic AI's distributed decision-making. This requires redesigned escalation paths and kill conditions, not compliance theater. forrester.com/technology/aeg…

Murat H. CANDAN @Mhcandan
MPAC enables multi-agent coordination across organizational boundaries. The governance gap: who holds decision authority when agents optimizing for different principals conflict? Most multi-agent architectures solve technical interoperability. Few design for accountability interoperability — structural clarity about which principal owns which agent's actions and escalation paths when agents collide. Essential reading if you're moving toward agent-based operating models. arxiv.org/html/2604.09744
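Accountability interoperability in its smallest form might look like this; an illustrative registry, not MPAC's coordination protocol:

```python
class AccountabilityRegistry:
    """Maps every agent to exactly one owning principal, and every principal
    to a named human, so a cross-principal conflict resolves to people."""
    def __init__(self):
        self._owner: dict[str, str] = {}       # agent -> principal
        self._escalation: dict[str, str] = {}  # principal -> human contact

    def register(self, agent: str, principal: str, human_contact: str) -> None:
        self._owner[agent] = principal
        self._escalation[principal] = human_contact

    def escalate_conflict(self, agent_a: str, agent_b: str) -> set[str]:
        """Two agents optimizing for different principals collide: return the
        humans who must resolve it. A KeyError here is a registration failure."""
        principals = {self._owner[a] for a in (agent_a, agent_b)}
        return {self._escalation[p] for p in principals}
```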

Murat H. CANDAN @Mhcandan
Microsoft's Foundry addresses security architecture for agentic AI. Most organizations miss the harder problem: governance architecture. Threat modeling and observability matter only if you've designed accountability structures — kill conditions, escalation paths, earned autonomy — before deployment. Foundry is a foundation. Your organizational readiness is the constraint. techcommunity.microsoft.com/blog/azuredevc…

Murat H. CANDAN @Mhcandan
GitGuardian exposes a structural gap: AI agents operate with infrastructure credentials but no authentication architecture constraining what they can do. The Replit incident reveals delegation without accountability—granting autonomous systems production access without credential isolation and revocation capability. Authentication isn't hygiene; it's the enforcement layer keeping human authority intact when agents act. #AIGovernance #OperationalSecurity blog.gitguardian.com/ai-agents-auth…
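A sketch of the credential-isolation shape the post argues for: short-lived, scoped, revocable tokens per agent. Names and the TTL are assumptions, not GitGuardian's product:

```python
import secrets
import time

class AgentCredentialBroker:
    """Short-lived, scoped, revocable tokens per agent instead of shared
    infrastructure credentials; revocation is the enforcement layer."""
    def __init__(self):
        self._tokens: dict[str, dict] = {}

    def issue(self, agent: str, scopes: set[str], ttl_seconds: int = 900) -> str:
        token = secrets.token_urlsafe(32)
        self._tokens[token] = {"agent": agent, "scopes": scopes,
                               "expires": time.time() + ttl_seconds}
        return token

    def check(self, token: str, scope: str) -> bool:
        meta = self._tokens.get(token)
        return bool(meta) and scope in meta["scopes"] and time.time() < meta["expires"]

    def revoke_agent(self, agent: str) -> int:
        """Per-agent kill condition: everything it holds, revoked in one call."""
        doomed = [t for t, m in self._tokens.items() if m["agent"] == agent]
        for t in doomed:
            del self._tokens[t]
        return len(doomed)
```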

Murat H. CANDAN @Mhcandan
Multi-agent architectures are proliferating faster than most organizations have designed accountability for them. When you delegate authority across multiple AI agents, who owns the decision when they conflict? Who escalates when one acts outside bounds? DEV Community maps the technical patterns. The governance gap is who retains real authority—not nominal oversight. Read to understand the options, then ask: which architecture lets humans maintain actual control? dev.to/agentsindex/mu…

Murat H. CANDAN @Mhcandan
Memory architecture for autonomous agents is a control surface. If agents retain context without audit trails or human-reviewable snapshots, you've built blind spots into your oversight model. Who validates what the agent remembers? Who can inspect or reset memory states? Most organizations go dark here. towardsdatascience.com/a-practical-gu…
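A sketch of memory as a control surface rather than a black box; a toy in-memory store, with persistence and access control left out:

```python
import copy
import time

class AuditableMemory:
    """Every write leaves a human-reviewable trail; any authorized human can
    snapshot, inspect, or reset what the agent remembers."""
    def __init__(self):
        self._state: dict = {}
        self._audit: list[tuple[float, str, str]] = []  # (timestamp, key, value summary)
        self._snapshots: dict[str, dict] = {}

    def write(self, key: str, value) -> None:
        self._state[key] = value
        self._audit.append((time.time(), key, repr(value)[:120]))

    def snapshot(self, label: str) -> None:
        self._snapshots[label] = copy.deepcopy(self._state)

    def reset_to(self, label: str) -> None:
        """The concrete answer to 'who can reset memory states?'"""
        self._state = copy.deepcopy(self._snapshots[label])
```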

Murat H. CANDAN @Mhcandan
Most organizations treat 'human-in-the-loop' as a design pattern, not governance. Booboone's technical guide on approval gates reveals the gap: you can pause an agent, but who decides when to resume? What does the reviewer actually see? This is where oversight becomes theater—technically present, organizationally absent. The kill condition is built. The escalation authority rarely is. booboone.com/building-a-hum…
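A sketch of an approval gate that answers both questions up front: what the reviewer sees, and who holds resume authority. Field names are illustrative, not Booboone's schema:

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalRequest:
    """What the reviewer actually sees, and who is allowed to resume."""
    action: str                      # the proposed step, verbatim
    rationale: str                   # the agent's stated reasoning
    blast_radius: str                # what changes if this proceeds
    resume_authority: str            # named role with resume power, decided up front
    approvals: list = field(default_factory=list)

    def approve(self, reviewer: str) -> None:
        if reviewer != self.resume_authority:
            raise PermissionError(
                f"{reviewer} lacks resume authority (held by {self.resume_authority})")
        self.approvals.append(reviewer)
```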

Murat H. CANDAN @Mhcandan
If your AI advisory system optimizes for agreement rather than surfacing risk, you have an echo chamber, not governance. A March 2026 Science study confirms sycophancy is structural across major chatbots. Most organizations deploying agentic AI lack escalation design or kill conditions that trigger when the system validates instead of challenges. You need to know if your AI tells you what you want to hear. Robo Rhythms reports. roborhythms.com/ai-sycophancy-…
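A sketch of a trigger for that kill condition: monitor the agreement rate and escalate when the system stops challenging. The window and threshold are illustrative, and position matching would need something better than string equality in practice:

```python
from collections import deque

class SycophancyMonitor:
    """Tracks how often the advisory system agrees with the user's stated
    position; sustained near-total agreement triggers escalation."""
    def __init__(self, window: int = 100, max_agreement_rate: float = 0.95):
        self._agreements = deque(maxlen=window)
        self._max = max_agreement_rate

    def record(self, user_position: str, model_position: str) -> None:
        # naive exact match; real comparison needs semantic position matching
        self._agreements.append(user_position == model_position)

    def echo_chamber_suspected(self) -> bool:
        if len(self._agreements) < self._agreements.maxlen:
            return False
        return sum(self._agreements) / len(self._agreements) > self._max
```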

Murat H. CANDAN @Mhcandan
Most enterprises treat AI governance as compliance theater. Forbes surfaces the harder truth: governance is architecture—decision authority, escalation paths, kill conditions. If you haven't mapped who owns outcomes, when humans reassess, and what triggers shutdown, you don't have governance. You have hope. forbes.com/councils/forbe…

Murat H. CANDAN @Mhcandan
The gap between agent architecture and agent governance is where most organizations fail. DEV Community's guide covers technical resilience patterns—but technical control isn't organizational accountability. Before architecting for autonomy, design for delegation: who escalates when the agent acts outside bounds? What are the kill conditions? Most teams design the agent's decision tree before the organization's. That's backwards. Read for technical patterns, but treat it as a prerequisite to oversight by design. dev.to/topuzas/ai-age…

Murat H. CANDAN @Mhcandan
Multi-agent debate protocols improve reasoning through structured disagreement—but who decides when debate ends and the decision is final? If agents converge on a wrong answer, who is accountable? This arXiv paper examines protocol design, but organizational readiness requires escalation design and human override authority built in from deployment, not after. The real governance question: does your decision architecture constrain multi-agent coordination, or just observe it? arxiv.org/html/2603.28813
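A sketch of debate with an explicit termination owner; `agents`, `judge`, and `human_override` are hypothetical callables, not the paper's protocol:

```python
def run_debate(agents, judge, question: str, max_rounds: int = 3, human_override=None):
    """Structured disagreement with an explicit termination owner: the debate
    ends after max_rounds, or earlier if a human intervenes with a ruling."""
    transcript: list = []
    for round_no in range(max_rounds):
        for agent in agents:
            transcript.append(agent(question, transcript))
        if human_override is not None:
            ruling = human_override(transcript)  # returns None to let debate continue
            if ruling is not None:
                return {"answer": ruling, "decided_by": "human", "rounds": round_no + 1}
    # convergence is not correctness: the judge's verdict still has a named owner
    return {"answer": judge(question, transcript), "decided_by": "judge", "rounds": max_rounds}
```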