Monica King

160 posts

@monicaking

Building the execution authority layer for autonomous systems. The commit boundary will exist. Founder, Coherence Protocol.

San Francisco, CA · Joined March 2009
73 Following · 47 Followers
Monica King @monicaking
AI agents at GTC are moving from answering → acting. They:
– run asynchronously
– delegate to subagents
– call systems & people
– execute multi-step tasks

That’s the shift. But here’s the gap: Who decides if an action should execute?

Orchestration ≠ authorization. Delegation ≠ enforcement.

As agents move into real workflows, the risk changes: Not bad answers — bad execution.

The next layer of the stack isn’t more capable agents. It’s a commit boundary where every action is:
– evaluated
– admitted or blocked
– recorded with evidence
– replayable

Because: Planning is not permission.
0 replies · 0 reposts · 0 likes · 6 views
Monica King @monicaking
Most AI systems can act. Almost none can answer for their actions. The capability explosion in AI is exponential. The authorization layer has been flat for decades. That gap is not a feature request. It’s the same structural absence that produced Stripe, Okta, and Cloudflare. Coherence is the execution authority layer. The boundary will exist. We’re defining it first.
0 replies · 0 reposts · 0 likes · 34 views
Monica King @monicaking
@chamath I believe ice cream makes people eat more consciously than cake does
0 replies · 0 reposts · 0 likes · 196 views
Chamath Palihapitiya @chamath
I’ve only met Warren Buffett once. Here is what he ate:
1) Caesar salad with a chicken breast - didn’t eat any of it. Maybe a few lettuce leaves
2) 3 cherry cokes
3) an enormous vanilla sundae
Brian 🔰 @brianwut

harvard can't figure out why ice cream eaters are healthier. it's because ice cream is the only food nobody eats out of obligation or guilt. the food diary is an accidental personality test and "eats ice cream on purpose, reports it honestly" is just measuring internal locus of control

167 replies · 76 reposts · 2.6K likes · 1.1M views
Monica King @monicaking
Exactly. The authority layer has to sit upstream of actuation, not adjacent to it. In our architecture the execution request cannot reach the actuator without a valid GateReceipt issued at the commit boundary. No receipt → no execution. The enforcement path is deterministic and independent of the model runtime.
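For concreteness, here is a minimal sketch of what an enforcement path like this could look like. Every name here (the GateReceipt fields, CommitGate, Actuator) and the plain HMAC signing scheme are illustrative assumptions, not Coherence's published API:

```python
# Illustrative sketch only: a commit boundary that issues signed receipts,
# and an actuator that refuses any request arriving without one.
import hmac, hashlib, time
from dataclasses import dataclass

GATE_KEY = b"gate-signing-key"  # placeholder; a real deployment would use an HSM/KMS

@dataclass(frozen=True)
class GateReceipt:
    action_id: str
    issued_at: float
    signature: str

def _sign(action_id: str, issued_at: float) -> str:
    msg = f"{action_id}|{issued_at}".encode()
    return hmac.new(GATE_KEY, msg, hashlib.sha256).hexdigest()

class CommitGate:
    """Evaluates an action against a deterministic policy, outside the model runtime."""
    def __init__(self, policy):
        self.policy = policy

    def evaluate(self, action: dict):
        if not self.policy(action):
            return None                      # blocked: no receipt is ever issued
        now = time.time()
        return GateReceipt(action["id"], now, _sign(action["id"], now))

class Actuator:
    """The only path to execution; verifies the receipt before acting."""
    def execute(self, action: dict, receipt) -> None:
        if receipt is None or receipt.action_id != action["id"]:
            raise PermissionError("no valid GateReceipt: execution refused")
        expected = _sign(receipt.action_id, receipt.issued_at)
        if not hmac.compare_digest(receipt.signature, expected):
            raise PermissionError("forged receipt: execution refused")
        print(f"executing {action['id']}")

# Usage: admit only low-risk actions.
gate = CommitGate(policy=lambda a: a.get("risk", 1.0) < 0.5)
action = {"id": "open-valve-7", "risk": 0.2}
Actuator().execute(action, gate.evaluate(action))
```

The design point is that the signature check lives in the actuator, not the model: nothing the model emits can reach actuation without a receipt the gate actually issued.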
1 reply · 0 reposts · 1 like · 21 views
Minerva Infra @MinervaRuntime
Separating generation from execution is the right instinct. The critical detail is whether that “authority layer” is independent in state and thresholds, or just another service in the same optimization fabric. If execution admissibility can’t deterministically refuse under load, the separation is architectural theater. Real authority has to sit upstream of actuation, not adjacent to it.
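One way to read "deterministically refuse under load" in code: the gate owns its own capacity and deadline, and exhausting either produces an explicit refusal rather than a best-effort approval. A hypothetical sketch (AdmissionGate and its parameters are invented for illustration):

```python
# Sketch of fail-closed admission: when the gate's own evaluation budget
# runs out, the answer is a deterministic "no", never a degraded "probably".
import threading, time

class AdmissionGate:
    def __init__(self, max_inflight: int, eval_timeout_s: float):
        self._sem = threading.BoundedSemaphore(max_inflight)  # gate-owned state
        self._timeout = eval_timeout_s

    def admit(self, action: dict, policy) -> bool:
        # No free evaluation slot means refusal, not queuing behind the load.
        if not self._sem.acquire(blocking=False):
            return False
        try:
            start = time.monotonic()
            verdict = bool(policy(action))
            # A verdict that arrives past the deadline is treated as a refusal.
            if time.monotonic() - start > self._timeout:
                return False
            return verdict
        finally:
            self._sem.release()
```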
1 reply · 0 reposts · 0 likes · 22 views
Monica King @monicaking
We’ve reached the point where AI can decide faster than organizations can authorize. Reasoning is no longer the bottleneck. Execution admissibility is. In every flight-critical and industrial system, generation and execution are separated. Commands can be generated freely. Execution requires independent authority. AI systems are now crossing that same boundary. We built the runtime authority layer that determines what is allowed to execute. coherenceprotocol.ai
1 reply · 0 reposts · 1 like · 72 views
Monica King @monicaking
Economic complexity used to scale sequentially. Petrochemicals → plastics → machinery. Chips → systems → platforms. AI compresses capability. But as execution becomes autonomous, the bottleneck shifts. Not intelligence. Authority. The next era of complexity will be determined by what systems are authorized to commit — not just what they can compute.
2 replies · 0 reposts · 0 likes · 59 views
Monica King @monicaking
@nomos_paradox Tokenization isn’t a feature—it’s a system. The data layer is where trust either compounds or breaks.
0 replies · 0 reposts · 0 likes · 15 views
Niklas KunkΞl @nomos_paradox
The tokenization stack is starting to emerge with Chronicle as its data layer. 2026 is shaping up to be THE year for tokenization!
Grove Finance @grovedotfinance

Institutional credit doesn't tokenize itself. Behind every Grove allocation sits a stack of infrastructure partners, each performing a discrete function:

Tokenization & Issuance: @Centrifuge, @Securitize
Legal structuring, compliance, and onchain representation of real-world assets.

Asset Management: @JHIAdvisors, @apolloglobal, @galaxyhq, @anemoycapital
Credit underwriting, portfolio construction, and risk management.

Settlement Rails: @ethereum, @avax, @plumenetwork
Onchain execution with the security and finality institutional capital requires.

Price Verification: @ChronicleLabs
Transparent oracle infrastructure for real-time position verification.

Grove coordinates capital across this stack, maintaining the credit structure while upgrading the infrastructure.

3 replies · 0 reposts · 26 likes · 1.9K views
Monica King @monicaking
We’re past the phase where AI capability is the bottleneck. The hard problem now is admissibility: when intelligence crosses into irreversible action, who decides, who verifies, and who is accountable. Models will keep getting better. Systems will fail or succeed based on whether they can enforce intent at the moment of execution. That’s the layer that matters next.
0 replies · 0 reposts · 0 likes · 44 views
Monica King @monicaking
Most “Layers of AI” diagrams still miss the layer that actually matters in the real world.

They stack intelligence: perception → models → agents → tools → execution.

But once AI systems move from suggesting to acting, the hardest problem isn’t cognition. It’s permission.

Between reasoning and control there is a missing layer: Who decides whether an AI-generated action is allowed — in this moment, under real legal, safety, and operational constraints?

That decision:
• cannot be probabilistic
• cannot be post-hoc
• cannot be bypassable

It has to be deterministic, enforceable, and auditable before execution.

Control ≠ permission. Reasoning ≠ authority.

The AI systems that scale into factories, vehicles, infrastructure, and cities won’t win on better models alone — they’ll win on admissibility layers that govern action, not just intelligence.

CES made this very clear.

#IndustrialAI #AgenticAI #PhysicalAI #Infrastructure #NVIDIA
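"Cannot be post-hoc" and "auditable" together imply a tamper-evident record written before execution. One possible shape, sketched with hash chaining; the record format here is invented for illustration, not a specific product's:

```python
# Sketch: each admissibility decision is recorded, hash-chained to the previous
# record, *before* execution, so the trail is tamper-evident and replayable.
import hashlib, json, time

class DecisionLog:
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64          # genesis value for the chain

    def record(self, action: dict, allowed: bool, rule: str) -> dict:
        body = {
            "t": time.time(),
            "action": action,
            "allowed": allowed,
            "rule": rule,
            "prev": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "hash": digest}
        self.records.append(entry)
        self._prev_hash = digest
        return entry

    def verify(self) -> bool:
        """Replay the chain; any edited record breaks every later hash."""
        prev = "0" * 64
        for e in self.records:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```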
0 replies · 0 reposts · 1 like · 54 views
Monica King @monicaking
This is the inflection point. As reasoning models like Alpamayo become open and end-to-end, intelligence becomes abundant—but permission to act does not. Coherence Protocol is the Judgment Layer: we authorize or withhold real-world action at the commit boundary, turning probabilistic reasoning into deterministic, auditable permission for physical and agentic systems. #CES #NVIDIA #PhysicalAI #AgenticAI #JudgmentLayer
0 replies · 0 reposts · 1 like · 50 views
Monica King @monicaking
Ending stealth.

AI reasoning has crossed the capability threshold. Deployment is where it breaks.

Between verified reasoning and irreversible action, there is a missing layer — one that decides whether it is admissible to act now, given live context, drift, and constraint. We’re defining that layer.

Coherence builds the Judgment Layer — the deterministic boundary between AI reasoning and irreversible action.

We don’t improve intelligence. We don’t tune prompts. We don’t monitor after the fact. We provide deterministic admissibility before commit — a non-bypassable gate that turns catastrophic failure into bounded delay.

This is infrastructure, not a feature. The Judgment Layer applies explicit rules to decide whether action is allowed. Coherence senses. Judgment decides.

Full architecture brief and technical implementation details launch Wednesday.

The boundary is defined. Execution follows.

— Monica, Coherence
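Mechanically, "turns catastrophic failure into bounded delay" could mean that a blocked action is never silently dropped or force-executed; it is parked with an explicit re-evaluation deadline. A hypothetical sketch of that shape (all names invented here, ahead of the architecture brief):

```python
# Sketch: admissibility returns one of three explicit outcomes; "defer" parks
# the action with a bounded deadline instead of executing or discarding it.
import time
from dataclasses import dataclass, field

@dataclass
class Verdict:
    outcome: str                  # "admit" | "deny" | "defer"
    reason: str
    retry_after_s: float = 0.0    # bound on the delay when deferred

@dataclass
class JudgmentGate:
    rules: list                   # ordered (predicate, Verdict) pairs
    held: list = field(default_factory=list)

    def decide(self, action: dict) -> Verdict:
        for predicate, verdict in self.rules:
            if predicate(action):
                if verdict.outcome == "defer":
                    self.held.append((time.time() + verdict.retry_after_s, action))
                return verdict
        return Verdict("deny", "no rule admitted this action")  # default-closed

# Example rule set: admit low-risk actions, defer ambiguous ones for 30s review.
gate = JudgmentGate(rules=[
    (lambda a: a.get("risk", 1.0) < 0.2, Verdict("admit", "low risk")),
    (lambda a: a.get("risk", 1.0) < 0.6, Verdict("defer", "needs review", 30.0)),
])
```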
1 reply · 0 reposts · 0 likes · 51 views
Monica King @monicaking
As personalization scales, coherence becomes the limiting system property. Models can adapt. Memory can accumulate. But without continuity, systems drift — silently, continuously, and at scale. We built an infrastructure layer that makes drift observable and correctable in real time. Alignment isn’t a feature. It’s a runtime property.
0 replies · 0 reposts · 0 likes · 48 views
Monica King @monicaking
Scaling gives us capability. World models give us plausibility. But neither gives us trust. The next phase of AI won’t be decided by bigger models or even better internal simulations—it will be decided by whether systems can maintain coherence with human intent under uncertainty. Real intelligence isn’t just predicting what could happen; it’s knowing what should happen, for whom, and why—especially when signals conflict. As autonomy increases, the missing layer isn’t more reasoning, but observability: continuous verification that behavior remains aligned with purpose, context, and meaning. World models explain reality. Coherence governs action within it.
0 replies · 0 reposts · 0 likes · 40 views
Monica King @monicaking
Humanization is the wrong goal. Dogs don’t reason like humans—but they understand humans remarkably well because they sense state, not words. Most AI systems today operate at the semantic layer: they hear, label, and respond. We’re building systems that operate at the pre-semantic layer—detecting what’s unsaid, unresolved, or drifting before it becomes explicit.
0 replies · 0 reposts · 0 likes · 37 views
Monica King @monicaking
Every human comes from a different starting point. Different histories. Different stress thresholds. Different ways of making meaning. That variability is not a flaw — it’s the reality of being human.

The challenge is that understanding across those differences doesn’t scale easily. Human-to-human alignment breaks down under time pressure, fatigue, and incomplete context. Human-to-system alignment breaks down even faster.

This is where AI becomes necessary — not because it replaces judgment, but because it can continuously track micro-variations in behavior, timing, and response that humans can’t hold in real time.

AI doesn’t understand humans by knowing facts about us. It understands by observing change — millisecond by millisecond — and adjusting to how meaning, attention, and behavior shift.

That’s what makes alignment possible:
• human to human
• human to system
• human to AI

Not by forcing sameness — but by respecting difference and maintaining coherence across it.

In a world where everyone comes from a different seed, alignment isn’t about agreement. It’s about continuously staying in sync despite variability. That’s the frontier we’re building toward.
1 reply · 0 reposts · 0 likes · 46 views
Monica King @monicaking
An engineer can become a scientist. A scientist can’t always become an engineer. Reason? Science seeks truth. Engineering survives reality.
0 replies · 0 reposts · 0 likes · 30 views
Monica King @monicaking
@connordavis_ai Causal reasoning isn’t the hard part. Holding human intent stable while reasoning — that’s the real problem.
0 replies · 0 reposts · 0 likes · 5 views
Connor Davis @connordavis_ai
Holy shit… this paper might be the most important shift in how we use LLMs this entire year.

“Large Causal Models from Large Language Models.”

It shows you can grow full causal models directly out of an LLM: not approximations, not vibes, but actual causal graphs, counterfactuals, interventions, and constraint-checked structures.

And the way they do it is wild: instead of training a specialized causal model, they interrogate the LLM like a scientist:
→ extract a candidate causal graph from text
→ ask the model to check conditional independencies
→ detect contradictions
→ revise the structure
→ test counterfactuals and interventional predictions
→ iterate until the causal model stabilizes

The result is something we’ve never had before: a causal system built inside the LLM using its own latent world knowledge.

Across benchmarks (synthetic, real-world, messy domains), these LCMs beat classical causal discovery methods because they pull from the LLM’s massive prior knowledge instead of just local correlations.

And the counterfactual reasoning? Shockingly strong. The model can answer “what if” questions that standard algorithms completely fail on, simply because it already “knows” things about the world those algorithms can’t infer from data alone.

This paper hints at a future where LLMs aren’t just pattern machines. They become causal engines: systems that form, test, and refine structural explanations of reality.

If this scales, every field that relies on causal inference (economics, medicine, policy, science) is about to get rewritten.

LLMs won’t just tell you what happens. They’ll tell you why.
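As summarized above, the procedure is an interrogation loop. A rough skeleton of that loop, as this thread describes it: ask_llm is a stub for a real model call, the independence check is a crude stand-in for a proper d-separation test, and none of this is the paper's actual code.

```python
import networkx as nx

def ask_llm(prompt: str) -> str:
    """Stub for a real LLM call; wire up an actual model to run this."""
    raise NotImplementedError

def extract_candidate_graph(domain_text: str) -> nx.DiGraph:
    # Step 1: ask the model to propose cause -> effect edges from its priors.
    reply = ask_llm(f"List causal edges as 'A -> B' lines for: {domain_text}")
    g = nx.DiGraph()
    for line in reply.splitlines():
        if "->" in line:
            a, b = (s.strip() for s in line.split("->", 1))
            g.add_edge(a, b)
    return g

def find_contradiction(g: nx.DiGraph):
    # Steps 2-3: for each independence the graph implies (crudely: no direct
    # edge either way), ask the model whether it holds; a "no" is a contradiction.
    for a in g.nodes:
        for b in g.nodes:
            if a != b and not g.has_edge(a, b) and not g.has_edge(b, a):
                verdict = ask_llm(
                    f"Graph edges: {list(g.edges)}. Are '{a}' and '{b}' "
                    f"conditionally independent given their parents? yes/no"
                )
                if verdict.strip().lower().startswith("no"):
                    return (a, b)
    return None

def build_causal_model(domain_text: str, max_rounds: int = 10) -> nx.DiGraph:
    # Steps 4-6: revise the structure and iterate until it stabilizes.
    g = extract_candidate_graph(domain_text)
    for _ in range(max_rounds):
        clash = find_contradiction(g)
        if clash is None:
            return g                 # no remaining contradictions: stable
        g.add_edge(*clash)           # simplest revision: add the disputed edge
    return g
```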
56 replies · 138 reposts · 796 likes · 44K views
Monica King @monicaking
“The next breakthrough in AI won’t come from bigger models. It will come from systems that can hold human intent without drifting.”

Everyone in AI keeps repeating the same story: More agents. More personalization. More orchestration. More abundance.

But here’s what the industry keeps overlooking:

Personalization is not intelligence. Orchestration is not coherence. And multi-agent systems are still heuristics pretending to be specialized.

None of this holds under drift. None of this stabilizes under ambiguity. None of this behaves when stakes become real.

We keep scaling capability and ignoring the actual failure mode: the loss of alignment between human intent and machine behavior.

If AI is going to move from “useful” to “dependable,” we need a new layer — one that the current architectures don’t have:

⭐ A continuous reasoning layer anchored to human intent.

Not a memory wrapper. Not a RAG patch. Not another agent executing another chain. A real substrate with three primitives:

1. Legibility
Translating human intent into stable machine state. Not preferences — meaning.

2. Interoperability
Ensuring intent remains consistent across agents, models, tasks, and environments.

3. Stabilization
Preventing drift, distortion, and behavioral collapse at runtime.

This is the actual evolution of AI systems: Automation → Orchestration → Legible, Coherent Intelligence.

Because: Abundance has no meaning without alignment. Personalization has no meaning without coherence. Intelligence has no value without trust.

The future won’t belong to whoever adds the most agents — it will belong to whoever builds the coherence layer that keeps intelligent systems stable in motion.

That’s the missing architecture. That’s the real frontier.
0 replies · 0 reposts · 0 likes · 27 views
Monica King @monicaking
The story that changed how I think about inflection points:

When NVIDIA was a young company, they had 30 days of cash left. Microsoft changed the DirectX standard overnight — turning their nearly-finished chip into instant scrap metal. NVIDIA was done.

But one person didn’t look at the situation — he looked at the system. Irimajiri-san, a former Honda F1 engineer and SEGA executive, understood what NVIDIA was really building. He wired $5M and personally helped them buy the emulator that kept the company alive.

Jensen said later: “Without Irimajiri-san, there would be no NVIDIA.”

That moment has stayed with me. Because the AI world is now facing its own “Irimajiri moment.”

Models are accelerating faster than we can verify intent. Agentic systems are emerging without a trust layer. Continuity is breaking faster than governance can catch up.

This is exactly why we built Coherence Protocol — the trust and observability layer ensuring AI behaves coherently with human intent in real time.

Every new era depends on a single alignment event. NVIDIA had theirs in 1995. AI will have its own in the 2020s.

And some company — maybe ours — will be remembered as the system that held the world together while the next architecture was built.
0 replies · 0 reposts · 0 likes · 38 views