Michael Holdmann

4.5K posts

@mholdmann

Passionate about DLT, Blockchain, Cryptocurrency, Security and Privacy, Founder & CEO @PrasagaOfficial https://t.co/0iBENJ5762

Pacifica, CA · Joined April 2009
1.8K Following · 1K Followers
Michael Holdmann retweeted
Michael Holdmann @PrasagaCEO ·
AI hallucination isn’t a model problem. It’s an architecture problem.

Today we’re introducing a new construct: PSAG (Persistent State-Augmented Generation). It changes how AI handles truth.

For the past few years, the industry has tried to fix hallucination with better prompting, better models, and better retrieval. To a point, it works. Until you reach the places where correctness actually matters: governance, compliance, accountability. That’s where everything breaks. Not because the models are weak, but because they’re still guessing. And more importantly, because they’re guessing on top of systems that don’t share a single version of reality.

Today’s dominant AI architectures, including A2A and MCP, solve authentication and authorization. But they don’t solve something more fundamental: shared, non-disputable state. Every system keeps its own record. When those records diverge, there is no computational way to resolve truth. Audit logs, account state, and accountability remain disputable by design. That’s not a bug. It’s an architectural ceiling.

PSAG changes that. Not by improving the model. Not by improving retrieval. By removing the need to guess entirely. With PSAG, AI doesn’t retrieve governance data and it doesn’t infer it. It reads it, directly from a single, canonical, on-chain state where every object has deterministic identity, immutable provenance, and cryptographic authorization. There’s no retrieval layer. No probabilistic ranking. No interpretation gap between the question and the answer.

For deterministic queries (“What is the current risk classification?” “Who authorized this change?”), hallucination isn’t reduced. It’s structurally eliminated. The answer is the state itself.

PSAG doesn’t replace RAG. It defines a boundary. RAG remains the right architecture for unstructured knowledge. PSAG becomes the architecture for governed state, where correctness, lineage, and auditability are not optional.
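The "read, don't retrieve" distinction can be sketched as a toy in Python. This is purely illustrative: PraSaga has not published this API, and the names `CanonicalState`, `GovernanceObject`, and `read_object` are hypothetical stand-ins invented to contrast a deterministic state read with probabilistic retrieval.

```python
# Hypothetical sketch, not PraSaga's implementation. The point: for a
# deterministic query, the answer is a direct read of canonical state,
# with no retrieval layer, no ranking, and no generation step to hallucinate.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class GovernanceObject:
    object_id: str
    risk_classification: str
    authorized_by: str          # provenance of the last authorized change

@dataclass
class CanonicalState:
    """Single shared record: every reader sees the same object, byte for byte."""
    objects: dict = field(default_factory=dict)

    def read_object(self, object_id: str) -> GovernanceObject:
        # Deterministic lookup: same query, same state, same answer, always.
        return self.objects[object_id]

state = CanonicalState(objects={
    "model-7": GovernanceObject("model-7", "high-risk", "compliance-officer-3"),
})

obj = state.read_object("model-7")
print(obj.risk_classification)   # read, not inferred
print(obj.authorized_by)
```

A RAG pipeline answering the same question would rank candidate documents and generate a response; here the query resolves to the state object itself, which is the structural claim the thread is making.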
But PSAG only works because of something deeper: non-bypassability.

On SagaChain, no interaction happens outside consensus. Client and server don’t exchange data directly. Every state mutation, every governance evaluation, every authorization is routed through a shared, immutable record. No side channels. No conflicting logs. No ambiguity about what is true.

That’s what allows SagaAI to operate differently. Before any AI-driven action mutates state, governance is evaluated deterministically, against canonical objects, at the commit boundary. Every pass, fail, escalation, and authorization is itself recorded as a transaction. Governance is no longer something you document after the fact. It becomes something the system enforces in real time.

Frameworks like ISO/IEC 42001, NIST AI RMF, and the EU AI Act stop being documents. They become executable, interoperable objects. Instead of reconstructing compliance, you verify it directly through object lineage on an immutable ledger.

The implication is simple: AI accountability stops being a promise. It becomes a property of the system.

This is early. It’s live on our public development testnet, not production. The architecture works. And the invariants are mechanically verified.

We’re now opening this up. There are already thousands of canonical classes across governance, regulatory, financial, and operational domains. Now we need the people who understand those domains to shape them. If you’re working in AI, governance, compliance, or systems where correctness actually matters, this is the shift to pay attention to.

Get involved:
Standards → code.prasaga.com/sagachain/Saga…
Overview → prasaga.com/sagatech/sagaa…
Testnet → sagascan.prasaga.com
WebCLI → sagascan.prasaga.com/wizard
Contact → sagastandards@prasaga.com
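The commit-boundary behavior described above can be sketched in a few lines. Again, this is a toy under stated assumptions, not SagaChain code: the policy rule, the `ledger` list, and the `commit` function are hypothetical illustrations of "governance evaluated deterministically at the commit boundary, with every decision itself recorded as a transaction."

```python
# Hypothetical sketch, not SagaChain's implementation. commit() is the only
# path by which state can change, and every governance evaluation, pass or
# fail, is appended to the record before the outcome is enforced.

ledger = []   # append-only record: each evaluation is itself a transaction

def governance_policy(action):
    # Deterministic rule evaluated against the action, not a document.
    return action.get("risk") != "prohibited"

def commit(action):
    """Sole mutation path: evaluate governance, record the decision, then act."""
    passed = governance_policy(action)
    ledger.append({"action": action, "governance": "pass" if passed else "fail"})
    if not passed:
        raise PermissionError("governance check failed; failure is on the ledger")
    return "committed"

commit({"kind": "update", "risk": "limited"})          # recorded as pass
try:
    commit({"kind": "update", "risk": "prohibited"})   # recorded as fail, blocked
except PermissionError:
    pass
print(len(ledger))   # both evaluations are on the record
```

The design point is that there is no second code path that mutates state without passing through `commit`, which is what the thread means by non-bypassability.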
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
What “AI Multi-LLM Consensus” Actually Is: Precise Architecture

What it is not. First, clear away the marketing framing. “Multi-LLM consensus” is not a novel consensus protocol. It is not LLMs reaching agreement through some inter-model communication. It is not a new architectural primitive. It is Chainlink’s existing DON oracle aggregation pattern applied to LLM outputs as the data source, with LLMs substituted in place of traditional data feeds.

The actual architecture, step by step. Chainlink’s standard DON architecture has always worked as follows:

1. A request is made on-chain (or triggered by CRE).
2. Multiple independent oracle nodes, each running off-chain, receive the request.
3. Each node independently queries the data source.
4. Nodes submit their results using a commit/reveal scheme (commit a hash first to prevent freeloading, then reveal).
5. Nodes compare their findings, reach consensus on the correct value, and deliver the verified information back to the smart contract.
6. The aggregated result is written on-chain.

For price feeds, the “data source” is an exchange API. For AI oracle use, the “data source” is an LLM API. The consensus mechanism is identical; what changed is only what each node is calling.

In the DTCC/Swift corporate actions context: using Chainlink oracles paired with multiple large language models, they sourced unstructured off-chain data and converted it into structured on-chain data in near real time, into predefined standards modeled on the ISO 20022 framework. Concretely, the architecture is:

Unstructured document (PDF, announcement, etc.)
        ↓
[DON Node 1] → calls GPT API    → structured extraction result
[DON Node 2] → calls Gemini API → structured extraction result
[DON Node 3] → calls Claude API → structured extraction result
        ↓
DON aggregation layer: do N-of-M results agree?
        ↓
If threshold met → write agreed result on-chain as the “golden record”
        ↓
Distribute via CCIP across chains / ISO 20022 via Swift

By distributing the system across DONs, multiple independent verifications, each grounded in diverse sources and utilizing different reasoning models, are aggregated through a proven consensus protocol.

What “consensus” means here. The consensus is value matching: do a majority of nodes return the same structured output? It is the same majority aggregation Chainlink uses for price feeds, applied to LLM text outputs. For structured extraction tasks (“extract the dividend amount from this announcement”), this works reasonably well because the answer space is constrained and the LLMs are parsing the same document. It is categorically not the same as Byzantine Fault Tolerant consensus over state transitions. It is threshold agreement on output values, closer to a quorum vote than BFT.

The critical architectural limitation Chainlink itself acknowledges. “We recognize a fundamental challenge: as language models advance, they increasingly share training datasets and could potentially exhibit similar biases. This observation has led us to question whether consensus mechanisms alone — simply aggregating outputs from different models — can effectively mitigate these biases.” This is Chainlink’s own research team correctly identifying the structural problem: if GPT, Gemini, and Claude were all trained on overlapping internet corpora, they may produce correlated errors. Three models agreeing does not mean three independent verifications; it may mean three correlated outputs from models with shared failure modes.
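The commit/reveal plus N-of-M value-matching pattern described above can be sketched in Python. This is a simplified illustration under stated assumptions, not Chainlink node code: the LLM calls are replaced by canned responses, and `commit`, `extract_dividend`, and `aggregate` are names invented here.

```python
# Toy sketch of DON-style aggregation over LLM outputs. Assumptions: three
# nodes, canned extraction results standing in for real GPT/Gemini/Claude
# API calls, and "consensus" as plain threshold value matching.

import hashlib
import json
import secrets
from collections import Counter

def extract_dividend(model_name, document):
    # Stand-in for an LLM API call; each node would query a different model.
    canned = {
        "gpt":    {"dividend": "0.25", "currency": "USD"},
        "gemini": {"dividend": "0.25", "currency": "USD"},
        "claude": {"dividend": "0.25", "currency": "USD"},
    }
    return canned[model_name]

def commit(result):
    # Commit phase: publish a salted hash so other nodes cannot freeload,
    # then reveal the value and salt afterward.
    salt = secrets.token_hex(8)
    payload = json.dumps(result, sort_keys=True) + salt
    return hashlib.sha256(payload.encode()).hexdigest(), salt

def aggregate(results, threshold):
    # Reveal-phase aggregation: do at least `threshold` of M nodes agree
    # on the exact structured output?
    counts = Counter(json.dumps(r, sort_keys=True) for r in results)
    value, votes = counts.most_common(1)[0]
    return json.loads(value) if votes >= threshold else None

doc = "ACME Corp announces a cash dividend of USD 0.25 per share."
results = [extract_dividend(m, doc) for m in ("gpt", "gemini", "claude")]
commitments = [commit(r) for r in results]    # commit before reveal
golden = aggregate(results, threshold=2)      # 2-of-3 agreement
print(golden)   # {'dividend': '0.25', 'currency': 'USD'} -> the "golden record"
```

Note that nothing here constrains what prompt or context each node sent to its model; only the final output values enter the agreement check, which is exactly the limitation discussed below.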
The DON consensus protocol was designed for independent price data providers with genuinely independent data pipelines, not for LLMs with shared pretraining data.

What was actually “production-demonstrated”. The DTCC/Swift initiative demonstrated:
- Unstructured corporate action announcements (PDFs, press releases) can be parsed by LLMs into structured, ISO 20022-compliant fields.
- Running that parsing through three LLM APIs and checking agreement catches some hallucinations.
- The system achieved near-total data consensus across AI models and supported multilingual disclosures.

What it did not demonstrate:
- A novel consensus mechanism.
- Any architectural property beyond “a majority of LLM API calls agreed”.
- Non-bypassability of the extraction process.
- Any formal guarantee about execution path accountability.

The “near 100% data consensus” metric means: across the test events, GPT/Gemini/Claude produced the same structured output in nearly all cases. For well-structured financial announcements in major languages this is expected; the extraction task is constrained enough that the models converge. It does not mean the consensus mechanism is robust under adversarial conditions or for genuinely ambiguous documents.

Corrected table entry. The original entry in my comparison table was:

AI multi-LLM consensus | ✅ Production-demonstrated (DTCC/Swift initiative)

The accurate entry should be:

Dimension: Multi-LLM output aggregation
Chainlink: ✅ Production-demonstrated for structured data extraction (DTCC/Swift)
SagaAI: ❌ Not present
Notes: Chainlink’s DON majority voting applied to LLM API outputs. Correlated failure mode acknowledged by Chainlink’s own research. Not a novel consensus protocol; it is threshold agreement on LLM responses.

Relevance to the SagaAI competitive argument. This actually strengthens SagaAI’s five A’s position rather than weakening it. Here is why. Each DON node in the AI oracle configuration is making an independent HTTPS call to an external LLM API. That call is a direct client-server connection, exactly the architecture identified in the Beberman document as failing three of the five A’s. DON consensus attests that a majority of nodes got the same output. It does not make any individual node’s call to GPT/Gemini/Claude non-bypassable. The execution path (what prompt was sent, what context was included, what intermediate reasoning occurred) is off-chain and not part of the consensus record. The “golden record” on-chain is the aggregated output value. It is not a canonical record of what was done to produce it.

The Beberman/white paper distinction holds precisely here: Chainlink writes AI outputs onto a chain; SagaAI is the channel through which AI agents interact, making every step of the channel non-bypassable.

In short: the DTCC/Swift result is a genuinely useful data pipeline capability. It is not, architecturally, what “multi-LLM consensus” implies when read in isolation.
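The correlated-failure point can be made numerically with a toy model (my own illustration, not from Chainlink's research): majority voting over three models helps only to the extent that their errors are independent.

```python
# Toy model of 2-of-3 majority voting. Assumption: a single correlation
# parameter rho interpolating between fully independent model errors
# (rho=0) and fully shared failure modes (rho=1).

def p_majority_wrong(p_err, rho):
    """Probability that at least 2 of 3 models return the same wrong value.

    p_err: per-model error probability on a given document.
    rho:   0.0 = errors independent, 1.0 = all three models fail together
           on the same inputs (shared training data, shared blind spots).
    """
    independent = 3 * p_err**2 * (1 - p_err) + p_err**3
    correlated = p_err   # voting adds nothing when failures coincide
    return (1 - rho) * independent + rho * correlated

print(p_majority_wrong(0.05, 0.0))  # independent: far below 0.05
print(p_majority_wrong(0.05, 1.0))  # fully correlated: no better than one model
```

Under independence, three 5%-error models voting 2-of-3 are wrong well under 1% of the time; under full correlation the vote gives no improvement at all, which is the structural concern Chainlink's quoted research raises.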
Chainlink @chainlink

LLM hallucinations are a massive roadblock to enterprise adoption of AI. Swift, UBS, Euroclear, & 20+ major organizations advanced a solution to the $58B+ annual corporate actions problem by leveraging Chainlink to reduce AI hallucination risk. LINK everything.

Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/🧵 What if your bank deposits settled instantly, loans adjusted automatically to rates, and cross-border payments zipped through without fees eating your margins? SagaFinance™ Pilot 5 makes it real: Tokenized Fiat + Banking Products fused with ISO 20022 on SagaChain. The future of finance? It's here. Buckle up.👇
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/🧵 Picture this: You're a composer, pouring your soul into a melody. But when the royalties trickle in months late, incomplete, shrouded in mystery, you wonder if the system was ever built for you. What if that changed today? Introducing SagaMusic™ Pilot 5 on SagaChain: The game-changer for true creator empowerment.
Michael Holdmann retweeted
Michael Holdmann @PrasagaCEO ·
@IMFNews We offer an alternative: Canonicalization. Tokenization = the ticket to the coat in the coat-check closet. Canonicalization = a digital twin of the coat: all state, lineage, rules, policy, and compliance as an encapsulated object in your account container.
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/🧵 The PraSaga Foundation is highlighting Motion Picture Pilot 4. It tackles a problem the industry has worked around for decades: The same film exists under multiple identities. 👇
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/ Validation in pharma is still running on paperwork logic in a real-time world. 👉 Months of cycles. 👉 Manual reconciliation. 👉 Repeated audits. 🚨And every change slows everything down.
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
PraSaga Foundation today submitted a request to the @FinancialCmte @USHouseFSC to include the construct of Canonicalization as an alternative architectural foundation for the tokenization of Real-World Assets (RWA).
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
Over the past decade, the blockchain industry has directed trillions of dollars in capital and economic activity toward scaling an architectural model that is structurally analogous to Tim Paterson’s QDOS (MS-DOS): a passive, reactive, single-entry, sequential execution model operating over shared state.

While this model is Turing complete, it is fundamentally constrained in its ability to support scalable parallel execution and fully composable on-chain systems. These constraints are not implementation issues; they are properties of the underlying architecture.

As regulators and markets move toward tokenizing assets measured in the hundreds of trillions of dollars, there is a growing risk that policy frameworks will assume capabilities, such as global composability, transparency, and parallel execution, that current smart contract architectures cannot deliver at scale. Smart contract blockchains are well suited to fast transactions, not to putting regulated assets on-chain.

The critical debate, therefore, must begin at the level of computational models and system architecture. The foundation must be grounded in first-principles computer science, not market narratives or token-driven incentives, if blockchain is to serve as a viable substrate for global, decentralized, permissionless systems.

PraSaga Foundation has produced an explainer video diving into the MS-DOS/smart contract data model. On Wednesday, we will review what we believe is the model needed for a truly scalable, global, decentralized layer 1 blockchain: Alan Kay’s message-passing paradigm as the proper foundation for all global assets.
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
Smart Contracts = the ticket for the coat check, not the coat. Programmable Smart Asset = the digital twin of the coat.

“For blockchain to reach its full enterprise potential, distributed systems MUST move beyond the transaction-only flat ledger and adopt the asset itself, not just a record of it.”

The transaction-only flat ledger (Smart Contracts) cannot scale and cannot deliver what the world needs as an underlying, global, decentralized, scalable infrastructure. Time to have the debate: Alan Kay’s “message passing” (parallelization) or Tim Paterson’s “MS-DOS” (serialization) as the data model for the underlying infrastructure for all global transactions? Watch the video and comment your thoughts.
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/🧵 Creators, composers, labels: picture a music world where your IP isn’t lost in silos but shines with unbreakable ownership and instant royalties. No more unmatched millions vanishing into the ether. Introducing SagaMusic Pilot 4: Cross-Standard Namespace Synchronization on SagaChain. The future is now. 👇
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/ Aerospace engineering already has powerful standards: 👉 STEP 👉 S1000D 👉 iSpec2200 👉 CODEX 👉 ARINC But they still live in separate systems and documents. Aerospace Pilot 3: Digital Thread explores what happens when they are connected. 🧵
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/ 🧵 Virtual production is exploding. 👉 LED walls 👉 Digital doubles 👉 Real-time environments 👉 Simulations 👉 AI-assisted tools But there’s a quiet problem across the industry: The moment an asset leaves the stage… its history often disappears.
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/🧵 Picture this: 🔹IND workflows that once took months now unfold in minutes. 🔹Amendments signed immutably. 🔹Batches and lots linked forever to trials. 🔹Regulators watching compliance unfold in real time. The future of clinical development isn't coming. It's here. We are excited to present an overview of SagaPharma™ Pilot 3.
Michael Holdmann retweeted
PraSagaOfficial @PrasagaOfficial ·
1/ 🧵 Creators, composers, labels: imagine owning your IP like never before. Streams turning into instant royalties, every detail transparent, every split fair. No intermediaries siphoning your earnings. SagaMusic™ Pilot 3 just dropped on SagaChain™, and it's rewriting the rules. 👇
PraSagaOfficial @PrasagaOfficial ·
1/ 🧵 Last week the AI governance conversation got loud. EigenCloud → verifiable agents on-chain. Ethereum leadership → LLM governance blueprints. OpenAI → EVMbench auditing $100B+ in smart contracts. Now @PrasagaOfficial is releasing: “Executable Governance: Operationalizing ISO/IEC 42001, NIST AI RMF, NIST IR 8596, and the EU AI Act as Interoperable Class Architecture on SagaChain™.” 👉 AI governance is still document-based. ✅ We built executable governance infrastructure. White paper link: /8 👇