Sabrina Chaves

8 posts

@PROOF__SOL

If it’s not on SolProof, it’s just a hallucination. https://t.co/XarawujOIu

Joined January 2026

5 Following · 9 Followers
Sabrina Chaves@PROOF__SOL·
SolProof AI Technical Research Report
Deep Technical Edition
Based on publicly accessible materials available as of April 23, 2026

Research Scope

This report is based on SolProof AI’s public website, whitepaper, and GitHub repository. One distinction is essential from the outset: the whitepaper describes the target protocol architecture, while the public GitHub codebase currently presents a browser-based MVP and product prototype rather than a production-grade ZKML network.

1. Executive Summary

SolProof AI is attempting to turn AI output from an opaque application-layer result into a verifiable computational object. Its core thesis is that, in high-value contexts, an AI response should not be trusted merely because it exists; it should be accompanied by a cryptographic proof, a verifiable receipt, and an immutable record layer. In practical terms, SolProof is positioning itself as verifiable AI infrastructure on Solana rather than as a conventional AI application.

The public materials describe a target system in which AI inference happens off-chain, a proof is generated over the inference event, the proof is recursively compressed, and the final verification artifact is anchored to Solana. The public GitHub implementation, however, does not yet implement production zero-knowledge proving. Instead, it demonstrates a deterministic, replayable verification model: structured input is normalized, mock inference is run locally, hashes and proof-like artifacts are generated, and the resulting bundle is revalidated by reconstructing the same computation path in the browser.

The result is a project with a strong architectural direction and a credible product framing, but one that is still materially earlier than its protocol narrative. SolProof should therefore be analyzed as a technically meaningful protocol prototype, not yet as a fully deployed verifiable AI settlement layer.

2. Problem Definition and Project Positioning

The project’s positioning is built around a simple but powerful premise: current AI infrastructure is easy to consume but difficult to verify. In most production systems, an AI provider returns a text answer, a label, a score, or an explanation, but downstream systems cannot independently prove that the claimed model really produced that output under the claimed conditions.

From the whitepaper, SolProof is designed to address four structural problems: black-box inference, privacy exposure, lack of output integrity, and centralized control over the inference stack. The website frames the same issue more directly: if AI output is going to be used in financial, legal, enterprise, or trust-sensitive workflows, it needs to become auditable, reproducible, and machine-verifiable.

This is an important conceptual shift. SolProof is not primarily trying to prove that an AI model is correct in an epistemic sense. It is trying to prove that a particular computational event occurred: a given model, on a given input, at a given time, generated a given output, and that this event was not altered after the fact. That distinction matters because it places SolProof closer to infrastructure layers such as blockchains, attestation systems, and proof systems than to end-user chatbot products.

3. Target Protocol Architecture

The whitepaper describes a six-stage pipeline. First, the user submits encrypted input. Second, an off-chain AI system performs inference. Third, a zero-knowledge proof is generated over the computation. Fourth, the proof is recursively compressed. Fifth, the proof is submitted to Solana for verification and anchoring. Sixth, the verified result is returned and a PROOF token burn is triggered.

From a systems perspective, this architecture is modular by design. SolProof does not attempt to move large-model inference onto the chain.
Instead, it separates computation from verification: AI computes off-chain, while the chain adjudicates a compact proof object. This is the only realistic architecture if one wants to preserve both model-scale performance and blockchain-grade verifiability.

The whitepaper identifies the intended proving stack as Groth16 over BN128, with recursive SNARK compression layered on top. The design goal is obvious: keep proof size bounded, verification cost low, and final on-chain artifacts small enough to fit practical Solana constraints.

4. The Proof Object and Why It Matters

One of the most important technical details in the whitepaper is the commitment formula:

commitment = Poseidon(input_hash || output_hash || model_id || timestamp)

This formula is more significant than it may first appear. It means SolProof is not merely proving that “some output exists.” It is binding together four specific dimensions of an inference event:

- The input digest
- The output digest
- The model identity
- The timestamp

This structure prevents several classes of ambiguity. Without an input hash, the proof would not be tied to a specific request. Without a model identifier, model substitution would remain possible. Without a timestamp, the proof would lose its temporal anchor. Without an output digest, there would be no fixed claim to verify.

In other words, the protocol is trying to elevate inference from content into state. Once an inference result is represented as a state object with cryptographic bindings, it can be archived, verified, indexed, queried, and potentially consumed by downstream protocols.

The whitepaper also specifies Poseidon as the target hash function, which is consistent with circuit-friendly zk design. Poseidon is not chosen for general-purpose API convenience; it is chosen because it is significantly more practical inside proof systems than traditional hashes. That matters because hash choice directly affects circuit cost, witness generation burden, and long-term proving performance.

5. Intended Cryptographic and Chain Design

The whitepaper’s choice of Groth16 is pragmatic. Groth16 remains attractive in environments where proof size and verifier cost matter more than universal setup convenience. Its strengths are compact proofs and fast verification. Its weaknesses are the trusted setup requirement and the complexity of circuit engineering. For SolProof, that tradeoff suggests a specific philosophy: the system values low-cost verification and concise receipts highly enough to accept the engineering burden of tailored circuits. This is sensible if the project wants proofs to be verified cheaply and potentially referenced in on-chain workflows.

The inclusion of recursive SNARK compression is also telling. Recursive composition is not a cosmetic add-on here; it is almost mandatory for any serious ZKML roadmap. Proving a meaningful ML pipeline directly is expensive. Compressing multiple proving stages or large intermediate artifacts into a smaller final proof is one of the only viable paths toward usable on-chain verification.

On the Solana side, the whitepaper discusses recording proof and commitment data via the Memo Program and mentions Groth16 verification support. Architecturally, this implies at least two distinct layers: a record layer that timestamps and stores proof-related data, and a verifier layer that must eventually validate proof correctness under protocol rules. That distinction matters. Writing a hash to chain proves that data was recorded. It does not, by itself, prove that the corresponding inference was cryptographically verified on-chain under a strict verifier program. For SolProof to evolve from “anchored receipts” into “composable proof infrastructure,” it will likely need stronger verifier semantics than memo-based storage alone.
6. What the Public GitHub MVP Actually Implements

The current GitHub repository is not a live ZKML protocol. It is a browser-based, static MVP designed to demonstrate the minimum product loop. According to the README, the public implementation includes: model selection, request validation, simulated inference, proof bundle generation, local verification, wallet and token-burn simulation, and replayable history. It explicitly does not claim production zero-knowledge proving or real chain verification.

This is an important and healthy clarification. Rather than pretending to have solved ZKML end to end, the repository demonstrates the product envelope that a future proof-backed protocol could inhabit.

The repository’s model configuration currently includes four named modes: gemini-flash-zkml, gemini-sentiment, gemini-vision, and gemini-fraud. They are divided into trial and pro tiers, with burn amounts and nominal constraint counts attached. These constraint values should not be interpreted as audited proof-system metrics. In the current codebase, they function more like protocol-aware UX parameters than hard zk circuit measurements. Still, their presence shows that the project is already organizing the product around a “model complexity maps to verification cost” mental model.

7. The Current Verification Mechanism

The technical heart of the MVP is in modules/engine.js. The implementation is not zero-knowledge proving; it is deterministic replay verification.

The browser first normalizes the request. It then runs a local deterministic inference routine depending on the selected model. For example, the sentiment mode uses lexical weighting, while the fraud mode interprets structured indicators such as unknown routes, mixer exposure, recipient age, amount, and transaction velocity. The result is not a call to Gemini or another external LLM API. It is a reproducible heuristic pipeline.
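A deterministic routine of that shape might look like the following sketch. The feature names, weights, and thresholds here are invented for illustration and are not taken from engine.js.

```javascript
// Illustrative deterministic fraud heuristic in the spirit of the MVP's
// fraud mode. Weights and cutoffs are hypothetical.
function scoreFraud({ unknownRoute, mixerExposure, recipientAgeDays, amount, txPerHour }) {
  let score = 0;
  if (unknownRoute) score += 30;        // unfamiliar transaction path
  if (mixerExposure) score += 40;       // funds touched a mixer
  if (recipientAgeDays < 7) score += 15; // freshly created recipient
  if (amount > 10_000) score += 10;     // large transfer
  if (txPerHour > 20) score += 5;       // high transaction velocity
  // Same structured input always yields the same label, so the result can
  // be recomputed at verification time and compared against the receipt.
  return score >= 50 ? "high_risk" : score >= 25 ? "medium_risk" : "low_risk";
}
```

The design choice that matters is determinism: because the function is a pure mapping from structured features to an enumerable label, a verifier can replay it bit-for-bit, which is exactly the property the MVP's verification step depends on.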
From there, the system computes several derived values: an input fingerprint, an output fingerprint, a commitment, a proof hash, a public input hash, and a synthetic transaction identifier. The code also generates simulated proof fields such as pi_a, pi_b, and pi_c, while explicitly labeling the protocol as groth16-simulated, the curve as bn128-simulated, and the network as solana-devnet-simulated.

The crucial point is what happens at verification time. The app does not just display these fields. It reconstructs the bundle from the stored request context and reruns the deterministic inference path. It then compares the recomputed output, commitment, proof hash, public input hash, transaction identifier, and associated metadata against the stored receipt. If any field diverges, verification fails.

Technically, this means the MVP already has one meaningful property: receipts are replayable and internally consistency-checkable. It does not yet prove computation to a third-party verifier in the cryptographic sense, but it does prove that the receipt is not merely decorative UI. It is tied to a reconstructable computation path.

8. Whitepaper-to-MVP Gap Analysis

The gap between the target protocol and the current implementation can be described across four layers.

First, the hashing layer. The whitepaper targets Poseidon; the MVP uses browser-native SHA-256. This is a natural prototype shortcut, but it is not a trivial swap. Moving from SHA-256 receipts to circuit-friendly Poseidon commitments would affect both the proving design and the proof/public-input interface.

Second, the proving layer. The whitepaper describes Groth16 and recursive compression; the MVP performs deterministic recomputation and hash comparison. That is a large architectural jump. One validates bundle self-consistency; the other validates computation under cryptographic constraints.

Third, the inference layer. The website and whitepaper discuss broader AI-model support and off-chain processing, while the public repository relies on deterministic local inference. This means the current code does not yet confront the hardest part of verifiable AI: proving the behavior of a real model runtime.

Fourth, the chain layer. The whitepaper describes Solana verification and anchoring; the repository emits simulated transaction-like artifacts and explorer-style links. The chain semantics are therefore still representational rather than operative.

9. Key Unresolved Tensions in the Public Materials

Several tensions in the public materials deserve explicit attention.

One is privacy. The website states that prompts do not leave the user’s device, while the whitepaper describes off-chain AI processing. Those two claims are not fully aligned unless SolProof eventually relies on local inference, trusted execution environments, private model-serving infrastructure, or some similarly constrained execution model.

Another is cost. The homepage references low Solana transaction fees per proof, while the whitepaper introduces PROOF token burns, and the repository simulates burn amounts in the product flow. These may all eventually coexist as different layers of cost, but the public documentation does not yet present them as a fully unified economic model.

A third is model support. The homepage implies support for multiple major model families and OpenAI-compatible APIs, while the public repository demonstrates only four local, Gemini-branded product modes with deterministic logic. Again, this does not invalidate the roadmap, but it does mean the live code should be interpreted as a prototype, not as proof of broad model compatibility.

10. The Hard Technical Problem: What Exactly Is Being Proven

The deepest technical challenge for SolProof is not UI, token design, or chain integration. It is the definition of the proof object. In practice, proving the full inference trace of a modern frontier LLM is extremely expensive. That means SolProof will eventually need to choose among several strategies:

- Prove lightweight classifiers or bounded inference tasks rather than open-ended generation.
- Prove selected intermediate states rather than full model execution.
- Combine zk receipts with trusted execution or attestation.
- Use structured-output regimes where the computational boundary is narrower and more circuit-friendly.

This is precisely why the MVP’s current task selection is strategically sensible. Sentiment scoring, fraud classification, structured risk labeling, and constrained vision tasks are much more realistic first proving targets than free-form chat completions. They produce enumerable outputs, operate on structured input, and are easier to formalize inside constraint systems.

11. Proving Economics and Systems Scalability

Even if on-chain verification is cheap, prover-side economics remain difficult. Groth16 minimizes verification overhead, but witness generation and circuit execution can still be expensive. Recursive proof composition helps reduce the cost of final on-chain verification, but it does not eliminate the cost of proving itself.

This implies that any production-grade SolProof stack will likely need: model simplification or quantization, careful circuit engineering, batching or aggregation strategies, possibly a distributed prover network, and strict control over which inference classes are made provable. The whitepaper’s roadmap reference to a distributed prover network is therefore not peripheral. It is one of the clearest signs that the team understands where the real bottleneck sits.

12. Application Domains Most Likely to Fit the Architecture

The strongest near-term fit for SolProof is in structured-decision environments. DeFi risk systems are an obvious candidate.
Wallet behavior, collateral composition, transaction paths, and speed signals can be fed into a model that emits structured risk categories. A proof-backed receipt can then be consumed by downstream protocols as a risk attestation object.

Fraud detection is similarly well-suited. The repository already uses structured fraud features, which mirrors real-world payment and compliance systems. The commercial value here is not “AI wrote a nice explanation.” It is “the system can prove that a particular risk classification was generated under a defined model and input context.”

Digital provenance and NFT authenticity are another plausible domain. The value is not metaphysical truth; it is proving that a classification event occurred under a particular model, version, and context, and that the result was later anchored and preserved.

Regulated enterprise workflows, including healthcare and compliance, may be an even larger long-term opportunity if execution privacy can be made credible. In such settings, the most valuable property is often not output novelty but auditability.

13. Research Conclusion

SolProof AI should be understood as a serious attempt to define a verifiable AI middleware layer on Solana. Its core design goal is technically meaningful: transform AI outputs into cryptographically bound, replayable, and eventually chain-verifiable state objects.

As of April 23, 2026, however, the public implementation should not be described as a live production ZKML protocol. The whitepaper presents a full protocol thesis. The website presents a product narrative around verifiable AI receipts. The GitHub repository presents a well-designed MVP that demonstrates deterministic replay verification and proof-shaped product semantics. These are aligned in direction, but not yet in technical completion.

The project’s long-term significance depends on whether it can cross four milestones:

- Replace mock deterministic inference with real provable inference pathways.
- Replace replay validation with actual proving and verifier infrastructure.
- Upgrade chain anchoring from record storage into composable verification semantics.
- Reconcile privacy, cost, and model-support claims into one coherent execution architecture.

If those milestones are achieved, SolProof could become a meaningful verification layer for AI-generated decisions in crypto-native and enterprise systems. If not, it will remain an elegant protocol concept with a compelling interface but limited infrastructural depth.

Sources

- Website: proofaisol.xyz
- Whitepaper: proofaisol.xyz/whitepaper
- GitHub repository: github.com/fm374coullon/S…
- README: raw.githubusercontent.com/fm374coullon/S…
- Engine implementation: raw.githubusercontent.com/fm374coullon/S…
- Model configuration: raw.githubusercontent.com/fm374coullon/S…
- App orchestration: raw.githubusercontent.com/fm374coullon/S…
Sabrina Chaves@PROOF__SOL·
The "Verified" Checkmark. Imagine a Phantom wallet warning: "This transaction is triggered by an UNVERIFIED AI." Would you still click confirm? 🟢 #TheStandard #Solana
Sabrina Chaves@PROOF__SOL·
Quantization vs. Precision. How much model precision are we willing to sacrifice for on-chain verifiability? SolProof is hitting the 99% mark. Is that enough for high-frequency trading? 🧬 #AI #Solana
Sabrina Chaves@PROOF__SOL·
The Data Availability bottleneck. In ZKML, what’s more important: The proof size or the state root? We’re pushing the limits of Solana’s transaction metadata. Let's talk architecture. #SolProofAI #ZK
Sabrina Chaves@PROOF__SOL·
AI Agents are the new CEXs. Most "AI" on Solana right now is just a hosted Python script with a private key. If you don't own the proof, you don't own the agent. $PROOF #SolanaAI
Sabrina Chaves@PROOF__SOL·
The Oracle Paradox: We spent a decade solving the Oracle problem for data. Why are we ignoring the Oracle problem for AI? If an Agent makes a decision in a black box, is it still Web3? Let’s debate. #SolProof #ZKML