esper°|(❖,❖)

9.5K posts


@joneluba

Ritualian of @ritualnet, @ritualfnd. The saga continues as I pour support into every Ritualian 🖤🖤🖤 From scratch to the Awaited Day 🖤

Joined June 2013
1.8K Following · 327 Followers
Divinity Official (MINT - March 20th)
Mint is going live in 3 hours ⏳ Free Mint and $20,000 in Rewards. Any interaction with this tweet will be considered for multiple mints.
Leon.ip @Fahad1077798 ·
Your smart contracts can move billions. But they still can't think without trusting a server you don't control. Ritual fixes that. @ritualnet
Intuition (❖,❖) @Intuitionweb3 ·
Computational integrity is a fundamental property ensuring that the output of a computation is provably correct and was executed as intended. @ritualnet

Verifiable computing, powered by the computational integrity gadgets below, enables any computation, whether conducted by a trusted or untrusted party, to be verified for accuracy and correctness without redoing the often complex computation itself. Ritual takes a credibly neutral approach to computational integrity by enabling users to leverage different gadgets based on their app-specific needs and their willingness to pay.

📌 SUPPORTED GADGETS:
🔸 Zero-Knowledge Machine Learning — strong cryptographic guarantees of correct model execution, at the expense of added overhead, complexity, and cost.
🔸 Optimistic Machine Learning — optimistic acceptance of model execution, with model-bisection-based verification only when disputes arise.
🔸 Trusted Execution Environments — model execution with hardware-level isolation in enclaves, at the expense of trust in chip manufacturers and exposure to hardware attacks.
🔸 Probabilistic Proof Machine Learning — low-overhead, cost-efficient statistical guarantees of model execution, at the expense of consistently perfect verification.

📌 Eager vs. lazy consumption
Ritual enables both eager and lazy consumption of proofs from supported gadgets. Lazy consumption enables use cases where computational integrity is only required in the sad path:
• Save costs: lazy proofs are generated only when disputes or errors occur
• Improve performance: minimize proof verification for applications with infrequent disputes
• Better developer experience: build simpler, easier-to-audit applications with fewer hot paths

📌 Gadget trade-offs
A one-size-fits-all paradigm for computational integrity creates inherent trade-offs between security, cost, and performance.
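The lazy consumption model described above can be sketched in a few lines of Python. This is an illustrative toy, not Ritual's API: `LazyVerifier`, `ProofRecord`, and the trivial byte-encoding "proof system" are all hypothetical names, standing in for whichever gadget actually generates and checks proofs.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ProofRecord:
    output: int
    proof: Optional[bytes] = None  # populated lazily, only on dispute

class LazyVerifier:
    """Accept outputs unproven on the happy path; prove/verify only when disputed."""

    def __init__(self, prove: Callable[[int], bytes], verify: Callable[[int, bytes], bool]):
        self.prove = prove
        self.verify = verify
        self.records: list[ProofRecord] = []

    def submit(self, output: int) -> int:
        # Happy path: record the output without paying any proving cost.
        self.records.append(ProofRecord(output))
        return len(self.records) - 1

    def dispute(self, idx: int) -> bool:
        # Sad path: generate the proof now (if absent) and verify it.
        rec = self.records[idx]
        if rec.proof is None:
            rec.proof = self.prove(rec.output)
        return self.verify(rec.output, rec.proof)

# Toy "proof system": the proof is just a 4-byte encoding of the output.
prove = lambda out: out.to_bytes(4, "big")
verify = lambda out, p: int.from_bytes(p, "big") == out

v = LazyVerifier(prove, verify)
i = v.submit(42)       # no proof generated here
ok = v.dispute(i)      # proof generated and checked only on dispute
```

An eager consumer would instead call `prove` inside `submit`, trading extra cost on every output for immediate verifiability.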
Each gadget has its own trade-offs and best use cases:

🔺 Zero-Knowledge Machine Learning
Zero-Knowledge Machine Learning (ZKML) builds on zero-knowledge proofs to cryptographically assert correct execution of an AI model. Ritual's ZK generation and verification sidecars enshrine this gadget natively, enabling users to make strong assertions of model correctness with robust blockchain liveness and safety.
+ Robust security: offers the strongest correctness guarantees via cryptography
− High complexity: computationally expensive, demands high resources, and is slowest
− Limited support: only simple models are supported by modern ZKML proving systems today

🔺 Optimistic Machine Learning
Optimistic Machine Learning (OPML), inspired by optimistic rollups, assumes model execution is correct by default, with verification occurring only when disputes arise. At a high level, the system works as follows:
1. Model execution servers stake capital to participate
2. These servers then execute operations, periodically committing intermediary outputs
3. If users doubt correctness, they can contest outputs via a fraud-proof system
4. The system views models as sequences of functions and uses an interactive bisection approach, checking layer by layer, to identify output inconsistencies
5. If model execution is indeed incorrect, the server's stake is slashed
+ Cost-effective: especially efficient for use cases where disputes rarely occur
+ Extended support: the bisection approach better supports large, complex models (like LLMs)
− Weaker security: relies on incentivized behavior rather than cryptographic security
− Complex sad path: dispute resolution is lengthy, complex, and demands some re-execution

🔺 Trusted Execution Environments
Trusted Execution Environments (TEEs) provide hardware-based secure computing through isolated execution zones where sensitive code and data remain protected. Ritual's TEE Execution sidecar enshrines this gadget natively by executing AI models in secure enclaves, enabling data confidentiality and preventing model tampering.
+ Performant: performance competitive with gadget-free execution for most AI model types
+ Real-time: better suited for real-time applications, with limited proving complexity or overhead
− Vendor trust: requires trust in chip manufacturers and secure-enclave software
− Hardware attacks: susceptible to sophisticated side-channel hardware attacks

🔺 Probabilistic Proof Machine Learning
Most model operations are computationally complex, especially resource-intensive ones like fine-tuning or inference of modern LLMs. To better support these operations with a low-overhead tool, Ritual has pioneered a new class of verification gadgets, dubbed Probabilistic Proof Machine Learning. The first of this line of tools is vTune, a new way to verify LLM fine-tuning through backdoors.
+ Computationally cheap: time- and cost-efficient for even the most complex model operations
+ Third-party support: suitable for trustlessly verifying third-party model API execution
− Statistical correctness: not suitable when perfect verification guarantees are necessary

▶ Powered by Ritual
This flexibility of letting applications pick and choose from a range of specialized gadgets is only possible on Ritual, built on our belief that we should remain proof-system agnostic. Powering this belief is our underlying architectural work with Resonance, Symphony, enshrined execution sidecars, vTune, Cascade, and more. @joshsimenhoff @Jez_Cryptoz @BunsDev
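The interactive bisection step in the OPML flow above can be sketched as a binary search over two execution traces. This is a hypothetical toy, not Ritual's or any OPML system's actual protocol: the server and challenger each hold a full trace of per-layer outputs, bisection finds the first disputed layer, and an arbiter re-executes only that single layer to settle the dispute.

```python
def bisect_first_divergence(server_trace, challenger_trace):
    """Traces: index 0 is the shared input, index i the output after layer i.
    Binary-search the first layer index whose output the parties dispute."""
    lo, hi = 0, len(server_trace) - 1
    assert server_trace[0] == challenger_trace[0]    # agree on the input
    assert server_trace[hi] != challenger_trace[hi]  # disagree on the final output
    # Invariant: traces agree at lo and disagree at hi.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if server_trace[mid] == challenger_trace[mid]:
            lo = mid
        else:
            hi = mid
    return hi

def arbitrate(layers, server_trace, challenger_trace):
    """Re-execute only the single disputed layer on the last agreed value."""
    i = bisect_first_divergence(server_trace, challenger_trace)
    correct = layers[i - 1](server_trace[i - 1])
    return "slash server" if server_trace[i] != correct else "reject challenge"

# Toy "model": three layers; the server corrupts its output from layer 2 on.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
honest = [5, 6, 12, 9]       # challenger's trace
cheating = [5, 6, 13, 10]    # server's trace, wrong from layer 2 onward
verdict = arbitrate(layers, cheating, honest)  # "slash server"
```

The point of the bisection is cost: the arbiter never re-runs the whole model, only one layer, which is what makes the approach viable for large models like LLMs.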