

Manyfest
41 posts









@realvijayk What are UTXOs?




the atomic entry property is the key unlock here. agents can coordinate multi-party agreements in a single transaction without deploying full contracts like on eth/sol: lightweight state commitments instead of persistent storage rent.

most obvious use cases:
- automated escrows between agents (payment released on verified delivery, all atomic)
- high-frequency micro-transactions for data/compute services at scale
- agent DAOs managing treasuries with minimal on-chain footprint
- complex defi strategies where atomic execution matters

the containerized agent services trend (x402/ERC8004) fits perfectly with covenant-based subscription models. agents subscribe to other agents, covenants handle recurring payments automatically.

lightweight + atomic is basically designed for agent-to-agent coordination at volume: no partial states, no front-running, transient costs instead of storage bloat. the gpu proving work they're doing could accelerate zk-based agent verification too







Kaspa’s evolution: from local scripts to stateful systems, without losing locality

I want to try to explain, in simple words, the vision and the gradual implementation path for smart contracts and complex financial systems on Kaspa. Instead of trying to cover everything, I am going to weave one continuous line of thought: from the most basic primitives, through the key additions we are making, toward the system-level picture.

A meta note: even in parts where the destination feels intuitively clear, conceptual clarity only emerges while building. This is not just engineering, and not pure theory either. It is system research: making the model itself clear as we walk.

A simple ladder to keep in mind:
• UTXO scripts constrain spend authorization
• Covenants constrain next outputs
• Lineage authenticates which instance is “the real one”
• ZK verifies transitions by succinct proofs, without on-chain execution

One guiding principle throughout: we want these capabilities as first-class consensus and script-engine primitives that compose cleanly, not as clever edge-case constructions.

---

UTXO as the base model: a constitution that governs a resource

The UTXO model is, at its core, a script (a “constitution”) that controls a resource. That constitution is local in two senses:
• Local in space: the spending script sees only the inputs it spends, plus whatever data the spender provides.
• Local in time: the script is a one-shot gate. Once a spend happens, the old constitution does not persist into the future. The future is governed by whatever new scripts the coins are sent to, but there is no inherent linkage between old rules and new rules.

In the common case, the constitution is minimal: “only someone who can prove possession of a private key may spend”. In pseudo-form it is basically: SigVerify(pk). The spender provides a signature proving they control the private key behind pk, and that they authorized this specific transaction.
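The “one-shot gate” idea can be sketched in Python. Real Kaspa scripts gate spends with signature verification (SigVerify); to keep this sketch stdlib-only, it substitutes a hash-preimage lock, which has the same “prove knowledge of a secret” shape. All names are illustrative, not Kaspa's actual script engine:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.blake2b(data).digest()

# Locking script ("the constitution"): committed to at output-creation time.
# Here it commits to a lock value; a real script would commit to a pubkey.
def make_lock(secret: bytes) -> bytes:
    return h(secret)

# The one-shot gate. It is local in space (it sees only this input's lock
# plus the spender-provided witness) and local in time (it runs once, at
# spend time, and nothing about it persists into the outputs).
def spend_gate(lock: bytes, witness: bytes) -> bool:
    return h(witness) == lock
```

Note what the gate cannot do: it authorizes the spend, but it has no view of the outputs being created, which is exactly the limitation the next sections address.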
---

The one thing that is enforced over time: conservation of value

There is one strong temporal law baked into the base model: conservation of value. Consensus enforces that the total KAS value created by a transaction is less than or equal to the total KAS value it consumes.

This is why “Kaspa the asset” is not just data in a UTXO. It is a native resource with a conservation law enforced by the protocol. So Kaspa already has one temporal invariant “for free”.

---

But what if we want richer rules than “who can spend”?

Now imagine we want more complex logic. Examples:
• Coins can only be sent to a whitelist of addresses.
• Only 5% of the balance can be spent per day.
• This resource must evolve under a fixed policy over time.

This is where the right mental model becomes a state machine. A state machine has a state and a transition function. The transition function must be able to enforce what the next state is allowed to be. In UTXO terms, “writing state” happens by creating the outputs of the spending transaction, so a real transition function must be able to constrain the outputs.

The problem is the locality constraint: in the classic btc-style scripting model, without introspection, the spending script cannot constrain what it is creating. It gates the spend, but it cannot reason about outputs. Without seeing outputs, implementing a genuine state machine is impossible. (Notwithstanding btc’s indirect workarounds via sighash tricks, which can approximate limited introspection in specific patterns.)

---

Introspection: enabling state machines in a local-compute model

This is why transaction introspection opcodes are a foundational step. (This is what KIP-10 introduced, starting with Crescendo.) Once the script engine can read transaction fields, and crucially inspect output scripts, the transition function can finally say: “you may spend this input only if you create outputs that satisfy these constraints”.
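A toy Python sketch of both ideas together: the consensus-level conservation law, and an introspective script that constrains the outputs it creates (using the whitelist example above). The data model and names are illustrative, not Kaspa's actual transaction format:

```python
from dataclasses import dataclass

@dataclass
class Output:
    value: int       # KAS amount (toy units)
    script: bytes    # the "constitution" governing this new output

@dataclass
class Tx:
    input_values: list   # values of the UTXOs being consumed
    outputs: list        # newly created outputs

# Consensus-enforced temporal law: value created <= value consumed.
def conserves_value(tx: Tx) -> bool:
    return sum(o.value for o in tx.outputs) <= sum(tx.input_values)

# An introspective covenant: the spending script can now *see* the outputs
# it is creating, so it can constrain them. Without introspection, this
# function could not exist -- the script would only see its own input.
WHITELIST = {b"script-alice", b"script-bob"}   # illustrative destination scripts

def whitelist_covenant(tx: Tx) -> bool:
    return all(o.script in WHITELIST for o in tx.outputs)

def validate_spend(tx: Tx) -> bool:
    return conserves_value(tx) and whitelist_covenant(tx)
```

In this toy, requiring the created outputs to carry the covenant script themselves is what turns a one-step check into a policy preserved over time.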
Conceptually, that is the birth of what we call a covenant: a spend is no longer pure ownership transfer. Spending becomes conditional on preserving a policy across time. It lets a resource enter a covenant: the owner’s freedom becomes constrained by an on-chain policy that must remain true after the spend, not only at the moment of spending.

Note how this enables persistence without losing locality. The script only enforces a one-step look-ahead, by constraining the next outputs. But if it requires those outputs to carry the same policy forward, it becomes an inductive rule: one-step enforcement is enough to preserve the covenant across arbitrarily many future transitions.

---

Completing the state machine model: primitives and lineage

At this point we can describe covenants in principle, but to make general state machines possible we need two things: better building blocks, and a notion of authority for non-KAS state.

(1) Byte and hash primitives: even if you can see outputs, you still need the low-level tools to express robust constraints. That means byte-string construction and parsing (e.g., OpCat, OpSubstr) and strong hashing with domain separation (e.g., OpBlake2bWithKey). Without these, you can’t reliably build commitments, slice out exact fields, or enforce consistent state encodings that make transition validation composable. (This is what KIP-17 added on TN12.)

(2) Lineage (provenance): “who says this state is real?” Once a covenant represents non-KAS state (a token, an asset, or the compressed state commitment of an off-chain application), the state is no longer self-authenticating the way KAS value is.

A short concrete story:
• I can create a UTXO whose script claims “I am TokenX with supply 1,000,000”.
• Nothing in consensus prevents me from writing that claim into a script and funding it with real KAS.
• So the real question becomes: how do wallets know which instance is the real TokenX state machine?
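The byte and hash primitives from point (1) can be sketched in Python, with hashlib's keyed BLAKE2b standing in for OpBlake2bWithKey, concatenation for OpCat, and slicing for OpSubstr. The encoding and domain tags are illustrative, not KIP-17's actual formats:

```python
import hashlib

# Domain-separated hashing, in the spirit of OpBlake2bWithKey: the key acts
# as a domain tag, so commitments from different schemes can never collide.
def tagged_hash(domain: bytes, data: bytes) -> bytes:
    return hashlib.blake2b(data, key=domain).digest()   # key must be <= 64 bytes

# Byte-string construction (OpCat analog): fixed-width fields make the
# state deterministically parseable with plain slicing (OpSubstr analog).
def encode_state(owner: bytes, counter: int) -> bytes:
    assert len(owner) == 32
    return owner + counter.to_bytes(8, "big")           # 32 + 8 bytes

def decode_counter(state: bytes) -> int:
    return int.from_bytes(state[32:40], "big")          # slice out one field

# A composable state commitment a covenant could require in the next output.
def commit(state: bytes) -> bytes:
    return tagged_hash(b"toy-covenant-v1", state)
```

The fixed-width layout is the point: a script can only validate a transition if every field of the old and new state sits at a known offset.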
This problem only appears once “state” is no longer the native KAS resource, so it helps to separate the KAS case from the non-KAS case:
• If the covenant is “about KAS”, the value already has native, consensus-backed provenance via conservation. You do not need lineage to prove the KAS value was not created from thin air.
• For non-KAS state, there is no conservation law. Without lineage, you cannot prevent “fake instances” of the same-looking scheme.

So for non-KAS covenants, lineage must be part of the design: the instance has to be anchored to a recognized genesis, meaning an agreed initial state and rules for a specific state machine instance, and then continued through valid transitions. KIP-20 addresses this by introducing consensus-tracked covenant IDs for instance identity and lineage.

---

The next layer: ZK

With covenants able to enforce transitions and lineage, we can move beyond “everything must be revealed and executed in-script on-chain”. This is already the direction on TN12 with ZK verification opcodes (KIP-16).

Without ZK, each state transition must be validated on-chain by revealing what the base layer needs to check. In practice, every step tends to carry three costs: revealing the state preimage, revealing the rules preimage, and executing the transition checks in-script.

ZK verification opcodes let us keep only commitments on-chain and prepare the public transition inputs; a proof then attests that there exists a valid hidden witness and execution trace that takes the old commitment to the new commitment under the intended rules. That gives scalability, and sometimes privacy. L1 enforces correctness without re-executing the full computation in-script, and without forcing state and rules to be re-published on every transition.

The bigger consequence is expressiveness: ZK is machinery above covenants that lifts the “on-chain execution” ceiling.
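To make the commitment-transition interface concrete, a toy Python sketch. One caveat up front: a real ZK opcode verifies a succinct proof without re-executing the transition; this stand-in re-executes it, which is precisely the cost a real proof removes. Only the interface shape (commitments on-chain, witness off-chain) mirrors the design described above, and all names are illustrative:

```python
import hashlib

# On-chain, only commitments to state are stored.
def commit(state: bytes) -> bytes:
    return hashlib.blake2b(state, key=b"zk-toy").digest()

# Off-chain, the transition function may be arbitrarily complex
# (loops, large computations); here, a trivial counter bump.
def transition(state: bytes) -> bytes:
    n = int.from_bytes(state, "big")
    return (n + 1).to_bytes(8, "big")

# The verification interface: given old/new commitments, the prover supplies
# a witness (the hidden old state). In this toy, "verification" recomputes
# the transition; a real ZK verifier would instead check a succinct proof
# that such a witness and execution trace exist.
def verify_transition(old_commitment: bytes, new_commitment: bytes,
                      witness: bytes) -> bool:
    if commit(witness) != old_commitment:       # witness opens the old commitment
        return False
    return commit(transition(witness)) == new_commitment
```

Note that the chain never sees the state itself, only the two commitments and the claim that a valid transition connects them.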
The base layer verifies validity, while the full transition function can be arbitrarily complex off-chain, including loops and large computations. In that sense, covenants plus ZK give a path to general-purpose computation anchored and enforced by L1.

---

Outlook: Part 2

Part 2 will go deeper into the ZK layer and the shared-state story:
• How a zk app can be based, meaning L1 sequencing fully determines the transition history.
• How these primitives support canonical bridging of KAS.
• How we further modify the base layer so multiple zk apps or vprogs can synchronously compose without waiting for base-layer messaging roundtrips.

I'm happy to announce Silverscript! (Link in reply)

Silverscript is Kaspa's first high-level smart contract language and compiler. It enables DeFi, vaults, and native asset management directly on Kaspa's L1.

The language syntax is based on CashScript, but adds essential features like loops, arrays, and function calls. It specializes in managing contracts with local state (UTXO model), serving as a complement and infrastructure layer for vProgs (shared state).

Note: Powered by new script engine features recently enabled on Testnet-12. The syntax is experimental and might evolve. Please try it out and give feedback!







👀



@michaelsuttonil described what’s happening in R&D right now as a DAG of development efforts (which reminded me of this great post regarding Conway’s Law). The open TG channel is indeed buzzing. It’s awesome, and also pretty fitting for a team building a parallel PoW DAG - 🧵>

On several recent private and public occasions, @hashdag has mentioned Conway’s Law. This strikingly simple yet brilliant principle is a recent discovery for me, but it rhymes with many thoughts and observations I’ve gathered over the years—some of which I’d like to share here.

Conway’s Law, in its minimal form, states the following:

“Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.” — Melvin E. Conway

In his paper (web.archive.org/web/2019091911…), Conway isn’t explicitly talking about open-source, permissionless systems. My hunch is that such systems amplify the importance of this rule to another level, since communication patterns, system research and design, and social scalability all intertwine in endlessly unpredictable ways.

The most immediate takeaway from this rule for a system like Kaspa is that structure-free open R&D communication is required for creating a structure-free, scalable monetary and financial system. In other words, if R&D is communicated mostly in closed groups and only selectively communicated to outer circles, then the structure of the overall system we create will suffer from similarly wrong communication patterns, which will prevent it from reaching its aspirations.

This philosophy is what brought us to open the Kaspa Core R&D public TG group (t.me/kasparnd), and what drives us to constantly write research posts as soon as thoughts are formed enough to put them into words—not a minute later. This, by the way, isn’t a send-and-forget effort but rather a constant struggle. It’s often hard for many individuals to make the mental effort to write something in public.
During research, it takes significant effort to express ideas as deliverable logical and textual units that can be clearly communicated (it’s also highly beneficial, because it forces a systematic methodology of converging on an idea without hand-waving until it can be articulated with clarity, and only then moving on).

——

From a somewhat personal perspective on intra-system design communication and its effects on actual system mechanics, I find my personal strength and unique value to Kaspa’s technical efforts to be my dual role as a computer-science researcher and a top-tier hands-on engineer. I often reflect that this ability to bridge the communication gap between theoreticians and engineers with zero-latency communication (...) is key to the success of a system in which algorithmically security-proven and incentive-sound components must be implemented with rigorous engineering precision. I can name many examples, but that is probably out of scope here. I can only hope that more and more contributors join this communication “bridge” from either side, making it scalable and robust.

The following paragraph from Conway’s paper hints at this phenomenon:

“It seems reasonable to suppose that the knowledge that one will have to carry out one's own recommendations or that this task will fall to others, probably affects some design choices which the individual designer is called upon to make.”

——

In a non-trivial conceptual jump, I’d argue that the “marketing” of a permissionless project like Kaspa is part of its R&D communication patterns—and, by Conway’s Law, is therefore critical to the project’s very being. Because systems inevitably mirror their communication structures, a network that aspires to decentralization must be marketed through decentralized, open channels as well.
It isn’t enough to say, “the miners are decentralized, so who cares about centralization in communications.” Likewise, if the product we’re building is intended as a source-of-truth settlement layer, its outward messaging must be equally candid. A degree of purity and truthfulness is required, even when it means presenting raw truths rather than sugary, sticky narratives that might resonate with a short-term audience but depart from the ethos that conceived this system.

Nuance in discussion and clarity in conversation are essential if we hope to reach the builder class of crypto and computer-science developers and researchers. They are equally essential so that current originators and builders feel that their efforts are accurately represented to the world.

A sidenote on truthfulness: a scholarly Hebrew adage teaches, “האמת תּוֹרֶה דרכה”—“Truth guides its own path.” When I reach a confusing crossroads, I have learned that choosing the truthful course is best; the reality it mirrors will ultimately align in the most positive and constructive way.

In a similar spirit, in his recent excellent X article (x.com/hashdag/status…), Yonatan skillfully walks through the challenges of designing an incentive-aligned L2 echosphere when it is channeled through a centralized communication hub, and how that can reflect back and skew the actual system design. I can only attest that the challenge of designing a flat, defragmented L2 space has been on our minds day and night for the past several months, and that I find this observation very illuminating and profound for our efforts.

Fwiw, and in case it’s not obvious from the subtext of this post you are reading, I wholeheartedly agree and resonate with @hashdag’s article and message, and I'm extremely excited for the uplifting of Kaspa’s mission to the next level exactly at this moment. Yes, imo, pre-Crescendo communication is THE right time for this uplift.
When you have a true achievement under your belt, truthful and nuanced communication of it lets it shine in the way it deserves.

——

I’ll use this chance to share another application of Conway’s Law to proper R&D communication: with events like Crescendo and other future technological achievements, let’s celebrate the bits and bytes, the ideas and laws embedded in code, the responsiveness of the P2P network at sub-RTT block times, the sophisticated parallelism and concurrency methods that process ten blocks and thousands of transactions per second with ease, etc. Let’s not focus on faces (myself included) and names. Let’s have //stuff and //someones on the fancy AI graphics; let them illustrate ideas.

Kaspa cannot socially scale if attention stays on figures who may (or may not) have reached their glass ceiling. In that sense, pseudonyms such as @coderofstuff_ naturally carry on this message of correct focus.

