pp

67 posts

pp
@pptheanon

programmable sovereignty

off-chain · Joined August 2018
254 Following · 550 Followers
pp
pp@pptheanon·
@asaefstroem congratulations brother, and great to have you back on core zk development. kip 16 was a masterpiece, can't wait to see what you're going to do next.
0 replies · 0 reposts · 0 likes · 33 views
saefstroem
saefstroem@asaefstroem·
Stroem.finance, the $KAS - $ETH atomic swap protocol, is being acquired by a private investor and will be rebranding to Hashlock Finance. Everything I’ve proven so far (the P2P infrastructure, sub-3-minute settlement, the CCR mechanism) continues under this new brand. If anything, it accelerates things due to the larger resource availability.
7 replies · 9 reposts · 70 likes · 2.1K views
Mu𐤊esh.𐤊as
Mu𐤊esh.𐤊as@DilSeCrypto1·
Michael Saylor is for Bitcoin. Tom Lee is for Ethereum. Who is for Kaspa?
Mu𐤊esh.𐤊as tweet media
26 replies · 4 reposts · 58 likes · 2.8K views
pp
pp@pptheanon·
@crono_walker this is brilliant and a great use of cov utxos.
0 replies · 0 reposts · 1 like · 43 views
Ross 𐤊
Ross 𐤊@crono_walker·
A UTXO Physical Vote Protocol on Kaspa

The motivation is to make the existing election system tamper-proof using blockchain, not to reinvent it. A limited implementation is possible with Covenant++, but waiting for vProgs is the prudent approach. github.com/RossKU/KAST/bl…
4 replies · 16 reposts · 48 likes · 4.2K views
Michael Sutton
Michael Sutton@michaelsuttonil·
Covenants++ mini-update / teaser: We’ll be restarting TN12 probably tomorrow and wow, the amount of content and significant developments coming with it is staggering. I’ll do my best to write about it at length and in simple language in the coming days.

keywords: covenant ids, seqcommit & based covenants, zk precompiles, silverscript
Michael Sutton tweet media
58 replies · 271 reposts · 845 likes · 43.2K views
pp
pp@pptheanon·
@KaspaHub @maxibitcat the only exception would be based zk covs, which are effectively rollup style and introduce separate state. that framework is for heavy computation, not asset accounting. balances should remain native via inline covenants where enforcement happens in the utxo model.
0 replies · 0 reposts · 1 like · 104 views
pp
pp@pptheanon·
@KaspaHub @maxibitcat no, that’s not true. if you’re using native kas, there’s nothing to bridge. asset state lives in the utxo set, and state transitions are enforced via covenants. nothing leaves l1.
2 replies · 1 repost · 9 likes · 634 views
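pp's claim above, that asset state lives in the UTXO set and the L1 itself enforces state transitions via covenants, can be sketched as a toy model. Everything here (the `Utxo` type, the `conserve_and_persist` rule) is invented purely for illustration and is not Kaspa's actual covenant format:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Utxo:
    amount: int
    # Spend rule carried by the output itself; checked by L1 validation.
    covenant: Optional[Callable[["Utxo", "Utxo"], bool]] = None

def apply_transition(spent: Utxo, created: Utxo) -> Utxo:
    """L1 performs the native state transition: the old UTXO is consumed
    and the new one is admitted only if the old UTXO's covenant approves."""
    if spent.covenant is not None and not spent.covenant(spent, created):
        raise ValueError("covenant violated: transition rejected by L1")
    return created

# Example covenant: value must be conserved and the rule must persist
# into the new output (a recursive-covenant-style constraint).
def conserve_and_persist(old: Utxo, new: Utxo) -> bool:
    return new.amount == old.amount and new.covenant is old.covenant

vault = Utxo(amount=100, covenant=conserve_and_persist)
next_state = apply_transition(vault, Utxo(amount=100, covenant=conserve_and_persist))
```

The point of the sketch is that nothing leaves the base layer: there is no external ledger whose state is merely verified; the UTXO set itself is the state, and validation either admits or rejects the transition.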
Tiptree
Tiptree@tiptr_ee·
Everyone’s talking about Kaspa marketing lately, but let’s be real. What we actually need are vProgs. We need to build real apps, things that were simply impossible on legacy blockchains. Products that deliver a seamless experience and are better than anything else on the market today. That’s how we show what internet-speed crypto looks like in the real world, not just on paper.

And honestly, no one wants to deal with a layer 2 when they can just stick to Solana. Switching networks is easier than bridging to another layer that’s barely supported anywhere. If people want to build L2s, fine. Just don’t confuse newcomers. They’re already overwhelmed 😄
11 replies · 8 reposts · 83 likes · 2.6K views
pp
pp@pptheanon·
@coderofstuff_ @michaelsuttonil genuine tears in my eyes. first the vprog repo, now dk nearing the public repo. what a time. what a time.
0 replies · 0 reposts · 4 likes · 272 views
coderofstuff
coderofstuff@coderofstuff_·
Dagknight technical progress

I want to share some technical progress related to the dagknight effort. In a yet-unshared post by @michaelsuttonil, the DK effort is split into several iterations: devnet v0, testnet v1, and mainnet candidate v2. v0 is focused on getting a full end-to-end flow running, even if it only partially implements components of the full protocol. v0 progress will be the focus of this post.

Since the last update, more things have been implemented, such as an efficient k-searching algorithm and gray blocks (to replace representative), among many other technical changes that I will elaborate on elsewhere. The most important change since then is that DK can now be run over a dynamic DAG. Rusty-kaspa has a simulation engine called simpa where you can test dynamic DAG conditions. The current state of DK can now run there.

There are a few more changes that need to be checked in to fully complete v0. This will happen very soon. The dk branch will also be posted into the main repo once it is cleaned up. A small internal devnet will be set up to test v0 in a controlled but dynamic non-simulation environment.
coderofstuff@coderofstuff_

Regarding DAGKnight: the cascade voting isn’t implemented yet, nor is the code here in any way mainnet- or testnet-ready yet, but I do have a vanilla static-DAG-based impl that has core components of the protocol, like hierarchic conflict resolution and incremental coloring, in place. I was working on this on a private repo, but what the heck, I pushed it out in my rusty-kaspa fork if anyone wants to see. @michaelsuttonil I think it’s time to share that long-overdue post soon.

The attached image shows a view of what the DK parent selection looks like from the pov of the next block to be mined (block 64). It correctly selects a parent from the supposed “honest” cluster. The blue line is the VSPC.

21 replies · 143 reposts · 459 likes · 45K views
Kaspa𐤊Fox
Kaspa𐤊Fox@KaspaFox·
Kaspa, vprogs, and the Mathematics of Global Adoption

The history of technological networks shows that large-scale network effects only emerge once the underlying infrastructure can scale without exponential complexity. A common example is Facebook, which took roughly seven years to connect 500 million users. For Kaspa, this comparison is not a promise but a theoretical scaling scenario: a system that can, in principle, handle extremely high transaction throughput over time without collapsing into centralized bottlenecks. While Web2 platforms rely on centralized server architectures, Kaspa follows a different path by building a decentralized infrastructure designed to remain stable under increasing load, without buying performance through added complexity. [1]

A key component of this architecture is the vprogs framework. It benefits from late design by combining insights from across distributed ledger research instead of inheriting historical protocol constraints. vprogs is designed as a highly parallel execution engine where workloads are processed as independently as possible. By explicitly modeling dependencies and relational state, it largely avoids classic sequential bottlenecks such as global locks and mutexes, a core limitation of many existing Layer-1 systems. [2]

This is not merely a theoretical construct. The vprogs repository has recently been open-sourced and is under active development. Early limitations are being addressed incrementally. With recent merges introducing state pruning and eviction directly into the vprogs framework, the engine gains essential self-maintenance capabilities. These mechanisms are critical for sustainable scaling, as they prevent unbounded state growth from translating into increasing hardware requirements for nodes and help preserve decentralization. [3]

This progress builds on years of foundational work by the Kaspa core team. Milestones such as DAGKnight, a robust asynchronous consensus mechanism, and Crescendo, which significantly increases block frequency, address the same core challenge: global scale must not rely on serial execution or implicit centralization. vprogs is not a standalone feature but a direct continuation of this design philosophy, shifting complexity away from global consensus into clearly defined, locally resolvable dependencies.

In this context, Metcalfe’s Law becomes relevant for a Layer-1 architecture. Network value grows quadratically with the number of participants only if additional interactions do not impose disproportionate systemic costs. Many blockchains violate this condition structurally. As usage grows, conflict rates, state size, and synchronization overhead increase faster than the network’s actual utility, resulting in economically valuable but technically fragile systems.

Kaspa follows a different model. Through DAG-based consensus and an execution engine that parallelizes transactions based on explicit dependencies, additional usage becomes mostly local work rather than a global burden. Growth does not necessarily mean longer queues, but more independent execution paths. This creates the technical foundation that allows Metcalfe’s Law to remain sustainable at scale.

vprogs advances this logic further by reducing execution semantics to minimal relational principles. The system is structurally simplified rather than optimized through complexity. Each removed implicit assumption and avoided global lock increases the likelihood that new participants add network value without compromising decentralization or node accessibility. Kaspa is not building a blockchain that works despite growth, but one whose architecture assumes growth as the normal state.

→ Open-source development: github.com/kaspanet/vprogs
→ Focus: scaling through radical simplicity and parallel execution.

$kas
Kaspa𐤊Fox tweet media
11 replies · 42 reposts · 147 likes · 3.2K views
Michael Sutton
Michael Sutton@michaelsuttonil·
kaspanet/vprogs

It’s about time (still a long way to go, but this is how meaningful journeys begin)
Hans Moog@hus_qy

Have you ever looked at different DLT projects and realized they're all converging on the same ideas, just with different terminology? And have you ever wondered what would happen if you forced all DLT projects to have a baby, where each could only contribute their most powerful ideas?

You'd be surprised by the overlap - how non-unique many projects actually are - and how few genuinely good ideas exist. Often they sound almost trivial once you strip away the noise. The problem is that fundamental breakthroughs get buried under layers of unnecessary complexity - the inevitable result of gradually expanding a protocol's capabilities as research progresses.

With Kaspa, we have the benefit of being late. In fact, we're so late that we arrive at the party when almost all the research has already been done. We can skip the archaeology and just make that perfect baby - making the final breakthrough on our quest for perfection.

Today we are open-sourcing our vprogs framework: github.com/kaspanet/vprogs

A post-Amdahl execution engine that enables inter-block parallelism and linear scaling beyond boundaries traditionally assumed to be possible in the context of DLT execution. By deeply understanding causal actors and domains, we eliminate almost all logic and instead encode behavior in dependencies and relational properties of a generic type framework. This allows us to transparently map hardware resources to workload - achieving linear scalability.

The design principles:
- No fsync / WAL flush boundaries
- No mutexes / locks
- Versioned append-only data with efficient rollbacks
- Maximal parallelism - even inter-block - breaking through Amdahl's law
- No wasted CPU cycles on speculative execution

This repo is still heavily WIP with rough edges (we don't even prune state yet). But the goal of this repository is to create a concrete instantiation of all existing research directions condensed into a singular, maximally performant type framework that gets away with almost no logic. There's still room for improvements (zero-copy deserialization, NUMA affinity, etc.), but we're converging toward a system that can eventually no longer be optimized or simplified. The holy grail of blockchain execution isn't more complex but orders of magnitude less complex than anything that exists today!

I am really looking forward to telling you more about this in the coming weeks (I just ordered a new microphone pre-amp to be able to host regular hangouts where we can discuss and explain how everything works under the hood - let's pray for a fast delivery 😅).

18 replies · 173 reposts · 662 likes · 20.3K views
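The "no mutexes / breaking through Amdahl's law" design principles quoted above rest on standard Amdahl arithmetic: any residual serial fraction (a global lock, a WAL flush) caps achievable speedup no matter how many cores are added. A minimal illustration with generic numbers (these are not vprogs benchmarks):

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Classic Amdahl's-law bound: speedup = 1 / (serial + parallel/workers)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even 5% serial work (e.g. contention on one global lock) caps
# speedup near 20x regardless of core count:
print(round(amdahl_speedup(0.95, 16), 2))    # 9.14
print(round(amdahl_speedup(0.95, 1024), 2))  # 19.64
# With the serial fraction engineered away, scaling is linear:
print(amdahl_speedup(1.0, 1024))             # 1024.0
```

This is why the emphasis falls on eliminating serial sections entirely (locks, flush barriers) rather than merely shortening them: the cap is set by the serial fraction, not by the hardware budget.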
pp
pp@pptheanon·
@manyfest_ wow, so so cool
0 replies · 0 reposts · 3 likes · 88 views
Manyfest
Manyfest@manyfest_·
Let's understand Covenants++. The upcoming hard fork aims to extend Kaspa's programmability - a big step toward vProgs and an enabler for simple L1 contracts/tokens. Via ZK verification, it enables complex programs and base rollups. To understand how this will be achieved, we need to step back and learn what UTXOs are. ZK? UTXOs? Covenants? ++? Let's break it down: utxo-covenants.vercel.app (You'll need a non-mobile screen for that.)
10 replies · 72 reposts · 208 likes · 11.6K views
pp
pp@pptheanon·
@BankQuote Excellently written
1 reply · 0 reposts · 2 likes · 65 views
BaN𐤊ℚuOτE
BaN𐤊ℚuOτE@BankQuote·
Covenant++ via KIP-17 on Testnet 12 represents a definitive pivot for Kaspa in early 2026. This move transitions the network from a high-performance payment rail to a platform for native programmability without compromising the Proof-of-Work or BlockDAG principles that ensure decentralization.

A critical technical distinction exists between Covenant++ and the broader vProgs initiative. KIP-17 specifically expands UTXO-level covenants, implementing bounded and stateless logic that is enforced during the validation phase. In contrast, vProgs describe a more expansive vision for verifiable off-chain execution. Both share a fundamental design constraint: the avoidance of a global mutable state and virtual-machine replay. This architecture prevents the accumulation of technical debt and node-level complexity often seen in account-based systems.

By keeping logic localized to the UTXO, Kaspa ensures that validation remains parallelizable across the GHOSTDAG. This is vital for maintaining the post-Crescendo throughput of ten blocks per second. Unlike the Ethereum Virtual Machine model, which relies on sequential processing and a global state trie, Kaspa utilizes stateless programmability to prevent bottlenecks and state bloat.

Current zero-knowledge research is focused on inline ZK covenants. These allow for the verification of spending conditions without exposing sensitive witness data. While based ZK-rollups, where the Kaspa Layer 1 handles ordering and data availability, are being explored, the immediate focus is on these private authorization primitives.

Furthermore, the implementation of Difficulty Adjustment Algorithm score time locks provides a robust security layer. By anchoring temporal constraints to cumulative network work rather than wall-clock time, the protocol effectively mitigates risks associated with time-dilation and reorganization-amplification attacks.

The common claim that Kaspa lacks smart contracts is a technical misunderstanding. KIP-17 facilitates stateless smart contracts. Most decentralized-finance primitives, including vaults and atomic swaps, function more efficiently within this model than within a heavy, Turing-complete virtual machine. Kaspathon 2026 serves as the primary proving ground for these advancements.
BaN𐤊ℚuOτE tweet media
13 replies · 77 reposts · 248 likes · 11.8K views
pp
pp@pptheanon·
@KrcBot bro, good work.
1 reply · 0 reposts · 2 likes · 89 views
pp
pp@pptheanon·
zk proving is hardware intensive on any chain, so the per prover cost is similar everywhere. if eth zk proving is around 100k in hardware for competitive latency, vprogs provers will be in roughly the same order of magnitude. the difference is not the price of a prover, it is how many provers the system needs. eth uses zk for scaling and execution, so aggregate compute demand is massive. kaspa only uses zk for programmability, so total proving demand is far lower even if each prover is similarly priced. proving is also getting cheaper over time as hardware improves and proving systems get more efficient. vprogs are zk agnostic, so they are not locked into one proving system and can adopt cheaper and faster proving options as they become available.
0 replies · 0 reposts · 2 likes · 69 views
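pp's cost argument above is back-of-envelope arithmetic: if per-prover hardware cost is similar on any chain, aggregate proving spend is driven by how many provers the system needs, not by the unit price. A sketch with entirely hypothetical prover counts (not real network data; the `500` and `20` figures are invented for illustration):

```python
def aggregate_proving_cost(provers_needed: int, cost_per_prover_usd: int) -> int:
    """Total hardware outlay = number of provers the system needs x unit cost."""
    return provers_needed * cost_per_prover_usd

# Same per-prover cost assumed everywhere, per the argument above.
unit_cost = 100_000

# Hypothetical: zk used for all execution and scaling -> many provers.
eth_like = aggregate_proving_cost(provers_needed=500, cost_per_prover_usd=unit_cost)

# Hypothetical: zk used only for programmability -> far fewer provers.
kas_like = aggregate_proving_cost(provers_needed=20, cost_per_prover_usd=unit_cost)

print(eth_like // kas_like)  # 25 - the gap comes from prover count, not unit price
```

The same unit cost yields a large difference in aggregate demand, which is the substance of the claim: the variable is how much of the system's workload routes through zk proving.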
Pavel Emdin
Pavel Emdin@emdin·
Cuts deep. Real-time ETH proving still requires gpu farms. SP1 hypercube needs what? 16 RTX 5090s to hit 12 seconds? That's $30-70k in hardware.
Pavel Emdin tweet media
4 replies · 2 reposts · 32 likes · 1.2K views
Hashdogs
Hashdogs@Hashdogsonkaspa·
@pptheanon @aKaspamaxi @michaelsuttonil And btw I understand that eth is still PoS and sucks compared to Kaspa, which scales at nakamoto consensus, but my worry is that not everybody cares about these things and many just want quick and cheap, regardless of principles and fundamentals
1 reply · 0 reposts · 1 like · 91 views
akaspamaxi
akaspamaxi@aKaspamaxi·
Vitalik suggests PeerDAS + ZK-EVMs may push Ethereum toward a practical resolution of the blockchain trilemma. @MichaelSuttonIL — does this actually change the trilemma, or just relocate the trade-offs?
vitalik.eth@VitalikButerin

Now that ZKEVMs are at alpha stage (production-quality performance; remaining work is safety) and PeerDAS is live on mainnet, it's time to talk more about what this combination means for Ethereum. These are not minor improvements; they are shifting Ethereum into being a fundamentally new and more powerful kind of decentralized network.

To see why, let's look at the two major types of p2p network so far:
BitTorrent (2000): huge total bandwidth, highly decentralized, no consensus
Bitcoin (2009): highly decentralized, consensus, but low bandwidth - because it’s not “distributed” in the sense of work being split up, it’s *replicated*

Now, with Ethereum with PeerDAS (2025) and ZK-EVMs (expect small portions of the network using it in 2026), we get: decentralized, consensus, and high bandwidth. The trilemma has been solved - not on paper, but with live running code, of which one half (data availability sampling) is *on mainnet today*, and the other half (ZK-EVMs) is *production-quality on performance today* - safety is what remains. This was a 10-year journey (see the first commit of my original post on DAS here: github.com/ethereum/resea… , and ZK-EVM attempts started in ~2020), but it's finally here.

Over the next ~4 years, expect to see the full extent of this vision roll out:
* In 2026, large non-ZKEVM-dependent gas limit increases due to BALs and ePBS, and we'll see the first opportunities to run a ZKEVM node
* In 2026-28, gas repricings, changes to state structure, exec payload going into blobs, and other adjustments to make higher gas limits safe
* In 2027-30, large further gas limit increases, as ZKEVM becomes the primary way to validate blocks on the network

A third piece of this is distributed block building. A long-term ideal holy grail is to get to a future where the full block is *never* constituted in one single place. This will not be necessary for a long time, but IMO it is worth striving for us to at least have the capability to do that. Even before that point, we want the meaningful authority in block building to be as distributed as possible. This can be done either in-protocol (eg. maybe we figure out how to expand FOCIL to make it a primary channel for txs), or out-of-protocol with distributed builder marketplaces. This reduces the risk of centralized interference with real-time transaction inclusion, AND it creates a better environment for geographical fairness.

Onward.

1 reply · 1 repost · 1 like · 281 views
pp
pp@pptheanon·
eth chose this path largely due to constraints in its original execution model. the assumption has been that general purpose execution at scale must be pushed offchain, with l1 reduced to verification and settlement. vprogs challenge that assumption by allowing computation offchain while l1 still performs the native state transition. that model is generally viewed as infeasible today, which is why it has not shaped eth’s design. when eth claims to have solved the trilemma it is mostly a reframing where tradeoffs are redistributed rather than eliminated, with execution authority no longer native to the base layer.
2 replies · 0 reposts · 2 likes · 67 views
akaspamaxi
akaspamaxi@aKaspamaxi·
@pptheanon @Hashdogsonkaspa @michaelsuttonil So why did Ethereum choose this path? Is native on-L1 execution at scale architecturally infeasible for ETH, or was off-chain execution a deliberate trade-off? And if it’s a trade-off, why does ETH still claim to have “solved” the trilemma?
1 reply · 0 reposts · 0 likes · 68 views
pp
pp@pptheanon·
no, they’re different. eth pushes ledger execution offchain and then asks l1 to verify a foreign state transition. the base layer is no longer the system advancing state, it just checks another ledger, so the tradeoff is moved, not solved. vprogs push computation offchain, but l1 itself still performs the native state transition by updating utxos. there is no external ledger and no foreign state being verified. execution authority stays on the base layer.
1 reply · 0 reposts · 2 likes · 73 views
Hashdogs
Hashdogs@Hashdogsonkaspa·
@pptheanon @aKaspamaxi @michaelsuttonil Doesn’t vprogs kinda do this as well? They push the execution off onto a different system, and then Kaspa verifies the zk proof at the end? Is this similar to what Vitalik is doing with ethereum, or am I way off?
1 reply · 0 reposts · 1 like · 96 views