ASPECT

475 posts

@CryptoAspect

#blockchain technology integrator; contributor to Bitcoin ecosystem; co-creator of #ScalingBitcoin; contributor to #Kaspa

Joined December 2009
167 Following · 4.6K Followers
ASPECT@CryptoAspect·
@HocusLocusT @cryptosione @terah4d5 Sparkle has been folded into vProgs. We shared a lot of our R&D with the team, and the devs who worked on it are actively contributing to Kaspa and to some of the elements that are helping to facilitate vProgs.
Terah@terah4d5·
compiled the vprogs/zk/native assets answers from the Q&A into one place 🧵 this is where we pretend things are simple, just for a moment.

what's the upcoming HF actually about? covenant-centric HF with native assets. vprogs foundations being laid, but that's not the main event yet. mainnet May 5.

upcoming milestones:
* TN12 reset in the coming hours afaik
* Sequencer commitment - KIP ETA Feb 12
* SilverScript (Ori Newman & MS) - practical gamechanger for writing progs on kas. will let them do the explaining, but hint: high-level language, very friendly to noobs or LLMs (not me, i'm still debugging my own cognition). dropping today/tomorrow

can native assets and vprogs assets interact atomically? native assets: yes, atomic transfers. this covers anything running on regular inline covenants (zk or non-zk) plus KRC20. vprogs assets: not transparent to L1 - non-atomic async transfers only. different state availability assumptions, different execution model.

vprogs using native KAS or wrapped? any non-inline covenant must use wrapped KAS through the canonical bridge, not native L1 kas. inline = wallet generates an immediate proof for the state transition, no data<>state decoupling.

what is the computational DAG? the CDAG is the data structure recording all read/write declarations - like Solana/Sui but in full form.

was asked about sparkle vs vprogs - Sparkle is Anton's architecture for combining computation DAGs and ZKs. vProgs allows sovereign programs to atomically sync without compromising sovereignty - no arbitrary dependencies from foreign programs. both combine CD&zk but that's not the interesting contribution of either.

important framing: the highlight of the vprogs design is the dependency regulation mechanism - each vprog defines its own throughput rules (enforced through the CDAG + gas commitments). a vprog doesn't get read dependencies from another vprog without being gas-paid for that resource consumption. sovereignty by design.

who should care about building vprogs? was asked "why should Solana/Ethereum teams build as vProg?" regular devs: probably won't care. system designers: should care. current options are (a) write smart contracts on L1 like Solana/early ETH, (b) go L2 rollup-centric like current ETH, or (c) combine the best of both worlds via vprogs - cohesive uniform usage of the L1 sequencer and state handling, but state and computation outside L1. but here's what i think matters: vprogs sovereignty appeals to big players considering appchains, or teams building AI agents on chain with huge state. composable yet siloed, priced only by externality on other programs.

will vprogs/dagknight help the security budget? probably not directly. DK could help push a bullish narrative, but the current market is product oriented, not blank infra.

privacy features from zk? privacy programs? technically possible post-HF via groth16 etc. but reading between the lines, this vertical stays outside Core's focus - not on kas' north star. make of that what you will.

proving fees - need special prover services? my read: initial apps will be inline (wallet proves immediately). even the based covenants Hans and Maxim are building should run on commodity HW. no specialized prover infrastructure needed initially for most apps. your macbook can be a prover, for now.

node requirements changing? no.

MEV kickback auctions? too early. ping after a vital ecosystem exists.

PoW grinding for STARKs? hashdag mentioned it was inspired by early ETH, where addresses with leading zeros got cheaper gas, so people would grind PoW for them. apparently there was even a market for buying/selling such addresses - arbitrage on vanity, peak 2016 energy.

that's the aggregation. all good ideas are mine, all nuance errors are hashdag's. Would you like me to draft a community announcement that sounds like it was written by an automated script with a headache? terah out 🫡 𐤊
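The CDAG described above (transactions declaring read/write sets, in the Solana/Sui style) can be sketched minimally. Everything here is an illustrative assumption about the general technique, not the Kaspa/vProgs implementation:

```python
# Hypothetical sketch: build dependency edges from declared read/write sets.
# Two transactions must be sequenced iff one writes state the other reads or
# writes; otherwise they can execute in parallel.

def build_cdag(txs):
    """txs: list of (tx_id, reads, writes) in consensus order.
    Returns edges (a, b) meaning b must be sequenced after a."""
    edges = set()
    for i, (a_id, a_reads, a_writes) in enumerate(txs):
        for b_id, b_reads, b_writes in txs[i + 1:]:
            # write-read, write-write, or read-write overlap => conflict
            if (a_writes & (b_reads | b_writes)) or (b_writes & a_reads):
                edges.add((a_id, b_id))
    return edges
```

Transactions touching disjoint state produce no edge, which is exactly the parallelism the declaration model buys.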
ASPECT@CryptoAspect·
When Kaspa switched from 1 BPS to 10 BPS, the DAA score became 10x faster. How "temporarily vulnerable" would transactions using this approach be if Kaspa raises its BPS rate again? There is a "median block time" metric, although I am not sure the script engine can access it. If that can be facilitated, it could potentially be a much more stable temporal reference… 🤷
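A rough way to see the BPS sensitivity raised above: a time window expressed in DAA score units implicitly assumes a block rate, so the same window shrinks in wall time when BPS rises. A minimal sketch, with the function name and the one-score-unit-per-block assumption being mine:

```python
# Illustrative assumption: DAA score advances roughly one unit per block,
# so a window of N score units covers N / BPS seconds of wall time.

def daa_window_seconds(daa_units, bps):
    """Approximate wall-clock duration covered by daa_units at a given
    blocks-per-second rate."""
    return daa_units / bps

# A script encoding "wait 86400 DAA units" means one day at 1 BPS...
one_day_at_1bps = daa_window_seconds(86_400, bps=1)
# ...but only ~2.4 hours if the network moves to 10 BPS.
same_units_at_10bps = daa_window_seconds(86_400, bps=10)
```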
saefstroem@asaefstroem·
Another day, another opcode. OpTxInputBlockDaaScore, now renamed to OpTxInputDaaScore, is something that I am trying to enable on #Kaspa. This particular opcode is interesting because it introduces a completely new dimension to $KAS transactions, namely the temporal dimension. 🧵
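As a rough mental model (my illustration, not the actual opcode semantics), exposing an input's DAA score lets a script enforce a relative age before a UTXO can be spent:

```python
# Hypothetical model of the relative-age check such an opcode could enable:
# compare the DAA score at which the spent input was accepted against the
# current DAA score. Names are illustrative, not Kaspa script.

def input_is_mature(input_daa_score, current_daa_score, min_age_units):
    """True once the spent input has aged at least min_age_units of DAA score."""
    return current_daa_score - input_daa_score >= min_age_units
```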
Michael Sutton@michaelsuttonil·
A few words on the kaspanet/vprogs code repository. Consider this as two efforts digging a mountain from both directions and seeking to meet. Hans @hus_qy (and others will join) is building a based computation runtime, i.e., a runtime that is fed by a sequencer with transactions and executes them over a vm (or several vms) with maximum concurrency. The covenants++ taskforce is digging from the L1 side, building the covenant infrastructure and zk opcodes that will allow this runtime to settle and bridge from/to L1. We aim to reach each other first through a narrow tunnel, and only then widen the pathway together. That’s the covenants++ goal: create based zk covenants on one side and a minimal standalone app-level runtime on the other, then have them connect and link with production-ready quality. Then we widen the tunnel together, integrating increased composability while still targeting maximum sovereignty (with the vprogs yellowpaper guiding the way). This pathway can also be gradual (see the gist I shared a few weeks ago). The point is building the basic elements in production-ready quality, having them support massive based computational apps, and then continuing the journey
ASPECT reposted
Yuval Avrahami@yuvalavra·
We hacked the AWS JavaScript SDK, a core library powering the entire @AWScloud ecosystem - including the AWS Console itself 🤯 How did we do it? Just two missing characters was all it took. This is the story of #CodeBreach 🧵👇
ASPECT@CryptoAspect·
Kaspa testnet-12 went up within a few hours. Impressive!
ASPECT@CryptoAspect·
@p4bpj Don’t think this has to be any news as this is just a proposal.
ASPECT@CryptoAspect·
There are multiple factors to consider for something like this. It is bleeding-edge tech for UTXO-driven systems. It's a plug-and-play engine, so its integration is relatively simple. The platform would also benefit from other features, such as the ability to reference pre-published programs, which hasn't been researched. There may be potential showstoppers related to processing performance costs: Kaspa is the only high-TPS system out there, and an extended UTXO execution environment may become prohibitive in terms of performance; …and imposed execution restrictions may negate various benefits, such as the ability to create arbitrary ZK verifiers.

In the vProgs JAM paper I presented a concept where a RISC-V VM (OP_RISCV) could be integrated into the system to solve what I consider a very serious problem: each ZK verifier version (say RISC-0 v3) would be embedded into RK consensus and would require a hard fork if the verifier needed to be upgraded (to v4). (Unfortunately, for very complex technical reasons, Kaspa does not support soft forks.) OP_SIMPLICITY solves that. Conceptually there could be an OP_RISCV as well, but Simplicity is very state-conscious (minimalistic) and is designed for UTXO embedding and use in crypto. Simplicity also solves various opcode execution cost estimation challenges (aka GAS/CU costs), as you can determine execution costs simply by looking at the compiled bytecode.

I'll just add that I believe most native-asset-related functionality should be done on the vProg layer, since vProgs are Turing-complete. However, if covenants are to be used as vProg or L2-UTXO interfaces (there are other solutions as well), then something like Simplicity would bring much more long-term benefit because it is much more capable than the scripting engine.
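The bytecode cost point can be illustrated with a toy static estimator: if every opcode has a fixed cost and the program has no unbounded loops, total cost is a simple fold over the compiled bytecode. The opcode names and costs below are invented for the example, not a real Simplicity or Kaspa cost table:

```python
# Toy static cost estimation: with fixed per-opcode costs and no unbounded
# looping, an upper bound on execution cost falls out of the bytecode alone,
# without ever running the program. Costs here are made up.

OP_COSTS = {"OP_ADD": 1, "OP_HASH": 30, "OP_VERIFY_SIG": 100}

def static_cost(bytecode):
    """Upper-bound execution cost computed by inspection, not execution."""
    return sum(OP_COSTS[op] for op in bytecode)
```

This is the property that makes GAS/CU pricing tractable: the fee can be fixed at validation time from the program text alone.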
Ricardo Sanchez@HocusLocusT·
@p4bpj @CryptoAspect Though it’s very cool, considering the timeframe and the amount of work needed to implement and verify such a change, I have hesitations. On the other hand, if we’re trying to make Kaspa THE chain by making “smart decisions” upfront, it definitely needs to be weighed.
ASPECT@CryptoAspect·
Public nodes have two meanings in Kaspa: public p2p nodes and public user RPC nodes. For example, I want to run a public node to support the network (p2p), but I may not necessarily want to serve client RPC, because that can, depending on the number of users and subscriptions, siphon large amounts of bandwidth while simultaneously degrading QoS for any user connected to my node (i.e. my Netflix can suffer :) ).

This is why PNN was set up and is built into all Rusty Kaspa SDKs. It load-balances multiple contributor-run nodes (with sufficient underlying hardware and bandwidth), providing a sufficient horizontal surface of nodes one can connect to, while not allowing anyone to see all nodes in play, which reduces the impact of DDoS attacks. (If using PNN, one should never connect to any specific node and should always ask PNN for a node address each time a connection is made.)

Your own node is fine as well, but depending on the underlying hardware and bandwidth constraints, it can crumble above 1k client connections. If an application is popular, it can experience a few thousand client connections, which can tax the node quite heavily. Anyone can set up their own PNN cluster, but I encourage people to use PNN and contribute their nodes to it. This pattern helps everyone.
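The "always ask PNN, never pin a node" pattern looks roughly like this; the resolver is mocked with a static pool here, since the real PNN is queried over the network through the Rusty Kaspa SDKs, and all names are illustrative:

```python
import random

# Sketch of the connection pattern described above: request a fresh node
# address on every connection attempt instead of caching one. The pool is a
# stand-in for a live PNN query.

POOL = ["node-a.example:16110", "node-b.example:16110", "node-c.example:16110"]

def resolve_node(pool=POOL):
    """Return a node address for this connection only; never reuse it."""
    return random.choice(pool)

def connect_with_retry(attempts=3):
    """Re-resolve on every attempt so retries rotate across the pool."""
    return [resolve_node() for _ in range(attempts)]
```

Pinning a single address defeats both the load balancing and the DDoS-surface reduction, which is why re-resolving per connection matters.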
supertypo@supertypo_kas·
@cryptomatt1983 For public RPC nodes there is the Kaspa PNN + @CryptoAspect resolver architecture, though self-hosting is cheap and recommended. I'd also like to mention that I have a REST proxy up on proxy.kaspa.ws (testing only), the source of which can be found on my GitHub.
Crypto Matt ꓘ@cryptomatt1983·
Automatic way to retrieve a list of all Kaspa public nodes? Is there an API? Should there be? A list of domains or IPs where they are hosted. Wouldn't this make it easier for builders to interact with nodes? Setting up a node is ideal, but should it be mandatory if someone wants to build a service on Kaspa? $KAS
ASPECT@CryptoAspect·
Heya. In some areas you are correct, in some …not quite. I can't do an in-depth response to this post as it would require an even larger post :) So I'll just focus on what I consider to be the key elements.

1) The vProgs paper is really just a set of guidelines. It has been reviewed in detail by me and the developers working with me. It has a number of issues, but nothing a team can't solve within a day in front of a whiteboard. The key pillar (the CDAG structure) is solid (but its implementation is very complex).

2) The way vProgs are designed, they are basically the same as on-chain SCs. If properly done, the UX would be indistinguishable from traditional high-performance smart-contract-capable chains (but vProgs bring numerous superior capabilities that, TBH, wipe the floor with all existing tech).

3) vProgs are absolutely possible and feasible. We have been doing R&D in this domain for 1.5 years now, and I have absolutely no red flags.

4) Adoption-wise, vProgs are just like any other SC system: all they need is a well-harmonized SDK with a bunch of primitives. I am trying to steer the design toward zkVM framework neutrality, which increases the project complexity, but for good reasons.

5) The economics of the different system layers are actually well understood by some of the people involved. There is, however, no published material on the subject. Typically, this is addressed once you have a PoC.

6) There is a "one size fits all" solution, and that's vProgs. It is important to understand that vProgs is a ZK composability layer that turns Kaspa from a global sequencer into a global ZK sequencer, without imposing any limitations or obstructing anything.

7) I have recently published a document called vProgs JAM SESSIONS on the TG R&D channel ( aspectron.org/docs/vprogs-ja… and aspectron.org/docs/EPA.pdf ). This document is meant as a "soft bridge" between the R&D performed by my team (Sparkle) and the Kaspa vProg research team. While the document identifies various subjects and poses suggestions, most of them are well understood and, in many cases, have been prototyped on our end ✨.

So none of the questions you are raising (specifically) in relation to vProgs are my concern. The REAL problems are team communication, project management, and the lack of a holistic software architecture (which I am attempting to solve). This is further aggravated by the lack of properly structured funding and by ad-hoc attempts by different developers to tackle the challenge. There is currently "no one in charge". There are a number of developers "developing", but no one has given guidance on what they should be developing. It is literally a group of construction workers making a building without any architectural plans. Build a skyscraper, they said, …and everyone is trying to make one.

The problem is that the complexity of the project and the layers involved is extremely multidisciplinary; this is not something a single developer can construct (nor, as a developer, would I be able to). In my recent comms with @michaelsuttonil I've stated that the complexity of this is in fact comparable to KIP-1. It's a very large and complex project. However, if properly executed, with concise and _coordinated_ efforts, an MVP can be achieved within 6-8 months. (Note "coordinated" being bold, italics, and underlined.)

I am currently trying to steer @hashdag toward a proper project execution structure based on a clearly defined architecture. I have huge respect for @michaelsuttonil, @hus_qy, @FreshAir08, @Max143672, and everyone else working on this, so I can't just "barge in" and start telling everyone what to do. I can guide and coordinate everyone, but only if they align behind the effort and adopt the architecture I am proposing.

So, at this point, I have helped the team identify key stress points and problems related to ZK (which took us months to understand), and have given the team a tentative architectural blueprint that is meant to envelop the vProgs Yellow Paper design. I have suggested to @hashdag that we schedule a few weeks of in-person R&D meetings in front of a whiteboard to solidify the architecture, understand everyone's proficiencies, understand what code and subsystems already exist and can be reused …and execute.
Shai ❤️ Deshe 💜 Wyborski 💙
Why I don't think vProgs are enough

I think vProgs are a great idea. From what I understand, they are a very neat design. However, many in the $kas community seem to believe that:
1. The vProgs promise is enough to carry their development all the way to adoption
2. vProgs make all other programmability solutions obsolete
3. This discussion has something to do with independent tokens
In this post, I hope to explain why all three perceptions are wrong.

vProgs are much further than you think
a. vProgs are, at this point, an incomplete yellow paper that has not been peer reviewed or audited by anyone.
b. Even after completing the yellow paper, there are many steps to deployment. Merely implementing a proof-of-concept for internal testing is a matter of months, let alone a production-grade implementation that is ready for deployment. We are talking about years of testing, prototyping, adjusting, auditing, reviewing, and so on.
c. Even after deployment, it would take years to build adoption. vProgs are very different from how people do SC today. Mobilizing builders to this new technology will be a huge struggle. No effort is being made toward making vProgs more approachable or understandable. Quite the opposite.
d. It is not guaranteed that vProgs are even possible. I believe they are. But I also know that home-brewed cryptography is always speculative. The problem of combining cryptographic primitives is extremely difficult; there is a decent portion of the literature dedicated to that exact problem. The keyword is composability. When you combine cryptographic schemes (such as SNARK schemes) to build a new scheme, the security of the underlying schemes doesn't automatically carry over. There are many seemingly innocuous combinations of secure schemes that turned out to be broken, and not just synthetic textbook examples; the ETH rollback is one such example. The only responsible way to proceed is through thorough analysis, peer review, and auditing. And these might uncover a subtle yet fatal flaw that punches a hole through the entire design. I'm not saying this will happen, just that it is reckless to behave as though we don't need contingencies.

vProgs are more limited than you think
a. vProgs are not on-chain SC. Just like rollups, vProgs require an external entity to maintain the state and compute the proofs. Sure, using on-chain sequencing removes trust assumptions and makes things easier, but the need to fund external, non-L1 operators still exists. It doesn't just go away.
b. The economics of vProgs are not understood. Sustainably funding provers and archives is complicated, and I suspect that vProgs actually make it more complicated. I agree that the "parasiticity" of (non-based) rollups is bad, but it did provide economic incentives for people to operate the L2. Undermining this requires an alternative (and I'm not sure alternatives could even exist in the fantasy land of "no native token", but more on that later). One could argue that with vProgs it is much more difficult to fund a permissionless L2 (or L1.5, whatever), making them even less suitable for large-scale projects. This might be a problem with a solution, but again, there are currently more questions than answers.
c. vProgs are not EVM compatible. This in itself means that many "migrations" will actually be ground-up overhauls, and some will not be possible at all. EVM compatibility doesn't mesh with vProgs for several reasons, and my guess is that they will never support EVM. Technologically, this makes a lot of sense. Practically, most existing projects will not bother. The improved security/performance just isn't worth the hassle of rewriting your entire codebase, especially when you already have an up-and-running product that cost dozens of millions in production and millions more in audits. Remember: there are good reasons why banks still use COBOL, and 386s still control nuclear missile arrays.
d. Therefore, vProgs do not make other programmability layer designs obsolete. They offer a new, highly useful, currently missing trade-off. But there are utilities that require other trade-offs. There is no one-size-fits-all solution. (In fact, expecting such a solution is diametrically opposed to the "sequence the world" ethos.)
e. As long as we're on it, is there any update on covenants? Is it going to happen?

This has nothing to do with utility tokens
a. A utility token is nothing more than a way to raise funds. It couples the funds people invested into the network with their ability to influence things (e.g. by staking). It has nothing to do with the design itself. vProgs-based projects could just as easily create their own coin. I mean, the entire point of programmability is being able to do anything.
b. A "native token" is just a technicality, something that allows us to pretend there is a coin where there isn't one. Why? Because most VMs are designed to assume the gas is paid in a coin that is deployed on the VM. The Igra $ikas coin is just that, a technicality, a way to wrap native Kaspa in a way Reth can work with. Kasplex also has this exact same native coin; they just called it Kas so that it wouldn't be visible. But still, the "kas" you see in the Kasplex explorer is not native Kaspa but a wrapped version. It doesn't matter for anything but aesthetics. vProgs provide a new design that does not need this assumption, so it is removed. This is a matter of convenience that has no economic implications.

Kaspa has a liquidity problem
The funding source of a community project should be the community. When the emission dries up, there should be other ways for the community to raise liquidity. A utility token is a great way to do that. The anti-native-token rhetoric has nothing to do with the vProgs-vs.-rollups discussion. I don't know how these two discussions got so tangled together, but there are clues: if all other liquidity sources are discouraged, KEF remains the sole funder of Kaspa, Kaspa becomes a project owned by the ASIC manufacturers, and the cutting of its ties to the community is complete. Now, who has an interest in doing so, and why? I have no idea, but I have guesses.

Conclusion
a. I don't have a problem with people who like vProgs better than based rollups. But it upsets me that people are led to think they must take a side. The "correct" position, in my opinion, is that both are needed, and both can and should coexist.
b. Anyone who is trying to tangle the vProgs-vs.-rollups discussion with the utility-token-yes-or-no discussion is either trying to mislead you, or was misled themselves.
c. Kaspa is not in a position where it can afford to push away all approaches that don't precisely align with a single person's vision, even if you think this person might be Satoshi.

There's a lot more to say, but I think that's quite enough for today.
ASPECT@CryptoAspect·
@xximpod Sure, I’ll join in.
XXIM Podcast@xximpod·
XXIM would like to organize the Kaspa Programmability Roundtable. A proper virtual sit-down with all the key stakeholders at the same table:
• Kaspa Core researchers @michaelsuttonil @FreshAir08
• Igra Labs (L2) @Igra_Labs @emdin
• Kasplex (L2) @kasplex @khriskang
Topics:
• vProgs vs based rollups
• Where to build today vs tomorrow
• Migration paths, liquidity, composability
• Trade-offs explained by the people actually making them
Just a calm, deep, structured conversation that finally puts every piece of the puzzle in its place for the entire community. We'll go straight through the brand-new official "Kaspa's Programmability Mosaic" article (kaspa.org/kaspas-program…) and answer the questions everyone's asking. Oh, and it's not about winners, it's about total clarity from the builders themselves. This is our proposal and a request. Date/Time & Agenda - TBD
ASPECT@CryptoAspect·
The interesting thing about this design is that, technically, it does not require another "layer" per se. There will be a general-purpose layer, for simplicity and ease of use, but it's important to outline that you can have "many vProgs on a layer" OR each vProg "can be a layer". There is a specific emphasis on state vs proof separation. One can publish a "smart contract" and give others access to it, or develop an in-house standalone application (think Kasia) that uses the vProg interface to communicate across its instances globally while retaining and managing/transferring Kaspa (tokens, or any data) in a provable way without any PKI.

1) Technically, you can supply your own state when invoking vProgs.
2) A vProg can be a completely isolated, separate (even private/hidden) app monitoring L1 and posting proofs of its actions, meaning you can supply your state and you can also execute your own vProg (it doesn't matter where the state or the vProg comes from).
3) vProgs (being ZK programs) themselves have infinite scale, as you can have a HUGE external app proving its actions, where the vProg can also function as an "action proxy" by verifying one or multiple external proofs and taking actions based on these proofs (this part is more akin to traditional SCs capable of ZK verification).

(can == should be able to, as all of this is currently in design and prototyping)

These are very unique and important factors that need to be understood (and actually quite hard to explain, and even harder to implement, because developers are biased toward concrete "tangible" software implementations, whereas this implementation is more akin to a "layer as a decentralized global API/interface/methodology" that anyone can make/use/instantiate). So the "L2 layer" in this case can be "any observer" watching L1, taking actions, and posting proofs of execution to L1, synchronizing itself. This is in part what "program sovereignty" and "sovereignty as a service" refer to. The only other thing L1 offers is basically a protocol (a transaction structure) that tells the external vProg "interpreter" how it should manage state access: which state can be accessed asynchronously (reads) vs which state access needs to be sequenced (writes) …all without impacting the performance and decentralization properties of L1.
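The "any observer watching L1" idea can be sketched as a deterministic fold over the sequenced payloads plus a reproducible state commitment; the accompanying ZK proof is elided, and every name here is an illustrative assumption, not the vProgs protocol:

```python
import hashlib
import json

# Sketch of a sovereign observer: follow L1's transaction sequence, apply
# the payloads addressed to this program, and publish a commitment to the
# resulting state. Any other observer replaying the same sequence derives
# the same commitment, which is what makes the program verifiable without
# L1 holding its state.

def apply_sequence(state, l1_payloads):
    """Deterministically fold sequenced (key, delta) payloads into state."""
    for key, delta in l1_payloads:
        state[key] = state.get(key, 0) + delta
    return state

def state_commitment(state):
    """Order-independent commitment over the resulting state."""
    canonical = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()
```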
Shai ❤️ Deshe 💜 Wyborski 💙
@CryptoAspect convinced me that the term "rollup" is a bit too ambiguous to necessarily fit this argument. So let me rephrase: vProgs are a technology for off-chain processing. Hence, they require an additional layer to store and maintain the state. It is not on-chain SC, which is a good thing. (If you are wondering about the semantics of "rollups". I used the term a bit loosely to describe any system in which on-chain transactions serve as attestations that can validate off-chain states. Anton, justifiably, thinks that this use of the term "rollup" is too lax, as the term is descriptive of aggregation via "rolling up the transactions". vProgs do aggregate processing, but in a different paradigm that doesn't quite fall into dividing instructions into transactions at all. The key is that vProgs still require additional entities other than the L1 to operate, and these entities are not too different from what you would call a "rollup L2".)
Shai ❤️ Deshe 💜 Wyborski 💙@DesheShai

For the love of.... vProgs are a type of rollup! Stop saying "we don't need rollups, we can use vProgs". That's like saying "we don't need cars, we can use Lambos". If you think vProgs are "on-chain contracts," that they entail that you need to process smart contracts to validate the chain, then you missed the entire point of vProgs.

ASPECT@CryptoAspect·
Execution/processing occurs only on L2. L1 knows nothing about state, but it also knows nothing about programs or transactions. In a way, L1 is agnostic to L2 transactions (although protocol conformance, program IDs, and mass might be handled at the L1 layer in a supervisory/delegator capacity; i.e., unlike the full pass-through that takes place for an indexer, there might be some additional logic/sanity checks). In any case, L1 primarily functions as a sequencing carrier. Hence you can't aggregate multiple transactions and have L1 interpret them. So yeah, we should drop any notion of rollups. However (this part is not defined in the vProgs paper yet, but there are known mechanisms for this), ultimately vProgs will be able to move Kaspa among themselves. L2 transactions can facilitate multiple such transfers, which can then be projected back onto the UTXO set. So in that sense, one could describe this as compounding multiple transfers via vProgs …but it's a different animal.
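The "compounding multiple transfers" idea reduces, in the simplest reading, to netting: many internal L2 transfers collapse to a few net balance deltas, and only those need to be projected back onto the UTXO set. A toy sketch under that assumption, with all names invented for the example:

```python
# Toy netting of L2 transfers into the minimal set of balance deltas that
# would need an L1 (UTXO) footprint. Accounts whose transfers cancel out
# net to zero and need no projection at all.

def net_transfers(transfers):
    """transfers: list of (sender, receiver, amount). Returns net deltas,
    dropping accounts that net to zero."""
    deltas = {}
    for sender, receiver, amount in transfers:
        deltas[sender] = deltas.get(sender, 0) - amount
        deltas[receiver] = deltas.get(receiver, 0) + amount
    return {who: d for who, d in deltas.items() if d != 0}
```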
Shai ❤️ Deshe 💜 Wyborski 💙
Ok, so adhering to your terminology, you agree vProgs encode arbitrary circuit execution. (I don't really get the L3 argument because why would the vProg even care how the state was created?). It is an L2 in the sense that a separate layer is needed to store and maintain the state. The discussion about whether an arbitrary circuit is a "rollup" feels semantic to me. Would you find it more reasonable if I drop "rollups" and stick to e.g. "off chain processing"? That's the key property here anyway.
Shai ❤️ Deshe 💜 Wyborski 💙
For the love of.... vProgs are a type of rollup! Stop saying "we don't need rollups, we can use vProgs". That's like saying "we don't need cars, we can use Lambos". If you think vProgs are "on-chain contracts," that they entail that you need to process smart contracts to validate the chain, then you missed the entire point of vProgs.
ASPECT@CryptoAspect·
That’s not correct. vProgs are original, and so is Sparkle L2. They achieve the same goals (ZK program composability) but in different ways at the core level. Sparkle L2 data processing might not be applicable or match the design goals of vProgs. A term like “technology transfer” should be used with caution. I specifically said “lessons learned”, “research”, and “solutions to various challenges”, so at best it is “technology sharing”. Unlike vProgs, Sparkle L2 does not have the same program sovereignty considerations, and its CDAG approach is completely different. The Sparkle design uses a pure ZK-recursion-driven CDAG (it uses ZK composability to maintain and compress the CDAG), whereas vProgs use consensus-style processing where CDAG-related commitments become integral to consensus. Sparkle L2 would require only minor modifications to consensus, just to address a few vulnerabilities (but would require compute for CDAG maintenance), whereas vProgs rely on heavier modifications of L1 (and less compute). So, like I said, they are similar and achieve similar goals, but are inherently different. The main areas where vProgs can benefit from Sparkle L2 are the protocol and the ZK program execution and composability approaches.
Ross 𐤊@crono_walker·
@CryptoAspect @picaye @DesheShai @ReLeomerda I understand, but if vProgs is completed with your original idea and the transfer of Sparkle's technology, it will no longer be anything other than Sparkle. It's hard to accept that the spotlight is on someone who walked the same path later. This is my opinion. Embarrassing.
TheSheepCat@ReLeomerda·
@CryptoAspect @DesheShai interesting approach. that would clearly need to be configurable, depending on the vprog scope?
ASPECT@CryptoAspect·
In essence, the vProgs design is very similar to the Sparkle L2 design. Not the same, but very similar. I’ve been gradually transferring our Sparkle L2 “lessons learned” to the vProgs researchers on the TG R&D channel. There are still a number of subjects and designs to compare and brainstorm on (in various areas we are far ahead with our in-house efforts, and in some areas we have achieved similar goals using different approaches). Given the similar goals, I’ve decided that, in the interest of ecosystem harmonization, the best strategic approach is to consolidate everything under the vProgs umbrella in an effort to help accelerate vProgs design and development. I am currently in talks with @hashdag and am actively helping to address various challenges that we have already identified and solved. So to that extent, one can say that vProgs will get a significant boost thanks to the efforts made on the Sparkle L2 project.
ASPECT@CryptoAspect·
These designs are pending, as this subject is not currently defined in the Yellow Paper. Technically, vProgs should be able to manage resource/account-associated Kaspa deposits, but this requires additional design work, and I don’t know what the research team will settle on. I personally think vProgs should be able to manage resource/account deposits interchangeably with UTXOs, as well as asynchronously interop with existing UTXOs.
La_Gauffre@Crypto_Ding0·
@CryptoAspect @DesheShai @ReLeomerda I understand vProgs can communicate and share the same state. But can vProgs read L1 data (like an address’s UTXOs) and trigger native KAS transfers between L1 addresses? Sorry if it’s a naive question just trying to understand the exact boundary between L1 and vProg capabilities
TheSheepCat@ReLeomerda·
@CryptoAspect @DesheShai By "under the sink" I mean on the external node, which should also be the node that allows RPC connections and external DAPP integrations.
ASPECT@CryptoAspect·
I believe I originally coined the term “L-1.5” in an effort to describe a hybrid system where the separation of responsibilities is such that parts of the system exist in L1 and parts exist in L2 (or rather in a closely L1-mated secondary network). If you simply do everything in L2, the layer becomes exposed to a set of very complex vulnerabilities. Partially merging the processing logic gives the best of both worlds: it keeps L1 “thin” while allowing it to safely offload unlimited scalable decentralized compute into a separate network/subsystem. Hence L-1.5 :)
ASPECT@CryptoAspect·
@JackKHash @DesheShai @ReLeomerda also, technically, there is no need to do any UTXO associated processing. a CDAG driven by L1 consensus can be completely independent from the UTXO set as it is driven by L1 transactions and thus stays in sync with L1 consensus. …but that’s a whole different subject.
Jack@JackKHash·
As of my humble understanding: Rollup state roots become part of L1 state, in contrast to vProg state commitments which never become part of L1 state; they exist only as utxo anchors and disappear once the next commitment replaces them. In both cases many internal off-chain operations can be rolled into one state root or one state commitment, but the “up” — updating L1 — is only true for rollups. vProgs never update L1’s state root; they only post their state commitment and its ZK proof to L1.
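One way to picture the distinction drawn here (my reading of the post, not protocol code): a rollup's state roots accumulate as part of L1 state, while a vProg-style anchor is replaced by each new commitment, so only the latest one is live:

```python
# Toy model of the contrast above. Class names are invented for the example.

class RollupLedger:
    """Rollup-style: every posted state root becomes part of L1 state."""
    def __init__(self):
        self.state_roots = []        # grows with every posting

    def post(self, root):
        self.state_roots.append(root)

class VprogAnchor:
    """vProg-style: only the most recent commitment exists as an anchor."""
    def __init__(self):
        self.commitment = None       # single live anchor

    def post(self, commitment):
        self.commitment = commitment  # previous anchor disappears
```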
ASPECT@CryptoAspect·
@JackKHash @DesheShai @ReLeomerda Sorry, we are specifically talking about based-zk-rollups (not traditional rollups). which are also not applicable here.