Michael Sutton

3.3K posts


@michaelsuttonil

Computer science, graph theory, parallelism, consensus; taking Kaspa to the next level

Joined February 2021
117 Following · 25.3K Followers
Pinned Tweet
Michael Sutton
Michael Sutton@michaelsuttonil·
wrote an outlook for the upcoming “Toccata” hard fork -- native L1 covenants, based zk apps, why the activation window moved, and what the road from feature freeze to mainnet looks like: medium.com/@michaelsuttonil/kaspa-covenants-toccata-hard-fork-outlook-a4d81a40900c
43
249
725
90.2K
Hans Moog
Hans Moog@hus_qy·
I think the questions @hashdag raised are not necessarily only tied to technological challenges but are also more of a "mindset issue". People have to understand that they can do incredible things with these emerging tools that have never been done before. Crypto needs to turn from a collection of bag holders, hunting for yield, into a group of people that actually want to change the world! That should be the message of Kaspa - not digital anarchy but a full blown digital society that interfaces with the real world in as many meaningful ways as possible and that has a shared vision of the future. It's fine to generate wealth along the way but we should also be aware of the amount of agency we have as a collective to just do things.
Hans Moog@hus_qy

You are absolutely right: tools alone are not enough. What you also need is a unifying mission - a stag worth hunting together - and it cannot just be a product, a memecoin or a financial instrument. It has to point beyond finance and beyond economic settlement.

That is why I do not think the real opportunity here is simply to build better tools for coordination. It is to build mission-driven institutions that can coordinate people around questions and goals that existing structures struggle to hold.

I think I have already identified the stag I personally want to hunt: properly exploring the possibility that cognition is a scale-free process in our cosmos. A growing number of scientists and researchers are beginning to take this idea seriously. But the work lives in the cracks between established fields - AI, philosophy, physics, cosmology, biology, economics, and perhaps even some form of spirituality - which makes collaboration unusually hard and costly. The people best positioned to push it forward are usually too busy doing the actual work to also build the social infrastructure needed to coordinate the field in a persistent and goal-directed way.

There are of course people like Dr. Michael Levin trying to connect the dots by organizing conferences and conversations across disciplines. But efforts like these are still relatively isolated, often revolve around key individuals, and lack a persistent address that others can turn to if they want to follow the work, contribute, or collaborate. And if this movement keeps growing, the coordination burden on those individuals will only increase.

That is exactly why I think a distributed institution dedicated to this mission could matter - something like a Cognitive Cosmos Institute: a persistent coordination layer for people working on these questions, and potentially even a new channel for funding outside the established structures of academia. Maybe it fails. Maybe people think the whole idea is nonsense.

But I believe we are at a unique moment in the development of our species. We now have tools that are still radically underutilized when it comes to building mission-driven institutions. I think people generally underestimate how much agency they actually have. Sometimes you can just do things. And it is up to us to build the tools that empower people to do exactly that.

4
39
142
3.2K
Michael Sutton
Michael Sutton@michaelsuttonil·
(1) No need for an entity with memory or anything like that. You need the tx that produced the output and the contract metadata. Nothing more. So technically it will probably be an indexer, but not because it needs to track info through time in order to decode.
(2) Logically:
- every well-defined covenant is obligated to verify that its output state is correct.
- this validation happens within the tx producing the output (in the script execution of at least one input).
- you would agree that, by definition, this validation is transparent, since each node in the network must run it one way or another.
- the only problem is that nodes run it through the script engine (so they kind of run it like “assembly” they don’t understand).
- that’s why the only missing component is compiler-provided metadata that tells you how to run that assembly in a way that tells you what output state was written.
(3) The outputIdx is simply the output index for which validateOutputState was executed. The covenant id appears directly as part of the consensus tx output structure (TransactionOutput::binding).
1
0
4
176
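Point (2) above can be made concrete with a small sketch: given only the transaction that produced a covenant output plus compiler-provided layout metadata, the output state can be decoded with no historical tracking. This is a minimal illustrative sketch; all names here (`StateLayout`, `decode_output_state`, the counter example) are hypothetical, not actual Kaspa APIs.

```python
from dataclasses import dataclass

@dataclass
class StateLayout:
    """Hypothetical compiler-provided metadata: field names and byte widths, in order."""
    fields: list  # [(name, width_bytes)]

def decode_output_state(encoded: bytes, layout: StateLayout) -> dict:
    """Decode the raw state bytes written to a covenant output using the
    layout the compiler emitted; no memory of prior txs is needed."""
    state, offset = {}, 0
    for name, width in layout.fields:
        state[name] = int.from_bytes(encoded[offset:offset + width], "big")
        offset += width
    return state

# Toy example: a counter covenant whose state is (counter: u32, owner_tag: u8)
layout = StateLayout(fields=[("counter", 4), ("owner_tag", 1)])
encoded = (7).to_bytes(4, "big") + (1).to_bytes(1, "big")
print(decode_output_state(encoded, layout))  # {'counter': 7, 'owner_tag': 1}
```

The point of the sketch: the "indexer" is stateless with respect to history — it is just the script engine's result plus a layout table.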
JH
JH@oneforonehaha·
Reframing the question as below: (1) Does this process involve off-chain elements, such as an off-chain indexer/decoder? (2) How do you know the output state? Or is it that you don't know, but just run it again and validate against some other result with an equation? So if that == holds, then the output state is right. Is this essentially what validateOutputState is doing? (3) Does this outputIdx contain a covenant ID?
1
0
3
151
JH
JH@oneforonehaha·
Follow-up on the STATE_DECODER_PROPOSAL section: so we learn about the state change through validateOutputState (as a decoder), and this off-chain decoder/indexer recovers prev_state from the UTXO/sigscript? Within the contract, the new state is produced by the function `roll`, but validateOutputState doesn't get the state directly. So there is no setState(newState)-like mechanism? Instead, the old covenant UTXO is spent -> an encodedState is formed on the stack by running the script -> validateOutputState asserts that outputIdx == current covenant family + new state -> if they don't match, it fails closed (instead of taking in metadata). Then what's the difference between new_state and encodedState? I did see that encodedState = encode_by_state_layout(new_state), but what does that mean... (sorry for asking in such a plain way lol)
Michael Sutton@michaelsuttonil

> “The state is in redeem script, so only when the output UTXO is spent, the original script will be visible.” That’s incorrect. By definition of a covenant, the tx creating the output must verify the state written to it. This means that if you run the script engine over that tx, you inevitably compute that output state along the way. I actually have a proposal directly related to this (see link below), but it’s only for convenience. The statement holds regardless: github.com/michaelsutton/…

2
9
35
2K
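The fail-closed flow JH describes above (spend the old covenant UTXO, build encodedState on the stack, assert the designated output actually carries that state under the same covenant id) can be sketched as follows. The names `encode_by_state_layout` and `validate_output_state` follow the thread's wording, but the code is an illustrative assumption, not the real script-engine implementation; the difference between `new_state` and `encodedState` is simply logical fields vs. their serialized byte layout.

```python
def encode_by_state_layout(new_state: dict, layout: list) -> bytes:
    """Serialize the logical new_state into the compiler-defined byte layout;
    encodedState is just this serialized form of new_state."""
    return b"".join(new_state[name].to_bytes(width, "big") for name, width in layout)

def validate_output_state(tx_outputs, output_idx, covenant_id, encoded_state) -> bool:
    """Fail-closed check: the output at output_idx must bind the same
    covenant id and carry exactly the encoded state computed on the stack."""
    out = tx_outputs[output_idx]
    return out["binding"] == covenant_id and out["state"] == encoded_state

# Toy transition: a counter covenant rolls its state from 7 to 8.
layout = [("counter", 4)]
enc = encode_by_state_layout({"counter": 8}, layout)
outputs = [{"binding": "cov-1", "state": enc}]
assert validate_output_state(outputs, 0, "cov-1", enc)      # valid transition
assert not validate_output_state(outputs, 0, "cov-2", enc)  # wrong covenant id: fail closed
```

Note there is indeed no `setState(newState)` call: the only way to "write" state is to produce an output that passes this assertion.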
Hans Moog
Hans Moog@hus_qy·
I just got back from my Easter trip visiting family, and one of the first things I made time for was listening to Yonatan’s Oxford Union speech. I think he absolutely nailed it: youtube.com/watch?v=VIZGKo…

If there is one thing we can learn from our progress in AI, it is that intelligent behavior emerges when many components explore degrees of freedom and converge, through distributed constraint resolution, into coherent and stable patterns. The modern world has connected an enormous number of people and given us unimaginable freedom, but it still lacks a credible way to enforce shared constraints. The result is a crisis of responsibility at every scale of society: from cyberbullying to former superpowers chasing old glory through war, to leaders blaming the weakest members of society for national decline, or even killing their own citizens to preserve power.

At the same time, our capacity to inflict harm on one another has become increasingly asymmetric. Small actors can now create disproportionately large disruptions, and conflicts no longer remain local. They send shockwaves through the emerging superorganism we call humanity. The age in which we could dominate one another and still produce a stable world is coming to an end. The only serious path forward is collaboration.

People may feel pessimistic about the future, but I think we are approaching an inflection point. Even the old superpowers are beginning to learn this lesson the hard way. The American Dream of "I can make it" is gradually giving way to the realization that individual prosperity depends on collective wellbeing, and that we can build far larger and more meaningful things when we share a common dream.

That is why I am excited about DLTs, and Kaspa in particular. Not because I see them as safe havens for hiding wealth from corrupt governments in some dystopian future, or because I want people to get rich by selling to later participants. But because I believe DLTs can offer a superior foundation for large-scale human coordination: one that is more neutral and reliable than traditional models based on force and mutual deterrence. In that world, wealth is not the goal in itself, but a byproduct of coordinating around shared missions, empowering people to contribute, and aligning incentives toward common outcomes.

The future is not about building better products. It is about building better protocols: systems that allow human beings to coordinate meaningfully at larger scales than ever before. For the first time in human history, we have the tools to build institutions that are not bound to territory, yet can still provide structural coherence without having to fight wars to establish their legitimacy. What we need now is a group of people bold enough to take that mission seriously.
18
163
483
24.1K
Ori Newman
Ori Newman@OriNewman·
Okay, for now I chose the name KCC20, where KCC stands for Kaspa Contract Convention. I also made an md book that goes over the contract and the examples in the code kaspanet.github.io/silverscript/k…
Ori Newman@OriNewman

I wrote a PoC token contract in Silverscript, currently called DOG20 (better name ideas are welcome). It supports token ownership by 3 kinds of entities:
1. Public keys — like any regular Kaspa address.
2. P2SH addresses — which means ownership by a stateless contract, e.g. multisig.
3. Covenant IDs — which means ownership by a stateful contract.
The third option is the interesting one, and it's a demonstration of a broader concept (that might be familiar to whoever watched the webinar by @IzioDev and @michaelsuttonil), called inter-covenant communication (ICC). In this context, it means you can put arbitrary stateful rules around token control. For example:
- “after the first 10 spends, wait a year before spending again”
- zk-rollups can manage their L1 tokens using a stateful bridge.
DOG20 also supports minters that are allowed to mint indefinitely — but that does not mean the supply must be unbounded. Let's say you want to publish a token and allow it to issue only 100 new tokens each month. DOG20 doesn't support this natively, but you can achieve it by making the only minting entity a covenant. That covenant will store `nextIssuance` in its state, will allow spends of 100 tokens only if `time > nextIssuance`, and will set `nextIssuance = nextIssuance + 30 days` each time it's used. I hope to explain this a bit more in the future, but in the meantime, feel free to look at the examples linked in the next comment.

17
60
232
10.6K
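The rate-limited minter Ori describes (the only minting entity is a covenant whose state holds `nextIssuance`; each use may release at most 100 tokens and pushes `nextIssuance` forward by 30 days) boils down to a tiny state machine. A minimal sketch of that transition rule, assuming a simple dict-based state; `try_mint` and its shape are illustrative, not DOG20's actual code:

```python
THIRTY_DAYS = 30 * 24 * 3600  # seconds

def try_mint(state: dict, now: int, amount: int):
    """Return the covenant's new state if this mint is allowed, else None.
    Mirrors the rule: spends of up to 100 tokens only if time > nextIssuance,
    then nextIssuance advances by 30 days."""
    if now <= state["nextIssuance"] or amount > 100:
        return None  # fail closed: too early, or over the per-period cap
    return {"nextIssuance": state["nextIssuance"] + THIRTY_DAYS}

state = {"nextIssuance": 1_000_000}
assert try_mint(state, now=999_999, amount=100) is None    # too early
assert try_mint(state, now=1_000_001, amount=101) is None  # over the cap
new_state = try_mint(state, now=1_000_001, amount=100)
assert new_state == {"nextIssuance": 1_000_000 + THIRTY_DAYS}
```

Because the minter is itself a covenant, this cap is enforced by consensus script execution rather than by token-contract logic — the ICC idea in miniature.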
Michael Sutton
Michael Sutton@michaelsuttonil·
@KaspaSilver @hashdag Ask him what happened to Bitcoin. Is he still crawling in his backyard? Is he still the only one invited to slowmo vip parties?
10
32
230
5.5K
Michael Sutton retweeted
Ori Newman
Ori Newman@OriNewman·
I wrote a PoC token contract in Silverscript, currently called DOG20 (better name ideas are welcome). It supports token ownership by 3 kinds of entities:
1. Public keys — like any regular Kaspa address.
2. P2SH addresses — which means ownership by a stateless contract, e.g. multisig.
3. Covenant IDs — which means ownership by a stateful contract.
The third option is the interesting one, and it's a demonstration of a broader concept (that might be familiar to whoever watched the webinar by @IzioDev and @michaelsuttonil), called inter-covenant communication (ICC). In this context, it means you can put arbitrary stateful rules around token control. For example:
- “after the first 10 spends, wait a year before spending again”
- zk-rollups can manage their L1 tokens using a stateful bridge.
DOG20 also supports minters that are allowed to mint indefinitely — but that does not mean the supply must be unbounded. Let's say you want to publish a token and allow it to issue only 100 new tokens each month. DOG20 doesn't support this natively, but you can achieve it by making the only minting entity a covenant. That covenant will store `nextIssuance` in its state, will allow spends of 100 tokens only if `time > nextIssuance`, and will set `nextIssuance = nextIssuance + 30 days` each time it's used. I hope to explain this a bit more in the future, but in the meantime, feel free to look at the examples linked in the next comment.
21
101
311
23.9K
Michael Sutton
Michael Sutton@michaelsuttonil·
if only you chose `.ag` over `.sil` for silverscript code files ;)
4
5
67
1.9K
Ori Newman
Ori Newman@OriNewman·
Not sure if intended or not, but it's pretty cool that @hashdag's blog domain name is the chemical symbol for silver.
Kaspa Eco Foundation (KEF)@Kaspa_KEF

On hashd.ag/staghunt/ you can read @hashdag's theory of why a decentralized blockchain is still needed, now more than ever. Link to Yonatan's speech at @OxfordUnion: hashd.ag/oxford-union-a…. We are very proud to have orchestrated and funded it. Full-length 40-min video to be released soon.

7
21
146
9.3K
Hans Moog
Hans Moog@hus_qy·
No, not really. The main focus of the vprogs effort is to establish a "meta-protocol" for applications and state to be seamlessly composable across VM and proving-system boundaries. So the primary goal is to define something like a "common language and set of primitives" that ensures that different VMs can talk to each other. This doesn't mean that any based VM is automatically a vprog, but it means that any VM could potentially become part of the vprogs ecosystem by adopting this common language.

Depending on the modularity of the corresponding VM and how much it focuses on being an actual VM that drives state transitions rather than a full-blown "protocol", this can vary in effort. Generally speaking, more modern VMs that put more effort into modularity should be easier to integrate than older ones, and we have shown that both Move and Risc0 can be supported with less than a few hundred lines of code for the plumbing.

Since you asked about RETH, I suspect that this would be a VM/protocol that would take slightly more effort, as it takes care of things that should be left to the implementor (like commitment schemes and state management). Oftentimes even older execution environments like the EVM are able to support more modern optimization mechanisms (like EIP-2930 for RETH), but to provide backward compatibility these are usually "optional", which increases the amount of work necessary to comply with the vprogs standard (because here we care more about bare-metal performance than backward compatibility).

Instead of looking at vprogs as "just another VM", we should look at it as a "meta-VM" or an "interoperability layer for VMs" that takes care of exactly those things that are common to all VMs in a generic and optimized way, while maintaining the freedom for developers to express logic in the way they are used to.
6
43
188
4.9K
JH
JH@oneforonehaha·
Hi @hus_qy, a question: if an L2 has a RETH client and RETH can generate a zk proof, which will be submitted to #Kaspa L1 and verified by the Covenant zkp op code, then is this L2 = vprogs? Essentially, no matter which L2 it is, anything that can be verified by an L1 script is vprogs. (Chinese version: Is it the case that as long as there is a RETH client, RETH can generate a zkp, the zkp is submitted to L1, and the Covenant zkp op code completes verification, that equals vprogs? Regardless of what the L2 is, is anything that lands in L1 covenant script verification vprogs?)
Hans Moog@hus_qy

Today there is a bigger update, because there are 3 new PRs that are ready for review.

The first one (github.com/kaspanet/vprog…) fixes some bugs and introduces the SchedulerState, which unifies the way we expose shared state in our framework. The second one (github.com/kaspanet/vprog…) introduces the node-framework, which builds on top of the first PR and introduces a generic platform for building L2 nodes that ingest data from the Kaspa L1 to produce state changes (and eventually proofs) of the L2 execution. The third one (github.com/kaspanet/vprog…) introduces the node-vprogs-cli: an actual binary that can be executed and that follows the L1 to execute transactions in a concrete VM. The binary is designed to support compile-time modularity, which means that by updating a single line in backend.rs we can swap out both the storage and the VM.

A VM is defined by three functions:
1. A pre_process_block function, which extracts the relevant L2 transactions from the L1 chainblock.
2. A process_transaction function, which executes a single transaction (within its scope) and produces execution results.
3. A post_process_block function, which takes the execution results of the individual transactions and stitches them together into an aggregated proof structure which can be settled on the L1.
All steps are parallelized as much as the underlying causal structure permits.

I still have a few chores on my todo list, and there are still a few missing parts (like syncing from state that we didn't actively witness), but this is pretty much as far as we get without re-integrating with the covenants on L1. So the next steps will be to actually design the concrete framework for settling state transitions on the L1 (and consequently syncing from it). I suspect that this will require a bit of back and forth between the two development efforts, but we are getting to the point where things start to get interesting.

6
16
83
7.6K
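The three-function VM interface above can be illustrated with a toy backend. This is a sketch under stated assumptions: `AdderVM`, the block/tx shapes, and the "adder" tag are all invented for illustration and are not the actual node-vprogs-cli types; a real backend would produce proof material rather than a dict.

```python
class AdderVM:
    """Toy VM implementing the three-function interface Hans describes."""

    def pre_process_block(self, l1_block):
        # Step 1: extract the relevant L2 transactions from the L1 chainblock.
        return [tx["payload"] for tx in l1_block["txs"] if tx.get("l2") == "adder"]

    def process_transaction(self, state, payload):
        # Step 2: execute a single transaction within its scope.
        return state + payload

    def post_process_block(self, results):
        # Step 3: stitch per-tx execution results into one aggregate
        # structure that could be settled on the L1.
        return {"final": results[-1], "count": len(results)}

vm = AdderVM()
l1_block = {"txs": [
    {"l2": "adder", "payload": 3},
    {"l2": "other", "payload": 9},  # ignored: belongs to a different L2
    {"l2": "adder", "payload": 4},
]}
state, results = 0, []
for payload in vm.pre_process_block(l1_block):
    state = vm.process_transaction(state, payload)
    results.append(state)
print(vm.post_process_block(results))  # {'final': 7, 'count': 2}
```

The compile-time modularity point then amounts to: swapping `AdderVM` for another type implementing the same three functions changes the VM without touching the surrounding node loop.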
Michael Sutton
Michael Sutton@michaelsuttonil·
@Chris_Hutch7 @emdin @oneforonehaha @hashdag From all I know it will be a Solana-like experience. Of course my words carry weight and I can’t and won’t promise this until we’re much closer, but I see no big obstacles to this vision.
3
50
234
7.3K
Chris Hutchinson 𐤊
Chris Hutchinson 𐤊@Chris_Hutch7·
Do you think vProgs would be an easy enough transition for builders and developers on different networks to come straight over to Kaspa’s base layer? (Please bear in mind I literally have very little idea of what devs look for when building on networks, let alone how vProgs will actually work lol)
1
1
29
1.7K
Chris Hutchinson 𐤊
Chris Hutchinson 𐤊@Chris_Hutch7·
I’ve been thinking: if Aviv and Mr. Y’s GHOST protocol had actually been accepted and implemented into Bitcoin back in 2013, would Kaspa exist today? I know GhostDAG is a different beast from GHOST, but would it have ever come to be if Bitcoin had forked to GHOST? Should we thank Bitcoin for becoming so ossified that it made way for Kaspa? $KAS
9
11
116
7.3K
Michael Sutton
Michael Sutton@michaelsuttonil·
imho we should stop treating Kaspa as being in SOS mode, whereby all means are required and justified. L2 fragmentation is a very real problem and risk, and we have the time to do it right. Hence vprogs. Hence escaping the L2 language and focusing on a single dimension of apps only
12
24
174
8.5K
Chris Hutchinson 𐤊
Chris Hutchinson 𐤊@Chris_Hutch7·
@emdin @oneforonehaha @hashdag @michaelsuttonil This is actually why I champion all L2’s on top of Kaspa that use the base layer as a sequencer. Stepping stones for existing and new builders into Kaspa with less of a barrier to entry all whilst boosting txns on the L1
2
0
18
1.9K
Michael Sutton
Michael Sutton@michaelsuttonil·
There are similar PoCs and proposals, mainly by StarkWare devs, e.g. delvingbitcoin.org/t/proposal-op-… In a sense, zk verify is yet another possible verification step in the covenant transition logic. The combination is powerful, but not scientifically novel. The connection to a sequencing commitment for supporting *based* zk apps is probably more novel. And in general we’re doing a pretty nice job of connecting all the dots, but I agree with @lunfardo314 here: this combo isn’t new science yet
1
0
9
152
Michael Sutton
Michael Sutton@michaelsuttonil·
In general, as I emphasized in the webinar, this is indeed an evolving architecture, but eventually recognizable patterns emerge and will be normalized into the compiler/SDKs such that “normal” devs will be able to use them. It has already happened in Silverscript for things such as verifying the output state, and will continue to happen for increasingly more abstract concepts cc @OriNewman
0
5
59
1.2K
JH
JH@oneforonehaha·
I noticed the DOG20 example is the same as OP_CAT (and KRC20): docs-kasplex.gitbook.io/krc20. I also understood that Silverscript is a compiler. I don't necessarily know anything about zk <> OP_CAT (lmk if you have good sources). I still see some nuances to be clarified. For example, unlike OP_CAT, which has OP_RETURN, the state in the #Kaspa covenant design seems not to be available: the state is in the redeem script, so only when the output UTXO is spent will the original script be visible. Another example is the chess game. Not sure if you will think that is conceptually novel, but to me, if developing a dapp on #kaspa covenants requires devs to think about what will change the state machine... it can be very "novel", as I don't think most devs think in that way... Still, I saw core devs in the TG channel discussing covenant signing and indexing features these days, so I will wait for those to be finalized.
Yonatan Sompolinsky@hashdag

I beg to differ @oneforonehaha. Cov+ZK is conceptually the same as what the OP_CAT camp in BTC is pushing for. In principle, we could've introduced programmability into kaspa by merely adding OP_CAT; that would've been easier to communicate, though not to build with; you'd still need a silverscript-like compiler cc ON, easy authentication of covenant lineage cc MS, etc. BTC's OP_CAT doesn't magically make program development accessible. Ecosystem differences aside, any practical, developer-friendly utilization of OP_CAT would be in the ballpark of what kas core designed here. (One arguable exception: BTC could've used the simpler block header sequencing commitment and require apps to zk-prove the entire onchain activity and not just their app's activity; infeasible for high throughput). With all the great ideas baked into the design, I still think we should view the upcoming HF as a natural next step, one that has no particular research angle or conceptual novelty. Think of it as OP_CAT++

6
8
57
8.1K
Michael Sutton
Michael Sutton@michaelsuttonil·
> “The state is in redeem script, so only when the output UTXO is spent, the original script will be visible.” That’s incorrect. By definition of a covenant, the tx creating the output must verify the state written to it. This means that if you run the script engine over that tx, you inevitably compute that output state along the way. I actually have a proposal directly related to this (see link below), but it’s only for convenience. The statement holds regardless: github.com/michaelsutton/…
3
12
69
3.5K
Michael Sutton
Michael Sutton@michaelsuttonil·
It’s a single covenant by design. Think of it as a complex state machine with many partitioned/parallel entities (league, players, games). The state machine must be a well-defined and closed system. I was considering ICC but it wasn’t the right tool here. This is a good question bcs it makes me realize I still need to gain more clarity in order to translate my own intuition to clear communicable principles. Will continue to think about it
2
9
65
1.1K
JH
JH@oneforonehaha·
@kaspaunchained Quick question: for this chess game example, does it require ICC (multiple covenants), or just a few mini contract flows within one covenant? @michaelsuttonil
1
1
12
1K
Kaspa
Kaspa@kaspaunchained·
Recently a few members from Kaspa Core held a discord meeting to discuss SilverScript, Covenants, and much more that is coming to Kaspa in the upcoming Covenants++ hard fork. Tune in here: youtube.com/watch?v=9t-14L…
9
77
293
15.1K
Michael Sutton
Michael Sutton@michaelsuttonil·
I’ll take the liberty of answering, but of course I’d love to hear @hus_qy’s perspective as well.

In short, no. It will not be a vprog in the sense defined by the vprogs yellow paper. And that’s because it will not allow synchronous composability (aka syncompo) with other zk apps. The vprogs design aims to solve a tension that is inherently hard to solve:
- giving each app sovereignty, including proof-liveness sovereignty, which means it has a dedicated covenant on L1
- enabling synchronous composability with other progs that do not necessarily share the same L1 covenant, and can even belong to a different VM with a different zk proof system

The reth L2 you are describing is like a standalone mega-app. I call it mega because it allows its users to submit custom smart contracts into it. It indeed supports syncompo between inner contracts, but that comes at the cost of sovereignty: these contracts are not standalone units, but rather parts of the same monolithic mega-app. In contrast to its inner syncompo, it does not support syncompo with external applications or with other external mega-apps
5
31
172
4.4K
Kaspa Commons
Kaspa Commons@Kaspa_Commons·
Hard Fork Target Update. Date TBD. Remember folks, milestones are targets, not promises. As always, it's done when it's ready. This is the way. Carry on, @michaelsuttonil and team!
12
24
135
6.7K
Kaspa
Kaspa@kaspaunchained·
The anticipated covenants++ hard fork will not be May 5th, but about a month later. No dates given are ever absolute; much testing and research is involved, which can always push things back. Stay in tune with Kaspa Core developments via Telegram: t.me/kasparnd
32
89
459
15.3K