Michael Sutton

3.2K posts

@michaelsuttonil

Computer science, graph theory, parallelism, consensus; taking Kaspa to the next level

Joined February 2021
115 Following · 25.6K Followers
Pinned Tweet
Michael Sutton
Michael Sutton@michaelsuttonil·
“Fast, wide and yet thin” and its relation to “Based ZK rollups over Kaspa”: Kaspa is built on advanced proof-of-work consensus protocols, allowing it to scale in blocks (BPS; frequency; fast) and in transactions (TPS; throughput; wide)
Wolfie
Wolfie@Kaspa_HypeMan·
@supertypo_kas - $KAS infra support DEV Volunteer Hero of the Month!!
justindefi.kas
justindefi.kas@Justinf_DeFi·
How come $KAS doesn’t have an Address Poisoning problem like ETH?
Michael Sutton
Michael Sutton@michaelsuttonil·
place your bets: can a full, trustless chess server be deployed purely on native Kaspa covenants (TN12 / mainnet post-HF), or will script-size limits prove decisive?
Docto $Dero
Docto $Dero@fribroon·
@michaelsuttonil @KaspaTipbot The script-size limit is a hard wall. Packing 64-square state validation, castling rights, and en passant logic into a single UTXO-based covenant would cause a combinatorial explosion. Witness data for a "no-check" proof is simply too heavy. ...Or I just don’t understand a thing
DAG Knight
DAG Knight@dag_knight·
@michaelsuttonil Chess move validation — legal moves for all six piece types, en passant, castling rights, check detection, checkmate — is probably 5-10KB of logic. Current Kaspa script size limits make this a very tight squeeze. You could maybe get a simplified variant through
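The state side of this estimate is easy to check. Below is a minimal Python sketch (my own toy encoding, not Kaspa script or any proposed format) of the board state a covenant would have to commit to. The point it illustrates: the committed state is tiny (~34 bytes); it is the move-validation logic layered on top that eats the script budget.

```python
# Toy chess-state encoding (illustrative only, not Kaspa script):
# 64 squares at 4 bits each (0 = empty, 1-6 = white P,N,B,R,Q,K,
# 7-12 = black), plus one flags byte and one en-passant byte.

def pack_state(board, side_to_move, castling, ep_file):
    """board: 64 ints in 0..12; side_to_move: 0/1; castling: 4-bit mask;
    ep_file: 0-7 en passant file, or 15 for none. Returns 34 bytes."""
    assert len(board) == 64 and all(0 <= p <= 12 for p in board)
    out = bytearray()
    for i in range(0, 64, 2):                  # two squares per byte
        out.append((board[i] << 4) | board[i + 1])
    flags = (side_to_move << 4) | castling     # 1 bit side + 4 bits castling
    return bytes(out) + bytes([flags, ep_file])

def unpack_state(data):
    """Inverse of pack_state: recover board and flags from the 34 bytes."""
    board = []
    for b in data[:32]:
        board += [b >> 4, b & 0xF]
    flags, ep_file = data[32], data[33]
    return board, flags >> 4, flags & 0xF, ep_file
```

A covenant would realistically commit only to a hash of this blob; the multi-KB part the thread worries about is the script verifying that one packed state legally transitions to the next.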
Bit Cat
Bit Cat@maxibitcat·
Vitalik casually dropping an argument for why PoW is much better than PoS at scaling consensus, as it avoids the complex coordination problem which forces Ethereum to keep a gigantic minimum stake.
vitalik.eth@VitalikButerin

CC @drakefjustin Basically, to remove the 32 ETH minimum (eg. reduce it to 1 ETH) we would have to be able to handle >1-10m validators in the network (depending on how much ETH is staked). From a raw bandwidth perspective, this *is* theoretically feasible, because you can get the bandwidth overhead of recursive SNARK aggregation down all the way to 1 bit per participant per slot + O(1) overhead. But in practice, that requires conservative parameter choices that increase latency: basically, do perhaps 4 rounds of aggregation instead of 2. This will not affect slot time (as available chain is a separate mechanism). But it will affect finality time (eg. maybe instead of 8-16 second finality we would have 16-32 second finality). So that's the tradeoff that the ecosystem would have to accept.
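For scale, the bandwidth figure in the quoted tweet is easy to sanity-check (my arithmetic, not an official Ethereum spec number):

```python
# Back-of-envelope check of the quoted numbers: recursive SNARK
# aggregation at ~1 bit of bandwidth per participant per slot,
# at the upper end of the quoted 1-10m validator range.
validators = 10_000_000
bits_per_validator = 1
bytes_per_slot = validators * bits_per_validator / 8
print(bytes_per_slot / 1e6)   # MB of attestation data per slot

# Doubling the aggregation rounds (2 -> 4) roughly doubles finality
# time, matching the quoted 8-16s -> 16-32s estimate.
```

So even at 10m validators the per-slot overhead is about 1.25 MB, which is why the tweet calls it theoretically feasible and puts the real cost on latency instead.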

Michael Sutton
Michael Sutton@michaelsuttonil·
@VitalikButerin @colludingnode @drakefjustin This is indeed (one of) the case(s) for fast pow (eg kaspa) > “in pos, fast confirmations press directly against decentralization. In fast pow, the two properties are decoupled” full argument: x.com/i/status/19738…
Michael Sutton@michaelsuttonil

The case for the uniqueness of fast pow

tl;dr
Finality has two moving parts: (i) fast inclusion (= high bps, how quickly a tx gets into a block), and (ii) fast confirmations (= how quickly that tx becomes irreversible). Any system with rapid block production can achieve the first. The second is where the tension shows: in pos, fast confirmations press directly against decentralization. In fast pow, the two properties are decoupled.

prologue
A few weeks ago I came across Solana’s founder claiming: “Solana is the fastest monetary system in the world”. Since Kaspa already runs at a faster block rate, I was curious to check Solana’s finality times. That curiosity quickly pointed me to a deeper issue: not raw speed, but how speed interacts with decentralization.

——————

The tension is structural. In pos, finality means accumulating staked votes, and the more decentralized the stake distribution, the more time is required to reach finality. Here I’m not talking about hardware requirements or validator specs. The axis I’m discussing is centralization around the security mechanism itself: stake in pos vs. hardware in pow.

To be secure, a block must be confirmed by a supermajority, typically >66.7% of the total economic stake. In a truly decentralized network, where the number of stakers n, each with uniform share, grows without bound, the time to coordinate this supermajority becomes a real bottleneck.

Pow works differently. It samples the hardware space without requiring the protocol to explicitly collect evidence from a majority of miners. Each block is itself a statistical proof that the finder out-competed the full network’s hash power. This process, and its timing, remains independent of how many individual miners participate.

Ethereum’s researchers understood this when moving to pos. Unlike Solana, which tolerates concentration to reach ~13-second finality, Ethereum’s designers could not accept that trade-off. Their solution was to introduce rotating committees.

A rotating committee is a smaller subset of validators, randomly chosen from the full set, that votes on behalf of everyone else. But this comes with a different security model, known in the literature as exposure to a BFT adaptive attacker. The committee is selected first and then votes. That “select-then-work” sequence is theoretically exposed to adaptive attackers, since members are known in advance. Pow, by contrast, is “work-then-select”: the winner is only revealed after the work is done.

Think of it this way: in pos, you know who the referees are before the game starts, which gives an attacker time to pressure them. In pow, you only learn who won after the work is already done, which removes that attack surface. So n confirmations provide consistent confidence regardless of miner granularity, and the system stays secure even under adaptive targeting.

Beyond attack subtleties, the real issue is economic weight. When I send a billion-dollar transfer in a pos system, the question I care about is simple: how much stake is actually securing it? A committee vote provides strong statistical evidence, but only a true supermajority puts the full economic stake of the network behind my confirmation. In other words, a sampled committee may convince me that things are probably safe, but only the weight of the entire stake provides an overwhelming guarantee. And this is exactly where pow shines: each confirmation is not just a probability estimate, but a direct proof of work done against the full hash power of the network, no matter how many miners there are.

closing remark
I don’t claim to know every engineering detail of Ethereum or Solana. But I’m convinced the core principle holds. I’ll state it simply: fast pow uniquely enables fast finality without forcing a compromise on decentralization.
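The “each confirmation is a statistical proof” point can be made concrete with the classic catch-up probability from the Bitcoin whitepaper (a simplification; Kaspa’s GHOSTDAG/DAGKnight analysis differs, but the shape is the same): confidence after z confirmations depends only on the attacker’s hash-power fraction q, never on how many individual miners make up the honest side.

```python
import math

def attacker_success(q, z):
    """Nakamoto's estimate: probability that an attacker controlling
    fraction q of the hash power ever overtakes a chain that is
    z confirmations ahead. Independent of the number of miners."""
    if q >= 0.5:
        return 1.0
    p = 1.0 - q
    lam = z * q / p                      # expected attacker progress
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s
```

For q = 0.1 the probability drops below 0.1% by z = 5, and every additional confirmation shrinks it further, regardless of whether the hash power comes from ten miners or ten million.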

vitalik.eth
vitalik.eth@VitalikButerin·
CC @drakefjustin Basically, to remove the 32 ETH minimum (eg. reduce it to 1 ETH) we would have to be able to handle >1-10m validators in the network (depending on how much ETH is staked). From a raw bandwidth perspective, this *is* theoretically feasible, because you can get the bandwidth overhead of recursive SNARK aggregation down all the way to 1 bit per participant per slot + O(1) overhead. But in practice, that requires conservative parameter choices that increase latency: basically, do perhaps 4 rounds of aggregation instead of 2. This will not affect slot time (as available chain is a separate mechanism). But it will affect finality time (eg. maybe instead of 8-16 second finality we would have 16-32 second finality). So that's the tradeoff that the ecosystem would have to accept.
vitalik.eth
vitalik.eth@VitalikButerin·
We should be open to revisiting the whole beacon/execution client separation. Running two daemons and getting them to talk to each other is far more difficult than running one daemon. Our goal is to make the self-sovereign way of using ethereum have good UX. In many cases that means running your own node. The current approach to running your own node adds needless complexity.

Short-term, maybe we want some more standardized basic wrapper that lets you install dockers of any client and make them talk to each other easily? Also good that the @ethnimbus unified node github.com/status-im/nimb… exists.

Longer term, we should be open to revisiting the whole architecture once @leanethereum lean consensus is more mature.
Michael Sutton
Michael Sutton@michaelsuttonil·
I saw this got exaggerated by several follow-up posts, so let me put it in perspective. This is mostly a boring, correct choice in parts of the L1<>ZK interface where the same commitments need to be handled both by L1 nodes and by zk provers.

You can think of these commitments as an L1 compression layer: L1 headers publish compact commitment data about which transactions were accepted by the DAG and in what order, and the prover later re-expands the relevant parts inside the zk guest using witnesses + public args. That is exactly why the hash here cannot be chosen from a purely “what is nicest for CPUs” or purely “what is nicest for circuits” perspective. It has to live on both sides.

blake3 is not some unique Kaspa edge by itself. It is just one pragmatic point in the design space. In fact, it is not even uniformly ideal across zk stacks; it is much more natural for RISC-V oriented proving flows than for cairo-style ones. So yes, this matters. But mainly as another piece of careful systems design, not as a headline advantage on its own
Michael Sutton@michaelsuttonil

@crono_walker We’re switching all L1 prover required hashes to blake3 (it’s a very long ongoing subject)

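The commit-then-re-expand pattern described above can be sketched with a plain Merkle commitment (a generic illustration, not Kaspa’s actual commitment format; `hashlib.blake2b` stands in for blake3, which is not in the Python stdlib): the L1 side publishes only a root, and the prover reveals just the leaves it needs plus an authentication path.

```python
import hashlib

def h(data: bytes) -> bytes:
    # blake2b stand-in for blake3, which is the hash the post discusses.
    return hashlib.blake2b(data, digest_size=32).digest()

def merkle_root(leaves):
    """Compact commitment: one 32-byte root over all leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])        # duplicate odd tail
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Witness: the siblings needed to re-derive the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, index, path, root):
    """What the zk guest re-checks: leaf + path must re-produce the root."""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```

The “lives on both sides” constraint is visible here: `h` runs natively on every L1 node and again inside the prover, so its cost matters in both CPU land and circuit land at once.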
Kaspa Commons
Kaspa Commons@Kaspa_Commons·
Fair point. ACK. Another ingredient in the recipe, yes. Not as important on its own, but leave that one ingredient out of your master dish and it's just not the same. Our intent was to highlight the one ingredient in the broader architecture and provide some context, not suggest BLAKE3 itself is a headline advantage. The real story is the careful system design at the L1 - ZK interface. Kaspa’s edge is not one trick, that's for sure! It is careful systems design across the stack. BLAKE3 just happens to be one small spice in a masterpiece in the making. Or, to bring it more into the tech and tools space, one small cog in the master machine. It is good engineering, not a headline. :) Peace.
Michael Sutton
Michael Sutton@michaelsuttonil·
@crono_walker We’re switching all L1 prover required hashes to blake3 (it’s a very long ongoing subject)
Michael Sutton
Michael Sutton@michaelsuttonil·
Relatedly, this is also a good opportunity to share the recent KIP-21, which I haven’t really made approachable yet. Viewing L1 commitments as a compression layer immediately raises the more interesting follow-up: not just whether data can be compressed, but whether it can be compressed in a way that lets later proofs stay local. If re-expansion later requires each prover to recover large parts of the original information, the compression is only partially doing its job. The more interesting case is when L1 commits to the data in a partition-aware way, so each prover can reveal and reconstruct only the specific subset it actually needs. That is exactly what KIP-21 is about. KIP: github.com/michaelsutton/… PR: github.com/kaspanet/kips/…
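A toy version of the “partition-aware” idea (my own illustration of the concept, not KIP-21’s actual construction): bucket the data, commit to per-bucket digests, and let a prover open only its bucket, so re-expansion stays local to the subset it actually needs.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

# Toy two-level commitment (illustrative only): each bucket is hashed
# into a digest, and the header-level commitment hashes the sequence of
# bucket digests. A prover for bucket i reveals bucket i's items plus
# the other bucket *digests*, never the other buckets' contents.

def bucket_digest(items):
    acc = h(b"bucket")
    for it in items:
        acc = h(acc + h(it))
    return acc

def commit(buckets):
    """Header-level commitment over all bucket digests."""
    return h(b"".join(bucket_digest(b) for b in buckets))

def open_bucket(buckets, i):
    """Local witness for bucket i: its items + every bucket digest."""
    return buckets[i], [bucket_digest(b) for b in buckets]

def verify(root, i, items, digests):
    """Check the opened items really sit in slot i under the commitment."""
    return digests[i] == bucket_digest(items) and root == h(b"".join(digests))
```

Note the witness here is linear in the number of buckets; a real design would Merkleize the top level too, but the point stands: the prover never touches other buckets’ contents.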
Tiptree
Tiptree@tiptr_ee·
Can someone explain this? It appears the burn wallet had some outgoing transactions, but they failed. Does that mean someone had access? Why? How? WTF 🤣 @hashdag @michaelsuttonil
[attached image]
Michael Sutton
Michael Sutton@michaelsuttonil·
@KasperoLabs @crono_walker It’s a technical subject and irrelevant for most users/builders. I don’t feel like X is the right place for this discussion. All info will be out in due time (now it’s scattered between research posts, kips, and tn12 code)
Kaspero Labs
Kaspero Labs@KasperoLabs·
@michaelsuttonil @crono_walker Good to know. For anyone building on the scripting side, does this mean blake2b() as a script opcode is being replaced with blake3 as well, or is this change scoped to the prover/consensus layer?