coderofstuff

1.8K posts

@coderofstuff_

There are 10 types of people in the world - those who know binary and those who don’t | coding DAGKnight https://t.co/WZfxjGEEgq

Joined April 2023
168 Following · 3.8K Followers
Michael Sutton@michaelsuttonil·
Normal transactions are dominated by sig cost (1,000 compute grams per sig, where 500k is the compute-mass block limit -> max 500 sigs per block). Compute mass also counts byte size (1 gram per byte), so 500 sigs are unrealistic, hence the known ~300 txs-per-block number. It assumes 300 1:2 txs (1 in, 2 outs) with minimal size. Increasing the block *byte-size* limit (aka transient mass) from 125kb to 250kb means that you are still constrained to 300 typical txs, but they can carry more data (e.g. in their payload). A STARK proof, on the other hand, costs 250 sigops + ~225kb in size, so its compute mass is at least 250*1000 + 225k = 475k grams, which is below the 500k compute-mass limit; and its raw size is, as said, 225kb, so it's still below the transient-mass limit.
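The arithmetic in that estimate can be checked in a few lines. The constants below are the ones quoted in the tweet, used purely for illustration, and the helper `compute_mass` is a hypothetical name, not the node's actual API:

```python
# Mass constants as quoted in the tweet above (illustrative, not canonical).
COMPUTE_MASS_LIMIT = 500_000    # compute grams per block
GRAMS_PER_SIG = 1_000           # compute grams per signature op
GRAMS_PER_BYTE = 1              # compute grams per byte of tx size
TRANSIENT_MASS_LIMIT = 250_000  # proposed byte-size limit (250kb)

def compute_mass(num_sigs: int, size_bytes: int) -> int:
    """Compute mass counts both signature ops and raw byte size."""
    return num_sigs * GRAMS_PER_SIG + size_bytes * GRAMS_PER_BYTE

# A STARK proof costing 250 sigops and ~225kb in size:
stark = compute_mass(250, 225_000)
print(stark)                            # 475000
print(stark <= COMPUTE_MASS_LIMIT)      # True: under the compute-mass limit
print(225_000 <= TRANSIENT_MASS_LIMIT)  # True: under the transient-mass limit
```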
coderofstuff retweeted
Michael Sutton@michaelsuttonil·
Toccata consensus feature freeze is finally here after a heroic last-mile push by kas core devs. Aiming to reset TN12 tonight, or tomorrow at the latest. Genesis update:
- 0x6b617370612d746573746e6574 // kaspa-testnet
- 12, 2 // TN12, Launch 2
+ 0x544f4343415441 // TOCCATA
+ 12, 3 // TN12, Launch 3
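The hex constants in that genesis diff are just ASCII tags; a quick sanity check (not part of the node code) confirms the decoding given in the comments:

```python
# Decode the hex network tags from the genesis update above.
old_tag = bytes.fromhex("6b617370612d746573746e6574").decode("ascii")
new_tag = bytes.fromhex("544f4343415441").decode("ascii")
print(old_tag)  # kaspa-testnet
print(new_tag)  # TOCCATA
```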
Shai (Deshe) Wyborski@DesheShai·
@FreshAir08 P2PKH or P2SH? I didn't know it's enabled, but it's also possible that nodes are set to reject such txns. I honestly don't know.
Shai (Deshe) Wyborski@DesheShai·
That's actually not true. In Bitcoin that's true for P2SH addresses, but not for P2TR, which uses raw keys. Kaspa txns are modeled after the latter. Interestingly, this allowed @coderofstuff_ to publicly prove that the famous burn address (IYKYK) truly doesn't have a public key. This would not have been possible had the key been hashed.
𝗰 𝗵 𝗲 𐤊 𝘅 †@chekx0x

@xximpod @DesheShai Quantum computers cannot crack a $KAS address that has never sent a transaction because the public key is not yet visible on the network.

coderofstuff@coderofstuff_·
@brt2412 Read more at wiki.kaspa.org/en/prehistory#daglabs-existence-actions-and-earnings-history
₿ЯT 𐤊 🐈📈
This type of asshole needs to be studied. This guy’s first use of the phrases “DAGLabs” and “Polychain Capital” happened 36 minutes ago. Not once in his life did he use them before. He saw people who destroyed his entire Lightning arguments talking about $KAS, went into ChatGPT, and asked what Kaspa was and whether it’s centralized. And now he’s embarrassing himself acting like he is the most educated person on the topic, regurgitating blatantly false information without even knowing it. Please don’t be like this asshole. Please don’t outsource all thinking to artificial intelligence. And if you must do that, at least don’t act like you actually know what you’re talking about in a debate.
Seb@Seb28_7·
@michaelsuttonil @p4bpj @OriNewman @coderofstuff_ Specifically on the topic of storage: you can always prune sooner. That would reduce the GB that is needed. I don't know why we have the duration we have currently tbh. Could be a lot shorter from what I see. And with more bps even shorterer. 😅 Or am I missing something here?
eliott@eliottmea·
wahey #kas community, as you know I've been working on a mathematical framework for price oracles for kaspa for the past year: formal proofs, tight bounds, rigorous security guarantees. I am currently working for @AppKaskad, which is for now solely responsible for funding my research, something I much appreciate. I'd like to contribute to defi on an even larger scale, potentially being a contender to leaders such as Pyth or Chainlink. To that end I'd like my work to reach even more people, and to focus full time on this oracle as well as the L1 kaspa auction, a separate paper that I started under the supervision of yonatan. Oracles are not optional infrastructure for defi: they are the foundation everything else is built on. With defi coming to kaspa, getting this right matters. I'd rather we have something mathematically sound than ship something that gets exploited six months later. If you know of grants, research funding, DAOs, or individuals who want to back my work in the kaspa ecosystem, DMs are open. Paper attached; happy to discuss with anyone who wants to engage with the content. I plan on updating the content of my work weekly, and in my next post will lay out a timeline of what still needs to be done.
coderofstuff@coderofstuff_·
This is the crucial part of it all w.r.t. censorship: if one absolutely wants a tx to go through (assuming it is consistent with the state), they can always spin up their own node, mine on it, and send their tx to that node. Even with a small hashrate share, and because the block rate is high, one can make this happen within a reasonable timeframe.
coderofstuff@coderofstuff_·
A node’s tx selection logic is weighted random. For the scenario you’re describing to occur, such players would have to alter their nodes to select txs differently (max fees first). Even then, low-fee txs will still be mined, because there are miners who just use the default selection mechanism. The max-fee-first strategy is inferior to the weighted-random one in the context of kas, since max-fee-first increases the chance of selection collision: say miners A and B both mine blocks that include tx_max; only one of those miners will receive the fee for that tx plus the subsidy, whereas the other miner receives only the subsidy.
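The collision argument can be illustrated with a toy simulation. Fee values and helper names here are made up for the sketch; this is the idea, not the node's actual selection code:

```python
import random

# Toy shared mempool: 50 txs with made-up fees.
rng = random.Random(42)
mempool = {f"tx{i}": rng.randint(1, 100) for i in range(50)}
K = 10  # txs each miner selects per block

def max_fee_first(k: int) -> set:
    """Greedy strategy: every miner deterministically picks the same top-k txs."""
    return set(sorted(mempool, key=mempool.get, reverse=True)[:k])

def weighted_random(k: int, rng: random.Random) -> set:
    """Default-style strategy: sample k distinct txs with probability ~ fee."""
    pool = dict(mempool)
    chosen = set()
    for _ in range(k):
        txs = list(pool)
        pick = rng.choices(txs, weights=[pool[t] for t in txs])[0]
        chosen.add(pick)
        del pool[pick]
    return chosen

# Overlap = txs both miners included; only one of them earns those fees.
greedy_overlap = len(max_fee_first(K) & max_fee_first(K))  # always K (full collision)
rand_overlap = len(weighted_random(K, random.Random(1)) &
                   weighted_random(K, random.Random(2)))   # typically well below K
print(greedy_overlap, rand_overlap)
```

Under max-fee-first every miner duplicates the same block template, so every fee is contested; the weighted sampler spreads miners across the fee distribution, which is the collision-avoidance property described above.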
coderofstuff@coderofstuff_·
Block template creation is dependent on a node. So a solo miner depends solely on their own node’s template and pov. Unless you’re connecting with very slow internet (slower than what the network expects), GD/DK makes it so your mined blocks are added in the blue set and therefore rewarded.
coderofstuff@coderofstuff_·
@realvijayk Granted the data in the other post is old, but the math still maths
coderofstuff@coderofstuff_·
Anyone can run a node and anyone can solo mine **with consistency in rewards**. Due to the high block rate, there’s lower variance (aka. better consistency) in the interval for rewards for a solo miner. See: x.com/coderofstuff_/… Now, think about it: is a system that allows for consistent solo mining more decentralized or centralized?
coderofstuff@coderofstuff_

@realvijayk What does it take to solo mine in Kaspa and consistently get a daily reward? If you have 2TH in hashrate this simulation says that in a period of 30 days you’d only have about 2-3 days that you might not get rewards while the rest of the days you solo mine at least one block.
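The consistency claim above can be sanity-checked with a simple Poisson model. The network hashrate and block rate below are placeholders chosen for illustration, not live figures, so the exact day counts will differ from the simulation in the tweet:

```python
import math

# Hypothetical figures, for illustration only.
MY_HASHRATE = 2e12        # the 2 TH/s solo miner from the tweet
NETWORK_HASHRATE = 1e18   # assumed ~1 EH/s network total (placeholder)
BLOCKS_PER_SEC = 10       # assumed block rate
blocks_per_day = BLOCKS_PER_SEC * 86_400

# Expected blocks per day for the solo miner, and the Poisson chance
# of mining at least one block in a given day.
lam = (MY_HASHRATE / NETWORK_HASHRATE) * blocks_per_day
p_at_least_one = 1 - math.exp(-lam)
empty_days_per_month = 30 * math.exp(-lam)
print(f"{lam:.2f} expected blocks/day, P(>=1/day) = {p_at_least_one:.2f}, "
      f"~{empty_days_per_month:.1f} rewardless days per month")
```

With these placeholder numbers the miner expects a block on most days; the point of the high block rate is that the same hashrate share is spread over far more lottery draws per day, shrinking the variance.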

Shai (Deshe) Wyborski@DesheShai·
UTXO set commitments are $kas Kaspa's quantum Achilles heel

In light of the recent truly astounding advances in building quantum computers, I think it's time to explain the most significant threat that such machines pose to Kaspa's consensus mechanism. It's not an immediate threat, but arguably something that requires more attention given the shift in the landscape. Before I start, I want to mention that @mcpauld invited me to a recorded session where we will talk about the new quantum advances, their meaning, and their consequences for blockchains. Stay tuned to know when it is published.

Incremental hash commitments and MuHash

When a new Kaspa node syncs from an existing one, it gets a copy (actually, two copies, but never mind) of the UTXO set, along with a commitment. The commitment is a small hash that cryptographically assures that the supplied UTXO set matches the expected one. Hashing the entire UTXO set is an ever-daunting task whose computational cost grows with the number of UTXOs. It's reasonable to do once during sync for verification, but for a miner, recomputing the entire hash for every new block would gradually make mining less and less accessible. To address this, Kaspa headers use an incremental hash: a special kind of hash used to commit to a set of strings (each representing a UTXO). What makes it special is that, given the current commitment along with a list of elements to add and remove, one can compute the hash of the resulting set without recomputing the entire hash. So when creating a new block, the miner just takes the existing hash and updates it according to the UTXOs consumed and created in its block. As long as the block wasn't pruned, all nodes can repeat this check and verify that the miner is honest. Generally speaking, hashes are not incremental; incremental hashes are specially designed to provide this functionality. In particular, Kaspa uses MuHash, a very lightweight incremental hash.

Quantum Shor attacks

I will not go into the details of what quantum computers can or cannot break. What's important to remember is that they can break what we call "discrete log assumptions". Stock hash families like Keccak, SHA, Blake, and so on do not rely on any such assumption, so they are considered quantum secure (in the sense that it is impossible to quantum-optimize them beyond the obligatory Grover quadratic speedup). However, MuHash relies on elliptic discrete log assumptions, very similar to ECDSA. This means that a quantum adversary can invert the hash commitment. In other words: they can find a completely different UTXO set with the same MuHash commitment.

Consequences

The UTXO set can only be verified independently of the UTXO commitment until the block is pruned. After that, Kaspa clients will accept any UTXO set that matches the commitment. This, for example, allows the following 51% attack:
1. Locate the UTXO commitment of the latest pruning block
2. Use your quantum computer to find another UTXO set with the same commitment
3. Build a competing heavier chain that assumes the UTXO set at pruning is the one you manufactured, not the original one
Voila! A 51% attack the length of a single pruning window that can rewrite Kaspa's entire history.

Comparison to the current state

Currently, Kaspa relies on social consensus in the short term, followed by cryptographic security in the long term. Social consensus prevents committing to UTXO sets that weren't a consequence of legitimate transactions. Cryptography uses state commitments to cement the UTXO set agreed upon by consensus. This is a very mild relaxation of Bitcoin's trust model, which does not require social consensus in the short term for chain consistency. Breaking MuHash means that the cryptographic backbone of this model no longer holds: UTXO commitments become unreliable, compromising Kaspa's trust model. I want to stress two things:
1. The attack only requires one application of Shor's algorithm to find a preimage. It might require some clever mix-and-match to find a preimage you actually like, but factors like BPS or difficulty do not make the attack any harder.
2. The attack cost is directly proportional to the length of a pruning window (in real-world time, not blocks). So shorter pruning windows = a less quantum-secure network.

Partial solutions

1. Relying on archival nodes. If archival nodes are always available, then the problem "goes away". The issue is that archival nodes become a trusted source of truth. Currently, we don't have to trust archival nodes, because the UTXO commitment ensures that the UTXO set they describe is genuine. With this assumption quantum-broken, we need to either trust archival nodes or have enough archival nodes to trust decentralization. One of Kaspa's strong points over Bitcoin's antiquated model is a trust model that does not require trusted archives; removing this would make Kaspa de facto centralized. Worse yet, the reliance on archival nodes is fragile: if, for some reason, there is a period of time longer than a pruning window that was not archived by anyone, the chain becomes indefinitely unverifiable.
2. Changing the hash. There are post-quantum incremental hashes like LtHash. The first issue (but not the key one) is that such a commitment is much larger (2KB versus a few dozen bytes). Recall that the UTXO commitment is part of the header, so using such large commitments would make headers 9-10 times larger, drastically increasing storage costs for pruned nodes. (One can argue that pruned non-mining nodes can run in a mode that throws away the commitments after verifying them. This would reduce storage, but it is impossible to sync from such nodes trustlessly, recreating the few-sources-of-truth problem.) But even if we do magically find a tiny post-quantum hash, that will only provide a partial solution: a quantum adversary could not forge the UTXO set from the latest pruning point, but would have to go back far enough to split from a block that still uses MuHash.

Possible solution

I haven't spent any time trying to come up with a better solution, and it is very possible that a better approach exists. Below is a starting point for a discussion, not a concrete proposal:
1. Converge on a post-quantum incremental hash, let's call it QuHash
2. Decide on a block from which commitments must be in QuHash
3. Decide on a period of time (say, a year) after which reorgs below the QuHash depth are considered invalid
This is a very problematic solution, for several reasons:
1. (After q-day) any archival information from before the QuHash days cannot be trusted. This includes any form of cryptographic receipt; all could easily be forged without tampering with the commitment.
2. (After q-day) there will no longer be a reliable way to verify a UTXO set "all the way to genesis", just "all the way to when we started using QuHash". What happened before q-day is delegated to social consensus.
3. Headers will become larger by an order of magnitude.

Conclusion

MuHash is a considerable quantum weak point that is unique to Kaspa. Arguably, it's time to start brewing up solutions.
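The incremental-hash idea described in the thread can be sketched as a toy multiplicative set hash. This is only the concept behind MuHash, with a small made-up prime; the real construction works in a much larger group and differs in detail:

```python
import hashlib

# Toy MuHash-style incremental hash: the commitment to a set is the
# product of per-element hashes modulo a prime. Adding an element
# multiplies; removing one multiplies by the modular inverse.
P = 2**127 - 1  # small Mersenne prime, for illustration only

def elem_hash(utxo: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(utxo).digest(), "big") % P
    return h or 1  # avoid the non-invertible zero

def add(commitment: int, utxo: bytes) -> int:
    return (commitment * elem_hash(utxo)) % P

def remove(commitment: int, utxo: bytes) -> int:
    return (commitment * pow(elem_hash(utxo), -1, P)) % P

# Order-independent: commitments to the same set always agree.
c = add(add(1, b"utxo-A"), b"utxo-B")
assert c == add(add(1, b"utxo-B"), b"utxo-A")
# A block spending A and creating C updates the commitment without
# rehashing the whole UTXO set:
c2 = add(remove(c, b"utxo-A"), b"utxo-C")
assert c2 == add(add(1, b"utxo-B"), b"utxo-C")
```

The quantum concern in the thread is exactly that inverting such a product commitment reduces to a discrete-log-type problem in the underlying group, which Shor's algorithm solves efficiently, while the per-element SHA-256 itself stays intact.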
coderofstuff@coderofstuff_·
@FoxH04 @OriNewman kasvault as it is should stay for the long term. The code is also open source anyway.
Ori Newman@OriNewman·
I'm happy to announce Silverscript! (Link in reply) Silverscript is Kaspa's first high-level smart contract language and compiler. It enables DeFi, vaults, and native asset management directly on Kaspa's L1. The language syntax is based on CashScript, but adds essential features like loops, arrays, and function calls. It specializes in managing contracts with local state (UTXO model), serving as a complement and infrastructure layer for vProgs (shared state). Note: Powered by new script engine features recently enabled on Testnet-12. The syntax is experimental and might evolve. Please try it out and give feedback!
Michael Hollomon Jr.@unkle_skunkle·
@alwaysaimbig @realvijayk On BTC’s blockchain: all nodes can audit the entire blockchain from genesis. On KAS: only archival nodes can do that. And most nodes are pruned, not archives
Zach@alwaysaimbig·
A bit disappointed (but not totally shocked) in my fellow Bitcoiners today after mentioning Kaspa. Y'all gotta chill a little bit.
Vijay 𐤊ailash, CFA, CFP®@realvijayk·
A $KAS curious, bitcoin maxi influencer (with 5-figure followers on X) sent me this. If AI chooses #Kaspa for currency, there's no stopping $KAS. Study #Kaspa.