Reckless ⨀

6.5K posts


Reckless ⨀

@Rcklss27

Si Vis Pacem, Para Bellum • prev @exodus • buidl on @monad

Comfy in Spot · Joined February 2020
967 Following · 931 Followers
Pinned Tweet
Reckless ⨀ @Rcklss27
Gmontardio, our time has come bros. ARS LONGA, VITA BREVIS ✊ Ὁ βίος βραχύς, ἡ δὲ τέχνη μακρή Let’s chase the Monad together, shall we?
James @_jhunsaker

Gmonad, milady.

22 replies · 5 reposts · 58 likes · 3.2K views
Hsaka @HsakaTrades
gm
1.5K replies · 480 reposts · 7.2K likes · 671K views
Reckless ⨀ retweeted
Monad @monad
"Monad"
3.1K replies · 2K reposts · 7.7K likes · 283.9K views
Reckless ⨀ retweeted
bill monday @billmondays
when you realise that growth is only possible through the constant cycle of death and rebirth
98 replies · 18 reposts · 466 likes · 10.5K views
Reckless ⨀ @Rcklss27
@Homelesscrypto1 Dawn will break brighter tomorrow. We fight on, no matter the storm. Stand together, keep fighting. 🪽
0 replies · 0 reposts · 3 likes · 23 views
Reckless ⨀ retweeted
tunez @cryptunez
Letter to the Monad Community: 📝

Monad is a Layer 1 blockchain that is 1:1 EVM compatible, will scale to hundreds or thousands of nodes (72 right now) and 10,000+ TPS (it has handled spikes of 3, 4, 5k+ without problems on testnet). Throughout the history of crypto (15 years) and the history of smart contract platforms in crypto (10 years), not only is there no other blockchain with these properties, there is no other blockchain even pursuing them.

The main thing I am trying to get across is that you should not forget that monad (the piece of technology) is *actually* special and *actually* important. Similarly, the monad community is *actually* special and *actually* important. There are thousands and thousands of uniquely talented and diverse individuals who have built lifelong friendships with one another, all sharing the common bond of a silly purple chain.

When you combine both factors (monad the technology being innovative, and the monad community being unprecedented), you are presented with an opportunity for something extremely special to happen. The key here is that if we lean into our unique advantages, we will win. That may seem obvious, but it is often overlooked. The point is: we have to do the things that only we can do.

I often get messages from Monad community members asking for advice on "how can I stand out?". My response is always something along the lines of "Well, you have to find the thing that makes you special, and lean into it." If you aren't thinking strategically about what your unique edge is, you will never stand out.

I'll give you an example of something that I am trying to do to make Monad stand out. This week, we launched Monad Shark Tank. It's an online weekly pitch competition (every week at 10am EST in the Monad Discord) for new Monad app builders who want to demo their apps and get feedback. We had 500 concurrent viewers for the entire event.

I write this letter to the Monad community as a reminder of two things:
1. Monad is special
2. It's important to lean into the things that make Monad special

This is what I am trying to do with Monad Shark Tank, and I am asking all of you from the bottom of my heart to join me. Together, we have to do the things that only Monad can do. This is one of the best opportunities for us to do that.

There are also special prizes for people who participate: 👀
- 50 prizes of 100 testnet MON tokens every week for members who give the most valuable feedback
- Grand prizes of 1,000 testnet MON tokens every week for members who refer builders that participate in Monad Shark Tank (there is a section on the application form for builders to submit their referral; application form posted in next tweet)

In conclusion, Monad Shark Tank is something that only Monad can do. There are many ways in which you can contribute to Monad, but you should try to do things that only you can do. This is important, because you're actually important. Thanks for reading. 🫂
72 replies · 51 reposts · 341 likes · 26.7K views
Reckless ⨀ retweeted
Keone Hon @keoneHD
Summarizing the breakthrough in MonadBFT

Yesterday Category Labs released the MonadBFT paper, describing the consensus mechanism that will power Monad at mainnet. MonadBFT is a significant development in consensus research: it is the first time that Pipelined HotStuff becomes resistant to tail-forking.

Tail-forking occurs when a missed slot causes the previous proposal to be discarded and re-mined. It is a severe problem in previous Pipelined HotStuff formulations since it opens up multi-block MEV attacks that destabilize consensus. Alleviating this problem is a huge deal because it gives us all the benefits of Pipelined HotStuff (frequent blocks, low latency, large validator sets) while avoiding the biggest downside.

MonadBFT also offers a huge upgrade for finality. It features single-slot (500 ms) speculative finality and two-slot (1 s) hard finality. "Speculative finality" means finality that will revert only in the event of equivocation (double-signing) by a majority of validators. Equivocation is a major offense in most blockchain systems and is commonly penalized with slashing; the bigger the penalty for equivocation, the closer "speculative finality" is to true finality. One-slot speculative finality is a huge unlock for high-performance applications, which can confidently display the updated state of the world immediately after the next block is received.

These properties make MonadBFT a huge advancement in consensus, and a worthy complement to other compounding improvements in Monad including Asynchronous Execution, Optimistic Parallel Execution, and MonadDb.

The rest of this article summarizes how successive improvements in HotStuff have built upon each other, in order to explain the problem that MonadBFT solves. To summarize:
1. HotStuff gives us linear communication complexity so that we can have large validator sets, but it's not very efficient
2. Pipelined HotStuff gives us efficiency and low latency from proposing blocks every slot, but suffers from the problem of tail forks
3. MonadBFT gives us tail-fork resistance and one-slot speculative finality

---

HotStuff: Linear communication complexity enables large node counts

HotStuff algorithms complete over the course of several rounds of communication, which generally take the form of "fan out, fan in" communication directly from leaders to validators and back to leaders. Each round begins with the leader sending a message directly to the other validators, who each send back a signed message attesting to having received it. Provided that a supermajority (2/3) of the validators send back an attestation, each round ends with the leader aggregating the signed attestations into a Quorum Certificate (QC), which serves as proof that the supermajority attested to the previous message.

HotStuff algorithms have multiple rounds of communication like this:
- The first message from the leader is a block proposal
- The second message is the QC for that block proposal
- The third message is a QC about the previous QC (i.e. a QC-on-QC)
- and so on

If the procedure is interrupted at any time before finality, the block fails to finalize and is discarded; transactions from that block have to be re-included in the next block. The original HotStuff protocol has no pipelining and has 3 rounds of communication before finality; the same validator plays the role of leader for each round.

---

Pipelined HotStuff: A new block every slot raises efficiency

Pipelining is what we all do intuitively when we have two loads of laundry to complete. Instead of waiting for load 1 to finish the full cycle before starting load 2, we put load 1 in the dryer at the same time as load 2 goes in the washer. You can think of the original HotStuff as the naive approach to doing laundry (don't start on load 2 until load 1 is completely done), while Pipelined HotStuff progresses multiple laundry loads in a staggered fashion.

In Pipelined HotStuff we stagger proposals, such that a new block is proposed at each round, with the new block piggybacking on the message carrying the QC from the previous block. Block proposals march toward finality over the course of multiple rounds. The benefits of pipelining are significant: it raises the density of block proposals, since one is made in every slot, which raises throughput and lowers time-to-finality.

However, there is one major drawback, best illustrated with an example. Assume that the leaders for blocks N, N+1, and N+2 are Alice, Bob, and Charlie. If Bob misses his slot, then Alice's proposal is invalidated as well, because Bob's message carries both his proposal and a QC for Alice's proposal. When this happens, Charlie ends up being called upon to produce a block as if Alice's proposal had never existed. We refer to this behavior as "tail-forking", and it can be thought of as a mini-reorg of depth 1.

The possibility of tail-forking has significant consequences, because missed slots aren't necessarily accidental. If there is an opportunity to extract value by re-mining Alice's block while re-ordering or omitting some of the transactions, then Bob and Charlie can collude to have Bob intentionally miss his slot, triggering an opportunity for Charlie to re-mine Alice's block. This has been a significant drawback of Pipelined HotStuff protocols (some of which are in mainnet today).

---

MonadBFT changes this

MonadBFT is the first protocol to enable pipelining while making the algorithm tail fork-resistant. This tail fork-resistance comes from the fallback procedure when Bob misses his slot, which enables validators to piece together their collective knowledge of Alice's proposal and its level of consensus within the validator set.

In particular, under MonadBFT, if Bob misses his slot, the fallback procedure has validators send each other signed attestations stating whether they saw Alice's block. If the supermajority attests to Alice's block, then Charlie is forced to re-propose it. If Charlie wishes to propose a different block, he must provide signed attestations from a majority of validators attesting to not seeing Alice's block on time. In the typical case where Charlie re-proposes Alice's block, he then gets to propose his own block in the subsequent round.

The result is two important properties: tail-forking resistance and speculative single-slot finality. We have already spoken about tail-forking resistance, so let's understand the impact on finality. As before, assume the leaders for blocks N, N+1, and N+2 are Alice, Bob, and Charlie.

Under Pipelined 2-Phase HotStuff (i.e. before MonadBFT), as a validator (or a full node), you cannot finalize Alice's block proposal until you see Charlie's block proposal. Why? Because if you finalize as soon as you see Bob's proposal, it is possible that Bob is messing with you by ONLY forwarding his proposal to you, while planning to withhold it from everyone else, thus missing his slot.

But in MonadBFT, as soon as you see Bob's proposal, you can "speculatively" finalize Alice's proposal, because Bob's proposal includes a QC on Alice's proposal, which is proof that 2/3 of the network attested to it. Even if Bob is messing with you by ONLY forwarding his proposal to you, and is going to end up missing his slot, you know that a supermajority of the network saw Alice's proposal and, when they participate in the fallback procedure, will sign off on it again. The only way Alice's block won't get finalized is if validators equivocate and claim they didn't see Alice's message. This fault is easily provable: we have signed conflicting messages from them. If the penalty for equivocation is substantial, and it should be, this "speculative" finality is actually not that speculative.

---

Takeaways

MonadBFT is an extremely exciting development for consensus, and a worthy complement to other compounding improvements in Monad including Asynchronous Execution, Optimistic Parallel Execution, and MonadDb. Huge congrats to @MohammadMJalal1 and @KushalBabel on this significant breakthrough.

MonadBFT will be implemented shortly on Monad Testnet, which currently implements Pipelined 2-Phase HotStuff. For further reading, see the linked blog post and paper in the next tweet.
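The fallback rule described above (Charlie must re-propose Alice's block if a supermajority attests to having seen it) can be sketched as a toy decision function. All names, data shapes, and the exact quorum check are illustrative, not taken from the MonadBFT implementation:

```python
# Toy sketch of the MonadBFT fallback rule after Bob misses his slot.
# Names and data shapes are illustrative only.

def must_repropose(attestations: dict, stake: dict) -> bool:
    """The next leader must re-propose the dropped block iff validators
    holding a supermajority (2/3) of stake attest to having seen it.
    Simplification: the threshold is taken as >= 2/3 of total stake."""
    seen = sum(stake[v] for v, saw in attestations.items() if saw)
    total = sum(stake.values())
    return 3 * seen >= 2 * total

weights = {"v1": 10, "v2": 10, "v3": 10}
# Two of three equal-stake validators saw the proposal -> must re-propose.
assert must_repropose({"v1": True, "v2": True, "v3": False}, weights)
# Only one saw it -> the next leader may propose a fresh block instead.
assert not must_repropose({"v1": True, "v2": False, "v3": False}, weights)
```

The same quorum check is what makes speculative finality safe: a QC already proves that a supermajority saw the proposal, so the fallback can only drop it if validators provably equivocate.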
293 replies · 308 reposts · 1.6K likes · 116.5K views
Reckless ⨀ retweeted
Hyperliquid @HyperliquidX
Yesterday is a good reminder to stay humble, hungry, and focused on what matters: building a better financial system owned by the people. Hyperliquid is not perfect, but it will continue to iterate and grow through the collective efforts of builders, traders, and supporters.

Users with JELLY long positions at the time of settlement will be refunded by the Foundation as if their position settled at the closing price of 0.037555. This results in all JELLY traders being settled at a price advantageous to them, except flagged addresses.

To recap what happened: a trader self-traded a 4M USDC JELLY position at 0.0095. The price of JELLY then rose more than 4x, with HLP backstop-liquidating the 4M position. The short position led to a loss in HLP's account value. The OI cap formula is a dynamic function of global liquidity and OI on other venues including major CEXs. A 4M USDC position fell within those limits, but additional open interest was prevented from being opened beyond the automatically triggered cap. However, the key issue was that once HLP took over the position, it shared collateral with the other component vaults in the strategy and therefore did not trigger ADL.

Risk management on Hyperliquid is being strengthened in various ways, including:
+ HLP: The Liquidator vault will have a tight cap representing a small percentage of total HLP account value, be rebalanced less frequently, and use more sophisticated logic around taking backstop liquidations. ADL will be triggered if the Liquidator loses above a certain threshold, instead of collateral moving automatically from the other component vaults. Note that ADL is not expected to trigger during organic market activity.
+ OI caps: Open interest caps will be refined to be dynamic relative to market cap.
+ Delistings: Validators will vote onchain to delist assets that fall beneath thresholds.

Thank you for your continued feedback, support, and commitment.

Hyperliquid
474 replies · 629 reposts · 3.9K likes · 661.6K views
Reckless ⨀ retweeted
wishful_cynic @EvgenyGaevoy
1. First, some disclosures. Wintermute (and myself by extension) is long $HYPE. Not as much as people seem to think based on our wallet, but still a top-10 position for us as of today (and really since TGE). I also think Jeff and the team have clearly won the DeFi perp space as of today. It's a pretty significant achievement and they deserve all the praise (and the accompanying hyperliquid army 🙃). All this to say that I do believe in the future upside and have skin in the game.
5 replies · 9 reposts · 127 likes · 15.6K views
Reckless ⨀ retweeted
Keone Hon @keoneHD
It was a good day. I had 8 meetings and 3 interviews, wrote a bit of code for a quick POC, reviewed some docs PRs, sent a bunch of slack messages, hung out a tiiiny bit at The Studio's opening day, met up with friends for dinner, heard some good news from a talented Monad ecosystem team, did some writing. Weekdays are usually 9 am to 3 am. Meanwhile, the entire Monad Foundation team is firing on all cylinders covering a ton of ground. Let's do it again.
180 replies · 49 reposts · 797 likes · 26.1K views
Reckless ⨀ @Rcklss27
@banditxbt LMAO! I’m selling quality, not discounts bra If it costs peanuts, we expect monkeys 🙉 And monkeys don’t jump on us like Latinas do
0 replies · 0 reposts · 1 like · 23 views
banditxbt @banditxbt
@Rcklss27 $3K usd best I can do in these market conditions, ETH looks shaky as well as Bitcoin and SOL
1 reply · 0 reposts · 1 like · 74 views
banditxbt @banditxbt
step 1: make it
step 2:
36 replies · 6 reposts · 71 likes · 2.6K views
bill monday @billmondays
i hate mini-burgers
91 replies · 1 repost · 260 likes · 7.6K views
Ricky @rickybharti
1. we built on taiko, @taikoxyz flopped
2. we built on omni, @OmniFDN can't find pmf
3. we built on @zetachain, nobody knows them
4. we built on @shardeum, no comments here
5. we built on @SeiNetwork, they shut their cosmos chain
6. we built on @ton_blockchain, went crashing down the corporate lane

I guess I was the problem all along 🥺
264 replies · 25 reposts · 700 likes · 141.2K views
Reckless ⨀ retweeted
Esteban @breath_mirror
we spend our whole lives chasing recognition, validation, and status from people who wouldn't notice if we disappeared tomorrow
2 replies · 3 reposts · 21 likes · 875 views
Reckless ⨀ retweeted
Jarry Xiao @jarxiao
I have a lot of respect for @monad_xyz. Strong engineering teams can explain precisely how and why their systems are designed. LARPs often cannot, and teams that fork are often LARPs. If you believe the numbers, you can tell Monad has alpha from just their machine specs.
Keone Hon @keoneHD

How Monad Works

Summary / Network Parameters
- Monad is EVM bytecode-equivalent (you can redeploy bytecode without recompilation)
- Cancun fork (TSTORE, TLOAD, MCOPY) is supported
- The opcode-to-gas-units mapping is the same as Ethereum (e.g. ADD is 3)
- RPC conforms to geth's RPC interface
- Blocks are every 500 ms
- Finality occurs in 1 second; finality of block N occurs at the proposal of block N+2
- Block gas limit in testnet is 150 million gas, i.e. the gas rate is 300 million gas/s. This will increase over time
- 100-200 validators expected in consensus; on Day 1 of testnet, Monad will have about 55 globally-distributed validators

Frugality / Impact on Decentralization
The driving goal of Monad is to have better software algorithms for consensus and execution, offering high performance while preserving decentralization.

These algorithms deliver high performance while relying on nodes with relatively modest hardware:
- 32 GB of RAM
- 2x 2 TB SSDs
- 100 Mbps of bandwidth
- a 16-core 4.5 GHz processor like the AMD Ryzen 7950X

You can assemble this machine for about $1500.

These algorithms deliver high performance while maintaining a fully globally-distributed validator set and stake weight distribution. There isn't a reliance on a supermajority in one geographic region. One would think this is an obvious expectation, but many "high-performance" L1s actually derive their performance from having a supermajority of stake weight in close proximity.

Node
A Monad node has 3 components:
- monad-bft [consensus]
- monad-execution [execution + state]
- monad-rpc [handling user reads/writes]

The network is 100-200 voting nodes (we'll call them "validators" for the rest of this doc). Non-voting full nodes listen to network traffic. All nodes execute all transactions and have full state.

Consensus Mechanism
The overall consensus mechanism is MonadBFT.
MonadBFT has linear communication complexity, which allows it to scale to far more nodes than quadratic-complexity algorithms like CometBFT.

In the happy path, it follows the pattern of "one-to-many-to-one" or "fan out, fan in":
- The leader (Alice) broadcasts a signed block proposal to all other nodes (fan out), who acknowledge its validity by sending a signed attestation to the next leader, Bob (fan in)
- Bob aggregates the attestations into a "Quorum Certificate" (QC)
- Attestation signatures use the BLS signature scheme for ease of aggregation
- Bob broadcasts the QC to all the nodes, who attest to receiving it by sending a message to the 3rd leader (Charlie), who aggregates the attestations into a QC-on-QC
- Charlie sends the QC-on-QC to everyone. Upon receiving the QC-on-QC, everyone knows that Alice's block has been finalized

In the above story, Bob and Charlie are only sending out QCs or QCs-on-QCs, but in reality the proposals are pipelined:
- Bob's message contains both the QC for Alice's block and the contents of a new block
- Charlie's message contains the QC for Bob's block (which is a QC-on-QC for Alice's block) and also the transactions for a new block

When validators send an attestation for Bob's message, they are attesting to both the validity of Bob's block and the validity of the QC. This pipelining raises the throughput of the network, since a new block gets produced every slot. The diagram below shows how MonadBFT reaches consensus, with pipelining tracked at the top.

See the docs for a fuller description. Obvious questions addressed there are:
- How the network handles the unhappy path where Bob doesn't get a supermajority of attestations
- How the above mechanism results in nodes being sure that the block has been finalized once they have received the QC-on-QC

RaptorCast
MonadBFT requires the leader to directly send blocks to every validator. However, blocks may be quite large: 10,000 transactions/s * 200 bytes/tx = 2 MB/s.
Sending directly to 200 validators would require 400 MB/s (3.2 Gbps). We don't want validators to need such high upload bandwidth. RaptorCast is a specialized messaging protocol which solves this problem.

In RaptorCast, a block is erasure-coded to produce many smaller chunks. In erasure coding, the total size of all of the chunks is greater than the original data (by a multiplicative factor), but the original data can be restored using (almost) any combination of chunks whose total size matches the original data's size. For example, a 1000 kB block erasure-coded with a multiplicative factor of 3 might produce 150 20-kB chunks, but (roughly) any 50 of the chunks can reassemble the original message. RaptorCast uses a variant of Raptor codes as the encoding mechanism.

In RaptorCast, each chunk is sent to one validator, who is tasked with sending the chunk to every other validator in the network. That is, each chunk follows a two-level broadcast tree where the leader is the root, one other validator is at the first level, and all other validators are on the second level. Validators are assigned chunks pro-rata to their stake weight. Here's a diagram showing the RaptorCast protocol: each validator serves as a first-hop recipient for a range of chunks, and broadcasts those chunks to every other validator.

RaptorCast properties:
- Using the two-level broadcast tree ensures that message delivery occurs within 2x the longest hop
- Upload bandwidth for the leader is limited to the block size times the replication factor (roughly 2)
- Since chunks are assigned pro-rata to stake weight, and BFT assumes no more than 33% of stake is malicious, at most 33% of chunks could fail to reach their recipients. With a replication factor of 2x, nodes can reconstruct the original block despite a maximum 33% loss.
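The bandwidth and erasure-coding arithmetic above can be checked with a short back-of-the-envelope calculation. All figures come from the thread; the variable names are mine:

```python
# Back-of-the-envelope numbers from the RaptorCast discussion above.

TX_PER_SEC = 10_000
BYTES_PER_TX = 200
VALIDATORS = 200

block_bytes_per_sec = TX_PER_SEC * BYTES_PER_TX     # 2 MB/s of block data
naive_upload = block_bytes_per_sec * VALIDATORS     # leader sends to everyone
assert naive_upload == 400_000_000                  # 400 MB/s ~= 3.2 Gbps

# Erasure coding: a 1000 kB block, replication factor 3, 20 kB chunks.
block_kb, factor, chunk_kb = 1000, 3, 20
n_chunks = block_kb * factor // chunk_kb            # chunks produced
k_needed = block_kb // chunk_kb                     # roughly any k suffice
assert (n_chunks, k_needed) == (150, 50)

# Under RaptorCast the leader's upload is bounded by block size times the
# replication factor (~2), independent of validator count.
raptorcast_upload = block_bytes_per_sec * 2         # ~4 MB/s instead of 400
```

This is why a 100 Mbps node can act as leader: the two-level broadcast tree shifts the fan-out cost onto the whole validator set instead of the leader alone.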
Transaction Lifecycle
- User submits a pending transaction to an RPC node
- The RPC node sends the pending transaction to the next 3 leaders based on the leader schedule
- The pending transaction gets added to those leaders' local mempools
- Leaders add transactions to their block as they see fit [default: ordered by descending fee-per-gas-unit, i.e. a Priority Gas Auction]
- The leader proposes a block, which is confirmed by the network as described above

Note: directly forwarding to upcoming leaders (as opposed to flood-forwarding to all nodes) greatly reduces traffic; flood forwarding would take up the entire bandwidth. Note: in the future, a behavior is being considered where leaders forward pending transactions (that they weren't able to include in their block) to the next leader.

Leader Election
- Leaders in the current testnet are permissioned. Staking will be added shortly
- An epoch occurs roughly once per day. Validator stake weights are locked in one epoch ahead (i.e. any changes for epoch N+1 must be registered prior to the start of epoch N)
- At the start of each epoch, each validator computes the leader schedule by running a deterministic pseudorandom function on the stake weights. Since the function is deterministic, everyone arrives at the same leader schedule

Asynchronous Execution
Monad pipelines consensus and execution, moving execution out of the hot path of consensus into a separate swim lane and allowing execution to utilize the full block time.
- Consensus is reached prior to execution
- The leader and validators check transaction validity (valid signature; valid nonce; submitter can pay for the data cost of the transaction being transmitted), but are not required to execute the transactions prior to voting
- After a block is finalized, it is executed; meanwhile consensus is already proceeding on subsequent blocks

This is in contrast to most blockchains, which have interleaved execution.
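The contrast with interleaved execution can be made concrete with a rough time budget. The 500 ms block time is from the thread; the consensus round-trip figure is an illustrative assumption, not a Monad number:

```python
# Rough execution-budget comparison for one 500 ms slot.
# CONSENSUS_MS is an assumed figure for cross-globe "fan out, fan in"
# rounds, chosen only to illustrate the shape of the trade-off.

BLOCK_TIME_MS = 500
CONSENSUS_MS = 450

# Interleaved execution: the leader executes before proposing and
# validators execute before voting, so execution must fit in whatever
# time consensus leaves over.
interleaved_budget = BLOCK_TIME_MS - CONSENSUS_MS   # 50 ms

# Asynchronous execution: execution runs in its own lane, one block
# behind consensus, so it can use (nearly) the full block time.
async_budget = BLOCK_TIME_MS                        # 500 ms

assert interleaved_budget == 50
assert async_budget == 500
```

Under these assumed numbers, moving execution off the hot path multiplies the execution budget by 10x without changing the block time.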
One way to understand the impact of asynchronous execution is to recognize that, in interleaved execution, the execution budget is necessarily a small fraction of the block time: the leader must execute the transactions before proposing the block, and validators must execute before responding. For a 500 ms block time, almost all of the time will be budgeted for multiple rounds of cross-globe communication, leaving only a small fraction of the time for execution.

The diagram below contrasts interleaved execution with asynchronous execution. Blue rectangles correspond to time spent on execution, orange rectangles to time spent on consensus. The budget for execution is much larger in async execution.

Delayed merkle root
Due to async execution, Monad block proposals don't include the merkle root of the state trie, since that would require execution to have already completed. All nodes should stay in sync because they're all starting from the same point and doing the same work. But it'd be nice to be sure! As a precaution, proposals in Monad also include a delayed merkle root from D blocks ago, allowing nodes to detect if they have diverged. D is a systemwide parameter, currently set to 3.

If any validator makes a computation error (cosmic rays?) when computing the state root at block N, it will realize that it possibly erred by block N+D (since the delayed merkle root for N contained in that block differs from its local view). The validator then needs to wait until N+D+2 to see whether 2/3 of the stake weight finalizes the block proposal at N+D (in which case the local node made the error) or the block gets rejected (in which case the leader made the error).

Block stages
Assume that a validator has just received block N.
We say that:
- Block N is 'proposed'
- Block N-1 is 'voted'
- Block N-2 is 'finalized' (because block N carries the QC-on-QC of block N-2)
- Block N-2-D is 'verified' (because block N-2 carries the merkle root post the transactions in block N-2-D, and block N-2 is the last block that has been finalized)

Note that unlike Ethereum, only one block at height N is proposed and voted on, avoiding retroactive block reorganization due to competing forks.

Speculative execution
Although only block N-2 is 'finalized' and can officially be executed, nodes have a strong suspicion that the lists of transactions in block N-1 and block N are likely to become the finalized lists. Therefore, nodes speculatively execute the transactions included in each new proposed block, storing a pointer to the state trie post those transactions. In the event that a block ends up not being finalized, the pointer is discarded, undoing the execution. Speculative execution allows nodes to (likely) have the most up-to-date state, which helps users simulate transactions correctly.

Optimistic parallel execution
Like in Ethereum, blocks are linearly ordered, as are transactions. That means the true state of the world is the state arrived at by executing all transactions one after another. In Monad, transactions are executed optimistically in parallel to generate pending results. A pending result contains the list of storage slots that were read (SLOADed) and written (SSTOREd) during the course of that execution; we refer to these slots as "inputs" and "outputs". Pending results are committed serially, checking that each pending result's inputs are still valid, and re-executing if an input has been invalidated.
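The optimistic execute / serial commit cycle just described can be modeled in a few lines. The transaction shape, function names, and flat-dict "state" are hypothetical stand-ins, not Monad's actual data structures:

```python
# Minimal model of optimistic parallel execution + serial commitment.
# Each transaction reads some slots ("inputs") and writes some ("outputs").

def execute(tx, state):
    """Run one transaction against a state snapshot, recording the
    values it read (inputs) and the values it produced (outputs)."""
    inputs = {slot: state[slot] for slot in tx["reads"]}
    outputs = tx["apply"](inputs)
    return inputs, outputs

def run_block(txs, state):
    # Phase 1: execute all transactions optimistically against the same
    # pre-block state, producing pending results (in reality in parallel).
    pending = [execute(tx, state) for tx in txs]
    # Phase 2: commit serially; if any recorded input no longer matches
    # the committed state, the transaction is re-executed.
    for tx, (inputs, outputs) in zip(txs, pending):
        if any(state[s] != v for s, v in inputs.items()):
            _, outputs = execute(tx, state)  # input invalidated: re-run
        state.update(outputs)
    return state

# Two transfers out of Alice's balance, mirroring the USDC walkthrough
# in the thread: tx 1's optimistic read of Alice's balance is invalidated
# by tx 0's commit, so tx 1 is re-executed once.
state = {"alice": 1000, "bob": 0, "charlie": 400}
txs = [
    {"reads": ["alice", "bob"],
     "apply": lambda i: {"alice": i["alice"] - 100, "bob": i["bob"] + 100}},
    {"reads": ["alice", "charlie"],
     "apply": lambda i: {"alice": i["alice"] - 100, "charlie": i["charlie"] + 100}},
]
assert run_block(txs, state) == {"alice": 800, "bob": 100, "charlie": 500}
```

Because commits happen in block order, the final state is identical to fully serial execution; parallelism only changes how much work happens ahead of time.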
This serial commitment ensures that the result is the same as if the transactions were executed serially. Here's an example of how Optimistic Parallel Execution works. Assume that prior to the start of a block, the following are the USDC balances:
- Alice: 1000 USDC
- Bob: 0 USDC
- Charlie: 400 USDC

(Note also that each of these balances corresponds to 1 storage slot, since each is 1 key-value pair in a mapping in the USDC contract.)

Two transactions appear as transactions 0 and 1 in the block:
- Transaction 0: Alice sends 100 USDC to Bob
- Transaction 1: Alice sends 100 USDC to Charlie

Then optimistic parallel execution will produce two pending results:
- PendingResult 0:
  * Inputs: Alice = 1000 USDC; Bob = 0 USDC
  * Outputs: Alice = 900 USDC; Bob = 100 USDC
- PendingResult 1:
  * Inputs: Alice = 1000 USDC; Charlie = 400 USDC
  * Outputs: Alice = 900 USDC; Charlie = 500 USDC

When we go to commit these pending results:
- PendingResult 0 is committed successfully, changing the official state to Alice = 900, Bob = 100, Charlie = 400
- PendingResult 1 cannot be committed because one of its inputs now conflicts (Alice was assumed to have 1000, but actually has 900), so transaction 1 is re-executed

Final result:
- Alice: 800 USDC
- Bob: 100 USDC
- Charlie: 500 USDC

Note that in optimistic parallel execution, every transaction gets executed at most twice: once optimistically, and (at most) once when it is being committed. Re-execution is typically cheap because storage slots are usually in cache. It is only when re-execution triggers a different codepath (requiring a different slot) that execution has to read a storage slot from SSD.

MonadDb
As in Ethereum, state is stored in a merkle trie.
There is a custom database, MonadDb, which stores merkle trie data natively. This differs from existing clients, which embed the merkle trie inside a commodity database that itself uses a tree structure. MonadDb is a significant optimization because it eliminates a level of indirection, reduces the number of pages read from SSD to perform one lookup, allows for async I/O, and allows the filesystem to be bypassed.

State access [SLOAD and SSTORE] is the biggest bottleneck for execution, and MonadDb is a significant unlock for state access because:
- it reduces the number of iops to read or write one value
- it makes recomputing the merkle root a lot faster
- it supports many parallel reads, which the parallel execution system can take advantage of

Synergies between optimistic parallel execution and MonadDb
Optimistic parallel execution can be thought of as surfacing many storage slot dependencies (all of the inputs and outputs of the pending results) in parallel and pulling them into the cache. Even in the worst case where every pending result's inputs are invalidated and the transaction has to be re-executed, optimistic parallel execution is still extremely useful by "running ahead" of the serial commitment and pulling many storage slots from SSD. This makes optimistic parallel execution and MonadDb work really well together: MonadDb provides fast asynchronous state lookups, while optimistic parallel execution queues up many parallel reads from SSD.

Bootstrapping a node (Statesync/Blocksync)
High throughput means a long transaction history, which makes replaying from genesis challenging. Most node operators will prefer to initialize their nodes by copying over recent state from other nodes and only replaying the last mile. This is what statesync accomplishes.

In statesync, a synchronizing node ("client") provides its current view's version and a target version, and asks other nodes ("servers") to help it progress from the current view to the target version. MonadDb has versioning on each node in the trie; servers use this version information to identify which trie components need to be sent.

Nodes can also request blocks from their peers in a protocol called blocksync. This is used if a block is missed (not enough chunks arrived), as well as when executing the "last mile" after statesync completes (since more blocks will have come in since the start of statesync).

Thanks for reading, and be sure to check out the docs.
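One way to picture the versioned-trie statesync described above is a toy diff over a versioned store. A flat dict stands in for the trie, and all names are mine, not MonadDb's:

```python
# Toy model of statesync over a versioned store. MonadDb versions each
# trie node; here a flat dict of key -> (version, value) stands in for
# the trie. Names are illustrative only.

def statesync_diff(server_db, client_version, target_version):
    """The server sends only entries written after the client's current
    version and at or before the requested target version; the client
    replays the remaining "last mile" of blocks via blocksync."""
    return {k: (ver, val) for k, (ver, val) in server_db.items()
            if client_version < ver <= target_version}

server = {"a": (1, "old"), "b": (5, "mid"), "c": (9, "new")}
# A client already at version 1 only needs the entries written since then.
delta = statesync_diff(server, client_version=1, target_version=9)
assert delta == {"b": (5, "mid"), "c": (9, "new")}
```

The per-node versioning is what makes this incremental: the server never has to walk or send the parts of the trie the client already holds.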

6 replies · 15 reposts · 157 likes · 39.2K views
bill monday @billmondays
'Thanks For Playing' is a commemorative community mint for Testnet NFT week. Featuring art from @Liliia_Eth, this piece is offered as a 1-day open mint. You may attribute your own value to it, but otherwise this means nothing beyond vibes. Thx4playing magiceden.io/mint-terminal/…
137 replies · 58 reposts · 547 likes · 41.8K views