winverse

799 posts

@TagboWinner

gov + infra | ETH Global Hackathon winner | Contributor @RariFoundation | prev @blackflagDAO newsletter lead

Joined January 2020
695 Following · 508 Followers
winverse
winverse@TagboWinner·
@aisconnolly @TACEO_IO Congratulations ais! You are so passionate about privacy (everyone should be, honestly), so I’m happy to see this go live.
0
0
1
31
winverse reposted
Elon Musk
Elon Musk@elonmusk·
@iam_smx *trillionaire
7.6K
7.3K
127.7K
10.7M
winverse
winverse@TagboWinner·
@philogy @plankevm During Devconnect, you said you would have something by Q1. Happy to see this!
1
0
1
62
philogy
philogy@real_philogy·
IT'S ALIVE!!! ⚡️ Super pumped to share a major milestone in @plankevm's development: We have the first E2E codegen working. Meaning plank in, EVM bytecode out. ⚙️ There's so much low hanging fruit both in usability & gas optimization but so happy to see progress.
philogy tweet media
6
5
60
2.8K
winverse
winverse@TagboWinner·
DeFi keeps maturing. Over the past couple of days, I’ve discovered possibilities that were once out of reach.
0
0
1
46
winverse
winverse@TagboWinner·
@VitalikButerin Does the concept of coordination flags still apply here though?
0
0
0
6
vitalik.eth
vitalik.eth@VitalikButerin·
We need more DAOs - but different and better DAOs.

The original drive to build Ethereum was heavily inspired by decentralized autonomous organizations: systems of code and rules that lived on decentralized networks, that could manage resources and direct activity more efficiently and more robustly than traditional governments and corporations could. Since then, the concept of DAOs has migrated to essentially referring to a treasury controlled by token holder voting - a design which "works", hence why it got copied so much, but a design which is inefficient, vulnerable to capture, and fails utterly at the goal of mitigating the weaknesses of human politics. As a result, many have become cynical about DAOs. But we need DAOs.

* We need DAOs to create better oracles. Today, decentralized stablecoins, prediction markets, and other basic building blocks of defi are built on oracle designs that we are not satisfied with. If the oracle is token-based, whales can manipulate the answer on a subjective issue and it becomes difficult to counteract them. Fundamentally, a token-based oracle cannot have a cost of attack higher than its market cap, which in turn means it cannot secure assets without extracting rent higher than the discount rate. And if the oracle uses human curation, then it's not very decentralized. The problem here is not greed. The problem is that we have bad oracle designs, we need better ones, and bootstrapping them is not just a technical problem but also a social problem.
* We need DAOs for onchain dispute resolution, a necessary component of many types of more advanced smart contract use cases (eg. insurance). This is the same type of problem as price oracles, but even more subjective, and so even harder to get right.
* We need DAOs to maintain lists. This includes: lists of applications known to be secure or not scams, lists of canonical interfaces, lists of token contract addresses, and much more.
* We need DAOs to get projects off the ground quickly. If you have a group of people who all want something done and are willing to contribute some funds (perhaps in exchange for benefits), then how do you manage this, especially if the task is too short-duration for legal entities to be worth it?
* We need DAOs to do long-term project maintenance. If the original team of a project disappears, how can a community keep going, and how can new people coming in get the funding they need?

One framework that I use to analyze this is "convex vs concave" from vitalik.eth.limo/general/2020/1… . If the DAO is solving a concave problem, then it is in an environment where, if faced with two possible courses of action, a compromise is better than a coin flip. Hence, you want systems that maximize robustness by averaging (or rather, medianing) in input from many sources, and protect against capture and financial attacks. If the DAO is solving a convex problem, then you want the ability to make decisive choices and follow through on them. In this case, leaders can be good, and the job of the decentralized process should be to keep the leaders in check.

For all of this to work, we need to solve two problems: privacy and decision fatigue. Without privacy, governance becomes a social game (see vitalik.eth.limo/general/2025/0… ). And if people have to make decisions every week, for the first month you see excited participation, but over time willingness to participate, and even to stay informed, declines.

I see modern technology as opening the door to a renaissance here. Specifically:

* ZK (and in some cases MPC/FHE, though these should be used only when ZK alone cannot solve the problem) for privacy
* AI to solve decision fatigue
* Consensus-finding communication tools (like pol.is, but going further)

AI must be used carefully: we must *not* put full-size deepseek (or worse, GPT 5.2) in charge of a DAO and call it a day. Rather, AI must be put in thoughtfully, as something that scales and enhances human intention and judgement, rather than replacing it. This could be done at the DAO level (eg. see how deepfunding.org works), or at the individual level (user-controlled local LLMs that vote on their behalf).

It is important to think about the "DAO stack" as also including the communication layer, hence the need for forums and platforms specially designed for the purpose. A multisig plus well-designed consensus-finding tools can easily beat idealized collusion-resistant quadratic funding plus crypto twitter.

But in all cases, we need new designs. Projects that need new oracles and want to build their own should see that as 50% of their job, not 10%. Projects working on new governance designs should build with ZK and AI in mind, and they should treat the communication layer as 50% of their job, not 10%. This is how we can ensure the decentralization and robustness of the Ethereum base layer also applies to the world that gets built on top.
811
521
3.2K
383.5K
shafu
shafu@shafu0x·
it is finally all coming together!
- stablecoins
- agents
- x402
- wallets
- block space
- chain abstraction
63
24
342
15.3K
winverse
winverse@TagboWinner·
Ethereum address ≠ memory address. A memory address is just a number used as an index into a byte array, and it has no meaning outside the current execution. So essentially: the stack computes, memory packages bytes, and storage persists.
0
1
2
82
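The stack/memory/storage split in the tweet above can be sketched in Rust. This is an illustrative toy, not reth or any real EVM implementation; `ToyVm`, `mstore8`, and `sstore` are hypothetical names chosen to echo the corresponding EVM opcodes:

```rust
use std::collections::HashMap;

// Toy model of the three EVM data areas the tweet names:
// the stack computes, memory holds transient bytes, storage persists.
struct ToyVm {
    stack: Vec<u64>,            // operands for computation, gone after the call
    memory: Vec<u8>,            // byte array indexed by plain numeric offsets
    storage: HashMap<u64, u64>, // key-value state that outlives execution
}

impl ToyVm {
    fn new() -> Self {
        ToyVm { stack: Vec::new(), memory: Vec::new(), storage: HashMap::new() }
    }

    // MSTORE8-style write: a "memory address" is just an index into bytes.
    fn mstore8(&mut self, offset: usize, value: u8) {
        if offset >= self.memory.len() {
            self.memory.resize(offset + 1, 0); // memory grows on demand
        }
        self.memory[offset] = value;
    }

    // SSTORE-style write: this is the part that would be persisted state.
    fn sstore(&mut self, key: u64, value: u64) {
        self.storage.insert(key, value);
    }
}

fn main() {
    let mut vm = ToyVm::new();
    vm.stack.push(2);
    vm.stack.push(3);
    // stack computes...
    let sum = vm.stack.pop().unwrap() + vm.stack.pop().unwrap();
    vm.mstore8(0, sum as u8); // ...memory packages bytes...
    vm.sstore(0, sum);        // ...storage persists.
    println!("sum = {}", sum);
}
```

An Ethereum account address, by contrast, identifies state that exists independently of any single execution, which is exactly the distinction the tweet draws.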
winverse
winverse@TagboWinner·
Newer AI dApps are facing a lot of hacks. Understand the infra, then leverage AI. Not as easy as it sounds though.
winverse tweet media
0
0
2
54
winverse
winverse@TagboWinner·
Memory starts from 0 and expands dynamically.
0
0
1
38
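That expansion behavior can be sketched as follows. This is a simplified model: real EVM memory rounds expansion up to 32-byte words and charges gas for growth, which the `expand_memory` helper below (a hypothetical name) deliberately omits:

```rust
// Simplified EVM-style memory: starts at size 0, expands on first touch,
// and new bytes are always zero-initialized.
fn expand_memory(memory: &mut Vec<u8>, offset: usize, len: usize) {
    let end = offset + len;
    if end > memory.len() {
        memory.resize(end, 0); // fresh memory reads as zero, like in the EVM
    }
}

fn main() {
    let mut memory: Vec<u8> = Vec::new(); // starts at size 0
    assert_eq!(memory.len(), 0);

    expand_memory(&mut memory, 64, 32); // first touch at offset 64
    assert_eq!(memory.len(), 96);       // grew to cover the touched range
    assert!(memory.iter().all(|&b| b == 0));

    println!("memory grew to {} bytes", memory.len());
}
```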
winverse
winverse@TagboWinner·
lib code can import lib code. bin code can import bin code. bin code can import lib code. lib code CANNOT import bin code.
1
0
1
64
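Assuming the tweet is describing Rust/Cargo conventions, the key rule is the last one: within a package, the binary target (src/main.rs) can depend on the library target (src/lib.rs), but the library can never `use` anything defined in the binary. A single-file stand-in, with a module playing the role of the lib crate (`lib_code` and `greet` are illustrative names):

```rust
// Cargo layout this sketch mimics (hypothetical package name `mypkg`):
//
//   src/lib.rs   -> library crate: `pub fn greet(...) -> String { ... }`
//   src/main.rs  -> binary crate:  `use mypkg::greet;`   // bin -> lib: OK
//
// The reverse does not compile: src/lib.rs cannot import from src/main.rs,
// because the library is built without the binary existing as a dependency.

// This module stands in for the lib crate.
mod lib_code {
    pub fn greet(name: &str) -> String {
        format!("hello, {}", name)
    }
}

// And `main` stands in for the bin crate, importing from the lib.
fn main() {
    let msg = lib_code::greet("winverse");
    println!("{}", msg);
}
```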
winverse
winverse@TagboWinner·
pc as usize vs u32:
usize: no-brainer for indexing, suitable when building a local interpreter.
u32: needed for serialization, imperative when designing a spec-aligned VM.
0
0
2
57
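One common way to get both properties (a sketch, not any particular VM's design; `VmState` and `fetch` are illustrative names) is to store `pc` as `u32` so the state has a fixed, host-independent width, and cast to `usize` only at the indexing site:

```rust
// `pc` stays u32: fixed-width, so serialized state is identical on
// 32-bit and 64-bit hosts, matching a spec-aligned VM.
struct VmState {
    pc: u32,
}

// The cast to usize happens exactly once, where slice indexing needs it.
fn fetch(code: &[u8], state: &VmState) -> Option<u8> {
    code.get(state.pc as usize).copied()
}

fn main() {
    // Illustrative bytes only; not meant as meaningful EVM bytecode.
    let code = [0x60u8, 0x01, 0x00];
    let mut state = VmState { pc: 0 };
    while let Some(op) = fetch(&code, &state) {
        println!("pc={} opcode=0x{:02x}", state.pc, op);
        state.pc += 1;
    }
}
```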
winverse
winverse@TagboWinner·
The txn processor has 3 major arms that make the execution flow quite seamless. The txn pool (one arm) records incoming txns and does validation + ordering. As I study the reth arch more deeply, I will make the execution model diagram I previously shared more elaborate. Mental model >
1
1
1
69
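The "txn pool arm" described above can be sketched minimally: record incoming transactions, validate them, and keep them ordered. This is an illustrative toy, not reth's actual pool design; `Tx`, `TxPool`, `submit`, and `best` are hypothetical names, and ordering here is simply highest fee first:

```rust
struct Tx {
    nonce: u64,
    fee: u64,
}

struct TxPool {
    pending: Vec<Tx>, // kept sorted: highest fee first
}

impl TxPool {
    fn new() -> Self {
        TxPool { pending: Vec::new() }
    }

    // Validation + ordering on insert: reject zero-fee txs as a stand-in
    // for real validation, then re-sort by fee.
    fn submit(&mut self, tx: Tx) -> bool {
        if tx.fee == 0 {
            return false; // failed validation: not recorded
        }
        self.pending.push(tx);
        self.pending.sort_by(|a, b| b.fee.cmp(&a.fee));
        true
    }

    // The ordering makes "which tx to execute next" a trivial lookup.
    fn best(&self) -> Option<&Tx> {
        self.pending.first()
    }
}

fn main() {
    let mut pool = TxPool::new();
    pool.submit(Tx { nonce: 0, fee: 5 });
    pool.submit(Tx { nonce: 1, fee: 9 });
    let rejected = !pool.submit(Tx { nonce: 2, fee: 0 });
    println!("zero-fee rejected: {}", rejected);
    println!("best fee = {}", pool.best().unwrap().fee);
}
```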
winverse
winverse@TagboWinner·
@VitalikButerin Driving changes/updates that are absolutely necessary, while further experimentation is still carried out in the background regardless
0
0
4
54
vitalik.eth
vitalik.eth@VitalikButerin·
An important, and perennially underrated, aspect of "trustlessness", "passing the walkaway test" and "self-sovereignty" is protocol simplicity. Even if a protocol is super decentralized with hundreds of thousands of nodes, and it has 49% byzantine fault tolerance, and nodes fully verify everything with quantum-safe peerdas and starks, if the protocol is an unwieldy mess of hundreds of thousands of lines of code and five forms of PhD-level cryptography, ultimately that protocol fails all three tests:

* It's not trustless, because you have to trust a small class of high priests who tell you what properties the protocol has
* It doesn't pass the walkaway test, because if existing client teams go away, it's extremely hard for new teams to get up to the same level of quality
* It's not self-sovereign, because if even the most technical people can't inspect and understand the thing, it's not fully yours

It's also less secure, because each part of the protocol, especially if it can interact with other parts in complicated ways, carries a risk of the protocol breaking.

One of my fears with Ethereum protocol development is that we can be too eager to add new features to meet highly specific needs, even if those features bloat the protocol or add entire new types of interacting components or complicated cryptography as critical dependencies. This can be nice for short-term functionality gains, but it is highly destructive to preserving long-term self-sovereignty, and creating a hundred-year decentralized hyperstructure that transcends the rise and fall of empires and ideologies.

The core problem is that if protocol changes are judged from the perspective of "how big are they as changes to the existing protocol", then the desire to preserve backwards compatibility means that additions happen much more often than subtractions, and the protocol inevitably bloats over time. To counteract this, the Ethereum development process needs an explicit "simplification" / "garbage collection" function.

"Simplification" has three metrics:

* Minimizing total lines of code in the protocol. An ideal protocol fits onto a single page - or at least a few pages
* Avoiding unnecessary dependencies on fundamentally complex technical components. For example, a protocol whose security solely depends on hashes (even better: on exactly one hash function) is better than one that depends on hashes and lattices. Throwing in isogenies is worst of all, because (sorry to the truly brilliant hardworking nerds who figured that stuff out) nobody understands isogenies.
* Adding more _invariants_: core properties that the protocol can rely on. For example, EIP-6780 (selfdestruct removal) added the property that at most N storage slots can be changed per slot, significantly simplifying client development, and EIP-7825 (per-tx gas cap) added a maximum on the cost of processing one transaction, which greatly helps ZK-EVMs and parallel execution.

Garbage collection can be piecemeal, or it can be large-scale. The piecemeal approach tries to take existing features and streamline them so that they are simpler and make more sense. One example is the gas cost reforms in Glamsterdam, which make many gas costs that were previously arbitrary instead depend on a small number of parameters that are clearly tied to resource consumption. One large-scale garbage collection was replacing PoW with PoS. Another is likely to happen as part of Lean consensus, opening the room to fix a large number of mistakes at the same time ( youtube.com/watch?v=10Ym34… ).

Another approach is "Rosetta-style backwards compatibility", where features that are complex but little-used remain usable but are "demoted" from being part of the mandatory protocol and instead become smart contract code, so new client developers do not need to bother with them. Examples:

* After we upgrade to full native account abstraction, all old tx types can be retired, and EOAs can be converted into smart contract wallets whose code can process all of those transaction types
* We can replace existing precompiles (except those that are _really_ needed) with EVM or later RISC-V code
* We can eventually change the VM from EVM to RISC-V (or another simpler VM); EVM could be turned into a smart contract in the new VM

Finally, we want to move away from client developers feeling the need to handle all older versions of the Ethereum protocol. That can be left to older client versions running in docker containers.

In the long term, I hope that the rate of change to Ethereum can be slower. I think for various reasons that ultimately that _must_ happen. These first fifteen years should in part be viewed as an adolescence stage where we explored a lot of ideas and saw what works and what is useful and what is not. We should strive to avoid the parts that are not useful being a permanent drag on the Ethereum protocol. Basically, we want to improve Ethereum in a way that looks like this:
YouTube video
vitalik.eth tweet media
660
523
3.7K
380K
winverse
winverse@TagboWinner·
@_m_y_k_e But you know these things already 😅
1
0
0
34
winverse
winverse@TagboWinner·
Attempt to access an element using indexing. If index >= array length, Rust panics. Pretty useful safety check to avoid invalid memory access. Diving deeper into error handling over the coming weeks.
1
0
2
76
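The two flavors of that check side by side: `v[i]` panics on an out-of-bounds index, while `v.get(i)` returns an `Option` and never panics. The `safe_get` wrapper is just an illustrative name for the latter:

```rust
// Non-panicking lookup: None when i >= v.len(), instead of aborting.
fn safe_get(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = vec![10, 20, 30];

    assert_eq!(safe_get(&v, 1), Some(20));
    assert_eq!(safe_get(&v, 3), None); // 3 >= v.len(): None, not a panic

    // Direct indexing panics on out-of-bounds; observable via catch_unwind.
    let result = std::panic::catch_unwind(|| v[3]);
    assert!(result.is_err());
    println!("out-of-bounds access panicked as expected");
}
```

This panic-vs-`Option` split is the starting point for Rust error handling more broadly: `Option`/`Result` for recoverable cases, panics for bugs.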