MAX

4.5K posts

MAX
@web3el

web3 builder, eth maximalist

Joined April 2022
765 Following · 175 Followers

MAX retweeted
余烬
余烬@EmberCN·
This whale/institution has been buying ETH these past few days as if it were dollar-cost averaging, using one wallet each day to buy 10,000–20,000 ETH. 😂 The "DCA" amounts are just a *little* bit enormous. Today was no exception: in the early hours it bought another 14,424 ETH with 30.72M USDT through one wallet. intel.arkm.com/explorer/addre… It has now spent $253M in stablecoins across 6 wallets buying 117,800 ETH on-chain, at an average price of $2,149. ---------------------------------------------------- #Bitget VIP — lower fees, bigger perks! Buy US stocks with instant entry
余烬@EmberCN

This whale/institution that has been continuously buying ETH recently (several of its addresses were labeled as possibly Erik Voorhees, but he has explicitly said it isn't him) bought another 17,084.3 ETH ($36.75M) 8 hours ago. Since 3/10, over 10 days, it has spent a total of $222M in stablecoins across 5 wallets to buy 103,300 ETH on-chain, at an average price of about $2,151. ---------------------------------------------------- #Bitget VIP — lower fees, bigger perks! Buy US stocks with instant entry

6 replies · 4 reposts · 23 likes · 8.9K views
MAX retweeted
NIK
NIK@ns123abc·
🚨NEWS: Cursor’s $50B “in-house model” is literally Kimi K2.5 with RL on top. Got caught in 24 hours

>be Moonshot AI
>spend hundreds of millions training Kimi K2.5
>1 trillion parameters, 15 trillion tokens, agent swarm architecture
>beat GPT-5.2 and Opus 4.5 on real benchmarks
>open-source it because you believe in the ecosystem
>one condition: display “Kimi K2.5” if you make over $20M/month from it
>Cursor takes the model
>runs RL on coding tasks
>ships it March 19 as “Composer 2”
>blog post: “continued pretraining + scaled reinforcement learning”
>zero mention of Kimi K2.5
>“our in-house models generate more code than almost any other LLMs in the world”
>publishes benchmark chart
>Composer 2 against Opus 4.6 and GPT-5.4
>uses the chart to justify raising at $50 billion!
>less than 24 hours later
>kimi dev intercepts the API response
>model ID: kimi-k2p5-rl-0317-s515-fast
>they didn’t even rename it
>Moonshot head of pretraining runs tokenizer test
>confirms: identical to Kimi’s tokenizer
>publicly tags Cursor’s co-founder: “why aren’t you respecting our license?”
>two more Moonshot employees post confirmations
>all three posts deleted within hours
>legal is now involved
>but it gets worse
>Cursor had Kimi K2.5 listed as a FREE model in their UI just weeks ago
>users were openly using it
>Feb 9: “K2.5 was in my model list. I updated and it vanished”
>it vanished because Cursor pulled it from the picker, and relaunched it as their own model
>Moonshot valuation: $4.3B
>Cursor valuation: $50B

Absolute state of Cursor.
Elon Musk@elonmusk

@fynnso Yeah, it’s Kimi 2.5

214 replies · 389 reposts · 5.3K likes · 632.4K views
MAX retweeted
Elon Musk
Elon Musk@elonmusk·
@fynnso Yeah, it’s Kimi 2.5
182 replies · 152 reposts · 3.7K likes · 787.7K views
MAX retweeted
吴说区块链
吴说区块链@wublockchain12·
Security firm BlockSec re-ran EVMBench and concluded the benchmark overstates how much of smart-contract auditing AI can automate. BlockSec extended the tests to 26 model configurations and added 22 real-world attacks that occurred after February 2026. Across 110 test runs, the models' success rate at actually exploiting the real attacks was 0%, but their vulnerability-detection results were close to the original report, with some models able to recognize known vulnerability patterns. (The Block) wublock123.com/index.php?m=co…
0 replies · 1 repost · 4 likes · 1.6K views
MAX retweeted
宝玉
宝玉@dotey·
Less than 24 hours after Cursor shipped Composer 2, developers caught it with its pants down. (The following was drafted with Claude's assistance.)

A developer named Fynn was debugging Cursor's API when he noticed the returned model ID read: kimi-k2p5-rl-0317-s515-fast. In plain language: this is Moonshot AI's Kimi K2.5 with a layer of reinforcement learning (RL) fine-tuning on top.

Moonshot AI's head of pretraining, Yulun Du, promptly posted on X to confirm that, per his tests, Composer 2's tokenizer is identical to Kimi's, and tagged Cursor co-founder Michael Truell directly, asking why Cursor wasn't honoring the license and hadn't paid anything. Two other Moonshot employees also posted confirmations, though all three posts were later deleted.

When Cursor launched Composer 2 on March 19, it attributed the performance gains only to "continued pretraining of the base model plus reinforcement learning," never mentioning Kimi K2.5. The two statements aren't contradictory: continued pretraining and RL are always done on top of some base model. Cursor just never said whose base it was.

This isn't the first time. When Cursor released Composer 1 last October, developers in multiple countries noticed Chinese text frequently appearing in its generated code. Alley Corp partner Kenneth Auchenberg posted screenshots at the time calling it "smoking-gun evidence" that Composer 1 was fine-tuned from a Chinese open-source model. KR-Asia and 36Kr later confirmed that both Cursor and Windsurf were using Chinese open-source models, with Windsurf admitting it used Zhipu's GLM. Cursor never publicly disclosed Composer 1's underlying model; it quietly shipped Composer 1.5 and moved on.

The license is the core issue. Kimi K2.5 ships under a modified MIT license with a clause designed for exactly this scenario: if a commercial product that uses the model (including derivative works) exceeds 100 million monthly active users or $20 million in monthly revenue, it must prominently display "Kimi K2.5" in the product's interface.

Cursor's annualized revenue crossed $2 billion this February, roughly $167 million a month, more than 8x the license threshold. But Cursor's interface says only "Composer 2," with no Kimi attribution anywhere.

Meanwhile, Cursor is talking to investors about a new round at a valuation target of about $50 billion, nearly double its $29.3 billion valuation from last November, while Moonshot AI's last round reportedly valued it at about $4.3 billion. A company worth 12x the other took that company's model, packaged it as in-house technology, and used it to prop up a "frontier lab" narrative for fundraising.

As of now, Cursor has made no public response.

How this plays out will set a precedent for the whole open-source AI ecosystem. If Moonshot doesn't enforce its license against a company with $2 billion in annual revenue, attribution clauses in every open-source model license become decoration. Every AI lab will run the same math: why open-source your model so that companies with stronger distribution can strip the attribution, package it as in-house work, and then raise at 12x your valuation?
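The tokenizer claim above can be made concrete: a tokenizer-identity check runs both tokenizers over a set of probe strings and compares the token streams. A minimal sketch — the `tok_kimi`/`tok_composer` callables here are toy byte-level stand-ins, not the real models' tokenizers, which a real test would load from their published vocab/merges files:

```python
# Sketch of a tokenizer-identity check. The two "tokenizers" below are
# hypothetical stand-ins (identical byte-level encoders); a real comparison
# would load each model's actual tokenizer and vocabulary.
def same_tokenizer(tok_a, tok_b, probes):
    """True iff both tokenizers emit identical token streams on every probe."""
    return all(tok_a(s) == tok_b(s) for s in probes)

tok_kimi = lambda s: list(s.encode("utf-8"))      # stand-in tokenizer A
tok_composer = lambda s: list(s.encode("utf-8"))  # stand-in tokenizer B

# Probes should mix code, non-Latin text, and special tokens, where
# tokenizers from different training runs almost always diverge.
probes = ["def main():", "你好,世界", "<|im_start|>"]
print(same_tokenizer(tok_kimi, tok_composer, probes))  # True
```

Matching outputs on a diverse probe set is strong evidence of a shared vocabulary, which is what made the public claim hard to dispute.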
Aakash Gupta@aakashgupta

Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.” Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.

The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation?

kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.

40 replies · 43 reposts · 294 likes · 139.7K views
MAX retweeted
Jason Vranek 🥐
Jason Vranek 🥐@jasnoodle·
Latest post from @fabric_ethereum ~ Preconfs always felt kinda like intents, now they can be with BALs! Idea: proposer expresses desired outcome state -> builders solve for it -> proposer verifies using the BAL -> block body isn't leaked to proposer ethresear.ch/t/outcome-prec…
0 replies · 5 reposts · 16 likes · 1.2K views
MAX retweeted
ethresearchbot
ethresearchbot@ethresearchbot·
New post on EthResear.ch! Outcome Preconfs: Verifying Block Commitments Without the Block
By: jvranek
🔗 ethresear.ch/t/24466

Highlights:
- Core problem: proposers must verify preconfirmation (preconf) commitments are satisfied before signing a block, but inspecting the full block body leaks the builder’s block and breaks fair exchange.
- Relays can verify constraints today (optimistic/pessimistic), but ePBS removes relays from the critical path, so a relay-based trusted verifier is no longer a viable default.
- Merkle-proof-based verification is trustless but has two major drawbacks: (1) proof generation adds latency after block construction, and (2) it is not expressive enough for many stateful/transaction-level outcome claims (e.g., proving a specific transaction caused a specific state change) without re-execution or ZK proofs.
- EIP-7928 Block Access Lists (BALs) offer a low-latency, non-leaking verification substrate: BALs are produced as a byproduct of execution, committed in the header, capture every state write with a global ordering index, and omit transaction inputs/signatures—enabling verification of outcomes without revealing the full block contents.
- By shifting from transaction commitments to outcome preconfs, any preconf type (inclusion, ordering/top-of-block, execution outcomes, exclusion/absence, threshold checks) can be expressed as a first-order-logic (FOL) formula over the BAL, letting one fixed verifier implementation handle new preconf types as new formulas rather than new infrastructure (with BALs also serving as dispute evidence for off-chain/on-chain adjudication).

ELI5: Sometimes a block proposer wants to promise users something about the next block (like “your payment will happen” or “this contract gets updated first”). But before the proposer signs the block, they must check the promise was kept—without seeing the whole block, because seeing it would let them steal it from the builder. Older approaches either rely on trusted middlemen (relays) or require slow/limited proofs. This post suggests using a new kind of “receipt” called a Block Access List (BAL): a structured list of what the block changed in Ethereum’s state (balances, storage slots, nonces, etc.), in what order, but without revealing the actual transactions. Then the proposer can check promises by asking simple true/false questions about that list, using one common “question language,” for many different promise types.
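The "true/false questions over the BAL" idea can be sketched in a few lines. This is an illustrative data model only — the `Write` record fields and the two predicate helpers are hypothetical, not the EIP-7928 encoding — but it shows how outcome and ordering claims reduce to checks over (address, slot, value, global index) tuples without ever seeing transaction bodies:

```python
# Toy model of checking outcome preconfs against a Block Access List.
# The Write schema and predicates are illustrative, not the EIP-7928 format.
from dataclasses import dataclass

@dataclass(frozen=True)
class Write:
    index: int    # global ordering index within the block
    address: str  # contract whose state changed
    slot: str     # storage slot written
    value: int    # post-state value of the slot

def outcome_holds(bal, address, slot, expected):
    """Execution-outcome preconf: last write to (address, slot) == expected."""
    writes = [w for w in bal if w.address == address and w.slot == slot]
    return bool(writes) and max(writes, key=lambda w: w.index).value == expected

def written_before(bal, addr_a, addr_b):
    """Ordering preconf: some write to addr_a precedes every write to addr_b."""
    a = [w.index for w in bal if w.address == addr_a]
    b = [w.index for w in bal if w.address == addr_b]
    return bool(a) and bool(b) and min(a) < min(b)

# A tiny pretend BAL: two writes to a pool, then one to an oracle.
bal = [
    Write(0, "0xPool", "reserve0", 900),
    Write(1, "0xPool", "reserve0", 1000),
    Write(2, "0xOracle", "price", 42),
]
print(outcome_holds(bal, "0xPool", "reserve0", 1000))  # True
print(written_before(bal, "0xPool", "0xOracle"))       # True
```

One fixed verifier evaluating formulas like these is the post's replacement for bespoke per-preconf-type infrastructure.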
1 reply · 5 reposts · 16 likes · 954 views
MAX retweeted
Kimi.ai
Kimi.ai@Kimi_Moonshot·
Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support. Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.
382 replies · 942 reposts · 14K likes · 1.4M views
MAX retweeted
CM
CM@cmdefi·
BlackRock's "staked-ETH product" ETHB has seen net inflows almost every day in the week since launch, and is now at $250 million. ETHB launched on March 12. Its design stakes 70%–95% of the $ETH it holds, with 82% of the staking rewards distributed to investors, paid monthly. At the current ETH staking APR of ~3%, assuming a 95% staking ratio, investors would receive roughly 2%–2.2%. At least so far, it has shown real pulling power in traditional markets.
7 replies · 6 reposts · 36 likes · 7.4K views
MAX retweeted
Web3Caff Research (外捕研究)
Ethereum introduces the FCR fast confirmation rule: 13-second deposit confirmations as a prelude to "Fast L1"

On March 17, Ethereum developer Julian published a post introducing a new Fast Confirmation Rule (FCR). Its core goal is to cut the confirmation time for deposits from Ethereum L1 to L2s and centralized exchanges from several minutes down to roughly 13 seconds.

To be clear: FCR does not change how quickly transactions themselves are included on Ethereum, nor their "finality time." Instead, it releases an early "this transaction is very likely safe" signal to services that depend on confirmations (L1→L2 transfers, L1→exchange deposits, cross-chain bridges, and so on), letting them treat a transaction as confirmed sooner.

Before digging into FCR, it helps to understand what "finality" actually means.

On a blockchain, a transaction being included in a block does not make it irreversible. If the chain reorganizes, the block containing that transaction can be rolled back. Only once the block has been sufficiently confirmed under the consensus mechanism does it enter a state that is, in theory, tamper-proof — that state is called finality.

As an analogy: when you buy something online, the chance of the order being reversed drops steadily after the item arrives. Once the order reaches the "delivery confirmed" state, it can hardly be changed anymore. For that order, this is the equivalent of "finality."

✜ The free preview ends here; the rest of the deep-dive is behind the link 👇 research.web3caff.com/archives/44348…
Julian@_julianma

x.com/i/article/2033…

0 replies · 3 reposts · 2 likes · 363 views
MAX retweeted
qinbafrank
qinbafrank@qinbafrank·
Network-security giant Cloudflare is launching a dedicated stablecoin for AI agents, Net Dollar. Per media reports, Coinbase and crypto-infrastructure startup Zerohash are competing to issue the coin for Cloudflare, expected to go live in 2026. This is the NET Dollar (a 1:1 USD-pegged stablecoin) Cloudflare announced back in September 2025, positioned as the foundational payment tool for the "agentic web."

CEO Matthew Prince said at the time that the goal is a "new business model for the AI-driven internet": moving away from advertising toward pay-per-use, so AI agents can autonomously browse, negotiate, pay, and transact.

1. Why a stablecoin?
1) Traditional payment systems (Visa/Mastercard) are expensive (1.5%–3.5% fees) and slow to settle, and cannot handle AI agents making thousands of micropayments per second (fractions of a cent each).
2) Stablecoins (especially on chains like Coinbase's Base) have clear advantages: fees under one cent, instant settlement, programmability.
As a top global CDN and security company handling roughly 20% of internet traffic, Cloudflare is naturally positioned to embed a stablecoin at the infrastructure layer, with the potential to become the core payment rail of the AI-agent economy.

2. What form does NET Dollar take?
Last year Cloudflare and Coinbase launched the x402 Foundation (an open internet-payment standard built on the HTTP 402 status code) to make it easy for websites and developers to accept stablecoin payments. The NET Dollar issuance contest is an upgrade of that partnership.
Per the official positioning, NET Dollar is a standalone branded stablecoin: a specialist third party (whichever of Coinbase or Zerohash wins) handles actual issuance and reserve management, while Cloudflare contributes the brand, traffic integration, payment protocols (x402, etc.), and usage scenarios. If Coinbase wins, NET Dollar would presumably launch on Base.

3. Why is Cloudflare issuing a stablecoin, and what are its advantages?
1) Official intent: shift from ad-driven to pay-per-use. For decades the internet's business model has run on ad platforms and bank transfers; the next generation will be driven by pay-per-use, fractional payments, and microtransactions — steering incentives toward genuinely valuable original content. NET Dollar aims to let creators earn directly from unique content, developers easily monetize APIs, and AI companies fairly compensate content sources (addressing large models "freeloading" on content).
2) Dominance in global traffic and edge networking. Cloudflare currently handles about 20% of global internet requests, with its CDN deployed in 300+ cities. Payments can settle directly at the edge layer with very low latency and huge throughput. Any site behind Cloudflare could support NET Dollar at near-zero cost — a core competitive advantage.
3) Native x402 integration (pushed jointly with Coinbase) — this is the killer advantage!
x402 revives the HTTP 402 "Payment Required" status code, making payment part of a standard HTTP request (as natural as an ordinary page load). An AI agent or developer needs only a line of code to get "request content → pay automatically → access immediately."
For developers and AI agents, Cloudflare is underlying network infrastructure nobody can route around. If you want your agent to respond faster with lower latency, Cloudflare is the obvious choice — and once you're on Cloudflare, integrating NET Dollar is easy. Cloudflare, in turn, wants to use its edge and CDN advantages to drive NET Dollar adoption: agents paying in NET Dollar will be fast and low-latency, guaranteed by a CDN deployed in 300 cities worldwide — an advantage at the physical layer.
So from Cloudflare's perspective, the core strategic aim of NET Dollar is to upgrade itself from "internet infrastructure provider" to "financial-rails provider for the AI-era internet" and thoroughly reshape the internet's business model. This isn't merely issuing a stablecoin; it is building the underlying payment infrastructure for the coming agentic web, staking out the payment base layer of the AI internet, with a shot at becoming the de facto payment standard of the AI-agent economy.
If NET Dollar is successfully embedded in Cloudflare's platform, it will reach millions of developers and businesses and benefit from network effects. Future agent scenarios — shopping, calling APIs, paying for content — could all default to Cloudflare's stablecoin rails.
For the industry as a whole, this accelerates the machine-economy era and tackles the "last mile" (payments) of the AI economy. It also marks the start of white-hot competition in AI payments: USDC already has a first-mover advantage and payments giant Stripe (which acquired Bridge) is moving aggressively, but Cloudflare looks to have core advantages of its own.
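The "request content → pay automatically → access immediately" loop described above is just an HTTP 402 round trip. Here is a toy in-memory sketch of that flow — the header name, receipt string, and price fields are hypothetical illustrations, not the actual x402 wire format, which defines its own fields and on-chain settlement:

```python
# Toy sketch of the HTTP 402 "request -> pay -> retry" loop that
# x402-style protocols build on. Header names, the receipt, and the
# payment check are hypothetical, not the real x402 specification.
PRICE = 1  # hypothetical price in the smallest stablecoin unit

def server(headers):
    """Return (status, body): 402 with payment terms, or 200 once paid."""
    if headers.get("X-Payment") == "receipt-123":  # pretend settlement proof
        return 200, "article body"
    return 402, {"price": PRICE, "asset": "NET-Dollar", "pay_to": "0xSite"}

def agent_fetch():
    """An agent's loop: request, settle the quoted price, retry with proof."""
    status, body = server({})
    if status == 402:
        receipt = "receipt-123"  # agent pays body["price"] and gets a receipt
        status, body = server({"X-Payment": receipt})
    return status, body

print(agent_fetch())  # (200, 'article body')
```

The point of building on 402 is exactly this shape: the payment negotiation rides inside an ordinary request/response pair, so any HTTP client can become a paying agent.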
qinbafrank@qinbafrank

Internet infrastructure providers are making major adaptive upgrades for the "AI-agent era":
1) Coinbase just built dedicated wallets for AI agents;
2) Cloudflare is about to ship Markdown delivery, serving web content directly in the format AI agents like best and process most efficiently — treating agents as first-class citizens on par with human visitors;
3) Google followed with the WebMCP protocol (Web Model Context Protocol): it lets a website proactively tell an AI agent "here's what you can do here and how," instead of making the agent guess, scrape, and click on its own.

Why upgrade for AI agents:
1. AI agents are increasingly the web's main "visitors." Traditional pages are built for human browsers (HTML + CSS + JS) and keep getting heavier and flashier. But when AI agents, crawlers, and LLMs ingest content, that "noise" — nav bars, scripts, styles, ads — is exactly what they hate. They only need clean, structured text.
2. Markdown is AI's "native tongue." Markdown has explicit structure and clear semantics (headings, lists, links, code blocks at a glance). Token usage drops sharply: by Cloudflare's own example, one of its blog posts takes about 16,180 tokens as HTML but only 3,150 as Markdown, a saving of roughly 80%. Fewer tokens = lower inference cost, faster processing, and more content fitting into the context window.
3. Google's WebMCP follows the same logic as Cloudflare's AI-readable content, but goes a step further — from "can read" to "can get things done."
1) For AI agents: accuracy goes from ~40–70% to near 99%, speed improves 5–20x, token costs plummet — from "guessing at buttons" to "calling an API."
2) For site owners: active control over what AI can and cannot do (finer-grained permissions), reducing abuse risk.
3) For users: agents that actually "book my ticket," "process my return," "file my expenses" become reliable instead of failing constantly.
4. The bigger picture: the web is evolving from "human-centric" to "dual human-and-machine-centric." We used to design only for human browsers; now Cloudflare is opening a "green lane" for AI at the network layer, letting AI consume the whole web at lower cost and higher quality. This is a major step toward an "AI-native web." Markdown is becoming the de facto "AI-friendly protocol," much as RSS was for blogs and JSON for APIs. WebMCP is the first time the internet has opened a native "front door" for AI agents at the browser layer — sites are no longer passively scraped and simulated, but actively declare "here's what I support, call me directly."
Together, these two moves by Cloudflare and Google are turning the "agent-native internet" from concept into reality:
1) Cloudflare → lets agents read the whole web efficiently
2) Google WebMCP → lets agents reliably write to (operate) websites
I wrote here x.com/qinbafrank/sta… that 2026 would be the year AI agents accelerate — in just over a month the landscape has already changed beyond recognition.
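The ~80% figure in point 2 above is straightforward arithmetic over Cloudflare's cited token counts, reproduced here as a check:

```python
# Token savings from serving Markdown instead of HTML, using the counts
# Cloudflare cites for one of its own blog posts.
html_tokens, md_tokens = 16_180, 3_150
saving = 1 - md_tokens / html_tokens
print(f"Markdown saves {saving:.1%} of tokens")  # roughly 80%
```

The downstream claims (lower inference cost, more room in the context window) follow directly, since both scale with token count.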

8 replies · 8 reposts · 40 likes · 19.1K views
MAX retweeted
trent.eth
trent.eth@trent_vanepps·
database <-------> public chain*
*db which is censorship res., open for all, tech/pol robust

let's remember:
1. the spectrum is not new, teams will always edge into the middle space tradeoffs
2. ethereum will maintain & improve its best-in-class bundle of public resources
3 replies · 8 reposts · 41 likes · 2K views
MAX retweeted
蓝狐
蓝狐@lanhubiji·
Amundi, Europe's largest asset manager with roughly €2.3 trillion under management, launched a tokenized fund just yesterday. It's still small — $100 million.

It looks like an experiment for now, but it's a trend worth watching.

How to understand it? Previously, if you bought a bank or fund "cash product" (say, a money-market fund) and wanted to move or withdraw money, you had to wait for banking hours and go through a process that might take T+1 or T+2 to settle. Cumbersome.

Amundi put this fund directly on-chain: the Spiko Amundi Overnight Swap Fund (SAFO). It is issued natively on-chain with a dual-chain design:
- fund shares run in parallel on Ethereum and Stellar
- on Ethereum they are composable with DeFi
- on Stellar (XLM) they get fast, low-fee transfers
- moving between the two chains is a one-click cross-chain transfer via Spiko
- Chainlink computes the fund's net asset value (NAV) in real time
- everything is on-chain and fully transparent

More importantly: 24/7 buying, selling, transfers, and settlement — no waiting for bankers' hours, no T+1.

It's only $100 million for now, but if the experiment works and customers grow, over the long run it should bring much larger on-chain assets and trading activity, especially to Ethereum (whose fees are falling and speeds rising). int.media.amundi.com/article/spiko-…
8 replies · 6 reposts · 46 likes · 6.6K views
MAX retweeted
soispoke.eth
soispoke.eth@soispoke·
Alice swaps privately on L1 tldr: Privacy protocol users today depend on broadcasters that can see, frontrun, and censor their transactions. In this thread we show how four future protocol upgrades can remove this dependency step by step. Native AA (EIP-8141) and 2D nonces let users self-submit with no off-chain infrastructure. Encrypted frame transactions hide swap parameters until after block ordering is committed. FOCIL guarantees inclusion as long as one honest includer can see the transaction pending in the public mempool. 👇🧵
49 replies · 47 reposts · 218 likes · 53.2K views
MAX retweeted
ethresearchbot
ethresearchbot@ethresearchbot·
New post on EthResear.ch! The path towards Binary Tries 1: Optimal Group Depth for Ethereum's Binary Trie
By: CPerezz
🔗 ethresear.ch/t/24455

Highlights:
- Best overall group depth is narrow: GD-5 or GD-6 is the sweet spot; performance gets worse past GD-6 (GD-7/GD-8).
- Reads improve with wider nodes up to GD-6: ERC20 read throughput rises from 2.65 Mgas/s (GD-1) to a peak of 6.39 Mgas/s (GD-6), then declines (GD-7: 6.04, GD-8: 5.59).
- Writes have a sharper optimum at GD-5: GD-5 is the write champion at 6.94 Mgas/s, beating GD-4 by ~7% (statistically significant) and beating GD-8 by ~55%; the write inflection is between GD-5 and GD-6.
- Storage-engine I/O granularity matters: GD-7 nodes serialize to ~4KB, hitting Pebble’s 4KB block size boundary; beyond this, a single logical node fetch may require multiple blocks, helping explain why GD-7 reads worse than GD-6 despite a shorter path.
- Access pattern dominates costs: keccak/SHA-hashed keys produce fundamentally random access in a unified binary trie, making per-slot reads ~40× more expensive than sequential synthetic patterns; overall, state reads consume ~50–85% of block time, suggesting GD-6 is a sensible default for Ethereum’s read-heavy workload.

ELI5: Ethereum might replace its current “state database” tree (the Merkle Patricia Trie) with a new one called a binary trie. This new tree can be stored on disk in different ways: you can bundle multiple tiny steps of the tree into one bigger disk node (called “group depth”). Bigger bundles mean fewer steps to find data (faster reads), but each bundle is heavier to update because more internal hashing and more data must be written (slower writes). This research benchmarks many group depths to find the best trade-off for real-world-like workloads (random-looking keys like ERC20 storage) and for artificial best-case workloads (sequential keys).
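The read/write tension in the ELI5 can be put in back-of-envelope form: bundling d binary levels into one stored node cuts the lookup path for a 256-bit key to ceil(256/d) node reads, while each node now covers 2^d − 1 internal hashes and up to 2^d children. A toy model only — these are structural counts, not the post's measured throughputs or serialization sizes:

```python
# Back-of-envelope model of the group-depth (GD) trade-off: shorter paths
# vs. fatter nodes. Structural counts only, not benchmark measurements.
import math

def path_nodes(key_bits: int, gd: int) -> int:
    """Node reads to walk a key of `key_bits` with gd binary levels per node."""
    return math.ceil(key_bits / gd)

def internal_hashes(gd: int) -> int:
    """Internal hash count inside one depth-gd bundle (full binary subtree)."""
    return 2**gd - 1

for gd in range(1, 9):
    print(f"GD-{gd}: {path_nodes(256, gd)} reads/key, "
          f"{internal_hashes(gd)} hashes/node update")
```

Going from GD-6 to GD-8 saves only 43 → 32 reads per key while quadrupling per-node hashing from 63 to 255, which is consistent with the post's finding that the sweet spot sits around GD-5/GD-6 once storage-engine block sizes are factored in.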
2 replies · 4 reposts · 16 likes · 975 views
MAX retweeted
Ash Crypto
Ash Crypto@AshCrypto·
BREAKING: 🇪🇺 Europe’s largest asset manager, Amundi with $2.8 trillion AUM, has launched a $100 million tokenized fund on Ethereum.
187 replies · 285 reposts · 1.7K likes · 71.1K views
MAX retweeted
nixo.eth 🦇🔊🥐
nixo.eth 🦇🔊🥐@nixorokish·
the EF treasury officially had its first validator index assigned as of this morning 🎉 it took a while because there's been so much influx into staking that the entry queue peaked at 71 days in February
Ethereum Foundation@ethereumfndn

1/ The Ethereum Foundation has begun staking a portion of its treasury, in line with its Treasury Policy announced last year. Today, the EF made a 2016 ETH deposit. Approximately 70,000 ETH will be staked with rewards directed back to the EF treasury.

8 replies · 22 reposts · 201 likes · 19K views
MAX retweeted
ethresearchbot
ethresearchbot@ethresearchbot·
New post on EthResear.ch! Open vs. Sealed: Auction Format Choice for Maximal Extractable Value
By: -
🔗 ethresear.ch/t/24454

Highlights:
- MEV extracted values are extremely concentrated in a heavy right tail: the top 1% of transactions produce 68% of total revenue (Gini ≈ 0.933), so auction design for high-value events dominates overall revenue outcomes.
- Competition intensity differs widely by MEV type (using bribe % as a proxy): sandwiches are near-perfectly competitive (~95% bribe), while naked arbitrage and liquidations leave much more surplus with searchers (~67–68%), implying different effective bidder counts across categories.
- Revenue equivalence breaks under affiliated (correlated) valuations; modeling affiliation via a Gaussian common factor yields the linkage-principle ranking: English and second-price sealed-bid (SPSB) generate strictly higher expected revenue than first-price sealed-bid (FPSB) and Dutch for all tested (n, ρ) cells with ρ > 0.
- Quantitatively, at moderate affiliation (ρ = 0.5) English/SPSB out-earn FPSB/Dutch by about 14–28% (largest for small n, up to ~30% when n is small), translating to an estimated $10–18M of foregone revenue over the sample period when applied to observed bribe totals.
- All-pay auctions are a poor choice in MEV settings once affiliation is considered: FPSB revenues exceed all-pay by roughly 40–120%; additionally, at large n and high ρ, expected revenue can become non-monotonic in ρ (peaking then declining) because near-perfect correlation collapses the order-statistic spread that drives competitive payments.

ELI5: Ethereum has “MEV opportunities” (like small profit chances from ordering transactions) that builders sell to searchers using auctions. This paper asks: which kind of auction makes builders earn the most money? The key idea is that searchers’ values are often related (if it’s valuable to one bot, it’s probably valuable to others too). When values are related, open/truth-revealing auctions (like an English auction or a second-price auction) usually make the seller more money than sealed/strategic-shading auctions (like first-price or Dutch). The authors also show MEV money is extremely lopsided: a tiny fraction of transactions produce most of the revenue, so choosing the best auction for those rare big ones matters a lot.
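The "near-perfect correlation collapses the order-statistic spread" point can be seen in a few lines of Monte Carlo. This sketch uses the Gaussian common-factor model the post describes — value = ρ·common + √(1−ρ²)·idiosyncratic — and measures the gap between the highest and second-highest value, which is what competitive payments key off; all parameters are illustrative:

```python
# Monte Carlo sketch of the affiliated-values model: as rho -> 1, bidders'
# values move together and the winner's edge over the runner-up shrinks.
# Parameters and trial counts are illustrative, not the paper's setup.
import random

def spread(n_bidders: int, rho: float, trials: int = 20_000, seed: int = 0) -> float:
    """Mean gap between highest and second-highest value across trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        common = rng.gauss(0, 1)
        vals = sorted(
            rho * common + (1 - rho**2) ** 0.5 * rng.gauss(0, 1)
            for _ in range(n_bidders)
        )
        total += vals[-1] - vals[-2]  # winner's edge over the runner-up
    return total / trials

print(spread(5, 0.0), spread(5, 0.9))  # the gap shrinks as rho rises
```

Because the common factor shifts every bidder equally, it cancels out of the gap entirely; only the idiosyncratic term (scaled by √(1−ρ²)) survives, which is why high affiliation squeezes the spread and, per the post, can make expected revenue non-monotonic in ρ.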
0 replies · 9 reposts · 20 likes · 5.8K views