aprender Ξ 🦇🔊
@mossoutopia · 388 posts
Dark Forest ⟠ “Follow your dreams, they know the way.” 🦀
Joined November 2021
2.6K Following · 1.4K Followers
aprender Ξ 🦇🔊 retweeted

Now, the quantum resistance roadmap.
Today, four things in Ethereum are quantum-vulnerable:
* consensus-layer BLS signatures
* data availability (KZG commitments+proofs)
* EOA signatures (ECDSA)
* application-layer ZK proofs (KZG or Groth16)
We can tackle these step by step:
## Consensus-layer signatures
Lean consensus includes fully replacing BLS signatures with hash-based signatures (some variant of Winternitz), and using STARKs to do aggregation.
Before lean finality, we stand a good chance of getting the Lean available chain. This also involves hash-based signatures, but there are far fewer signatures (eg. 256-1024 per slot), so we do not need STARKs for aggregation.
One important thing upstream of this is choosing the hash function. This may be "Ethereum's last hash function", so it's important to choose wisely. Conventional hashes are too slow, and the most aggressive forms of Poseidon have taken hits on their security analysis recently. Likely options are:
* Poseidon2 plus extra rounds, potentially non-arithmetic layers (eg. Monolith) mixed in
* Poseidon1 (the older version of Poseidon, not vulnerable to any of the recent attacks on Poseidon2, but 2x slower)
* BLAKE3 or similar (take the most efficient conventional hash we know)
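As a sketch of why hash-based signatures are attractive here, a minimal Winternitz-style one-time signature fits in a few lines of Python. BLAKE2b stands in for whichever hash is eventually chosen, and the checksum that real WOTS adds over the chunks is omitted for brevity, so this toy version is illustrative only, not forgery-safe:

```python
import hashlib
import os

W = 16           # Winternitz parameter: each chunk encodes log2(16) = 4 bits
N = 32           # hash output size in bytes
CHUNKS = 2 * N   # a 32-byte digest split into 4-bit chunks

def H(x: bytes) -> bytes:
    # BLAKE2b as a stand-in for the final hash choice
    return hashlib.blake2b(x, digest_size=N).digest()

def chain(x: bytes, steps: int) -> bytes:
    # apply the hash `steps` times in a row
    for _ in range(steps):
        x = H(x)
    return x

def keygen():
    # secret key: one random seed per chunk; public key: each seed
    # walked to the end of its hash chain
    sk = [os.urandom(N) for _ in range(CHUNKS)]
    pk = [chain(s, W - 1) for s in sk]
    return sk, pk

def digest_chunks(msg: bytes):
    # split the message digest into 4-bit values in [0, W)
    out = []
    for byte in H(msg):
        out.append(byte >> 4)
        out.append(byte & 0x0F)
    return out

def sign(sk, msg: bytes):
    # reveal each chain advanced by the corresponding chunk value
    return [chain(s, c) for s, c in zip(sk, digest_chunks(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    # walk each signature element the remaining steps; it must land on pk
    return all(chain(s, W - 1 - c) == p
               for s, c, p in zip(sig, digest_chunks(msg), pk))
```

Verification is nothing but hashing, which is what makes these signatures cheap inside STARKs; the cost is size (one hash output per chunk) and one-time-use keys.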
## Data availability
Today, we rely pretty heavily on KZG for erasure coding. We could move to STARKs, but this has two problems:
1. If we want to do 2D DAS, then our current setup for this relies on the "linearity" property of KZG commitments; with STARKs we don't have that. However, our current thinking is that, given our scale targets, it should be sufficient to just max out 1D DAS (ie. PeerDAS). Ethereum is taking a more conservative posture: it's not trying to be a high-scale data layer for the world.
2. We need proofs that erasure-coded blobs are correctly constructed. KZG does this "for free". STARKs can substitute, but a STARK is ... bigger than a blob. So you need recursive STARKs (though there are also alternative techniques, with their own tradeoffs). This is okay, but the logistics get harder if you want to support distributed blob selection.
Summary: it's manageable, but there's a lot of engineering work to do.
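The security argument behind maxing out 1D DAS comes down to simple sampling arithmetic: with a rate-1/2 erasure code, an unrecoverable block has at most half its extended chunks published, so each random query independently hits a published chunk with probability at most 1/2. A simplified model (uniform sampling with replacement) makes the point:

```python
def miss_prob(samples: int, available_fraction: float = 0.5) -> float:
    """Probability that every one of `samples` independent, uniform
    queries lands on a published chunk, i.e. that unavailability
    goes undetected by this sampler."""
    return available_fraction ** samples

# 30 samples already push the miss probability below one in a billion
print(miss_prob(30))
```

So each light client only downloads a few dozen chunks, and the chance of being fooled about availability drops exponentially in the sample count.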
## EOA signatures
Here, the answer is clear: we add native AA (see eips.ethereum.org/EIPS/eip-8141 ), so that we get first-class accounts that can use any signature algorithm.
However, to make this work, we also need quantum-resistant signature algorithms to actually be viable. ECDSA signature verification costs 3000 gas. Quantum-resistant signatures are ... much much larger and heavier to verify.
We know of quantum-resistant hash-based signatures that are in the ~200k gas range to verify.
We also know of lattice-based quantum-resistant signatures. Today, these are extremely inefficient to verify. However, there is work on vectorized math precompiles that let you perform the operations (+, *, %, dot product, plus NTT / butterfly permutations) at the core of both lattice math and STARKs. This could bring the gas cost of lattice-based signatures into a similar range, and potentially even lower.
The long-term fix is protocol-layer recursive signature and proof aggregation, which could reduce these gas overheads to near-zero.
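To make the precompile idea concrete, here is the core lattice operation in miniature: a number-theoretic transform over a toy 8-element ring, used to turn polynomial multiplication into cheap pointwise products. The naive O(n^2) transform and tiny modulus here are illustrative assumptions; a real precompile would run butterfly-structured O(n log n) NTTs over production-size parameters:

```python
P = 257      # small NTT-friendly prime: 8 divides P - 1
N = 8        # transform size
OMEGA = 64   # primitive 8th root of unity mod 257 (64**8 % 257 == 1)

def ntt(a):
    # forward transform: evaluate the polynomial at powers of OMEGA
    return [sum(a[j] * pow(OMEGA, i * j, P) for j in range(N)) % P
            for i in range(N)]

def intt(a):
    # inverse transform: same sum with OMEGA^-1, scaled by N^-1
    inv_n = pow(N, P - 2, P)
    inv_w = pow(OMEGA, P - 2, P)
    return [sum(a[j] * pow(inv_w, i * j, P) for j in range(N)) * inv_n % P
            for i in range(N)]

def polymul_mod(a, b):
    # cyclic convolution (multiplication mod x^N - 1): transform,
    # multiply pointwise, transform back
    fa, fb = ntt(a), ntt(b)
    return intt([x * y % P for x, y in zip(fa, fb)])
```

Lattice schemes spend most of their verification time in exactly this transform-multiply-invert loop, which is why a vectorized NTT precompile attacks their gas cost directly.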
## Proofs
Today, a ZK-SNARK costs ~300-500k gas. A quantum-resistant STARK is more like 10m gas. The latter is unacceptable for privacy protocols, L2s, and other users of proofs.
The solution again is protocol-layer recursive signature and proof aggregation. So let's talk about what this is.
In EIP-8141, transactions have the ability to include a "validation frame", during which signature verifications and similar operations are supposed to happen. Validation frames cannot access the outside world, they can only look at their calldata and return a value, and nothing else can look at their calldata. This is designed so that it's possible to replace any validation frame (and its calldata) with a STARK that verifies it (potentially a single STARK for all the validation frames in a block).
This way, a block could "contain" a thousand validation frames, each of which contains either a 3 kB signature or even a 256 kB proof, but that 3-256 MB (and the computation needed to verify it) would never come onchain. Instead, it would all get replaced by a proof verifying that the computation is correct.
Potentially, this proving does not even need to be done by the block builder. Instead, I envision that it happens at mempool layer: every 500ms, each node could pass along the new valid transactions that it has seen, along with a proof verifying that they are all valid (including having validation frames that match their stated effects). The overhead is static: only one proof per 500ms. Here's a post where I talk about this:
ethresear.ch/t/recursive-st…
firefly.social/post/farcaster…
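The savings from aggregation are easy to put numbers on. The frame counts and signature/proof sizes below come from the post; the ~200 kB size assumed for the single recursive proof is an illustrative figure, not one the post states:

```python
KB = 1024

def onchain_bytes_naive(frames: int, item_bytes: int) -> int:
    # every signature or proof posted directly in the block
    return frames * item_bytes

def onchain_bytes_aggregated(proof_bytes: int) -> int:
    # the whole batch collapses into one recursive proof
    return proof_bytes

# 1000 validation frames, each carrying a 3 kB hash-based signature
naive = onchain_bytes_naive(1000, 3 * KB)        # 3 MB of signatures
aggregated = onchain_bytes_aggregated(200 * KB)  # assumed proof size
print(naive // aggregated)  # 15x less data onchain
# with 256 kB proofs per frame instead, the ratio grows to 1280x
```

The key property is that the aggregated cost is flat: one proof per block (or per 500ms mempool round), no matter how many frames it covers.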
aprender Ξ 🦇🔊 retweeted

Stop wasting hours trying to learn AI. 📘📚
I have already done it for you.
With one list. Zero confusion. And no fluff.
📹 Videos:
1. LLM Introduction: t.co/kyDon6qLrb
2. LLMs from Scratch: t.co/2hyMhuKoiI
3. Agentic AI Overview (Stanford): t.co/FXu6cAqITC
4. Building and Evaluating Agents: t.co/ZigR1tdOFL
5. Building Effective Agents: t.co/uYwfwO55mO
6. Building Agents with MCP: t.co/4arFTW1b3i
7. Building an Agent from Scratch: t.co/eOmveyM9Hz
8. Philo Agents: t.co/zLu7x1tx9m
🗂️ Repos
1. GenAI Agents: t.co/eXCl2YaRPv
2. Microsoft's AI Agents for Beginners: t.co/3CSW4zPAwf
3. Prompt Engineering Guide: t.co/GVzvxPYDVO
4. Hands-On Large Language Models: t.co/0rgDvhx3pI
5. AI Agents for Beginners: t.co/3CSW4zPAwf
6. GenAI Agents: https://lnkd.in/dEt72MEy
7. Made with ML: t.co/9z5KHF9DMe
8. Hands-On AI Engineering: t.co/dldAj5Xkr6
9. Awesome Generative AI Guide: t.co/U2WZhT4ERV
10. Designing Machine Learning Systems: t.co/sYAZX34YdQ
11. Machine Learning for Beginners from Microsoft: t.co/NjFxHbC9jZ
12. LLM Course: t.co/N34YTPu1OK
🗺️ Guides
1. Google's Agent Whitepaper: t.co/bW3Ov3vMW0
2. Google's Agent Companion: t.co/wredwWAbBA
3. Building Effective Agents by Anthropic: t.co/fxtE4alVrJ
4. Claude Code Best Agentic Coding practices: t.co/lLSwJ9pG7C
5. OpenAI's Practical Guide to Building Agents: t.co/xgkEIogGfh
📚Books:
1. Understanding Deep Learning: t.co/CjcKpTemmV
2. Building an LLM from Scratch: t.co/DaWBxOx8o3
3. The LLM Engineering Handbook: t.co/ZA1n0N41Mf
4. AI Agents: The Definitive Guide - Nicole Koenigstein: t.co/boLkl1VlKb
5. Building Applications with AI Agents - Michael Albada: t.co/H1Xf5EkJLL
6. AI Agents with MCP - Kyle Stratis: t.co/JI3ELQZE6a
7. AI Engineering: t.co/Xk0JzMIf7o
📜 Papers
1. ReAct: t.co/QNqE4UU55w
2. Generative Agents: t.co/CwEpoJgY1U
3. Toolformer: t.co/5m9xZd5teZ
4. Chain-of-Thought Prompting: t.co/KjVlgdWi77
🧑‍🏫 Courses:
1. HuggingFace's Agent Course: t.co/7FSUYKxIdG
2. MCP with Anthropic: t.co/IkZGiWm2yS
3. Building Vector Databases with Pinecone: t.co/2YRoMfLdXd
4. Vector Databases from Embeddings to Apps: t.co/23A50ixbHJ
5. Agent Memory: t.co/uc3L9BrNF7
Repost for your network ♻️

aprender Ξ 🦇🔊 retweeted

I don't care what kind of hardware you have, you should be running local models
It will save you a ton of money on OpenClaw and keep your data private
Even if you're on the cheapest Mac Mini you can be doing this
Here's a complete guide:
1. Download LMStudio
2. Go to your OpenClaw and say what kind of hardware you have (machine, memory, and storage)
3. Ask what's the biggest local model you can run on there
4. Ask 'based on what you know about me, what workflows could this open model replace?'
5. Have OpenClaw walk you through downloading the model in LM Studio and setting up the API
6. Ask OpenClaw to start using the new API
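Once the model is downloaded, step 5's API setup amounts to talking to LM Studio's local server, which exposes an OpenAI-compatible chat-completions endpoint. A minimal sketch, using only the standard library; the port 1234 is LM Studio's default and the model name is a placeholder (adjust both to your setup):

```python
import json
import urllib.request

# LM Studio's local server default address; change if you reconfigured it
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str):
    # assemble an OpenAI-style chat-completions request for the local server
    url = f"{BASE_URL}/chat/completions"
    payload = {
        "model": model,  # placeholder name; use whatever model you loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, payload

def chat(model: str, prompt: str) -> str:
    # requires LM Studio running with its server enabled and a model loaded
    url, payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint speaks the OpenAI format, any tool that takes a custom base URL can be pointed at it, which is what step 6 relies on.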
Boom you're good to go.
You just saved money by using local models, have an AI model that is COMPLETELY private and secure on your own device, did something advanced that 99% of people have never done, and have entered the future.
If you are on smaller hardware you probably are not going to replace all your AI calls with this, but you could replace smaller workflows, which will still save you good money
Own your intelligence.
aprender Ξ 🦇🔊 retweeted

Axis is excited to announce a raise of $5 million, led by @galaxyhq, in order to bring institutional yield across USD, Bitcoin, and Gold.
The digital asset market still lacks reliable, transparent ways to earn yield that work for both institutions and everyday users, in both bull and bear markets.
Too many products depend on inflationary token incentives or singular strategies, making returns unsustainable and unpredictable.
The solution?
We're building a transparent return stream that is, by design, diversified, delta-neutral, and resilient across market conditions.
With a round led by Galaxy Ventures, and participation from
@FalconXGlobal, @OKX_Ventures, @CMT_Digital, @Maven11Capital, @GSR_io, @cmsholdings, and @Marczeller of @AaveChan, Axis is taking a different approach.
It's time to bring these quantitative institutional strategies onchain for all of you.
aprender Ξ 🦇🔊 retweeted

Tomorrow: Fusaka
Ethereum’s second major upgrade this year.
→ Feature highlight: PeerDAS, unlocking up to 8x data throughput. For rollups, this means cheaper blob fees and more space to grow.
Learn more.
ethereum.org/roadmap/fusaka/

aprender Ξ 🦇🔊 retweeted

@MetaMask @JumperExchange is the worst multichain exchange in the world: 3 business days to get funds back