Honestnode
@KaspaHonest
250 posts
Joined December 2023
48 Following · 12 Followers
Honestnode reposted
XXIM Podcast
XXIM Podcast@xximpod·
🚨 New Episode w Louis from @ZealousSwap "Kaspa is the Ultimate Chain for AI Agents + How AI Agents Fixes DeFi UX" Full Video - youtu.be/ejI-ptgPWWw

In this episode, Ankit sits down with Louis from @ZealousSwap to discuss AI agents as the killer UX layer for DeFi. The conversation opens with a frank acknowledgment that DeFi has a well-known UI problem: only one to two million active users across all platforms. Louis argues that DeFi was never really built for humans. It's code, on-chain, fully trackable, and readable by machines. AI agents don't need pretty interfaces; they just need an intent.

Louis lays out his three-phase framework for AI agents taking over DeFi:
Phase 1 (now): Agents execute tasks but still ask for confirmation. Trust and permissions are limited.
Phase 2: A reliable permissions layer (ERC-4337 or EIP-7702) lets agents act autonomously within defined rules without handing over private keys.
Phase 3: Fully autonomous agents with persistent memory, self-directed protocol research, and portfolio management: think BlackRock's Aladdin, democratized for retail.

The episode covers how agents can already protect users from MEV attacks and slippage by splitting large swaps, routing across protocols, and using private mempools. Louis uses the $50M Aave swap disaster as the textbook example of what a well-instructed agent would have prevented.

Louis also breaks down ERC-4337 smart wallets and ZealousSwap's gas-fee sponsorship model, where users don't need native tokens to transact, alongside Coinbase's X402 protocol for autonomous agent payments with zero human interaction.

The episode closes on #Kaspa's edge for AI agents: speed and reliability. Millisecond block times shrink MEV attack windows to near zero. Compared to Ethereum's multi-minute confirmations or Solana's downtime history, Kaspa's Layer 1 is a natural environment for agents that think and execute faster than any human.
----------------------- DISCLAIMER This video is for educational and informational purposes only and is NOT financial or investment advice. We do not recommend you to buy or sell any assets. Opinions of guests are their own and do not constitute endorsements. Cryptocurrency and blockchain investments are highly risky and can result in total loss of capital. Do your own research and consult licensed professionals. The channel and its hosts are not liable for any investment decisions or losses.
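The swap-splitting point above can be sketched numerically. This is a toy model, not ZealousSwap's routing logic: it assumes fee-less constant-product AMM pools and shows why splitting one large order across venues yields a better average price than a single lump swap.

```python
# Toy model: splitting a large swap across venues reduces price impact.
# Assumes constant-product (x*y = k) AMMs with no fees; all numbers illustrative.

def amm_output(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Output of a single constant-product AMM swap (no fees)."""
    return reserve_out - (reserve_in * reserve_out) / (reserve_in + amount_in)

def split_swap(pools: list, amount_in: float, parts: int) -> float:
    """Round-robin the order across pools in equal chunks, mutating reserves."""
    chunk = amount_in / parts
    total_out = 0.0
    for i in range(parts):
        rin, rout = pools[i % len(pools)]
        out = amm_output(rin, rout, chunk)
        pools[i % len(pools)] = (rin + chunk, rout - out)
        total_out += out
    return total_out

# One big order into one pool vs. the same size split across two pools:
single = amm_output(1_000_000, 1_000_000, 100_000)
split = split_swap([(1_000_000, 1_000_000), (1_000_000, 1_000_000)], 100_000, 2)
assert split > single  # splitting across venues gives a better average price
```

The same chunking logic is what an agent would automate, with live routing data replacing the fixed reserves here.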
Honestnode reposted
KasSigner
KasSigner@KasSigner·
Hey Kaspians… XXIM talks KasSigner. Give it a view and support this, your own Kaspa signer. To celebrate, v1.0.2 will be out in the next 24 hours. Go and DIY… build your own piece of sovereignty… 🤟🏼 youtu.be/g4pnQ_c-LJI
Honestnode reposted
Kapsa✨(the, DΨor𐤊)
@KasSigner is the option for the most paranoid (my fav). Get your QR-code seed phrase engraved or stamped on a metal plate with an offline Raspberry Pi Zero seedstamper: this is your seed. Import the seed via camera and sign transactions 100% offline and air-gapped. The Raspberry Pi Zero has no Bluetooth or WiFi. The only 100% safe method. Seed generation is also very safe: a TRNG that uses a picture from the built-in camera as a source of entropy. Seedless is a big no-go for me.
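The camera-entropy idea can be shown in a few lines. This is a generic sketch of the technique, not KasSigner's actual implementation: sensor noise in a raw camera frame is hashed down to a fixed-size seed.

```python
import hashlib

def entropy_from_image(image_bytes: bytes, extra: bytes = b"") -> bytes:
    """Derive 256 bits of seed entropy by hashing raw camera output.
    Sensor noise in the photograph acts as the physical randomness source;
    `extra` can mix in a second source (e.g. an on-chip RNG)."""
    return hashlib.sha256(image_bytes + extra).digest()

# Illustrative stand-in for a raw camera frame:
frame = bytes(range(256)) * 100
seed = entropy_from_image(frame)
assert len(seed) == 32  # 256 bits, enough for a 24-word mnemonic
```

Hashing compresses whatever randomness the frame contains; the hedge is that the frame must actually contain noise, which is why a real device photographs a live scene rather than a blank wall.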
Honestnode reposted
XXIM Podcast
XXIM Podcast@xximpod·
Thank you everyone for your suggestions! Here is the plan moving forward. Because of ongoing X DM issues (messages simply aren’t reaching people reliably anymore), we have decided to start posting publicly about our guest openings going forward.

On the Kaspa Core front, here is where we currently stand with some of the brilliant minds:
@hashdag – We reached out publicly (no response) x.com/xximpod/status… x.com/xximpod/status…
@michaelsuttonil – Covenants++ (politely declined earlier due to being very busy)
@OriNewman – SilverScript (no response yet)
@hus_qy – ZK (no DMs allowed); we would love to discuss and focus on ZK
@coderofstuff_ – (declined earlier)
If we missed anyone, please let us know.

Outside of core, these are a couple of other topics we are waiting on:
@KaspaKii – on Warpcore
Kaspa L1 & Kaspa L2s together - x.com/xximpod/status…

As always, our focus is no hype / no price action, just real, honest, passionate conversation.
XXIM Podcast@xximpod

👋 Tag your favorite person or project you’d love us to bring on the podcast. @xximpod is a platform dedicated to thoughtful, unfiltered conversations with visionary builders, thinkers, and innovators who go Beyond Mainstream! We explore ideas that challenge conventional systems. Tag your favorite person or project below and suggest the topic you’d love us to discuss in our signature style of honest, nuanced, and intellectually rigorous dialogue.

Honestnode reposted
Michael Saylor
Michael Saylor@saylor·
Incoming...
Honestnode reposted
Kaskad
Kaskad@AppKaskad·
Kaspa is a magnet for bright minds.
eliott@eliottmea

wahey #kas community, as you know I've been working on a mathematical framework for price oracles for Kaspa for the past year: formal proofs, tight bounds, rigorous security guarantees. I am currently working for @AppKaskad, who is for now solely responsible for funding my research, something I much appreciate.

I'd like to contribute to DeFi on an even larger scale, potentially as a contender to leaders such as Pyth or Chainlink. To that end I'd like my work to reach even more people, and to focus full time on this oracle as well as the Kaspa L1 auction, a separate paper that I started under the supervision of Yonatan.

Oracles are not optional infrastructure for DeFi: they are the foundation everything else is built on. With DeFi coming to Kaspa, getting this right matters. I'd rather we have something mathematically sound than ship something that gets exploited six months later.

If you know of grants, research funding, DAOs, or individuals who want to back my work in the Kaspa ecosystem, DMs are open. Paper attached; happy to discuss with anyone who wants to engage with the content. I plan on updating the content of my work weekly, and in my next post will share a timeline of what still needs to be done.

Honestnode reposted
Igra Labs
Igra Labs@Igra_Labs·
Exit to Kaspa L1 (aka iKAS unwrapping) is live. Anyone can send iKAS on Igra to the KasExitBridge smart contract (explorer.igralabs.com/address/0x00d3…) to receive KAS on L1 at their designated address.

Parameters for the initial rollout:
- Rate limit: 20 withdrawals per 2^16 blocks (~24 hours)
- Minimum: 1000 iKAS (dust would congest the Kaspa-side release pipeline)
- Maximum: 50000 iKAS
- Processing: typically within 72 hours

These conservative limits exist because the bridge is currently secured by a federated multi-sig and each outgoing transaction is semi-manually reviewed as part of operational risk controls during the initial rollout. We're optimizing for zero loss of funds over speed. The process is conducted by a designated operations group consisting of experienced ecosystem contributors known to the Igra team. Processing typically completes within 72 hours and may take longer in exceptional cases.

Transfer of exit bridge contract ownership and control to the Igra DAO has been proposed for DAO voting. Details to follow.

Developer guide: igra-labs.gitbook.io/igralabs-docs/…
Detailed specs: igra-labs.gitbook.io/igralabs-docs/…
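The published limits can be restated as a simple validation check. The constants mirror the announcement; the function itself is illustrative, not actual bridge code:

```python
# Published exit-bridge limits, expressed as a validation check.
MIN_IKAS = 1_000                  # dust floor
MAX_IKAS = 50_000                 # per-withdrawal cap
WINDOW_BLOCKS = 2 ** 16           # rate-limit window (~24 hours of blocks)
MAX_WITHDRAWALS_PER_WINDOW = 20   # withdrawals allowed per window

def can_withdraw(amount: float, withdrawals_in_window: int) -> bool:
    """True if a withdrawal request fits the initial-rollout parameters."""
    if withdrawals_in_window >= MAX_WITHDRAWALS_PER_WINDOW:
        return False  # rate limit exhausted for this window
    return MIN_IKAS <= amount <= MAX_IKAS

assert can_withdraw(5_000, 3)
assert not can_withdraw(500, 0)      # below the dust minimum
assert not can_withdraw(5_000, 20)   # window's rate limit already reached
```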
Honestnode reposted
Kaskad
Kaskad@AppKaskad·
ETH restaking vs KSKD staking: a thread

1/ On April 18, 2026, ~$100–293M was drained from Aave V3/V4 via the rsETH/KelpDAO exploit. The mechanism: a bug in KelpDAO's bridge allowed minting rsETH without depositing real ETH. That unbacked rsETH was used as collateral on Aave → borrowed real assets → bad debt left behind. Not a bug in Aave per se, rather a broken trust chain upstream.

2/ What stKSKD actually is
On Kaskad, stKSKD exists for a completely different purpose: governance, not leverage. To vote on protocol parameters, you need to:
☑️ Stake KSKD → receive stKSKD (1:1, backed by locked KSKD)
☑️ Have supplied capital to the protocol (TVL requirement)
☑️ Have maintained that supply for a meaningful % of our 30-day epoch system
No TVL in the pool = no voting weight. Skin in the game is mandatory, with verifiable on-chain proof of participation.

3/ How voting weight is calculated
Voting weight = TVL component + loyalty term
↪️ TVL component: your share of protocol supply × uptime score (you need ≥90% epoch uptime to qualify; governance-adjustable between 80–92.5% by voters themselves)
↪️ Loyalty term: your stKSKD share of the vault × a time boost (a saturating curve that grows fast, then plateaus) + lifetime vote count
You can't game it with a flash stake the day before a vote: the time-weighted holding duration in `StKSKDVault.holdingDuration()` is the core input.

4/ Why the mint is isolated
The stKSKD mint logic is intentionally simple and isolated: `safeTransferFrom(KSKD)` → then `_mint(stKSKD)`. stKSKD does not use any external bridge. No price ratio. No ERC-4626 share math that can be manipulated. 1:1, hardcoded. If the KSKD transfer fails, nothing is minted. This is the structural opposite of what happened to rsETH.

5/ Why KSKD was designed as a utility token
KSKD was designed as a utility token backed by a legal opinion (see: x.com/i/status/20008… ). We're not in the overleveraging game. Our focus: better oracle design, healthier lending game theory, and transparent protocol fee allocation based on weighted proof of participation, where your governance power is earned by the value you actually provide to the protocol.

6/ The bigger point
The rsETH exploit is a composability-risk story: collateral damage from the permissive way DeFi has let human greed get encoded into protocol design. Existing DeFi can be better, and needs to be. The good news: builders across the #Kaspa ecosystem are tackling this from multiple angles. Lending, oracles, game theory, Stag Hunt... the design space is wide open. We're early on a path toward a more principled DeFi. One where participation is earned, incentives are honest, and the primitives are actually safe. That's what's being built on Kaspa. And it's just getting started.
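The voting-weight formula described in 3/ can be sketched as follows. The curve shapes and coefficients here are assumptions for illustration, not Kaskad's audited parameters:

```python
def voting_weight(tvl_share: float, uptime: float, stkskd_share: float,
                  holding_days: float, lifetime_votes: int,
                  uptime_threshold: float = 0.90,
                  boost_halflife: float = 30.0) -> float:
    """Illustrative sketch of: weight = TVL component + loyalty term.
    Exact coefficients and curve shapes are assumptions, not protocol code."""
    # TVL component: share of protocol supply x uptime, zeroed below threshold.
    tvl_component = tvl_share * uptime if uptime >= uptime_threshold else 0.0
    # Saturating time boost: grows fast for short holds, plateaus toward 1.
    time_boost = holding_days / (holding_days + boost_halflife)
    loyalty = stkskd_share * time_boost + 0.001 * lifetime_votes
    return tvl_component + loyalty

# A flash-staker with one day of holding gets far less loyalty weight
# than the same stake held for 90 days:
assert voting_weight(0.0, 1.0, 0.10, 1, 0) < voting_weight(0.0, 1.0, 0.10, 90, 0)
# Below the uptime threshold, the TVL component is zeroed out entirely:
assert voting_weight(0.05, 0.85, 0.0, 0, 0) == 0.0
```

The saturating curve is what makes a flash stake pointless: one day of holding earns only a few percent of the plateau boost.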
Honestnode reposted
XXIM Podcast
XXIM Podcast@xximpod·
🚨 New Episode w @KasSigner "Kaspa 1st SeedSigner: DIY Your Own Cold Wallet for Under $20" Full Video - youtu.be/g4pnQ_c-LJI

In this episode, Ankit sits down with a @KasSigner contributor. KasSigner is an open-source project inspired by SeedSigner, but built specifically for #Kaspa. The conversation opens with a foundational breakdown of what seed signers are, why they're the antithesis of a hardware-wallet black box, and why "not your keys, not your coins" is more relevant than ever, from Mt. Gox to FTX.

From there, the episode walks through a live demo of KasSigner from scratch, including a fresh flash of the ESP32-S3 device, the one-command macOS installer, seed generation with camera entropy, an optional passphrase (the 25th word), and a creative steganographic seed-backup feature that hides your encrypted seed phrase inside an image's EXIF data, protected by a password only you know.

The demo continues with a full transaction flow: exporting a public key via QR code to the companion KasSee watch-only wallet app, constructing a transaction, scanning it to the offline @KasSigner for signing, and broadcasting back to the network, all without the private keys ever touching the internet. The guest also shows how to connect the KasSee wallet to your own Kaspa node for added privacy and Kaspa-speed instant transactions.
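The layering behind the steganographic backup (encrypt first, then hide) can be sketched. This is a generic illustration, not KasSigner's code, and a real tool would use an authenticated cipher rather than the bare XOR keystream shown here:

```python
import hashlib, os

def encrypt_seed(seed_phrase: str, password: str) -> tuple:
    """Password-protect a seed phrase before hiding it (e.g. in image metadata).
    Minimal sketch: PBKDF2 key stretching + XOR keystream. Illustrative only;
    a production tool would use an authenticated cipher (AES-GCM etc.)."""
    salt = os.urandom(16)
    data = seed_phrase.encode()
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              100_000, dklen=len(data))
    return salt, bytes(a ^ b for a, b in zip(data, key))

def decrypt_seed(salt: bytes, blob: bytes, password: str) -> str:
    """Re-derive the keystream from the password and salt, then undo the XOR."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              100_000, dklen=len(blob))
    return bytes(a ^ b for a, b in zip(blob, key)).decode()

salt, blob = encrypt_seed("abandon ability able ...", "correct horse battery")
assert decrypt_seed(salt, blob, "correct horse battery") == "abandon ability able ..."
```

The point of the layering is that even if someone finds the hidden blob in the image metadata, it is useless without the password.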
Honestnode reposted
Michael Saylor
Michael Saylor@saylor·
Think ₿igger.
Honestnode reposted
Peter Schiff
Peter Schiff@PeterSchiff·
@saylor Yes, it's just as worthless in space as it is on earth.
Honestnode reposted
Michael Saylor
Michael Saylor@saylor·
Bitcoin works in space. $BTC
Honestnode reposted
saefstroem
saefstroem@asaefstroem·
Man it feels so good to be back working on $KAS, what a crazy alien technology. 🚀👽
Honestnode reposted
XXIM Podcast
XXIM Podcast@xximpod·
🚨 New Episode w @bitstreetcap "Why $KAS Price Moves the Way it Does" Full Video - youtu.be/lCpocPBLW7o

In this episode, Ankit sits down with Shivam and Abhimanyu (AB) from market maker @bitstreetcap for a deep dive into one of crypto's most misunderstood and opaque roles. @bitstreetcap is a liquidity tech company currently active across 30–35 projects and 80+ exchanges, with three to four of those projects in the Kaspa ecosystem, including Kaspa itself.

AB and Shivam break down the fundamentals of market making from scratch: what it actually means to sit on both sides of the orderbook, why spreads and depth matter, and how exchanges use these metrics to assess and ultimately delist underperforming tokens. They walk through the two core #MarketMaking models projects encounter: the retainer model, where the project retains custody and the market maker operates on their funds, and the loan-option model, where the market maker deploys their own capital against loaned tokens at a strike price, a model that only makes sense at higher market caps and comes with significantly more risk on both sides.

The conversation gets candid on the darker side of the industry: wash trading, spoofing, stop hunting, and the black-box culture that defines most legacy market makers. AB and Shivam explain why 80–90% of firms won't tell you how they operate, how regulatory gaps in unregulated jurisdictions make enforcement nearly impossible, and how exchange metrics can inadvertently push market makers into grey-area behavior just to stay compliant.

Shivam also offers a rare look inside the monopolistic market-making model: what it means to be the only market maker for a smaller project, how pricing decisions are made, and why in that setup delta-neutral hedging is essentially irrelevant. The episode wraps with a look at the power dynamic between exchanges and projects.
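The core of "sitting on both sides of the orderbook" can be shown in a toy quoting function, a sketch of the concept rather than any firm's actual strategy:

```python
def make_quotes(mid: float, spread_bps: float, depth: float) -> dict:
    """Post a bid below and an ask above mid; the market maker earns the
    spread on round trips. Toy sketch: real quoting adjusts for inventory,
    volatility, and fees."""
    half = mid * spread_bps / 2 / 10_000  # half-spread in price terms
    return {"bid": (mid - half, depth), "ask": (mid + half, depth)}

q = make_quotes(mid=0.10, spread_bps=50, depth=100_000)
assert q["bid"][0] < 0.10 < q["ask"][0]
# Round-trip profit per unit if both sides fill at the quoted prices
# equals the full spread (50 bps of mid):
assert abs((q["ask"][0] - q["bid"][0]) - 0.10 * 0.005) < 1e-12
```

Spread and depth are exactly the two quantities exchanges monitor: tighter spreads and deeper quotes mean healthier markets, which is why these metrics drive listing decisions.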
Honestnode reposted
Shai (Deshe) Wyborski
Shai (Deshe) Wyborski@DesheShai·
UTXO set commitments are $kas Kaspa's quantum Achilles' heel

In light of the recent, truly astounding advances in building quantum computers, I think it's time to explain the most significant threat that such machines pose to Kaspa's consensus mechanism. It's not an immediate threat, but arguably something that requires more attention given the shift in the landscape.

Before I start, I want to mention that @mcpauld invited me to a recorded session where we will talk about the new quantum advances, their meaning, and their consequences for blockchains. Stay tuned to know when it is published.

Incremental hash commitments and MuHash

When a new Kaspa node syncs from an existing one, it gets a copy (actually, two copies, but never mind) of the UTXO set, along with a commitment. The commitment is a small hash that cryptographically assures that the supplied UTXO set matches the expected one. Hashing the entire UTXO set is an ever-daunting task, whose computational cost grows with the number of UTXOs. It's reasonable to do once during sync for verification, but for a miner, recomputing the entire hash for every new block would gradually make mining less and less accessible.

To address this, Kaspa headers use an incremental hash. It's a special kind of hash that is used to commit to a set of strings (each representing a UTXO). What makes it special is that, given the current commitment as well as a list of elements to add and remove, one can compute the hash of the resulting set without recomputing the entire hash. So when creating a new block, the miner just takes the existing hash and updates it according to the UTXOs consumed and created in its block. As long as the block wasn't pruned, all nodes can repeat this check and verify that the miner is honest. Generally speaking, hashes are not incremental. Incremental hashes are specially designed to provide this functionality. In particular, Kaspa uses MuHash, a very lightweight incremental hash.

Quantum Shor attacks

I will not go into the details of what quantum computers can or cannot break. But what's important to remember is that they can break what we call "discrete log assumptions". Stock hash families like Keccak, SHA, Blake, and so on do not rely on any such assumption, so they are considered quantum secure (in the sense that it is impossible to quantum-optimize them beyond the obligatory Grover quadratic speedup). However, MuHash relies on elliptic discrete log assumptions, very similar to ECDSA. This means that a quantum adversary can invert the hash commitment. In other words: they can find a completely different UTXO set with the same MuHash commitment.

Consequences

The UTXO set can only be verified independently of the UTXO commitment until the block is pruned. After that, Kaspa clients will accept any UTXO set that matches the commitment. This, for example, allows the following 51% attack:
1. Locate the UTXO commitment of the latest pruning block
2. Use your quantum computer to find another UTXO set with the same commitment
3. Build a competing heavier chain that assumes the UTXO set at pruning is the one you manufactured and not the original one
Voila! A 51% attack the length of a single pruning window that can rewrite Kaspa's entire history.

Comparison to the current state

Currently, Kaspa relies on social consensus in the short term, followed by cryptographic security in the long term. Social consensus prevents committing to UTXO sets that weren't a consequence of legitimate transactions. Cryptography uses the state commitment to cement the UTXO set agreed upon by consensus. This is a very mild relaxation of Bitcoin's trust model, which does not require social consensus in the short term for chain consistency. Breaking MuHash means that the cryptographic backbone of this model no longer holds. UTXO commitments become unreliable, compromising Kaspa's trust model.

I want to stress two things:
1. The attack only requires one application of Shor's algorithm to find a preimage. It might require some clever mix-and-match to find a preimage you actually like, but factors like BPS or difficulty do not make the attack any harder.
2. The attack cost is directly proportional to the length of a pruning window (in real-world time, not blocks). So shorter pruning windows = a less quantum-secure network.

Partial solutions

1. Relying on archival nodes. If archival nodes are always available, then the problem "goes away". The issue is that archival nodes become a trusted source of truth. Currently, we don't have to trust archival nodes, because the UTXO commitment ensures that the UTXO set they describe is genuine. With this assumption quantum-broken, we need to either trust archival nodes or have enough archival nodes to trust decentralization. One of Kaspa's strong points over Bitcoin's antiquated model is a trust model that does not require trusted archives. Removing this will make Kaspa de facto centralized. Worse yet, the reliance on archival nodes is fragile: if, for some reason, there is a period of time longer than a pruning window that was not archived by anyone, the chain becomes indefinitely unverifiable.

2. Changing the hash. There are post-quantum incremental hashes like LtHash. The first issue (but not the key one) is that such a commitment is much larger (2KB versus a few dozen bytes). Recall that the UTXO commitment is part of the header, so using such large commitments will make headers 9-10 times larger, drastically increasing storage costs for pruned nodes. (One can argue that pruned non-mining nodes can run in a mode that chucks away the commitments after verifying them. This will reduce storage, but it is impossible to sync from such nodes trustlessly, recreating the few-sources-of-truth problem.) But even if we do magically find a tiny post-quantum hash, that will only provide a partial solution. A quantum adversary could not forge the UTXO set from the latest pruning point, but would have to go back far enough to split from a block that still uses MuHash.

Possible solution

I haven't spent any time trying to come up with a better solution. It is very possible that a better approach exists. Below is a starting point for a discussion, not a concrete proposal:
1. Converge on a post-quantum incremental hash, let's call it QuHash
2. Decide on a block from which commitments must be in QuHash
3. Decide on a period of time (say, a year) after which reorgs below the QuHash depth are considered invalid

This is a very problematic solution, for several reasons:
1. (After q-day) any archival information from before the QuHash days cannot be trusted. This includes any form of cryptographic receipt. All could be easily forged without tampering with the commitment.
2. (After q-day) there will no longer be a reliable way to verify a UTXO set "all the way to genesis", just "all the way to when we started using QuHash". What happened before q-day is delegated to social consensus.
3. Headers will become larger by an order of magnitude.

Conclusion

MuHash is a considerable quantum weak point that is unique to Kaspa. Arguably, it's time to start brewing up solutions.
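The incremental-hash mechanism described above can be demonstrated with a toy multiplicative set hash in the spirit of MuHash. The real construction works in a much larger group; this sketch only shows why add/remove updates are cheap and order-independent:

```python
import hashlib

# Toy multiplicative incremental hash: the commitment to a set is the product
# of per-element hashes modulo a prime, so adding a UTXO is one multiplication
# and removing one is a multiplication by the modular inverse, with no need to
# rehash the whole set. Parameters are illustrative; real MuHash uses a
# 3072-bit modulus, and its security rests on exactly the kind of group
# assumption a quantum computer can break.
P = 2**127 - 1  # toy prime modulus

def h(elem: bytes) -> int:
    """Map an element to a nonzero group member."""
    return int.from_bytes(hashlib.sha256(elem).digest(), "big") % P or 1

class IncrementalSetHash:
    def __init__(self):
        self.acc = 1  # commitment to the empty set

    def add(self, elem: bytes):
        self.acc = self.acc * h(elem) % P

    def remove(self, elem: bytes):
        self.acc = self.acc * pow(h(elem), -1, P) % P  # modular inverse

# Incrementally updating a commitment agrees with hashing the final set:
a = IncrementalSetHash()
for utxo in [b"utxo1", b"utxo2", b"utxo3"]:
    a.add(utxo)
a.remove(b"utxo2")

b = IncrementalSetHash()
b.add(b"utxo3")
b.add(b"utxo1")
assert a.acc == b.acc  # order-independent; updates never rehash the full set
```

This is exactly the property a miner exploits per block, and the multiplicative structure is also the reason the construction is not quantum-safe.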
Honestnode reposted
XXIM Podcast
XXIM Podcast@xximpod·
🚨 New Episode w @eliottmea @BagayokoJack from @AppKaskad "On-Chain Lending Hits $1 Trillion: Kaskad Is Ready to Launch on Kaspa L2"

In this episode, Ankit sits down with @eliottmea & @BagayokoJack from Kaskad, building on #Kaspa L2, for a long-overdue catch-up. The @AppKaskad lending and borrowing app is now live on testnet on @Igra_Labs's Galleon Network, the whitepaper is out, the audit with @sherlockdefi is wrapping up, and the team has locked in @Fibonacci_HFT as their first market maker alongside an official partnership with @MEXC_Official. Mainnet is in sight and April is the target.

@eliottmea breaks down two of DeFi's most talked-about recent incidents: the $50M @aave swap disaster and the $27M oracle-triggered liquidation, both serving as a masterclass in why patched solutions on fundamentally broken models keep creating new attack vectors. He makes a sharp case for why AMMs are on their way out, why time-weighted average prices fail in real trading conditions, and how @AppKaskad's own oracle approach is being built to avoid exactly these failure modes.

@BagayokoJack then walks through a live demo of the Kaskad testnet, showing the full lending platform in action, from supply and borrow mechanics to the epoch-based Kaskad token rewards system that ties incentivization directly to on-chain participation. The episode wraps with a look at Kaskad's governance model, its MiCA-compliant tokenomics, a 65/35 treasury split that's immutably on-chain, and the team's vision for making Kaskad fully readable by AI agents.
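The TWAP failure mode mentioned above is easy to demonstrate: during a sharp move, a time-weighted average keeps reporting a stale price. A minimal illustration with made-up numbers:

```python
def twap(prices: list) -> float:
    """Time-weighted average price over equally spaced observations."""
    return sum(prices) / len(prices)

# A sharp move: the asset drops 40% in the last observation window.
window = [100, 100, 100, 100, 60]
spot = window[-1]
lagged = twap(window)

assert lagged == 92          # the oracle still reports 92 while the market trades at 60
assert lagged / spot > 1.5   # >50% overstatement: collateral looks far healthier than it is
```

In a lending protocol, that gap is the attack window: positions that should be liquidated at the real price look healthy at the averaged one (or, in the inverse case, healthy positions get liquidated on a manipulated dip).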
Honestnode reposted
Hans Moog
Hans Moog@hus_qy·
Okay, it's time for a little update: I just finished the work on the zero-knowledge part of the vprogs framework, which introduces the ability to prove arbitrary computation. It consists of the following 8 PRs that gradually introduce the necessary features:

1. ZK-framework preparations (github.com/kaspanet/vprog…): This PR cleans up the scheduler and storage layers, extends the build tooling with workspace-wide dependency checking, adds the ability to publish artifacts for transactions and batches (which will later hold the proofs), renames some core types for clarity, and introduces lifecycle events on the Processor trait that allow a VM to hook into key scheduler events like batch creation, commit, shutdown, and rollback.

2. Core Codec (github.com/kaspanet/vprog…): This PR introduces a lightweight encoding library for ZK wire formats. In a zkVM guest, every byte operation contributes to the proof cost, so the codec is designed to reinterpret data in-place rather than copying it. It includes zero-copy binary decoding (Reader, Bits) and sorted-unique encoding for deterministic key ordering. It is built for no_std so it runs inside zkVM guests.

3. Core SMT (github.com/kaspanet/vprog…): To prove state transitions, we need cryptographic state commitments. This PR adds a versioned Sparse Merkle Tree that produces a single root hash representing the entire state. It includes all state-of-the-art optimizations: shortcut leaves at higher tree levels to avoid full-depth paths for sparse regions, multi-proof compression that shares sibling hashes across multiple keys, and compact topology bit-packing to minimize proof size. It integrates into the existing storage and scheduler layers so that every batch commit updates the authenticated state root, while rollback and pruning maintain tree consistency.

4. ZK ABI (github.com/kaspanet/vprog…): Defines the wire format for communication between the host and zkVM guest programs, establishing a universal language for proof composition. It specifies how inputs, outputs, and journals are structured for two levels of proving: the transaction processor, which proves individual transaction execution against a set of resources, and the batch processor, which aggregates transaction proofs and proves the resulting state root transition. Because the ABI is backend-agnostic and no_std compatible, any zkVM backend can directly use it (non-Rust zkVMs would need to reimplement the ABI in their language).

5. ZK Transaction Prover (github.com/kaspanet/vprog…): Introduces the transaction proving worker, which receives serialized execution contexts via the ABI wire format and submits them to a backend-specific prover on a dedicated thread. The Backend trait abstracts the actual proof generation, so different zkVM backends can be swapped without changing the pipeline.

6. ZK Batch Prover (github.com/kaspanet/vprog…): Introduces the batch proving worker, which collects the individual transaction proof artifacts, pairs them with an SMT proof covering the batch's resources, and submits the combined input to a backend-specific batch prover. The result is a single proof attesting to the entire batch's state root transition. Like the transaction prover, the Backend trait abstracts proof generation so different zkVM backends can be swapped without changing the pipeline.

7. ZK VM (github.com/kaspanet/vprog…): Wires everything together by implementing the scheduler's Processor trait with ZK proving support. The VM hooks into the lifecycle events introduced in PR 1 to feed executed transactions into the transaction prover and batches into the batch prover. Proving is optional and configurable - it can be disabled entirely, run at the transaction level only, or run the full batch proving pipeline.

8. ZK Backend RISC0 (github.com/kaspanet/vprog…): Provides the first concrete zkVM backend using risc0. It implements the transaction and batch Backend traits, includes two pre-compiled guest programs (one for transaction processing, one for batch aggregation), and ships with an integration test suite that verifies the full pipeline end-to-end - from transaction execution through batch proof generation to state root verification.

TL;DR: While the early version of the framework focused on maximizing the parallelizability of execution, this feature focuses on extending this capability to maximizing the parallelizability of proof production.

If you're a builder: this is the first version of the framework that lets you write guest programs with a Solana-like API (resources, instructions, program contexts) and have them proven in a zkVM. The current milestone uses a single hardcoded guest program - composability across multiple programs and bridging assets in and out of the L1 are part of the upcoming milestones, but if you're eager to start tinkering, the execution and proving pipeline is fully functional and provides a minimal environment to build and test guest logic today.

Once we add user-deployed guests, they will move one logical layer down: the current transaction processor will become a hardcoded circuit that handles invocation and access delegation to user programs, similar to how SUI handles programmable transactions (including linear type safety at the program boundary). In practice, this means guest programs will be invoked with a very similar API but scoped to a subset of resources, so the basic programming model won't change. Note that guests currently handle their own access authentication (e.g. signature checks) - the framework will eventually manage this automatically.
If you want to contribute, two areas where community involvement would be especially impactful:
- An Anchor-like DSL for writing guest programs - the ABI is stable enough to build on, and a good developer-experience layer would make this accessible to a much wider audience.
- A second zkVM backend (e.g. SP1) - the Backend traits are designed for this, and a second implementation would prove out the abstraction.

One thing I find particularly interesting in the context of PoW: the block hash provides an unpredictable, unbiasable random input that is revealed after transaction sequencing. This gives guest programs native access to on-chain randomness without oracles or additional infrastructure - something traditionally hard to achieve in smart contract platforms.

PS: I am also planning to start with the promised regular hangouts, but since I will visit my family over Easter and want to get a better understanding of the open questions next week (it's good to have some problems to wrestle with during that slower time 😅), I decided to start once I am back (12th of April). Generally speaking, is there a day that people would prefer for these hangouts? I guess Monday would be bad as there is already another community event (write your preferences in the comments if you have a strong opinion).
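The role of the state commitment in the batch pipeline (PR 3 above) can be illustrated with a toy Merkle root over sorted key/value leaves. This is a deliberately simplified stand-in for the versioned SMT, without shortcut leaves, versioning, or proofs:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def state_root(state: dict) -> bytes:
    """Toy state commitment: Merkle root over deterministically sorted
    key/value leaves. A real SMT adds shortcut leaves, versioning, and
    compact multi-proofs, but the invariant is the same: one root hash
    authenticates the entire state."""
    leaves = [H(k + v) for k, v in sorted(state.items())]
    if not leaves:
        return H(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])  # duplicate last leaf on odd levels
        leaves = [H(leaves[i] + leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]

s = {b"alice": b"10", b"bob": b"5"}
r1 = state_root(s)
s[b"bob"] = b"7"            # a batch commit changes the state...
assert state_root(s) != r1  # ...and therefore the committed root
# Sorting makes the root independent of insertion order:
assert state_root(dict(reversed(list(s.items())))) == state_root(s)
```

A batch proof in the pipeline attests exactly to a transition between two such roots: "applying these transactions to the state with root A yields the state with root B."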
Honestnode reposted
Shai (Deshe) Wyborski
Shai (Deshe) Wyborski@DesheShai·
A good time to remind that @or_sattath and I put together the only protocol that can securely spend coins from pre-quantum addresses after q-day. Link in comment.
Justin Drake@drakefjustin

Today is a momentous day for quantum computing and cryptography. Two breakthrough papers just landed (links in next tweet). Both papers improve Shor's algorithm, infamous for cracking RSA and elliptic curve cryptography. The two results compound, optimising separate layers of the quantum stack. The results are shocking. I expect a narrative shift and a further R&D boost toward post-quantum cryptography.

The first paper is by Google Quantum AI. They tackle the (logical) Shor algorithm, tailoring it to crack Bitcoin and Ethereum signatures. The algorithm runs on ~1K logical qubits for the 256-bit elliptic curve secp256k1. Due to the low circuit depth, a fast superconducting computer would recover private keys in minutes. I'm grateful to have joined as a late paper co-author, in large part for the chance to interact with experts and the alpha gleaned from internal discussions.

The second paper is by a stealthy startup called Oratomic, with ex-Google and prominent Caltech faculty. Their starting point is Google's improvements to the logical quantum circuit. They then apply improvements at the physical layer, with tricks specific to neutral-atom quantum computers. The result estimates that 26,000 atomic qubits are sufficient to break 256-bit elliptic curve signatures. This would be roughly a 40x improvement in physical qubit count over previous state of the art. On the flip side, a single Shor run would take ~10 days due to the relatively slow speed of neutral atoms.

Below are my key takeaways. As a disclaimer, I am not a quantum expert. Time is needed for the results to be properly vetted. Based on my interactions with the team, I have faith the Google Quantum AI results are conservative. The Oratomic paper is much harder for me to assess, especially because of the use of more exotic qLDPC codes. I will take it with a grain of salt until the dust settles.

→ q-day: My confidence in q-day by 2032 has shot up significantly.
IMO there's at least a 10% chance that by 2032 a quantum computer recovers a secp256k1 ECDSA private key from an exposed public key. While a cryptographically-relevant quantum computer (CRQC) before 2030 still feels unlikely, now is undoubtedly the time to start preparing.

→ censorship: The Google paper uses a zero-knowledge (ZK) proof to demonstrate the algorithm's existence without leaking actual optimisations. From now on, assume state-of-the-art algorithms will be censored. There may be self-censorship for moral or commercial reasons, or because of government pressure. A blackout in academic publications would be a tell-tale sign.

→ cracking time: A superconducting quantum computer, the type Google is building, could crack keys in minutes. This is because the optimised quantum circuit is just 100M Toffoli gates, which is surprisingly shallow. (Toffoli gates are hard because they require production of so-called "magic states".) Toffoli gates would consume ~10 microseconds on a superconducting platform, totalling ~1,000 sec of Shor runtime.

→ latency optimisations: Two latency optimisations bring key cracking time to single-digit minutes. The first parallelises computation across quantum devices. The second involves feeding the pubkey to the quantum computer mid-flight, after a generic setup phase.

→ fast- and slow-clock: At first approximation there are two families of quantum computers. The fast-clock flavour, which includes superconducting and photonic architectures, runs at roughly 100 kHz. The slow-clock flavour, which includes trapped-ion and neutral-atom architectures, runs roughly 1,000x slower (~100 Hz, or ~1 week to crack a single key).

→ qubit count: The size-optimised variant of the algorithm runs on 1,200 logical qubits. On a superconducting computer with surface-code error correction that's roughly 500K physical qubits, a 400:1 physical-to-logical ratio. The surface code is conservative, assuming only four-way nearest-neighbour grid connectivity.
It was demonstrated last year by Google on a real quantum computer.

→ future gains: Low-hanging fruit is still being picked, with at least one of the Google optimisations resulting from a surprisingly simple observation. Interestingly, AI was not (yet!) tasked to find optimisations. This was also the first time authors such as Craig Gidney attacked elliptic curves (as opposed to RSA). Shor logical qubit count could plausibly go under 1K soonish.

→ error correction: The physical-to-logical ratio for superconducting computers could go under 100:1. For superconducting computers that would mean ~100K physical qubits for a CRQC, two orders of magnitude away from state of the art. Neutral-atom quantum computers are amenable to error-correcting codes other than the surface code. While much slower to run, they can bring the physical-to-logical qubit ratio down closer to 10:1.

→ Bitcoin PoW: Commercially-viable Bitcoin PoW via Grover's algorithm is not happening any time soon. We're talking decades, possibly centuries away. This observation should help focus the discussion on ECDSA and Schnorr. (Side note: as an unofficial Bitcoin security researcher, I still believe Bitcoin PoW is cooked due to the dwindling security budget.)

→ team quality: The folks at Google Quantum AI are the real deal. Craig Gidney (@CraigGidney) is arguably the world's top quantum circuit optimisooor. Just last year he squeezed 10x out of Shor for RSA, bringing the physical qubit count down from 10M to 1M. Special thanks to the Google team for patiently answering all my newb questions with detailed, fact-based answers. I was expecting some hype, but found none.

Honestnode retweeted
Michael Sutton@michaelsuttonil·
@CryptoEndeavr @hus_qy The upcoming one. L1 will support based standalone zk apps with canonical bridging, and Hans is building the complementary client/L1.5 for it