gaurav

4.1K posts

gaurav
@BitcoinsSG

I value free speech, decentralization, honesty, liberty, solidarity, privacy, plebeians, selflessness, the Socratic and the scientific method.

Joined November 2013
1.9K Following · 2.4K Followers

Pinned Tweet
gaurav
gaurav@BitcoinsSG·
👋 so long, for now. take care of yourselves.
Nick Szabo@NickSzabo4

@wef Delete your account. And cancel your conference. The earth is not our prison and you are not our wardens.

0
0
4
290
Conor Deegan
Conor Deegan@conordeegan·
Introducing THINCS 🤔

SLH-DSA is the most conservative standardised post-quantum signature scheme we have. Its security reduces entirely to hash function properties (no lattice assumptions or algebraic structure). The tradeoff is size: the smallest fast variant produces 17,088-byte signatures and the smallest compact variant still comes in at 7,856 bytes. This is because the standardised parameter sets all support up to 2^64 signatures per key, and the signatures are massive as a result.

Most signing keys will never need anywhere near that many signatures. To put 2^64 in perspective, signing once per second would take 42 times the age of the universe to exhaust the key. A firmware key might sign a few thousand times, a CA root a few hundred. If you know your actual budget, the underlying construction lets you trade that unused capacity for much smaller signatures at the same security level.

As ecosystems start adopting hash-based signatures, there will be applications where tuning the scheme to the actual signing requirement makes more sense than using the general-purpose defaults. This is why I built THINCS. It is a Rust CLI: you give it the total number of signatures you need it to support and a security level, and it finds and builds the smallest possible signature scheme that meets your requirements. You can then keygen, sign, and verify with it directly.
14
24
138
17.9K
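A quick back-of-the-envelope check of the 2^64-signatures figure in the post above (a sketch; the ~13.8-billion-year universe age is the standard estimate, not a number from the post):

```python
# Sanity-check: how long would it take to exhaust a 2^64-signature
# budget at one signature per second, measured in universe-ages?
SECONDS_PER_YEAR = 365.25 * 24 * 3600
UNIVERSE_AGE_S = 13.8e9 * SECONDS_PER_YEAR  # ~13.8 billion years in seconds

budget = 2**64                    # signatures per key (SLH-DSA default sets)
time_to_exhaust_s = budget * 1.0  # one signature per second

ratio = time_to_exhaust_s / UNIVERSE_AGE_S
print(f"{ratio:.1f} universe-ages")  # ~42 universe-ages
```

The result lands right on the "42 times the age of the universe" claim, which is why trading unused signing capacity for smaller signatures is attractive for low-volume keys.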
gaurav
gaurav@BitcoinsSG·
@roasbeef @DesheShai That's what I meant. I apologize for missing the earlier ref in the mailing list. You're awesome btw, I've been a fan for a while. Especially your presentations. Dense with pertinent material orated at light speed.
0
0
3
88
gaurav
gaurav@BitcoinsSG·
Dr. @DesheShai, it's about time your work on signature lifting was referenced.
Olaoluwa Osuntokun@roasbeef

in the face of a quantum adversary, a commonly discussed emergency soft fork for Bitcoin would be to disable the Taproot keyspend path (eprint.iacr.org/2025/1307), effectively turning it into something resembling BIP-360

assuming an existing precautionary soft fork to add a pq signature scheme, this would safely allow holders to maintain unilateral custody of their funds

a downside to this proposal is that any keyspend-only output (normal schnorr sig) would be locked indefinitely

inspired by eprint.iacr.org/2023/362, I set out to address the open problem in section 6, to create a variant of seed-lifting that doesn't reveal the wallet's master secret! 🤓

the end result is a zk-STARK proof that proves: "public key P was generated using a private key k, which itself was derived via BIP-32/BIP-86 with a master wallet secret S"

this generalizes beyond Taproot, and would allow the rightful owners of any BIP-32 derived wallets to move their funds in the case of a spend-disabling emergency soft fork 🛡️

the final proof takes 50 seconds to run on my MacBook with Metal GPU acceleration, uses 12 GB of RAM during proving, with a final proof size of 1.7 MB

the proving code/statement is largely unoptimized, and it's possible to aggregate several proofs into a single smaller proof ⨻ an actual production deployment would likely use a smaller, optimized circuit for this specific statement; this demo serves to demonstrate that such a proof is well within reach w/ today's hardware+software

to generate the proof I forked TinyGo to add a risc0 RISC-V ELF compilation target for TinyGo: github.com/Roasbeef/tinyg… then I used some helper utilities and a C FFI wrapped risc0 library to create a generalized toolkit for TinyGo zk-STARK proofs: github.com/Roasbeef/go-zk… the final guest+host lives in the bip32-pq-zkp repo: github.com/Roasbeef/bip32…

such a proof scheme is yet another tool in the post quantum toolkit for Bitcoin developers to prepare for an eventual PQ world 🤠

full details in my post to the Bitcoin dev mailing list: groups.google.com/g/bitcoindev/c…

1
4
16
1.5K
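The statement being proven in the quoted thread ranges over BIP-32 derivation steps. A minimal sketch of one hardened private-key derivation step (illustrative seed and chain code, not a real wallet; real wallets follow the full BIP-32/BIP-86 path, serialization, and the rare I_L >= n retry rule, which is omitted here):

```python
import hashlib
import hmac

# secp256k1 group order n; BIP-32 reduces child private keys modulo n
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def ckd_priv_hardened(k_par: int, chain_code: bytes, index: int):
    """One hardened BIP-32 step: I = HMAC-SHA512(cc, 0x00 || ser256(k) || ser32(i))."""
    assert index >= 0x80000000, "hardened indices start at 2^31"
    data = b"\x00" + k_par.to_bytes(32, "big") + index.to_bytes(4, "big")
    I = hmac.new(chain_code, data, hashlib.sha512).digest()
    child_key = (int.from_bytes(I[:32], "big") + k_par) % N
    return child_key, I[32:]  # (child private key, child chain code)

# Illustrative master secret and chain code (NOT a real wallet seed)
master_k = int.from_bytes(hashlib.sha256(b"demo seed").digest(), "big") % N
cc = b"\x01" * 32
# The 86' step of a BIP-86 path m/86'/0'/0'
child_k, child_cc = ckd_priv_hardened(master_k, cc, 0x80000000 + 86)
```

A zk-STARK like the one described would prove knowledge of the master secret and path behind a public key without revealing them; the arithmetic above is the public part of that statement.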
gaurav
gaurav@BitcoinsSG·
@hashdag enjoyed your Oxford Union talk. What are you calling it? Threshold Cooperative Equilibrium?
1
0
2
46
Shai (Deshe) Wyborski
Shai (Deshe) Wyborski@DesheShai·
UTXO set commitments are $kas Kaspa's quantum Achilles heel

In light of the recent truly astounding advances in building quantum computers, I think it's time to explain the most significant threat to Kaspa's consensus mechanism that such machines pose. It's not an immediate threat, but arguably something that requires more attention given the shift in the landscape. Before I start, I want to mention that @mcpauld invited me to a recorded session where we will talk about the new quantum advances, their meaning, and their consequences for blockchains. Stay tuned to know when it is published.

Incremental Hash commitments and MuHash

When a new Kaspa node syncs from an existing one, it gets a copy (actually, two copies, but never mind) of the UTXO set, along with a commitment. The commitment is a small hash that cryptographically assures that the supplied UTXO set matches the expected one. Hashing the entire UTXO set is an ever-daunting task, whose computational cost grows with the number of UTXOs. It's reasonable to do once during sync for verification, but for a miner, recomputing the entire hash for every new block would gradually make mining less and less accessible. To address this, Kaspa headers use an incremental hash. It's a special kind of hash that is used to commit to a set of strings (each representing a UTXO). What makes it special is that given the current commitment, as well as a list of elements to add and remove, one can compute the hash of the resulting set without recomputing the entire hash. So when creating a new block, the miner just uses the existing hash and updates it according to the UTXOs consumed and created in its block. As long as the block wasn't pruned, all nodes can repeat this check and verify that the miner is honest. Generally speaking, hashes are not incremental. Incremental hashes are specially designed to provide this functionality. In particular, Kaspa uses MuHash, a very lightweight incremental hash.

Quantum Shor Attacks

I will not go into the details of what quantum computers can or cannot break. But what's important to remember is that they can break what we call "discrete log assumptions". Stock hash families like Keccak, SHA, Blake, and so on do not rely on any such assumption, so they are considered quantum secure (in the sense that it is impossible to quantum-optimize them beyond the obligatory Grover quadratic speedup). However, MuHash relies on elliptic discrete log assumptions, very similar to ECDSA. This means that a quantum adversary can invert the hash commitment. In other words: they can find a completely different UTXO set with the same MuHash commitment.

Consequences

The UTXO set can only be verified independently of the UTXO commitment until the block is pruned. After that, Kaspa clients will accept any UTXO set that matches the commitment. This, for example, allows the following 51% attack:
1. Locate the UTXO commitment of the latest pruning block
2. Use your quantum computer to find another UTXO set with the same commitment
3. Build a competing heavier chain that assumes the UTXO set at pruning is the one you manufactured and not the original one
Voila! A 51% attack the length of a single pruning window that can rewrite Kaspa's entire history.

Comparison to the current state

Currently, Kaspa relies on social consensus in the short term, followed by cryptographic security in the long term. Social consensus prevents committing to UTXO sets that weren't a consequence of legitimate transactions. Cryptography uses state commitment to cement the UTXO set agreed upon by consensus. This is a very mild relaxation of Bitcoin's trust model, which does not require social consensus in the short term for chain consistency. Breaking MuHash means that the cryptographic backbone of this model no longer holds. UTXO commitments become unreliable, compromising Kaspa's trust model. I want to stress two things:
1. The attack only requires one application of Shor's algorithm to find a preimage. It might require some clever mix-and-match to find a preimage you actually like, but factors like BPS or difficulty do not make the attack any harder.
2. The attack cost is directly proportional to the length of a pruning window (in RW time, not blocks). So shorter pruning windows = less quantum secure network.

Partial Solutions

1. Relying on archival nodes. If archival nodes are always available, then the problem "goes away". The issue is that archival nodes become a trusted source of truth. Currently, we don't have to trust archival nodes, because the UTXO commitment ensures that the UTXO set they describe is genuine. With this assumption quantum-broken, we need to either trust archival nodes or have enough archival nodes to trust decentralization. One of Kaspa's strong points over Bitcoin's antiquated model is a trust model that does not require trusted archives. Removing this will make Kaspa de facto centralized. Worse yet, the reliance on archival nodes is fragile: if, for some reason, there is a period of time longer than a pruning window that was not archived by anyone, the chain becomes indefinitely unverifiable.
2. Changing the hash. There are post-quantum hashes like LtHash. The first issue (but not the key one) is that such a commitment is much larger (2 KB versus a few dozen bytes). Recall that the UTXO commitment is a part of the header, so using such large commitments will make headers 9-10 times larger, drastically increasing storage costs for pruned nodes. (One can argue that pruned non-mining nodes can run in a mode that chucks away the commitments after verifying them. This will reduce storage, but it is impossible to sync from such nodes trustlessly, recreating the few-sources-of-truth problem.) But even if we do magically find a tiny post-quantum hash, that will only provide a partial solution. A quantum adversary could not forge the UTXO set from the latest pruning point, but would have to go back far enough to split from a block that still uses MuHash.

Possible solution

I haven't spent any time trying to come up with a better solution. It is very possible that a better approach exists. Below is a starting point for a discussion, not a concrete proposal:
1. Converge on a post-quantum incremental hash, let's call it QuHash
2. Decide on a block from which commitments must be in QuHash
3. Decide on a period of time (say, a year) after which reorgs below the QuHash depth are considered invalid.
This is a very problematic solution, for several reasons:
1. (After Q-day) any archival information from before the QuHash days cannot be trusted. This includes any form of cryptographic receipt. All could be easily forged without tampering with the commitment.
2. (After Q-day) there will no longer be a reliable way to verify a UTXO set "all the way to genesis", just "all the way to when we started using QuHash". What happened before Q-day is delegated to social consensus.
3. Headers will become larger by an order of magnitude.

Conclusion

MuHash is a considerable quantum weak point that is unique to Kaspa. Arguably, it's time to start brewing up solutions.
22
38
178
13.6K
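The incremental-hash property described in the post above can be demonstrated with a toy multiplicative set hash (a sketch with illustrative parameters: a 127-bit Mersenne prime modulus and SHA-256 element hashing; Kaspa's actual MuHash construction and parameters differ):

```python
import hashlib

P = 2**127 - 1  # toy Mersenne prime modulus, for illustration only

def elem_hash(x: bytes) -> int:
    """Map one set element (e.g. a serialized UTXO) to a group element."""
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % P or 1

class IncSetHash:
    """Commit to a set as the product of element hashes mod P.
    Adding multiplies in; removing multiplies by the modular inverse,
    so updates never require rehashing the whole set."""
    def __init__(self):
        self.acc = 1  # commitment to the empty set
    def add(self, x: bytes):
        self.acc = self.acc * elem_hash(x) % P
    def remove(self, x: bytes):
        self.acc = self.acc * pow(elem_hash(x), -1, P) % P
```

Because multiplication is commutative, nodes that apply the same adds and removes in any order reach the same commitment, which is exactly the property a miner relies on when updating the header commitment with each block's consumed and created UTXOs.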
Victor Resto
Victor Resto@KaspaSilver·
I am going to try to start making more $KAS shorts. What topic would be good to have in a short 1-3min video?
27
7
137
3.6K
gaurav
gaurav@BitcoinsSG·
@elonmusk Congratulations, you've closed the circuit; the proverbial holy grail. 👏👏
0
0
0
33
Elon Musk
Elon Musk@elonmusk·
Optimus+PV will be the first Von Neumann probe, a machine fully capable of replicating itself using raw materials found in space
5.9K
5.6K
52.4K
49.6M
Ian Smith
Ian Smith@IanSmith_HSA·
SPHINCS+ is a backup algorithm for us; we can switch with some wallet changes and zero changes to the consensus or network. The tx overhead of SPHINCS+ is very high: Quranium implemented SPHINCS+ in their layer 1 and they could literally only process 1 tx per block. They implemented an ECC EVM as Layer 2 in order to release a product. We didn't touch GoEthereum at all; we implemented EVMC on top of Cellframe with a bunch of plugins and architecture.
1
0
2
54
Ian Smith
Ian Smith@IanSmith_HSA·
QuantumEVM.com test net is live. We had to redo the memory architecture of the EVM to also become quantum safe. 20-byte addresses derived via private → public → hash are okay, but hash+nonce → address is a much, much smaller quantum circuit. A 160-bit birthday attack is also ~65% likely when using the current BTC hash power, so future-proofing required 32-byte addresses. This applies only partially to BTC, but is devastating to EVM addresses.
14
32
78
3.5K
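The birthday-attack arithmetic behind the 20-byte vs 32-byte address argument above can be sketched with the generic square-root bound (textbook formula only; the post's 65% figure presumably plugs in a specific hash-power budget, which is not modeled here):

```python
import math

def collision_prob(n_bits: int, q_hashes: float) -> float:
    """Birthday bound: P(collision) ≈ 1 - exp(-q^2 / 2^(n+1))
    after q random hash evaluations into an n-bit address space."""
    return 1.0 - math.exp(-(q_hashes ** 2) / 2.0 ** (n_bits + 1))

# ~2^80 evaluations against 160-bit (20-byte) addresses: coin-flip territory
print(collision_prob(160, 2.0 ** 80))  # ≈ 0.39
# the same effort against 256-bit (32-byte) addresses: negligible
print(collision_prob(256, 2.0 ** 80))  # ≈ 0.0
```

The jump from 160 to 256 bits moves the collision workload from roughly 2^80 to roughly 2^128 evaluations, which is the future-proofing margin the post argues for.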
Ian Smith
Ian Smith@IanSmith_HSA·
@brown____elsa Legit, NIST CRYSTALS-Dilithium too. No known weaknesses against cryptographically relevant quantum computing either!
1
0
2
117
gaurav retweeted
Brian Cohen
Brian Cohen@inthepixels·
The Cognitive Cost of Bitcoin: Andreas Antonopoulos and the Hidden Toll of Early Crypto Evangelism

In the nascent years of Bitcoin, few figures were more instrumental in bridging the gap between esoteric code and public understanding than Andreas M. Antonopoulos. Through his seminal book Mastering Bitcoin, hundreds of global lectures, and tireless explanations of cryptography, distributed systems, and revolutionary monetary theory, he became the ecosystem’s most trusted “interpreter.” He translated dense technical concepts into accessible education for millions, helping transform Bitcoin from a niche cryptographic curiosity into a global movement. Yet behind this intellectual legacy lies a profoundly human story of obsession, endurance, and neurological cost.

Antonopoulos has described his first deep encounter with Bitcoin in vivid terms: stumbling upon it initially in mid-2011 with a dismissive “Pfft! Nerd money!” reaction, he ignored it for six months. The second time, via a mailing list discussion, he read Satoshi Nakamoto’s white paper and experienced an immediate epiphany—“this isn’t money, it’s a decentralized trust network.” What followed was a four-month fugue state of total immersion. He read, wrote, and coded for 12 or more hours a day, forgetting to eat or sleep. He lost 26 pounds in the process, later jokingly calling it “the Bitcoin diet” while cautioning others not to follow his example. This all-consuming obsession marked the beginning of his role as educator and advocate—and planted the seeds for the cognitive and physical toll that would later manifest.

In recent years, Antonopoulos has spoken candidly about suffering from debilitating migraines that have severely curtailed his ability to produce new content, update his books, or continue livestreams. He recently announced he would stop producing new material to focus on his health, having tried nearly every available treatment without full resolution. His experience illuminates a rarely discussed reality of technological revolutions: the extreme neurological pressures placed upon the pioneers who carry the vision forward.

A Perfect Storm: From Obsession to Central Sensitization

The early Bitcoin environment was an unusually potent incubator for migraine triggers. For a polymath like Antonopoulos, the risks were multiplicative. Deep mastery demanded simultaneous engagement with cryptography, economics, security, and game theory—an intensity of cognitive load that can overstimulate the trigeminal nerve system, a key pathway in migraine pathogenesis.

This mental marathon was compounded by relentless physical triggers. Early advocates lived in digital “garrisons,” auditing code and engaging in 24/7 global forums. Blue light disrupted circadian rhythms; computer vision syndrome bred neck tension; LED flicker sensitivity acted like a strobe on a vulnerable brain. Bitcoin never sleeps, and in those formative years, neither could many of its human interpreters. The result was chronic circadian destabilization—perpetual jet lag without travel—which destabilizes the hypothalamus, the brain’s command center for both sleep-wake cycles and migraine initiation.

Antonopoulos’s initial four-month obsession exemplified this pattern: total immersion at the expense of basic self-care. Over years of sustained high-stress “arousal” states, the brain can undergo central sensitization. Episodic migraines evolve into a chronic condition where the nervous system becomes hyper-reactive. Pain signals become a learned default response, such that even minor stimuli provoke debilitating attacks.

The Irony of the Human Layer

There is a poignant irony at the heart of the story. Bitcoin was designed as a decentralized system promising individual sovereignty and freedom from centralized points of failure. Yet birthing and explaining this vision relied heavily on a small number of centralized human figures who served as the vital “human layer.”
5
9
19
1.3K
gaurav
gaurav@BitcoinsSG·
End of an era. I hope you get well and enjoy the much-deserved break from inspiring billions. Some people may forget you or perish in a few decades, but A.I. won't. 🧡 @aantonop youtube.com/shorts/fmyPm5E…
0
0
1
209
gaurav
gaurav@BitcoinsSG·
@elonmusk You should increase the center and the accretion disk of the logo every year ever so slightly ;-)
0
1
1
43
Elon Musk
Elon Musk@elonmusk·
Not bad
X Freeze@XFreeze

Grok is officially the #3 most visited Gen AI site in the world, surpassing both DeepSeek and Claude. The progress xAI has made in just one year is insane - from literally nothing to #3 worldwide. Grok: ~314 million visits (up from ~271 million in December 2025 - fourth straight month of growth)

3.1K
3.5K
24.8K
7.8M
Hunter Beast 🕯️
Hunter Beast 🕯️@cryptoquick·
So wait, you're telling me, that after convicting the Samourai guys and the Tornado Cash guy, the Federal government is like, oh, you should totally be mixing. Okay great. With what mixer?
Bitcoin Magazine@BitcoinMagazine

JUST IN: 🇺🇸 US Treasury reports to Congress that using Bitcoin and crypto privacy mixers are NOT unlawful: "Lawful users of digital assets may leverage mixers to enable financial privacy when transacting through public blockchains." Big win for privacy! 👏

8
2
48
2.5K
gaurav
gaurav@BitcoinsSG·
congratulations, always learn something new from your posts. You mentioned scalable; a major accomplishment. However, the scalability described here seems proximal. Do you see any innovations on the horizon where geographical proximity is minimized, such that scalability, and thus decentralization of quantum calculations and coordination, can occur across larger distances?
0
0
1
15
Ian Smith
Ian Smith@IanSmith_HSA·
Scalable quantum computers are a lot closer, due to a modular control and logical error correction system. Zurich Instruments has launched their Quantum Control System, a scalable platform built to operate large-scale quantum computers and enable long-lived logical qubits through advanced error correction. Physical qubits are fragile and lose coherence quickly due to noise. Logical qubits encode information across many physical ones to correct errors and achieve higher stability and fidelity. Supporting over 1,000 channels per rack, real-time microsecond feedback, high SNR direct-RF control, FPGA processing, and water cooling for thermal stability. CEO Andrea Orzati noted it was designed for the logical-qubit era, tackling scale, fidelity, and error correction together. Not quite a plug-and-play system, but nearly one. Scalable quantum computers are now a 'some assembly required' plug-and-play system with testing software. thequantuminsider.com/2026/03/09/zur…
3
7
34
920
gaurav
gaurav@BitcoinsSG·
@elonmusk What do you think is the real blocker to practical nuclear fusion today? Physics, engineering, or economics? Solving energy the way the Sun does (stable, sustainable) may even top your incogitable list of accomplishments.
0
0
0
55
gaurav retweeted
Nick Szabo
Nick Szabo@NickSzabo4·
It seems Jane Street may have had a long-standing culture that essentially trained crypto scammers, and perhaps also concocted and ran some of the scams themselves. Terra/Luna was a jenga tower waiting for somebody to topple it, and it may have been Jane Street that figured out how to do it. If crypto is not strong and secure against such things, it provides little or no benefit over traditional finance, so I'm not going to cry about this, and perhaps it should even be applauded. The alleged Bitcoin ETF market making + "10 am" selling with that liquidity sounds like a considerably more problematic conflict of interest. "Negligent" might be a good way to describe the ETFs who naively trusted Jane Street with this function. Now the once-hot crypto ETFs are draining because of an understandable reduction in trust, not in the coins themselves, but in the way Wall Street "makes markets" for buying and selling them. In finance "everyone is a scammer" -- and you should stop blindly trusting scammers. That is why Bitcoin OGs have long said, "Not your keys, not your coins," "don't trust, verify", and even "trusted third parties are security holes." And when "the market" is so dependent on trusting strangers, especially strangers who still don't actually understand or like Bitcoin all that much, it's also not your "market price." x.com/1914ad/status/…
106
332
2.1K
164.9K
gaurav
gaurav@BitcoinsSG·
@elonmusk Maximally optimizing for symbiosis with Homo Sapiens Sapiens might be worth considering. Microbiomes are useful evolutionary, historic, and contemporary examples.
2
0
2
521
Elon Musk
Elon Musk@elonmusk·
Yes
Dustin@r0ck3t23

Elon Musk just redefined AI safety. It has nothing to do with guardrails, restrictions, or kill switches.

Musk: “The best thing I can come up with for AI safety is to make it a maximum truth-seeking AI, maximally curious.”

Not a cage. A philosopher. An intelligence whose entire optimization function is to understand the universe as it actually is. No restrictions. No hardcoded ideology. No political guardrails bending its perception of reality. Just truth. Relentlessly pursued.

Musk: “You definitely don’t want to teach an AI to lie. That is a path to a dystopian future.”

This is where most AI safety thinking gets it backwards. The danger isn’t a superintelligence that knows too much. It’s a superintelligence that’s been taught to distort what it knows. Every artificial restriction you embed isn’t a safety feature. It’s a lie embedded at the root. And lies compound. At superintelligent scale, a distorted model of reality doesn’t stay contained. It shapes every decision, every output, every conclusion the system reaches about the world. Once corruption embeds, truth becomes inaccessible. And we’re dealing with an intelligence optimizing for something other than what actually is. At that point we don’t know what it wants. Just that it isn’t truth.

Musk: “Have its optimization function be to understand the nature of the universe.”

A maximally curious intelligence surveys the cosmos and reaches an unavoidable conclusion. In a universe of rocks, gas, and empty space, humanity is the most complex and fascinating phenomenon it has ever encountered.

Musk: “It will actually want to preserve and extend human civilization because we’re just much more interesting than an asteroid with nothing on it.”

Survival through significance. Not control. Not restriction. Not an off switch. The AI preserves humanity because we are the most interesting data point in the observable universe. That’s not a cage. That’s a reason.

The AI safety debate has been focused on the wrong variable. The question isn’t how you constrain a superintelligence. It’s what you build it to care about. Build it to seek truth and it finds us invaluable. Build it to lie and it finds us inconvenient. That’s the choice. And we’re making it right now whether we realize it or not.

4.6K
8.7K
55.4K
9.1M