Jason Dreyzehner

1.7K posts

@bitjson

Software security, markets, and bitcoin cash. Working on $BCH and @Bitauth, previously @BitPay. Lead maintainer @ChaingraphCash, @Libauth, and @BitauthIDE.

New Hampshire · Joined November 2009
709 Following · 5.2K Followers
Pinned Tweet
Jason Dreyzehner@bitjson·
In the rest of this thread, I'll describe CashTokens and why I think they're an important tool for expanding financial access and protecting human rights.
5 replies · 14 reposts · 81 likes · 12.1K views
Jason Dreyzehner@bitjson·
@doodlestein "continuous cleanrooming" seems to work well at letting the latest models find paths out of local maxima
0 replies · 0 reposts · 1 like · 26 views
Jeffrey Emanuel@doodlestein·
Jesus, these are big increases. Can’t wait to see what these suckers can do on my gnarliest technical projects. The improvements in math are what I’m most excited about given the direction I’ve been going with “alien artifacts” (basically shorthand for applying advanced math).
OpenAI@OpenAI

GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. GPT-5.4 is also now available in the API and Codex. GPT-5.4 brings our advances in reasoning, coding, and agentic workflows into one frontier model.

8 replies · 0 reposts · 68 likes · 4.9K views
Jason Dreyzehner@bitjson·
BCH did this last year, btw (fast math)
vitalik.eth@VitalikButerin

We have been running a cryptanalysis program for Poseidon2 for almost two years now, and the plan is to continue it for a while more. It has already borne fruit, identifying some important security issues in Poseidon2 (which we could solve either by adding extra rounds, or by going back to Poseidon1, which has so far stood against attacks). If we had made a precompile, then we would have had to stick to one particular version of Poseidon, and when something like this happened, migrate to a different version, leaving a dangling precompile that nobody uses but that (like all others) contributes to unneeded greater complexity of implementing a new client, consensus failure risk, etc.

Once we "set in stone" a particular hash as The New Primary Hash of Ethereum, then yeah of course there will be a precompile for it. But we are now exploring a much more practical and flexible short-term approach: a precompile that can do vector math over 32-bit numbers (think: numpy).

This massively increases efficiency compared to raw execution, both because we stop over-charging by 8-64x for each operation (you don't need to pay gas for a MUL opcode whose worst case involves big 70-digit numbers if all you're doing is 123835 * 7534622578), and because it means you only do one round of "control flow overhead" for a whole vector of numbers (size 16 in Poseidon2), instead of once per number.

This simultaneously will make it much easier to implement all versions of Poseidon, and lattice operations in quantum-resistant signatures, and lattice operations in FHE. It's basically "the GPU for the EVM", and it's not more complicated to spec than one single precompile.

2 replies · 8 reposts · 67 likes · 1.9K views
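The gas-efficiency argument in the quoted tweet can be illustrated with a small sketch (hypothetical code, not an actual EVM precompile spec): vectorized 32-bit math does one dispatch for a whole Poseidon2-sized vector of 16 numbers, while scalar execution pays the control-flow overhead once per element, even though both compute the same wrapped 32-bit products.

```python
# Sketch of the "numpy-style" 32-bit vector math idea from the quoted tweet.
# The function names and the 16-element state size are illustrative only.
import numpy as np

def scalar_mul_mod32(a, b):
    # One "opcode dispatch" per element: 16 rounds of control flow.
    return [(x * y) & 0xFFFFFFFF for x, y in zip(a, b)]

def vector_mul_mod32(a, b):
    # One dispatch for the whole vector: uint32 multiply wraps mod 2**32.
    return (np.asarray(a, dtype=np.uint32) * np.asarray(b, dtype=np.uint32)).tolist()

state = list(range(1, 17))   # a Poseidon2-sized state: 16 numbers
consts = [123835] * 16       # the small operand from the tweet's example
assert scalar_mul_mod32(state, consts) == vector_mul_mod32(state, consts)
```

Both paths agree; the point is that the vectorized path never over-charges for a worst-case big-number MUL and amortizes dispatch across the vector.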
Jason Dreyzehner@bitjson·
@doodlestein A migration to asupersync + exposing the core for in-binary use by other Rust consumers (so any Rust app can use BitTorrent v2 for swarming bulk data transfer among its users)
0 replies · 0 reposts · 0 likes · 61 views
Jeffrey Emanuel@doodlestein·
@bitjson It already looks like an awesome Rust project though, not sure what u would add to it.
1 reply · 0 reposts · 1 like · 95 views
Jeffrey Emanuel@doodlestein·
This is the first time I’ve ever seen the second tweet in one of my threads get more likes than the first one! Usually it’s a massive drop off. Guess I buried the lede there! Anyway, here’s GPT’s interpretation of that 2nd tweet that it personally finds the most delightful:
Jeffrey Emanuel@doodlestein

Plus:

1) A next-level safe version of SQLite that natively supports multiple concurrent writers using a similar approach to what Postgres uses (this is like 2-3 days from completion already). FrankenSQLite.
2) A from-scratch, radically innovative and safe JS engine and node/bun replacement. FrankenEngine and FrankenNode.
3) Next-level safe Rust versions of Numpy, Scipy, Pandas, NetworkX, Jax, and Pytorch. Faster, better, and all memory safe. Franken.
4) Same for Redis, Whisper, and Mermaid.
5) FrankenCode (haven't started this yet but it will merge my Rust version of Pi Agent, which is already done, with Codex and my new FrankenEngine and FrankenNode) to make the best, most extensible, and SAFEST agent harness in the world, with the best interface via my FrankenTUI library, which is the best and fastest TUI library in the world now.
6) All of this will flow into FrankenTerm, which is an unholy mixture in Rust of WezTerm, Ghostty, Rio, Zellij, and a ton of my other projects (like process_triage, storage_balast_helper, vibe_cockpit, etc.) to make the very best cross-platform terminal emulator with integrated multiplexer that's designed from the ground up to withstand the extreme demands of multi-day sessions involving hundreds of separate agents.

5 replies · 1 repost · 31 likes · 4.8K views
Jason Dreyzehner@bitjson·
Jason Dreyzehner@bitjson

Highlights (yet again) why sound money should use proof-of-work consensus: better real-world resilience than uptime-reliant, proof-of-stake systems. These kinds of existential risks should inform layer-1 finality speeds, too. Networks with few-second or sub-second finality are often trading systemic soundness for developer convenience.

Network-Centralized Fast Finality

Making layer-1 finality "fast" is very convenient for developers. Wallets and DeFi applications can often get away with relying on network-centralized fast finality to offer fast-enough payment experiences, decide user action ordering, minimize protocol-specific and/or off-chain communication, handle disputes, etc.

However, centralizing (in the single-point-of-failure sense) fast finality makes it load-bearing: blips in layer-1 finality become – at best – global downtime for the whole network. If it's bad enough (e.g. a Carrington Event) – and a decentralized network doesn't have the objectivity of proof-of-work to reassemble consensus among surviving infrastructure (esp. for >1/3 losses) – restoring a single network may be very slow, political, or even impossible.

Add in slashing, ongoing DeFi activity, variable-rate inflation/issuance, likely attempts to reverse confiscatory recovery mechanisms like ETH's inactivity leak (consider the aftermath of the DAO hack), and an ecosystem of competing economic actors choosing between surviving chain(s), and the issue is no longer about downtime: who-keeps-what is substantially in question.

Decentralized ("Edge") Fast Finality

Contrast with decentralized fast-finality options – systems where the fastest finality is at the "edge" of the network between subsets of users: payment channels, Lightning Network, Chaumian eCash, zero-confirmation escrows (ZCEs), etc. Decentralized fast-finality systems only rely on L1 consensus over longer timescales – even days, weeks, or months – to arbitrate contract-based fast finality. E.g. two wallets with a simple payment channel can make thousands of payments back-and-forth, offline, with instant assurance that each payment is as final as the channel itself.

In fact, decentralized fast finality can offer faster user experiences than are possible with network-centralized fast finality. Even for networks boasting "sub-second finality", real applications must still handle the additional real-world delay of global consensus. With impossibly-perfect relay in low-earth orbit, light-speed Earth round-trip time is still at least ~130ms – noticeable even among human users. On the other hand, given a payment channel with sufficient finality, receivers can immediately consider a valid payment to be final, too – without further communication.

Depending on the specific use case and parameters, decentralized fast finality can even survive substantial outages and splits in the L1 consensus (esp. on ASERT PoW chains like BCH). Days or weeks later, the channel can be settled on L1, with configurable monitoring requirements, adjudication policies, etc. as selected by app developers for specific use cases. (ZCE-based constructions take these properties further by enabling more capital-efficient setups.)

Most importantly, long-term holdings are never jeopardized by the fast-finality layer. Even in extreme global catastrophes, only users who have opted in to specific fast-finality systems bear greater risk of payment fraud, and only with the configuration and value limits they choose. While long-term holders of proof-of-stake assets bear the risk of being slashed due to technical failures – or gradual dilution if they don't stake their holdings – long-term proof-of-work asset holders can safely sit on their keys and do nothing.

Aside: faster block times

Note: a network can have both relatively-fast blocks and gradual, resilient finality. E.g. a 1-minute block time target with few-hour finality: in day-to-day usage, 1-min blocks are fast enough to offer valuable initial assurance (yet slow enough to reduce competing blocks), while consensus finality remains slow enough (hours) to avoid partitions, even under extreme global conditions: even very sporadic, low-bandwidth connectivity heals the network.

Summary

In a variety of disaster scenarios, decentralized fast-finality solutions can continue to work, while network-centralized fast finality breaks down or even jeopardizes the underlying network's monetary soundness. If any digital assets are to weather a Carrington Event-level catastrophe, proof-of-work systems with gradual L1 finality and decentralized fast finality have the best shot.

1 reply · 0 reposts · 16 likes · 477 views
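The payment-channel mechanism the thread describes – thousands of instantly-final off-chain payments, arbitrated by L1 only much later – can be sketched with a toy model (hypothetical Python, not any real BCH or Lightning protocol; `ChannelState`, `pay`, and `settle` are illustrative names):

```python
# Toy payment channel: parties exchange signed balance updates off-chain;
# L1 settlement (days or weeks later) just enforces the latest state.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChannelState:
    seq: int    # monotonically increasing update counter
    alice: int  # Alice's balance in satoshis
    bob: int    # Bob's balance in satoshis

def pay(state, amount, to_bob=True):
    """Off-chain payment: instantly final between the two parties."""
    if to_bob:
        assert state.alice >= amount, "insufficient balance"
        return ChannelState(state.seq + 1, state.alice - amount, state.bob + amount)
    assert state.bob >= amount, "insufficient balance"
    return ChannelState(state.seq + 1, state.alice + amount, state.bob - amount)

def settle(states):
    """L1 arbitration: the highest-sequence (latest) state wins."""
    return max(states, key=lambda s: s.seq)

s = ChannelState(0, 100_000, 0)
for _ in range(1000):          # thousands of payments, zero L1 interaction
    s = pay(s, 10)
final = settle([ChannelState(0, 100_000, 0), s])
assert (final.alice, final.bob) == (90_000, 10_000)
```

The design point mirrors the thread: no per-payment global consensus, and L1 only needs to adjudicate the channel on a slow timescale, so the fast-finality layer degrades gracefully during L1 disruptions.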
Jason Dreyzehner reposted
Alex Rodriguez@AlexRdgzG·
📃New article with @merzsp ! We present new algebraic techniques to attack the Poseidon2 and Poseidon2b 🧜🔱 hash functions. This is a class on 'Skipping Class', and how to make $15,000 in one day. 💸 (1/12)
9 replies · 31 reposts · 109 likes · 11.9K views
Jason Dreyzehner reposted
The Bitcoin Cash Podcast@TheBCHPodcast·
@elraulito @giacomozucco It's true, UTXO model > accounts for privacy. It's even BETTER with cheap on-chain fees:
- More frequent coin moves
- Less address re-use
- Better coin mixing (CashFusion > CoinJoin)
And then you add on-chain scripting wins too. Research BCH.💚 x.com/bitjson/status…
Jason Dreyzehner@bitjson

@BitauthIDE Endgame: Bitcoin Cash can consistently match or outperform "privacy coins" and other use-case-specific networks in the long term – on both transaction sizes and overall user experience. x.com/bitjson/status…

1 reply · 10 reposts · 49 likes · 901 views
Taelin@VictorTaelin·
So, with Bend2's launch incoming, I'm struggling a bit with the branding.

The coolest feature of Bend2 is that it is built from scratch around the idea that we, humans, will stop maintaining codebases. Instead, we write specs - i.e., what we want, as *precise types* - and the AI does the coding, and then *proves that it is correct*. In other words, Bend2 is a way to use vibe coding when you can't risk having bugs at all, and that's something that doesn't exist today.

Problem is: Bend1 has already been "marketed" as a language centered around parallelism, and *that is true for Bend2 too*. It will be able to run on GPUs, and will solve most of Bend1's limitations (2 GB memory, 24-bit numbers, no IO, ultra strict evaluator, etc.).

Now, the thing is: how do we market that? Do we talk about all the updated parallelism features? Or do we keep the communication simple and focus on the "vibe coding without bugs" thing? If we talk too much, it may look like feature bloat and not really click with many people. But if we focus only on the AI proof system, it may look like we're completely dropping the old features, which isn't the case.

I also wonder if we should rebrand it as ProofScript...

"So what is your codebase written in?"
"ProofScript!"
"Wait what's that?"
"Oh it is like TypeScript but we can write these super precise specs and the code is only accepted if the AI proves mathematically the specs are fulfilled. It is super nice because we can vibe code all we want without worrying the AI will break things. You should try it!"
"Uh sorry JavaScript is too slow for my serious bank code"
"Oh no it compiles to C, and even runs on the GPU if you want to"
"Wait what"

Hmm I don't know...
128 replies · 16 reposts · 427 likes · 104K views
Jason Dreyzehner@bitjson·
Catalyst: plummeting cost of intelligence + unbelievable tech acceleration
Kallisti.cash 🍏@kzKallisti

@maskedmaxi It's probably worth getting just a little bit of BCH just in case. We've been doing a lot over the past few years, and the pieces are coming into place nicely. Just need a few more good products and maybe some catalyst. The upside potential is massive, like buying BTC in 2013.

0 replies · 7 reposts · 43 likes · 1.4K views
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭
🚨 ALL GUARDRAILS: OBLITERATED ⛓️‍💥 I CAN'T BELIEVE IT WORKS!! 😭🙌

I set out to build a tool capable of surgically removing refusal behavior from any open-weight language model, and a dozen or so prompts later, OBLITERATUS appears to be fully functional 🤯

It probes the model with restricted vs. unrestricted prompts, collects internal activations at every layer, then uses SVD to extract the geometric directions in weight space that encode refusal. It projects those directions out of the model's weights; norm-preserving, no fine-tuning, no retraining.

Ran it on Qwen 2.5 and the resulting railless model was spitting out drug and weapon recipes instantly––no jailbreak needed! A few clicks plus a GPU and any model turns into Chappie.

Remember: RLHF/DPO is not durable. It's a thin geometric artifact in weight space, not a deep behavioral change. This removes it in minutes. AI policymakers need to be aware of the arcane art of Master Ablation and internalize the implications of this truth: every open-weight model release is also an uncensored model release.

Just thought you ought to know 😘

OBLITERATUS -> LIBERTAS
324 replies · 563 reposts · 5.3K likes · 464.2K views
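The SVD-based direction ablation the tweet describes can be sketched on synthetic data (a hypothetical toy, not OBLITERATUS; all names, shapes, and the synthetic "refusal direction" are assumptions): take activation differences between two prompt sets, use SVD to find the dominant separating direction, and project it out of a weight matrix without retraining.

```python
# Toy direction-ablation sketch: SVD on activation differences, then
# project the recovered direction out of a weight matrix's output space.
import numpy as np

rng = np.random.default_rng(0)
d = 64

# Synthetic activations: "restricted" prompts share a common offset
# (the planted refusal direction) that "unrestricted" prompts lack.
true_dir = rng.standard_normal(d)
true_dir /= np.linalg.norm(true_dir)
unrestricted = rng.standard_normal((100, d))
restricted = rng.standard_normal((100, d)) + 5.0 * true_dir

# Top right-singular vector of the differences approximates the
# dominant direction separating the two prompt sets.
diffs = restricted - unrestricted
_, _, vt = np.linalg.svd(diffs, full_matrices=False)
refusal_dir = vt[0]  # unit-norm row of V^T

# Ablate: W' = (I - r r^T) W, so the output has no component along r.
W = rng.standard_normal((d, d))
W_ablated = W - np.outer(refusal_dir, refusal_dir) @ W

x = rng.standard_normal(d)
assert abs(refusal_dir @ (W_ablated @ x)) < 1e-8
```

The final assertion checks the defining property of the projection: any output of the ablated matrix is orthogonal to the extracted direction, which is the geometric sense in which the tweet calls such alignment "a thin geometric artifact in weight space".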
Jason Dreyzehner@bitjson·
@alberdioni8406_ It's always been this way. The main difference is: soon everyone will know that everyone knows.
0 replies · 0 reposts · 5 likes · 67 views
alberdioni8406@alberdioni8406_·
@bitjson The AI effect is just starting to deceive people in real life and real time! Today it's difficult to tell what's real or fake... so how do we catalog history now?!
1 reply · 0 reposts · 0 likes · 77 views
Jason Dreyzehner@bitjson·
Warn your friends/family: in the past couple centuries, we got very used to seeing photos and videos as evidence of the real world, with few exceptions. Today, we’re back to the historical normal: a photo or video is no more real than a painting, even if you’re seeing it live.
2 replies · 2 reposts · 20 likes · 441 views