

Mike (@anothervariable)
| Just another Army Grunt | Delivering Decentralization @hlabs_tech |

GOOGLE’S QUANTUM BREAKTHROUGH THREATENS CRYPTO SECURITY

Breaking: Google’s latest quantum computing method could disrupt encryption standards across the crypto ecosystem. Details are scarce, raising serious concerns about transparency and post-quantum readiness.

⚠️ While Bitcoin faces no immediate risk, the 30–35% of its supply tied to older keys is theoretically vulnerable. Altcoins and blockchain protocols like $ETH and $SOL may face similar challenges if upgrades lag.

⏰ This is a wake-up call for the entire industry to accelerate cryptographic innovation.









DID WE JUST SOLVE THE DREP COMPENSATION PROBLEM? 😱

We deployed a contract written in @pebble_lang that not only pays the fee of your voting transaction but also leaves you a tip to thank you for your efforts researching the proposal before voting. You get the reward regardless of how you vote, as long as you cast a vote on the proposal.

This is a fully open-source Pebble smart contract, executing on mainnet 👀 Just remember that when you vote for it 😜
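The mechanism as described is simple: refund the voting fee plus a tip, gated only on having cast a vote on the proposal, not on the vote's direction. A minimal sketch of that payout rule in plain Python — not the actual Pebble contract, and the tip amount and names are hypothetical:

```python
# Hypothetical sketch of the payout rule described above (plain Python,
# NOT the actual Pebble contract). Reimburse the voting transaction fee
# plus a fixed tip whenever a DRep has cast a vote -- any vote -- on the
# target proposal.
TIP_LOVELACE = 500_000  # assumed tip size; the post does not state the amount

def payout(voted_on_proposal: bool, tx_fee_lovelace: int) -> int:
    """Reimbursement owed to a voter, in lovelace."""
    if not voted_on_proposal:
        return 0  # no vote on this proposal: no fee refund, no tip
    # direction of the vote (yes/no/abstain) is deliberately irrelevant
    return tx_fee_lovelace + TIP_LOVELACE
```

The key design point is that the gate checks only participation, so the tip cannot bias how anyone votes.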

HLabs 2026 proposal, including Pebble and Gerolamo development, is on mainnet; link in the first comment 👇 We also have a special surprise for DReps that involves @pebble_lang. Just give me a couple of minutes 👀






nvidia's 3B mamba destroyed alibaba's 3B deltanet on the same RTX 3090. only 24 days between releases. same active parameters, same VRAM tier, completely different architectures.

nemotron cascade 2: 187 tok/s. flat from 4K to 625K context. zero speed loss. flags: -ngl 99 -np 1. that's it. no context flags, no KV cache tricks. auto-allocates 625K.

qwen 3.5 35B-A3B: 112 tok/s. flat from 4K to 262K context. zero speed loss. flags: -ngl 99 -np 1 -c 262144 --cache-type-k q8_0 --cache-type-v q8_0. needed KV cache quantization to fit 262K.

both models held a flat line across every context level. both architectures are context-independent. but nvidia's mamba2 is 67% faster at generating tokens on the exact same hardware and needs fewer flags to get there.

same node, same GPU, same everything. the only variable is the model. a gold-medal math olympiad winner running at 187 tokens per second on a single RTX 3090, a card from 6 years ago. nvidia cooked.
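The "67% faster" figure follows directly from the two throughput numbers quoted in the post, which is worth a quick sanity check:

```python
# Sanity-check of the speedup claim using only the numbers from the post:
# nemotron cascade 2 at 187 tok/s vs qwen 3.5 35B-A3B at 112 tok/s,
# both measured on the same RTX 3090.
nemotron_tps = 187
qwen_tps = 112
speedup_pct = round((nemotron_tps / qwen_tps - 1) * 100)
print(speedup_pct)  # 67
```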

One of our HLabs engineers discussing and showcasing Gerolamo and Gerolamino (for the browser) at the @IOGroup client diversity workshop in Buenos Aires