virtu
@vir7u

Bitcoin Dev & Researcher
Sol 3 · Joined October 2020
22 Following · 3.7K Followers
49 posts
virtu @vir7u:
@pete_rizzo_ I feel it's a bit unfair to include great achievements such as ETH, SOL, ADA and XRP under the Bitcoin Reserve umbrella. They obviously deserve better. How about a separate Strategic Shitcoin Reserve?
1 reply · 0 reposts · 5 likes · 90 views
The Bitcoin Historian @pete_rizzo_:
BREAKING: US COMMERCE SECRETARY HOWARD LUTNICK SAYS STRATEGIC #BITCOIN RESERVE "WILL BE ANNOUNCED" AT TRUMP WHITE HOUSE SUMMIT "OTHER TOKENS WILL BE TREATED POSITIVELY BUT DIFFERENTLY." THIS IS MASSIVE 🚀
[2 images attached]
458 replies · 1.7K reposts · 12.5K likes · 799.1K views
virtu @vir7u:
Hey @okx, what's up with having to re-verify addresses that already completed a Satoshi test when I send from them again? This is stupid!
0 replies · 0 reposts · 0 likes · 283 views
virtu @vir7u:
What's the diff between a CBDC and centralized digital currencies like ETH, XRP, SOL and ADA? In the limit, zilch. Only fools and liars will try to convince you future govs won't pressure US-based teams of centralized devs to make any changes they want.
1 reply · 0 reposts · 1 like · 323 views
virtu @vir7u:
@alexocheema But can you run the full model on Apple hardware using multiple Minis? Or just the smaller distilled ones on a single Mini? As long as the multi-GPU setup is the only one capable of running the full model, I guess the GPU price premium is justified.
0 replies · 0 reposts · 1 like · 36 views
Alex Cheema @alexocheema:
Market close: $NVDA: -16.91% | $AAPL: +3.21%

Why is DeepSeek great for Apple? Here's a breakdown of the chips that can run DeepSeek V3 and R1 on the market now:

NVIDIA H100: 80GB @ 3TB/s, $25,000, $312.50 per GB
AMD MI300X: 192GB @ 5.3TB/s, $20,000, $104.17 per GB
Apple M2 Ultra: 192GB @ 800GB/s, $5,000, $26.04(!!) per GB

Apple's M2 Ultra (released in June 2023) is 4x more cost efficient per unit of memory than AMD MI300X and 12x more cost efficient than NVIDIA H100!

Why is this relevant to DeepSeek? DeepSeek V3/R1 are MoE models with 671B total parameters, but only 37B are active each time a token is generated. We don't know exactly which 37B will be active when we generate a token, so they all need to be ready in high-speed GPU memory. We can't use normal system RAM because it's too slow to load the 37B active parameters (we'd get <1 tok/sec). On the other hand, GPUs have fast memory, but GPU memory is expensive.

Apple Silicon, however, uses Unified Memory and UltraFusion to fuse dies - a tradeoff that favors a large amount of medium-fast memory at a cheaper cost. Unified memory shares a single pool of memory between the CPU and GPU rather than having separate memory for each; there's no need to copy data between them. UltraFusion is Apple's proprietary interconnect technology for connecting two dies with a super high speed, low latency connection (2.5TB/s). Apple's M2 Ultra is literally two Apple M2 Max dies fused together with UltraFusion. This is what enables Apple to achieve such a high amount of memory (192GB) and memory bandwidth (800GB/s).

Apple M4 Ultra is rumored to use the same UltraFusion technology to fuse together two M4 Max dies. This would give the M4 Ultra 256GB(!!) of unified memory @ 1146GB/s. Two of these could run DeepSeek V3/R1 (4-bit) at 57 tok/sec.

All of this, and Apple has managed to package it in a small form factor for consumers with great power efficiency and great open-source software (uncharacteristic of Apple!). MLX (h/t @awnihannun) has made it possible to leverage Apple Silicon for ML workloads, and @exolabs has made it possible to cluster together multiple Apple Silicon devices to run large models, demonstrating DeepSeek R1 (671B) running on 7 M4 Mac Minis. It's unclear who will build the best AI models, but it seems likely that AI will run on American hardware, on Apple Silicon.
[3 images attached]
215 replies · 1.1K reposts · 7.1K likes · 1.2M views
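The arithmetic in the thread above is easy to reproduce. A minimal sketch, using the thread's own prices and specs (not vendor list prices) and assuming token generation is purely memory-bandwidth-bound:

```python
# Reproduce the thread's cost-per-GB figures and the bandwidth-bound
# token-rate estimate. Prices/specs are the thread's numbers (assumptions).
chips = {
    "NVIDIA H100":    {"mem_gb": 80,  "bw_gbps": 3000, "price": 25_000},
    "AMD MI300X":     {"mem_gb": 192, "bw_gbps": 5300, "price": 20_000},
    "Apple M2 Ultra": {"mem_gb": 192, "bw_gbps": 800,  "price": 5_000},
}

for name, c in chips.items():
    print(f"{name}: ${c['price'] / c['mem_gb']:.2f} per GB")

# DeepSeek V3/R1: MoE with 37B active parameters per generated token.
# At 4-bit quantization each parameter is 0.5 bytes, so every token
# must stream ~18.5 GB of weights from memory.
active_params = 37e9
bytes_per_param = 0.5  # 4-bit weights
gb_per_token = active_params * bytes_per_param / 1e9

# Rumored M4 Ultra bandwidth: 1146 GB/s. This gives an upper bound,
# ignoring compute time and interconnect overhead between the two Macs.
m4_ultra_bw = 1146
print(f"~{m4_ultra_bw / gb_per_token:.0f} tok/sec upper bound")
```

The ~62 tok/sec upper bound is in the same ballpark as the 57 tok/sec the thread quotes for two M4 Ultras; the gap is the overhead a pure bandwidth model ignores.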
virtu @vir7u:
@MrJamesMay Any recommendations on new cars that still have these?
0 replies · 0 reposts · 2 likes · 27 views
virtu @vir7u:
Any (Bitcoin-friendly) VPS hosts to recommend? Hetzner flagged my crawler as a net scanner; when I told them what it was, they told me "go away, no crypto here!" (despite them being the single-largest bitcoin-node-hosting AS on the planet) 🤷‍♂️
1 reply · 1 repost · 2 likes · 490 views
virtu @vir7u:
@1440000bytes @jonatack I just looked at more historical connection time data. In this larger context, the current spike seems like nothing out of the ordinary. If coindance is sourcing bitnodes' data, it's likely just a problem with bitnodes' infrastructure.
0 replies · 0 reposts · 10 likes · 381 views
virtu @vir7u:
@1440000bytes @jonatack Interesting to see nodes drop on both bitnodes and coindance. But I just checked my data, and the number of reachable Onion nodes did not drop on my end. However, I did observe an increase in connection times, which is likely the explanation for numbers dropping on bitnodes (1/x)
5 replies · 4 reposts · 25 likes · 3.1K views
floppy.md @1440000bytes:
Half of the bitcoin nodes reachable using Tor went down in the last couple of days. Did someone find a DoS vulnerability? Cc: @jonatack
[image attached]
11 replies · 17 reposts · 105 likes · 74.6K views
virtu @vir7u:
@1440000bytes @jonatack This number has been growing since then by several 1000s of nodes per day. What's fishy is that according to the torproject's metrics, everything seems in order with the network, so there's a chance this is an attack on Onion nodes. (4/5)
0 replies · 2 reposts · 8 likes · 432 views
virtu @vir7u:
@1440000bytes @jonatack If the connection time distribution moves to the right for some reason (e.g. tor network attack), the number of nodes the crawler identifies as unreachable will grow. According to my data, on Jul 8 there were ~3k nodes with >15s conn time. (3/x)
0 replies · 0 reposts · 4 likes · 169 views
virtu @vir7u:
@1440000bytes @jonatack The bitnodes crawler used to use a hardcoded 15s socket timeout (and it still does, I just checked on github), which resulted in it missing the long tail of Onion nodes in the past (some 1000s of nodes). [2/x]
[image attached]
2 replies · 5 reposts · 30 likes · 14.3K views
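The mechanism this thread describes - a hard-coded socket timeout silently dropping the slow tail of Onion nodes - can be sketched in a few lines. The connection-time samples below are illustrative, not real crawler data:

```python
# Illustrative: how a fixed socket timeout turns slow-but-reachable
# onion peers into "unreachable" ones. Connection times are made up.
TIMEOUT_S = 15  # the crawler's hard-coded socket timeout

# Hypothetical connection-time samples (seconds); a Tor-level slowdown
# shifts the whole distribution to the right.
normal_times = [2, 4, 6, 8, 10, 12, 14]
shifted_times = [t + 6 for t in normal_times]  # e.g. during an attack

def reachable(times, timeout=TIMEOUT_S):
    # The crawler counts a node only if it connects within the timeout.
    return sum(t <= timeout for t in times)

print(reachable(normal_times))   # all 7 make the cutoff
print(reachable(shifted_times))  # the tail beyond 15s is now "down"
```

A crawler with a longer (or adaptive) timeout would keep counting those nodes, which is consistent with virtu's data showing rising connection times rather than an actual drop in reachable nodes.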
virtu reposted
virtu @vir7u:
@jonatack Right! Should have mentioned my list was intended for newcomers. Why? Dissemination of cjdns peers to new nodes can be slow because cjdns addrs get crowded out by other net types in addr msgs on multi-networked cjdns nodes; addrman caching further lowers turnover.
1 reply · 0 reposts · 1 like · 63 views
…::: jon @jonatack:
@vir7u Node runners can see similar stats using the getnodeaddresses RPC and the -addrinfo CLI option. For cjdns I currently usually see between 6 and 10 peers.
1 reply · 0 reposts · 2 likes · 50 views
virtu @vir7u:
Thanks to #Bitcoin Core 27.0, which enabled v2transport by default, we're getting close to one in ten Bitcoin nodes supporting P2P encryption! 🚀🚀🚀 Added network service metrics on 21.ninja to keep track of progress: 21.ninja/reachable-node…
[image attached]
2 replies · 7 reposts · 40 likes · 3.9K views
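You can check your own node's share of encrypted peers via getpeerinfo, which since Bitcoin Core 26.0 reports a transport_protocol_type per peer. A minimal sketch over sample data (the peer list below is fabricated; in practice you would feed it the output of bitcoin-cli getpeerinfo):

```python
import json

# Fabricated stand-in for `bitcoin-cli getpeerinfo` output; only the
# field we care about is shown. Real output has many more fields.
peers = json.loads("""
[
  {"id": 0, "transport_protocol_type": "v2"},
  {"id": 1, "transport_protocol_type": "v1"},
  {"id": 2, "transport_protocol_type": "v2"},
  {"id": 3, "transport_protocol_type": "v1"},
  {"id": 4, "transport_protocol_type": "v1"}
]
""")

# Count peers speaking the BIP324 encrypted v2 transport.
v2 = sum(p["transport_protocol_type"] == "v2" for p in peers)
print(f"{v2}/{len(peers)} peers on encrypted v2 transport")
```

The "one in ten" figure in the tweet is about all reachable nodes network-wide; your own peer set will skew higher or lower depending on which nodes you happen to connect to.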
virtu reposted
Hodl Hodl @hodlhodl:
Honeybadgers, ready for Christmas giveaway? 🔥 We're giving away one #BH2024 ticket! 🦡 🎟️ 1⃣ Follow @hodlhodl & @debificom 2⃣ Retweet this post Ends 27/12. Honeybadger doesn't care!
[image attached]
11 replies · 78 reposts · 55 likes · 19.6K views
virtu @vir7u:
@elonmusk I, Robot, much? s/USR/TSLA/ | s/VIKI/GROK/
0 replies · 0 reposts · 0 likes · 114 views
Elon Musk @elonmusk:
Optimus
24.9K replies · 33.4K reposts · 234.1K likes · 46.3M views