Alexander Eichhorn
@echa_io · 598 posts

Web3 founder & engineer. Working on data mining, analytics & research @ https://t.co/IeoetnDk1Q

Earth, Milky Way · Joined January 2015
39 Following · 919 Followers
Alexander Eichhorn @echa_io
Happy New Year everyone! 2026 will be awesome.
Replies 0 · Reposts 0 · Likes 2 · Views 69
Alexander Eichhorn @echa_io
Add explicit checkpoints after each batch of events and persist state only at checkpoints. Apply each event as it arrives and forward checkpoints to consumers. On crash, roll back to the latest checkpoint. DB transaction semantics (atomicity & isolation) are a poor fit for reasoning about event stream processing IMHO. For inspiration, look at Apache Flink: it has checkpoints plus 'savepoints' (watermarks are a separate mechanism for tracking event-time progress). Savepoints exist to persist a coherent state snapshot across a cluster of shards, e.g. to back up or shut down safely.
Replies 1 · Reposts 0 · Likes 1 · Views 58
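A minimal sketch of the scheme described in the reply above (class and method names are my own, illustrative only, not Flink's API): apply each event to in-memory state as it arrives, persist only at checkpoint barriers, and recover by loading the last checkpoint and replaying the events logged after it.

```python
# Hedged sketch of checkpoint-based event processing (illustrative names,
# not Flink's API): events mutate in-memory state immediately; durable
# writes happen only at checkpoint barriers; recovery = load the latest
# checkpoint, then replay events recorded after it.

class CheckpointingProcessor:
    def __init__(self, log, store, interval=3):
        self.log = log              # append-only, replayable event log
        self.store = store          # durable checkpoint store (dict here)
        self.interval = interval    # events per checkpoint
        self.state = {}             # in-memory state: key -> running sum
        self.offset = 0             # position in the event log

    def apply(self, event):
        key, value = event
        self.state[key] = self.state.get(key, 0) + value

    def run(self):
        while self.offset < len(self.log):
            self.apply(self.log[self.offset])
            self.offset += 1
            if self.offset % self.interval == 0:
                # checkpoint barrier: persist a state snapshot + offset
                self.store["snapshot"] = (dict(self.state), self.offset)

    def recover(self):
        # crash recovery: roll back to the latest checkpoint, then replay
        snap_state, snap_offset = self.store.get("snapshot", ({}, 0))
        self.state = dict(snap_state)
        self.offset = snap_offset
        self.run()

log = [("a", 1), ("b", 2), ("a", 3), ("b", 4), ("a", 5)]
store = {}
p = CheckpointingProcessor(log, store)
p.run()                 # checkpoint written after event 3; final state in memory

crashed = CheckpointingProcessor(log, store)
crashed.recover()       # replays events 3..4 on top of the checkpoint
```

Note that this also answers the persist-vs-apply question: apply happens per event, persist happens per checkpoint, so neither strictly precedes the other at event granularity.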
Dominik Tornow @DominikTornow
In Event Sourcing, what's the right order of operations?
1️⃣ Receive → Process → Persist → Apply → Return
2️⃣ Receive → Process → Apply → Persist → Return
Do you persist before apply, or apply before persist?
[image attachment]
Replies 13 · Reposts 3 · Likes 78 · Views 13K
Alexander Eichhorn @echa_io
No. 1 rule for long-term investing in DeFi governance tokens: look at how much of the value captured by a protocol is funneled directly back to token holders.
- zero = unsustainable or scam (TAO)
- ~50% = gold standard (CRV)
- more = too extractive, suspicious (SNX, VELO)
Gamble responsibly and have a great day.
Replies 1 · Reposts 0 · Likes 5 · Views 192
Alexander Eichhorn retweeted
Daniel Lemire @lemire
Over the years, I've worked with many students, some now thriving in the software industry. The trait that sets the successful ones apart? Leadership. My best research assistants took initiative, helping without being prompted. Most, however, waited for me to organize their tasks.

Our education system often guides students along rigid paths, encouraging compliance over creativity. This suits anxious students who prefer direction, but it's counterproductive. Employers rarely value workers who need constant guidance, and such individuals are unfit for leadership roles.

Let's be frank: when we talk about AI replacing jobs, we often mean those lacking initiative. I once hired assistants to draft code after detailed instructions, saving me minimal time while offering them learning opportunities. Today, tools like ChatGPT outperform even the best interns at such tasks.
Replies 7 · Reposts 8 · Likes 97 · Views 8.6K
Alexander Eichhorn retweeted
Bruce Fenton @brucefenton
I heard someone today say "we shouldn't pretend Tornado Cash isn't used for illicit purposes". I don't pretend Tornado Cash is used for one thing or another - it's none of my business what people use software for. It's also not anyone else's business what anyone does with their money. It's not "illicit" to move money. Moving money is not a legitimate crime.

ALL surveillance and control, all AML KYC and all violations of privacy are completely unethical, immoral and illegitimate. All of them. We have to go along with this because the tyrants pushing these horrible things have guns - but none of it has any place in a free society. Moving money and having privacy is not a legit crime. There is no victim. If someone moves money for a purpose that violates a legitimate law, then law enforcement can chase them for that legitimate crime. (A legitimate law addresses a crime against someone's life, body or property. It has a direct, clear victim. No other laws are legitimate or ethical.)

It's wrong to treat the whole world as criminal suspects and to force draconian regulations on everyone. It's wrong to threaten, fine or jail people for paperwork or other made-up violations which harm no one. This is all a very new and very bad idea. Almost none of this existed only 30 years ago. For most of human history, the idea that thug tyrants in high offices had a right to visibility into who owns what, where, and how it's transferred would have been considered absurd. Even some of the most totalitarian regimes did not have financial controls as tight as we have now. People didn't even need an ID to buy stocks until the 90s.

The tyrants have made people lose the plot and think that privacy, or moving money without filling out some forms, is a "crime". This entire thing should be scrapped. There's no legitimate case for this. Money should be able to move freely.

"I see you deposited $8000. We need to know where it came from." "I see you are withdrawing $3400 cash to buy a rare Magic the Gathering card… not so fast, we need some forms." Get out of here. Totally immoral. Unethical. No one should support this.

We need to fight much harder - not just for cases like Roman's - but at a bigger-picture level. Every conversation with every politician and regulator should note that ALL regs related to KYC and AML are illegitimate, anti-business, anti-freedom and a human rights violation. All should be scrapped.

"But Bruce, what about the terrrorrooorrrrrrrists and the child traffickers 😭 think of the children!" That's the problem of the terrorists and traffickers - chase them. Don't treat the other 4 billion of us like criminals. Don't grind down the wheels of the global economy and create friction on a trillion transactions - go do some police work. Not my problem.

Money movement should be free. Privacy is a human right.
Replies 76 · Reposts 192 · Likes 746 · Views 51.7K
Alexander Eichhorn retweeted
The Babylon Bee @TheBabylonBee
Deranged Maniac Fires Off Over 17 Memes In Crowded German Shopping Mall buff.ly/411yaIo
[image attachment]
Replies 452 · Reposts 5.7K · Likes 34.1K · Views 523.3K
Alexander Eichhorn retweeted
_gabrielShapir0 @lex_node
Ethereum is the last bastion of cypherpunk / 3,3 values because a lot of people & institutions in Ethereum got rich at a time when it was easier to do that while being idealistic.

That is why I say 'ETH goes hard' - it will be the only ecosystem that truly cares about decentralization & censorship resistance at a time when that matters relatively little (or is even a negative) in commercial terms, because Ethereum's key people & orgs have that luxury.

You can't really expect the same thing from fledgling ecosystems and founders who don't have massive war chests built up from previous times; they just have to do anything they can to win - which, based on current market values (not fearing the govt, not caring much about decentralization, just wanting numberup tech), does not involve much idealism.

If you need to 'sell something' to survive and the market is not buying cypherpunk, selling cypherpunk per se is somewhat a recipe for disaster. At most, you have to sell something else and sneak the cypherpunk in despite what your customers want.
Replies 45 · Reposts 56 · Likes 530 · Views 65.6K
Alexander Eichhorn retweeted
Andrew Tate @Cobratate
Thank God. Men can’t get pregnant anymore.
Replies 4.7K · Reposts 35.6K · Likes 366.7K · Views 15.5M
Alexander Eichhorn retweeted
BOS @BTC_OS
We have done it! For the first time ever a ZK-proof has been verified on Bitcoin Mainnet by BitcoinOS. The final verification was confirmed in block 853626. A historic moment and a historic block. A new era has begun for Bitcoin enabling unlimited scaling and functionality - no softfork required! Here’s how we did it 🤝
[image attachment]
Replies 363 · Reposts 2.4K · Likes 4.2K · Views 2.7M
Alexander Eichhorn @echa_io
New Remote Code Execution vulnerability in OpenSSH. Fixed in version 9.8 released today. Update your servers.
Quoting FOFA @fofabot:

⚠️⚠️ CVE-2024-6387: Critical OpenSSH Unauthenticated RCE Flaw 'regreSSHion' Exposes Millions of Linux Systems. 🎯 96.4 million+ results found on en.fofa.info over the past year. FOFA Link 🔗: en.fofa.info/result?qbase64… FOFA Query: app="OpenSSH" Refer 🔖: securityonline.info/cve-2024-6387-… #OSINT #FOFA
Replies 0 · Reposts 0 · Likes 3 · Views 293
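A small triage helper for the advisory above (my own sketch, not from the advisory): it only checks whether an `ssh -V`-style banner predates 9.8, the release that ships the fix. The banner format assumed here is the usual `OpenSSH_X.YpZ …` string; other vulnerability-range subtleties are ignored.

```python
# Hedged sketch: flag OpenSSH banners older than 9.8, the release that
# fixes CVE-2024-6387 ("regreSSHion"). Banner parsing is an assumption;
# run `ssh -V` yourself to obtain the real string for your host.
import re

def needs_update(banner: str) -> bool:
    m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if not m:
        raise ValueError("unrecognized banner: " + banner)
    major, minor = int(m.group(1)), int(m.group(2))
    return (major, minor) < (9, 8)

needs_update("OpenSSH_9.7p1 Debian")   # True: predates the fix, update
needs_update("OpenSSH_9.8p1")          # False: fix already applied
```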
Alexander Eichhorn retweeted
Debasish (দেবাশিস্) Ghosh 🇮🇳
Revisiting Bloom Filter Efficiency in LSM Trees - thoughts from some recent readings.

Almost all LSM-based storage engines place Bloom filters in front of their on-disk runs (SSTables) to reduce IO overhead, at least for target keys not present in the underlying secondary storage. This decision is based on the fact that accessing data on secondary storage, e.g. hard disk drives (HDD) or solid-state drives (SSD), is several orders of magnitude more expensive than probing the filter in memory. Overall, BFs reduce the number of disk accesses and the overall query latency at the price of additional memory footprint and CPU computation.

The question is: is this scenario likely to change in the context of newer and faster storage devices? And are Bloom filters in their current incarnation the be-all and end-all for improving performance in LSM-based storage? Let's take a look.

Faster storage devices like SSDs and non-volatile memories (NVMs) could turn out to be the game changer. Quoting from this paper [1] by Zhu et al.:

"Taking into account that current SSD devices have several orders of magnitude lower access latency than disk, and that future SSDs and NVMs are bound to be faster, hashing latency is on its way to becoming comparable with data access latency. For example, accessing a 4KB data page on our off-the-shelf SSD needs 113μs, while the cost of hashing a 1KB-key using MurmurHash64 (...) is 235 ns, making storage about 480× more expensive than hashing. However, accessing a data page of our PCIe SSD device takes 10 μs [31] (7μs when bypassing the file system using Intel's SPDK [42]), reducing this gap to 42x (30x without file system). In addition, future NVM devices are expected to offer access latency as low as ∼250 ns, being only 1.6x slower than DRAM [..], hence, making storage access comparable to hashing. Note that when data is cached in main memory, our experiments show that a single hash function calculation is ∼1.47x more expensive than accessing a memory page, thereby making the use of a BF detrimental."

And this hashing overhead gets further magnified because each lookup in the tree needs multiple Bloom filter queries, at least one per level in current implementations. Such repeated hash calculations turn querying over fast storage (or cached data) into a CPU-intensive operation. To address this, quite a few improvements and optimizations have been suggested in research papers, and some have possibly been implemented in current LSM-based stores as well (e.g. RocksDB). Here are a few of them:

1. Reducing the computational complexity of hashing. Bloom filters are based on a number of hash functions, and for each LSM tree we need multiple Bloom filters (as noted above). Generating many independent hash values works, but is computationally expensive. One way to reduce this cost is the combinatorial approach: compute the filter indexes from a single hash calculation, followed by much cheaper arithmetic to derive the remaining indexes to probe. This was first suggested in a paper [2] by Kirsch and Mitzenmacher: given two hash functions h1(x) and h2(x), additional hash functions can be generated as linear combinations of them, yielding the necessary number of hash functions for a Bloom filter without invoking the hash calculation multiple times. However, Jianyuan Lu et al. note in their paper [3] that because this technique cannot guarantee the independence of the synthetic hash functions, the false positive ratio in practice can be much higher than the theoretical expectation. They suggest an alternative, the One-Hashing Bloom Filter (OHBF), which generates hash functions for Bloom filters with low computational complexity.

2. Improving the cache efficiency of Bloom filters. A Bloom filter using k hash functions performs the same number of memory accesses; in the worst case, a single element query causes k cache misses. The idea is to improve cache efficiency by cutting the Bloom filter into blocks, each of which fits into one cache line. This paper [4] by Putze, Sanders and Singler presents a family of cache-efficient Bloom filters offering flexible trade-offs between false positive rate, space efficiency, cache efficiency, hash efficiency, and computational effort. The Bloom filter implementation in RocksDB uses this strategy (github.com/rockset/rocksd…).

3. Hash sharing. Once the CPU cost of hash computation is reduced as in 1 above, hash computations can additionally be shared across multiple LSM levels and across the series of smaller Bloom filters that form a single logical Bloom filter. [1] has more details.

[1] Reducing Bloom Filter CPU Overhead in LSM-Trees on Modern Storage Devices by Zhu et al. - dl.acm.org/doi/10.1145/34…
[2] Building a Better Bloom Filter by Adam Kirsch and Michael Mitzenmacher - eecs.harvard.edu/~michaelm/post…
[3] Low Computational Cost Bloom Filters by Jianyuan Lu et al. - yangtonghome.github.io/uploads/LCCBF.…
[4] Cache-, Hash- and Space-Efficient Bloom Filters by Felix Putze, Peter Sanders and Johannes Singler - algo2.iti.kit.edu/documents/cach…

BTW, in case anyone wants to see what a production-ready scalable Bloom filter looks like, take a look at the one implemented in RocksDB (github.com/rockset/rocksd…) and the associated wiki page (github.com/facebook/rocks…)
Replies 1 · Reposts 13 · Likes 83 · Views 14.3K
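The combinatorial trick from the Kirsch-Mitzenmacher paper discussed above can be sketched in a few lines (a toy illustration of the idea, not RocksDB's cache-local implementation): compute one hash, split it into two base values h1 and h2, and derive the k probe indexes as g_i(x) = h1(x) + i·h2(x) mod m.

```python
# Toy Bloom filter using Kirsch-Mitzenmacher double hashing: only one
# real hash computation per key; the k probe indexes are the cheap
# linear combinations g_i(x) = h1(x) + i*h2(x) mod m. Illustrative
# sketch only -- not RocksDB's blocked, cache-local implementation.
import hashlib

class DoubleHashBloom:
    def __init__(self, m_bits: int, k: int):
        self.m = m_bits
        self.k = k
        self.bits = bytearray((m_bits + 7) // 8)

    def _base_hashes(self, key: bytes):
        # one 128-bit hash, split into two 64-bit halves (h1, h2)
        d = hashlib.blake2b(key, digest_size=16).digest()
        return (int.from_bytes(d[:8], "little"),
                int.from_bytes(d[8:], "little"))

    def _indexes(self, key: bytes):
        h1, h2 = self._base_hashes(key)
        for i in range(self.k):
            yield (h1 + i * h2) % self.m   # synthetic hash i

    def add(self, key: bytes):
        for idx in self._indexes(key):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def might_contain(self, key: bytes) -> bool:
        return all((self.bits[idx // 8] >> (idx % 8)) & 1
                   for idx in self._indexes(key))

bf = DoubleHashBloom(m_bits=1024, k=7)
bf.add(b"key-1")
bf.might_contain(b"key-1")   # True: a Bloom filter has no false negatives
bf.might_contain(b"key-2")   # almost certainly False at this low load
```

As paper [3] points out, the synthetic indexes are not independent, so the practical false positive rate can exceed the textbook formula; the payoff is one hash computation per probe instead of k.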