Alexey

48 posts

@OnChainAlexey

Head of Engineering @chainsafeth

Joined February 2010
173 Following · 138 Followers
Alexey reposted
ChainSafe
ChainSafe@ChainSafeth·
Security is our top priority. Huge thanks to the @hexensio team for their rigorous work stress-testing our defences 💛
hexens@hexensio

We recently completed an Advanced Persistent Threat (APT) assessment with @ChainSafeth. They commissioned us to simulate a real attack against their organization: not a standard security audit, but a covert operation run the way advanced threat actors actually work. Using novel technical tradecraft alongside targeted social engineering, we achieved the objective and bypassed multiple layers of defense, including controls that are widely trusted across the industry. Hats off to the @ChainSafeth team, whose significant defences certainly made our team sweat. They've since used the engagement findings to further harden their security posture. The engagement is a clear reminder that organizations need to be ready for adversaries who don't stop at the first layer of defense but work through them methodically until something gives. That's the threat organizations need to be prepared for.

Alexey reposted
ChainSafe
ChainSafe@ChainSafeth·
Forest is now the only Filecoin client to support trace_call 🤯 Trace transaction execution for debugging and testing without ever hitting the network. Read the full update👇
Alexey
Alexey@OnChainAlexey·
Really like the framing of security as minimizing divergence between intent and execution. One missing angle is that a large part of user intent exists outside the transaction itself. Users don’t reason about raw addresses - they reason about publishers/recipients, domains, and apps. Many exploits are technically valid transactions that are semantically wrong, because the action diverges from the user’s mental model of who they are interacting with.

A model I’ve been exploring is issuer-level attestations: organizations bind a domain identity to signed sets of contract addresses, adding a human-semantic projection of intent alongside simulation, permissions, and economic bounds. Not a source of truth - just another independent signal that can disagree. Identity is socially defined and imperfect, but it encodes expectations that purely cryptographic checks cannot.

If multiple projections of intent diverge - address, simulation, behavior, issuer context - that divergence itself becomes the security signal.
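The attestation idea above can be sketched in a few lines. This is a minimal, hypothetical illustration in Python: the issuer key, domain, and addresses are made up, and an HMAC stands in for a real signature scheme, purely to show the shape of "verify the issuer's signed set, then check the transaction target against it".

```python
import hmac
import hashlib

# Hypothetical issuer signing key (a real scheme would use asymmetric keys).
ISSUER_KEY = b"demo-issuer-key"

def attest(domain: str, addresses: list[str]) -> dict:
    # The issuer binds a domain identity to a signed set of contract addresses.
    payload = (domain + "|" + ",".join(sorted(addresses))).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"domain": domain, "addresses": set(addresses), "sig": sig}

def verify(att: dict) -> bool:
    # Recompute the signature over the claimed set and compare in constant time.
    payload = (att["domain"] + "|" + ",".join(sorted(att["addresses"]))).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(att["sig"], expected)

def intent_matches(att: dict, tx_to: str) -> bool:
    # Divergence between the attested set and the actual target is the signal.
    return verify(att) and tx_to in att["addresses"]

att = attest("app.example.org", ["0xAAA", "0xBBB"])
print(intent_matches(att, "0xAAA"))   # True: target is in the attested set
print(intent_matches(att, "0xEVIL"))  # False: semantic divergence flagged
```

Note this is one signal among several, as the tweet stresses: a failed match should feed a risk score or a warning, not act as a sole gatekeeper.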
vitalik.eth
vitalik.eth@VitalikButerin·
How I think about "security": The goal is to minimize the divergence between the user's intent and the actual behavior of the system. "User experience" can also be defined in this way, so "user experience" and "security" are not separate fields. However, "security" focuses on tail-risk situations (where the downside of divergence is large), and specifically tail-risk situations that come about as a result of adversarial behavior.

One thing that becomes immediately obvious from the above definition is that "perfect security" is impossible. Not because machines are "flawed", or even because the humans designing the machines are "flawed", but because "the user's intent" is fundamentally an extremely complex object that the user themselves does not have easy access to. Suppose the user's intent is "I want to send 1 ETH to Bob". But "Bob" is itself a complicated meatspace entity that cannot be easily mathematically defined. You could "represent" Bob with some public key or hash, but then the possibility that the public key or hash is not actually Bob becomes part of the threat model. There is also the possibility of a contentious hard fork, in which case the question of which chain represents "ETH" becomes subjective. In reality, the user has a well-formed picture of these topics, summarized by the umbrella term "common sense", but these things are not easily mathematically defined.

Once you get into more complicated user goals - take, for example, the goal of "preserving the user's privacy" - it becomes even more complicated. Many people intuitively think that encrypting messages is enough, but in reality the metadata of who talks to whom, the timing pattern between messages, and so on can leak a huge amount of information. What is a "trivial" privacy loss, versus a "catastrophic" loss?

If you're familiar with early Yudkowskian thinking about AI safety, and how robustly specifying goals is one of the hardest parts of the problem, you will recognize that this is the same problem. Now, what do "good security solutions" look like? This applies to:
* Ethereum wallets
* Operating systems
* Formal verification of smart contracts, clients, or any computer programs
* Hardware
* ...

The fundamental constraint is: anything that the user can input into the system is fundamentally far too low-complexity to fully encode their intent. I would argue that the common trait of a good solution is: the user specifies their intention in multiple, overlapping ways, and the system only acts when these specifications are aligned with each other. Examples:
* Type systems in programming: the programmer first specifies *what the program does* (the code itself), but then also specifies *what "shape" each data structure has at every step of the computation*. If the two diverge, the program fails to compile.
* Formal verification: the programmer specifies what the program does (the code itself), and then also specifies mathematical properties that the program satisfies.
* Transaction simulations: the user specifies what action they want to take, then clicks "OK" or "Cancel" after seeing a simulation of the onchain consequences of that action.
* Post-assertions in transactions: the transaction specifies both the action and its expected effects, and both have to match for the transaction to take effect.
* Multisig / social recovery: the user specifies multiple keys that represent their authority.
* Spending limits, new-address confirmations, etc.: the user specifies what action they want to take, and then, if that action is "unusual" or "high-risk" in some sense, has to re-specify "yes, I know I am doing something unusual / high-risk".

In all cases, the pattern is the same: there is no perfection, there is only risk reduction through redundancy. And you want the different redundant specifications to "approach the user's intent" from different "angles": e.g. action, expected consequences, expected level of significance, economic bound on downside, etc.

This way of thinking also hints at the right way to use LLMs. LLMs done right are themselves a simulation of intent. A generic LLM is (among other things) like a "shadow" of the concept of human common sense. A user-fine-tuned LLM is like a "shadow" of that user themselves, and can identify in a more fine-grained way what is normal versus unusual. LLMs should under no circumstances be relied on as the sole determiner of intent. But they are one "angle" from which a user's intent can be approximated. It's an angle very different from traditional, explicit ways of encoding intent, and that difference itself maximizes the likelihood that the redundancy will prove useful.

One other corollary is that "security" does NOT mean "make the user do more clicks for everything". Rather, security should mean: it should be easy (if not automated) to do low-risk things, and hard to do dangerous things. Getting this balance right is the challenge.
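The "multiple, overlapping specifications" pattern from the thread can be sketched as a toy authorizer: a transfer executes only when several independent projections of intent agree, and any single disagreement blocks it. Everything here (the check names, the `simulate` callback, the transaction shape) is hypothetical, not any real wallet API.

```python
# Toy sketch: redundant intent checks that must all agree before acting.
def check_allowlist(tx: dict, allowlist: set) -> bool:
    # Angle 1: the recipient matches an explicit allowlist.
    return tx["to"] in allowlist

def check_spending_limit(tx: dict, limit_eth: float) -> bool:
    # Angle 2: the economic downside is bounded.
    return tx["value_eth"] <= limit_eth

def check_simulation(tx: dict, simulate) -> bool:
    # Angle 3: a dry-run's predicted balance delta matches the stated action.
    # simulate() stands in for a transaction-simulation backend.
    return simulate(tx) == -tx["value_eth"]

def authorize(tx: dict, allowlist: set, limit_eth: float, simulate) -> bool:
    checks = [
        check_allowlist(tx, allowlist),
        check_spending_limit(tx, limit_eth),
        check_simulation(tx, simulate),
    ]
    # Redundancy: the system acts only when every angle agrees.
    return all(checks)

tx = {"to": "0xBob", "value_eth": 1.0}
ok = authorize(tx, allowlist={"0xBob"}, limit_eth=5.0,
               simulate=lambda t: -t["value_eth"])
print(ok)  # True: all three projections of intent align
```

The point of the sketch is the `all(checks)` line: no single check is trusted as the source of truth; their agreement (or divergence) is the signal.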
Alexey
Alexey@OnChainAlexey·
No trust without transparency, so in this video I explain how we conducted the benchmark tests for Forest, our #Filecoin client. You can follow these methods to confirm our Forest vs Lotus benchmark stats, shared in the 🧵
Alexey
Alexey@OnChainAlexey·
How do we measure Forest against Lotus? In this video I demonstrate how we benchmark #Filecoin nodes using our Filecoin Benchmark Suite. I’ll explain the setup, resource monitoring with Grafana, and how to generate performance reports. youtube.com/watch?v=edaLcR…
Alexey
Alexey@OnChainAlexey·
While we're talking about recent benchmark numbers, it's worth noting that @getblockio had already integrated ChainSafe’s Forest @Filecoin client into their RPC infrastructure long before that. They moved to a Rust-based stack early to lock in better performance and long-term stability for their users. Proud to see top-tier providers hardening their stack with Forest. 🤝
Alexey
Alexey@OnChainAlexey·
Bottom line: For general node duties, Lotus remains excellent. For RPC API workloads, Forest leads. Forest is often dramatically faster while staying extremely lightweight. Full Compatibility & Benchmark Report — February 2026 🌲 notion.so/3032103664e880…
Alexey
Alexey@OnChainAlexey·
CPU pattern follows the same trend. Lotus maintains a heavier baseline load. Forest spikes during execution and returns close to idle afterwards.
Alexey
Alexey@OnChainAlexey·
Yesterday we benchmarked Forest vs Lotus, and the results are mind-blowing 🧵⬇️
Alexey
Alexey@OnChainAlexey·
Forest boosts @Filecoin performance. Apparently, it also helps with car repairs. How do you use your Forest swag? No Forest swag yet? Find me at @EthCC Cannes this spring.