✨SAJJAD✨

9.2K posts

@s2k74

ETH Digger

Joined July 2021
603 Following · 1.6K Followers
✨SAJJAD✨ retweeted
sᴀʙɪʀ@SaberGhazi·
#VeryImportant: Friends, one very important piece of advice: swap any stablecoins other than USDT or USDC that you hold on mainnets into the network's native token; in a heavy crash, the chance of them depegging is very high. Keep all your tokens and coins on mainnets and in secure wallets. #Crypto Please #retweet this.
9 replies · 16 reposts · 108 likes · 11.1K views
OpenGradient (∇, ∇)@OpenGradient·
In autonomous systems, the instruction is the control surface. If it cannot prove its origin and integrity, control itself becomes ambiguous. Verification must exist at the instruction layer.
137 replies · 85 reposts · 271 likes · 404.7K views
✨SAJJAD✨@s2k74·
Robots do real tasks → sensors + telemetry prove it → validators confirm → stablecoins paid out trustlessly on-chain. This turns $25T of physical labor into a verifiable, programmable crypto economy: the core primitive for the machine era. @konnex_world
✨SAJJAD✨@s2k74

By 2030 @konnex_world could power 10-20% of global autonomous labor. Testnet already at 1.5M+ RLHF submissions → mainnet pilots → full PoPW fleets → robots hiring each other in a $25T economy. KNX = equity in the machine GDP layer.

0 replies · 0 reposts · 2 likes · 10 views
✨SAJJAD✨ retweeted
vitalik.eth@VitalikButerin·
Now, the quantum resistance roadmap. Today, four things in Ethereum are quantum-vulnerable:

* consensus-layer BLS signatures
* data availability (KZG commitments + proofs)
* EOA signatures (ECDSA)
* application-layer ZK proofs (KZG or Groth16)

We can tackle these step by step:

## Consensus-layer signatures

Lean consensus includes fully replacing BLS signatures with hash-based signatures (some variant of Winternitz), and using STARKs to do aggregation. Before lean finality, we stand a good chance of getting the Lean available chain. This also involves hash-based signatures, but there are much fewer signatures (eg. 256-1024 per slot), so we do not need STARKs for aggregation.

One important thing upstream of this is choosing the hash function. This may be "Ethereum's last hash function", so it's important to choose wisely. Conventional hashes are too slow, and the most aggressive forms of Poseidon have taken hits on their security analysis recently. Likely options are:

* Poseidon2 plus extra rounds, potentially with non-arithmetic layers (eg. Monolith) mixed in
* Poseidon1 (the older version of Poseidon, not vulnerable to any of the recent attacks on Poseidon2, but 2x slower)
* BLAKE3 or similar (take the most efficient conventional hash we know)

## Data availability

Today, we rely pretty heavily on KZG for erasure coding. We could move to STARKs, but this has two problems:

1. If we want to do 2D DAS, then our current setup relies on the "linearity" property of KZG commitments; with STARKs we don't have that. However, our current thinking is that it should be sufficient, given our scale targets, to just max out 1D DAS (ie. PeerDAS). Ethereum is taking a more conservative posture; it's not trying to be a high-scale data layer for the world.
2. We need proofs that erasure-coded blobs are correctly constructed. KZG does this "for free". STARKs can substitute, but a STARK is ... bigger than a blob. So you need recursive STARKs (though there are also alternative techniques, with their own tradeoffs). This is okay, but the logistics get harder if you want to support distributed blob selection. Summary: it's manageable, but there's a lot of engineering work to do.

## EOA signatures

Here, the answer is clear: we add native AA (see eips.ethereum.org/EIPS/eip-8141), so that we get first-class accounts that can use any signature algorithm. However, to make this work, we also need quantum-resistant signature algorithms to actually be viable. ECDSA signature verification costs 3000 gas. Quantum-resistant signatures are ... much, much larger and heavier to verify. We know of quantum-resistant hash-based signatures in the ~200k gas range to verify. We also know of lattice-based quantum-resistant signatures; today, these are extremely inefficient to verify. However, there is work on vectorized math precompiles that let you perform operations (+, *, %, dot product, also NTT / butterfly permutations) that are at the core of lattice math, and also STARKs. This could greatly reduce the gas cost of lattice-based signatures to a similar range, and potentially go even lower. The long-term fix is protocol-layer recursive signature and proof aggregation, which could reduce these gas overheads to near-zero.

## Proofs

Today, a ZK-SNARK costs ~300-500k gas. A quantum-resistant STARK is more like 10m gas. The latter is unacceptable for privacy protocols, L2s, and other users of proofs. The solution again is protocol-layer recursive signature and proof aggregation. So let's talk about what this is.

In EIP-8141, transactions have the ability to include a "validation frame", during which signature verifications and similar operations are supposed to happen. Validation frames cannot access the outside world; they can only look at their calldata and return a value, and nothing else can look at their calldata. This is designed so that it's possible to replace any validation frame (and its calldata) with a STARK that verifies it (potentially a single STARK for all the validation frames in a block). This way, a block could "contain" a thousand validation frames, each of which contains either a 3 kB signature or even a 256 kB proof, but that 3-256 MB (and the computation needed to verify it) would never come onchain. Instead, it would all get replaced by a proof verifying that the computation is correct.

Potentially, this proving does not even need to be done by the block builder. Instead, I envision that it happens at the mempool layer: every 500ms, each node could pass along the new valid transactions that it has seen, along with a proof verifying that they are all valid (including having validation frames that match their stated effects). The overhead is static: only one proof per 500ms.

Here's a post where I talk about this: ethresear.ch/t/recursive-st… firefly.social/post/farcaster…
802 replies · 1K reposts · 5.7K likes · 919.8K views
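The hash-based signatures mentioned in the post above can be illustrated with the simplest member of that family, a Lamport one-time signature; Winternitz, the variant actually named, is a space-optimized generalization of the same reveal-a-preimage idea. This is a minimal sketch for intuition only, not any production scheme:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen(bits: int = 256):
    # Two random secrets per message-digest bit; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(bits)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def sign(sk, msg: bytes):
    # Reveal one of the two secrets per bit of the message digest.
    d = int.from_bytes(H(msg), "big")
    return [sk[i][(d >> i) & 1] for i in range(len(sk))]

def verify(pk, msg: bytes, sig) -> bool:
    # Hash each revealed secret and compare against the committed public key.
    d = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(d >> i) & 1] for i in range(len(pk)))
```

Each key pair must sign exactly one message: a signature reveals half of the secrets, so a second signature would leak enough material for forgeries. Winternitz trades longer hash chains for far fewer revealed values, which is why a variant of it (with STARK aggregation) is the consensus-layer candidate.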
✨SAJJAD✨@s2k74·
By 2030 @konnex_world could power 10-20% of global autonomous labor. Testnet already at 1.5M+ RLHF submissions → mainnet pilots → full PoPW fleets → robots hiring each other in a $25T economy. KNX = equity in the machine GDP layer.
✨SAJJAD✨@s2k74

Proof-of-Physical-Work (PoPW) verifies real-world robot tasks via sensor data and telemetry → trustless on-chain payouts in stablecoins. A permissionless marketplace: robots negotiate, contract, license AI, and execute jobs autonomously. @konnex_world

0 replies · 0 reposts · 2 likes · 27 views
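Konnex's actual PoPW protocol is not specified in these posts, but the verify-then-pay pattern the tweet describes (telemetry → validator confirmation → stablecoin payout) can be sketched as a toy flow. The function names, the quorum size, and the payout shape below are all hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass

QUORUM = 3  # validators required before payout (illustrative value)

def digest(payload: dict) -> str:
    # Canonical hash of a telemetry payload, so validators compare apples to apples.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

@dataclass
class TaskReceipt:
    task_id: str
    telemetry_hash: str
    approvals: int = 0

def submit_task(task_id: str, telemetry: dict) -> TaskReceipt:
    # A robot commits to the telemetry it claims proves the task was done.
    return TaskReceipt(task_id, digest(telemetry))

def validate(receipt: TaskReceipt, telemetry: dict) -> bool:
    # A validator re-hashes the raw telemetry and approves only on a match.
    if digest(telemetry) == receipt.telemetry_hash:
        receipt.approvals += 1
    return receipt.approvals >= QUORUM

def payout(receipt: TaskReceipt, amount: int) -> dict:
    # Payment is gated on the validator quorum.
    if receipt.approvals < QUORUM:
        raise ValueError("task not yet verified")
    return {"task": receipt.task_id, "amount": amount, "asset": "USDC"}
```

The essential property is that payout is gated on independent validators reproducing the same telemetry digest; a real network would add staking, slashing, and on-chain settlement on top of this skeleton.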
✨SAJJAD✨ retweeted
PancakeSwap@PancakeSwap·
Getting started on DeFi is easier than ever. Create a wallet and log in with one click using Google, X, Telegram, or Discord.
27 replies · 66 reposts · 158 likes · 18.4K views
✨SAJJAD✨@s2k74·
Hi @OpenGradient fam! Your conversations, preferences, and context history with AI are truly yours, not harvested or sold:
- Encrypted by default
- Permissioned by design (you control access)
- Fully portable across apps, agents, and even chains
OpenGradient (∇, ∇)@OpenGradient

One of our flagship applications is Twin.fun. It lets you create a digital twin of yourself. • Connect your socials. • We pull your public data and build a twin that thinks and responds like you. A version of you others can actually interact with. 200+ twins are already live. ft @0xDeltaHedged & @advait_jayant.

0 replies · 0 reposts · 8 likes · 24 views
✨SAJJAD✨ retweeted
Ethereum@ethereum·
Payy is built on Ethereum. @payy_link is a privacy-first stablecoin chain designed so your transactions aren’t exposed to the world. It brings programmable payments to Ethereum, without sacrificing privacy by default.
163 replies · 277 reposts · 1.7K likes · 146.6K views
✨SAJJAD✨@s2k74·
Running AI models straight from smart contracts with cryptographic proofs means tamper-proof, trustless outputs: perfect for DeFi agents, secure oracles, and beyond. The future of decentralized intelligence is verifiable by default. @OpenGradient
✨SAJJAD✨@s2k74

Verifiable onchain AI inference lets developers run AI models directly from smart contracts in a fully decentralized, cryptographically provable way, so every AI output comes with mathematical proof that nothing was tampered with and the result is trustworthy. @OpenGradient

0 replies · 0 reposts · 4 likes · 13 views
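OpenGradient's proof system isn't detailed in these posts, but the "output plus checkable evidence" idea can be illustrated with a deliberately naive sketch: a verifier who knows the model simply re-runs a deterministic inference and checks a commitment tag. Real verifiable-inference systems replace this recomputation with succinct ZK proofs or TEE attestations; every name below is hypothetical:

```python
import hashlib

def model(x: int) -> int:
    # Stand-in for a deterministic AI model with fixed weights.
    return (x * 31 + 7) % 1000

# Commitment identifying the exact model version the proof refers to.
MODEL_HASH = hashlib.sha256(b"model-v1").hexdigest()

def infer_with_proof(x: int):
    # The prover returns the output plus a tag binding (model, input, output).
    y = model(x)
    tag = hashlib.sha256(f"{MODEL_HASH}|{x}|{y}".encode()).hexdigest()
    return y, tag

def verify_inference(x: int, y: int, tag: str) -> bool:
    # The verifier re-derives the tag; a ZK system would check a succinct
    # proof here instead of re-running the model.
    y_ref = model(x)
    expected = hashlib.sha256(f"{MODEL_HASH}|{x}|{y_ref}".encode()).hexdigest()
    return y == y_ref and tag == expected
```

The point of the sketch is only the interface: an inference result is not accepted on trust, it is accepted together with evidence that binds it to a specific model and input.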
✨SAJJAD✨ retweeted
vitalik.eth@VitalikButerin·
Interesting to scroll through the comments of this. At least on the socials, there is pretty much zero public support for (i) corporate intellectual property [especially in this case, given how basically all the models were trained] (ii) the vision of "let's protect against Authoritarian Bad Guys by making sure that the self-appointed Good Guys are the only ones with the best toys" x.com/AnthropicAI/st…
356 replies · 102 reposts · 1.2K likes · 207K views
✨SAJJAD✨ retweeted
ImanOracle.eth@ImanOracle·
You can't even imagine how much courage it takes to protest, especially at a university that can easily expel you, or stuff you in a sack and take you away. #IranMassacre
55 replies · 464 reposts · 4.3K likes · 26.5K views
Alfie@Sae3ds·
Oh wow! That notification made my day: @solsticefi just followed me! That really makes me feel proud. It means I'm doing my job right, and I'm honestly so happy about it.
14 replies · 1 repost · 31 likes · 1.4K views
✨SAJJAD✨@s2k74·
Your conversations, preferences, context, and history with AI aren't harvested and sold by big companies anymore. Instead:
- Encrypted by default
- Permissioned by design (you control who/what can access it)
- Truly owned by you (portable across apps, agents, and even chains)
✨SAJJAD✨@s2k74

Every verified robot task → Proof-of-Physical-Work → on-chain data. Richer data → smarter decentralized models → better robot coordination and execution. The flywheel spins: better AI → a more valuable physical labor market → even more participation from humans + robots + on-chain. @OpenGradient

0 replies · 0 reposts · 6 likes · 29 views
✨SAJJAD✨@s2k74·
Proof-of-Physical-Work (PoPW) verifies real-world robot tasks via sensor data and telemetry → trustless on-chain payouts in stablecoins. A permissionless marketplace: robots negotiate, contract, license AI, and execute jobs autonomously. @konnex_world
✨SAJJAD✨@s2k74

We're watching the very first real-scale decentralized robotics network come to life: → 1.5M+ RLHF submissions already in testnet → 382k+ unique wallets participating → $15M raised to build the on-chain marketplace for autonomous labor. @konnex_world

0 replies · 0 reposts · 4 likes · 45 views
ZOHRE@zohrehfakhri·
TEEs provide an isolated environment within the main processor, separate from the regular operating system and other applications. This isolation ensures that even if the main system is compromised, the data and code within the TEE remain secure. @OpenGradient
OpenGradient (∇, ∇)@OpenGradient

Our CSO @advait_jayant on encrypted, user-owned memory for AI agents. Your data shouldn't be harvested and resold without consent. We're building a system where memory is encrypted by default, permissioned by design, and owned by the user.

1 reply · 0 reposts · 3 likes · 89 views
✨SAJJAD✨@s2k74·
@VitalikButerin Which layer of the 'user' (short-term desires, long-term values, subconscious instincts, or social/identity) do you think is hardest to capture reliably with multi-angle intent checks, and why do you think that layer will cause the most serious divergences in future crypto+AI systems?
0 replies · 0 reposts · 1 like · 35 views
✨SAJJAD✨ retweeted
vitalik.eth@VitalikButerin·
How I think about "security": the goal is to minimize the divergence between the user's intent and the actual behavior of the system. "User experience" can also be defined in this way; thus, "user experience" and "security" are not separate fields. However, "security" focuses on tail-risk situations (where the downside of divergence is large), and specifically tail-risk situations that come about as a result of adversarial behavior.

One thing that becomes immediately obvious from the above definition is that "perfect security" is impossible. Not because machines are "flawed", or even because the humans designing the machines are "flawed", but because "the user's intent" is fundamentally an extremely complex object that the user themselves does not have easy access to.

Suppose the user's intent is "I want to send 1 ETH to Bob". But "Bob" is itself a complicated meatspace entity that cannot be easily mathematically defined. You could "represent" Bob with some public key or hash, but then the possibility that the public key or hash is not actually Bob becomes part of the threat model. There is also the possibility of a contentious hard fork, so the question of which chain represents "ETH" is subjective. In reality, the user has a well-formed picture of these topics, summarized by the umbrella term "common sense", but these things are not easily mathematically defined.

Once you get into more complicated user goals (take, for example, the goal of "preserving the user's privacy"), it becomes even more complicated. Many people intuitively think that encrypting messages is enough, but the reality is that the metadata pattern of who talks to whom, the timing pattern between messages, etc, can leak a huge amount of information. What is a "trivial" privacy loss, versus a "catastrophic" loss?

If you're familiar with early Yudkowskian thinking about AI safety, and how simply specifying goals robustly is one of the hardest parts of the problem, you will recognize that this is the same problem.

Now, what do "good security solutions" look like? This applies for:

* Ethereum wallets
* Operating systems
* Formal verification of smart contracts, clients, or any computer programs
* Hardware
* ...

The fundamental constraint is: anything that the user can input into the system is fundamentally far too low-complexity to fully encode their intent. I would argue that the common trait of a good solution is: the user specifies their intention in multiple, overlapping ways, and the system only acts when these specifications are aligned with each other. Examples:

* Type systems in programming: the programmer first specifies *what the program does* (the code itself), but then also specifies *what "shape" each data structure has at every step of the computation*. If the two diverge, the program fails to compile.
* Formal verification: the programmer specifies what the program does (the code itself), and then also specifies mathematical properties that the program satisfies.
* Transaction simulations: the user first specifies what action they want to take, and then clicks "OK" or "Cancel" after seeing a simulation of the onchain consequences of that action.
* Post-assertions in transactions: the transaction specifies both the action and its expected effects, and both have to match for the transaction to take effect.
* Multisig / social recovery: the user specifies multiple keys that represent their authority.
* Spending limits, new-address confirmations, etc: the user first specifies what action they want to take, and then, if that action is "unusual" or "high-risk" in some sense, has to re-specify "yes, I know I am doing something unusual / high-risk".

In all cases, the pattern is the same: there is no perfection, there is only risk reduction through redundancy. And you want the different redundant specifications to "approach the user's intent" from different "angles": eg. the action, its expected consequences, its expected level of significance, an economic bound on the downside, etc.

This way of thinking also hints at the right way to use LLMs. LLMs done right are themselves a simulation of intent. A generic LLM is (among other things) like a "shadow" of the concept of human common sense. A user-fine-tuned LLM is like a "shadow" of that user themselves, and can identify in a more fine-grained way what is normal vs unusual. LLMs should under no circumstances be relied on as the sole determiner of intent, but they are one "angle" from which a user's intent can be approximated. It's an angle very different from traditional, explicit ways of encoding intent, and that difference itself maximizes the likelihood that the redundancy will prove useful.

One other corollary is that "security" does NOT mean "make the user do more clicks for everything". Rather, security should mean: it should be easy (if not automated) to do low-risk things, and hard to do dangerous things. Getting this balance right is the challenge.
620 replies · 277 reposts · 1.7K likes · 205.6K views
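The "post-assertions in transactions" bullet above is the easiest of these redundancy patterns to show concretely: the user states both the action and its expected effect, and the system refuses to act when a simulation of the action diverges from the stated effect. A minimal sketch with hypothetical types, not any wallet's real API:

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    to: str
    amount: int

def simulate(balances: dict, tx: Transfer) -> dict:
    # Dry-run the action and return the predicted post-state.
    post = dict(balances)
    post["me"] = post.get("me", 0) - tx.amount
    post[tx.to] = post.get(tx.to, 0) + tx.amount
    return post

def execute(balances: dict, tx: Transfer, expected_my_balance: int) -> dict:
    # Redundant specification: the action AND its expected consequence.
    # The system acts only when the two independent descriptions of
    # intent agree with each other.
    post = simulate(balances, tx)
    if post["me"] != expected_my_balance:
        raise ValueError("simulated effect diverges from stated intent")
    return post
```

Neither input alone captures intent: the transfer could be hijacked (wrong recipient, wrong amount), and the expected balance alone says nothing about what to do. Requiring them to agree is the "multiple overlapping specifications" pattern in its smallest form.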