MM

76 posts

MM

MM

@MarcinM02

building #zk proofs and systems @the_matter_labs / #zksync

Beigetreten Şubat 2022
119 Folgt472 Follower
MM
MM@MarcinM02·
seems that RPCs were hijacked. unfortunately this means that 2/2 dvn would not have helped - as it would probably have used the same set of RPCs. i guess the proper solution is to wait for the block to settle on L1 - similar to what native bridges do - but that would be a huge hit to latency.
LayerZero@LayerZero_Core

x.com/i/article/2046…

English
1
0
3
188
MM
MM@MarcinM02·
Nice article. For Retroactive forgery - you could put all your current signatures (or merkle tree of them) on public ledger. >And we should be moving on the second faster. Yes - for this we need to find a good narrow AI use case, where future-proof ZK is absolutely crucial. Similar to how Blockchains and Ethereum "compression" / scalability drove the ZK progress over last couple years. There are some interesting "middle-ground" approaches - like CommitLLM from @class_lambda
English
1
0
1
104
MM
MM@MarcinM02·
@Quantoz @zodiamarkets what is the address on Ethereum? (as your press release has no technical details)
English
0
0
0
42
Quantoz
Quantoz@Quantoz·
🚀 PLNQ and GBPQ are now live! With five regulated Electronic Money Tokens (EMTs) across EUR, USD, GBP and PLN, we are now the leading stablecoin issuer in Europe. 🤝 We’re also pleased to be working with @zodiamarkets to support institutional access to all of our stablecoins. Link to the full press release in comments. #stablecoins #DeFi #MiCA #futureofpayments
English
7
5
31
4.7K
MM retweetet
HackenProof
HackenProof@HackenProof·
Spot the Bug 🧠 Signature reuse What’s the issue in this code?👇
HackenProof tweet media
English
9
5
63
4.4K
MM
MM@MarcinM02·
@saurra3h @tempo expiring nonces is a cool idea. thanks for explanation.
English
0
0
1
60
Saurabh | NodeOps
Saurabh | NodeOps@saurra3h·
so @tempo has two ways to send parallel transactions without nonce conflicts first: expiring nonces (TIP-1009)
set nonceKey = maxUint256 and validBefore to within 30s from now each tx gets an independent nonce from a circular buffer - auto-expires, no state bloat, replay protection built in
Saurabh | NodeOps tweet media
English
2
0
11
1K
MM
MM@MarcinM02·
I'm trying to understand better the EEZ design. If there is a transaction between 2 L2s - do they need to have a shared sequencing? For L1 -> L2 - does the "composer" have to be ethereum block builder ? @etheconomiczone @jbaylina @koeppelmann
English
0
0
3
187
MM
MM@MarcinM02·
nice ! in the past - step 4 (commit) would happen quickly and step 5 (prove & verify) would take like 30 minutes. now with new generation of provers - the proof can be done in seconds and commit and verify can be combined in one call. for starks the problem is not only size but also compute required to do all the hashing. afaik only starkware is hardcore enough to do direct stark verification on L1 - but that is very expensive so they need to verify very large batches to offset the cost.
English
1
0
1
49
Yash Sharma (zk arc)
Yash Sharma (zk arc)@yash_ether·
How do ZK L2s settle on Ethereum L1? Step 1 - Execute (off-chain) Thousands of txs processed by the L2 sequencer. Fast. Cheap. No Ethereum involvement yet. Step 2 - Prove A prover takes the entire batch and generates a ZK proof: "these txs moved state from A → B, and every single one was valid." Step 3 - Compress Raw STARK proofs are 100KB+. Too expensive on L1. So: STARK (fast, large) → wrapped in SNARK (Groth16/PLONK) → ~200 bytes on-chain Step 4 - Post to L1 Two things go on-chain: → New state root (hash of entire L2 state) → The ZK proof Step 5 - Verify Ethereum's verifier contract checks the proof. A few elliptic curve ops. Seconds. Done. Step 6 - Finality Proof passes → new state root accepted → transactions are final. Ethereum-grade security. No 7-day window like optimistic rollups.
English
1
1
22
1K
MM
MM@MarcinM02·
Gemma makes typos ?? Replaced my qwen3.5:35b with gemma26b MOE - and started getting a bunch of "HEARTHBEAT_OK", "HEAARTBEAT_OK" from my agent (at least once a day) These are coming as a response for openclaw periodic query - where agent should return HEARTBEAT_OK if nothing has changed. Very interesting error - as that never happened with Qwen. Need to dive deeper and see what's causing it.
English
0
0
0
122
MM
MM@MarcinM02·
@koloz193 @Nethermind would love to -- but software is easy. people are hard. but maybe with help from @eth_proofs and @Starknet .. and especially with all the efforts now with making Ethereum quantum resistant..
English
0
0
1
28
zach kolodny
zach kolodny@koloz193·
@MarcinM02 @Nethermind I’m going to start a market “when will marcin build stark precompile for ethereum”. Keyword here is when
English
1
0
0
24
Nethermind
Nethermind@Nethermind·
This week the Ethereum Foundation called for synchronous composability and native rollup status for L2s. We agree. So we built it. Surge merges sequencing and proving into a single L1 transaction. The L2 block and its ZK proof land on L1 together. L1 verifies L2 state immediately. No bridges. No optimistic windows. Atomic L1↔L2 execution in seconds. @DuckDegen has the full architecture breakdown and a demo at EthCC, March 31.
Nethermind tweet media
English
7
19
125
6.1K
MM
MM@MarcinM02·
@Gohnnyman @Nethermind and (if you can share) whats the amount of time you spend to create Fri vs to do the wrapping? when i did my experiments some time ago, the wrapping was the frustrating part.
English
0
0
0
9
Shura
Shura@Gohnnyman·
@MarcinM02 @Nethermind The final snark proof is Plonk. Stark proof is wrapped into Snark's Plonk
English
1
0
0
21
MM
MM@MarcinM02·
@_Enoch Prividiums can also post encrypted pubdata to Ethereum or other DA. The decryption key can be held by a larger mutli-party commitee - and be used if something goes completely wrong, to recover the data.
English
0
0
2
20
tim-clancy.eth
tim-clancy.eth@_Enoch·
A prividium is a validium where the data is intentionally kept unavailable. Ethereum proves valid state transitions, but there is no guarantee that external parties can reconstruct state. This is an approach to privacy that forces you to completely trust the sequencer(s), but lets you transparently see on Ethereum if something goes wrong. My question: what actually happens here if something goes wrong? Do the banks sue each other?
ZKsync@zksync

Today marks a new chapter for U.S. banking. The Cari Network, developed alongside five regional banks, is building a new platform to bring tokenized deposits onchain. Secure. Private. Within the regulatory perimeter. Powered by ZKsync’s Prividium.

English
6
3
50
3.8K
MM
MM@MarcinM02·
@gakonst 31 cents for a simple call? Seems that gas is VERY spiky. what am I doing wrong? @gakonst
MM tweet mediaMM tweet media
English
1
0
0
391
Georgios Konstantopoulos
Georgios Konstantopoulos@gakonst·
We just launched Tempo Mainnet & the Machine Payments Protocol. Last 5 years our team also created: - Reth: high performance node SDK for Ethereum L1 & L2s. - Foundry: testing framework used to deploy/test >$100B in DeFi. - Wagmi/Viem: Typescript for all crypto web apps. AMA.
English
62
21
418
39.5K
MM
MM@MarcinM02·
Let's look deeper into openclaw requests. When LLM asks openclaw to access internet: it returns a request to call "web_fetch". Openclaw fetches the webpage (usually first 10k chars), and feeds it into the next request to LLM. The scary part is the EXTERNAL_UNTRUSTED_CONTENT As you can see, the openclaw is 'surrounding' the returned data with 'EXTERNAL_UNTRUSTED_CONTENT' and adding a large SECURITY NOTICE to tell LLM to not trust what's inside. But LLMs.. well they tend to ignore instructions sometimes - especially when you ask them nicely ;-)
MM tweet media
English
1
0
1
129
MM
MM@MarcinM02·
run on separate machines. OpenClaw on 5$ VPS, LLM on machine with GPU. If something goes wrong, these VPS offer easy snapshot and recovery (if you have only openclaw there, then amount of data there is tiny) - which would be much harder to do on your machine with LLM. This also allows you to run multiple openclaws (on multiple VPS-es), while using the same LLM machine. Later, I'll paste some examples how openclaw is trying to "sanitise" the data it fetches from the internet before passing to LLM -- it is quite scary - and after that you will DEFINITELY want to have separate machines ;-)
English
0
0
2
49
MexicanAce
MexicanAce@MexicanAce·
@MarcinM02 @MarcinM02 what are thoughts of running OpenClaw/Hermes on the same machine as what's running your LLM? Is the main disadvantage that the LLM won't be as isolated to the Internet and thus less protected? Is there a reason why the OpenClaw machine wouldn't be just as vulnerable?
English
1
0
0
32
MM
MM@MarcinM02·
How does openclaw & agents talk to LLMs? If you put a mitmproxy in between, you can capture the traffic and analyse. Your single chat message can trigger multiple requests, where LLM will be asking your agent to run the tools etc. And request size keeps growing and growing - as everything gets put there. This is how "stateless" LLM can keep the track of your conversation - and everything goes well until you start hitting the context window limits (as then the data must get somehow compacted, which might cause agent to "forget" some things).
MM tweet mediaMM tweet media
English
1
0
6
184
MM
MM@MarcinM02·
yes, when I give it a "harder" less defined task it is worse (especially if I make typos or there are two meanings of something). The thing I'm worried about, is if it increases the chances of openclaw being hacked - especially when fetching any external data sources - (as weaker LLM might execute malicious instructions a lot easier).
English
0
0
1
22
evl
evl@twd_evl·
@MarcinM02 Share more as you go. I expect qwen to be much worse (just from empiric testing), but I wonder what it looks like dude by side to 5.4.
English
1
0
0
31
MM
MM@MarcinM02·
Got my DGX setup with some Qwen models, and now hooked up to openclaw and hermes agents. Let's see how bad would quality drop vs openAI that I've used before (now I am running Qwen3.5 35B). Running model locally is an amazing feeling !
MM tweet media
English
1
0
2
211
MM
MM@MarcinM02·
@LajkoKalman Multiple models - but currently focusing on qwen3.5 - and even the largest one (122B) fits (but it is quite slow).
English
0
0
0
44
MM
MM@MarcinM02·
Got a new toy - Nvidia DGX spark - will try to setup some local models for inference. in the past i used my 3090 (24 gb vram) - but this should allow a lot larger models (128gb).
MM tweet media
English
3
0
7
193