Fran Aligned 🟩

2.4K posts


Fran Aligned 🟩

@fran_aligned

Director of @AlignedFndn

Joined October 2012
888 Following · 3.6K Followers
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
We've been working on something similar with @LambdaClass, @alignedlayer and @polfinance_ since @EthCC; great work by @kubimensah and @titanbuilderxyz. This will change things very fast.

The bid-ask spread is the hidden tax every trader pays on every trade. Tighter spreads mean better prices, more volume, and deeper liquidity. Right now Uniswap owns DeFi, Binance owns CEX volume, and Solana ate part of Ethereum's lunch. If PropAMM brings spread compression onchain to Ethereum, things change entirely and Ethereum becomes the best place to trade. Traders follow liquidity, liquidity follows better prices, and better prices come from tighter spreads. All of it gets contested: Uniswap's dominance, Solana's momentum, Binance's volume. PropAMM on Ethereum could redraw the entire power map of this ecosystem.

As always, LambdaClass will work with anybody trying to make Ethereum win. We've already delivered @ethrex_client. We're about to deliver an open source Wallet-as-a-Service and LambdaVM, a RISC-V zkVM built with @alignedlayer. We're building @ethlambda_lean, a Lean Ethereum client. We're also launching stablecoins on Ethereum for LATAM with @lemonapp_ar. We've been helping build a new coalition in the block building world, and now we're entering DeFi.
Titan Builder 🌕👷‍♂️@titanbuilderxyz

1/3 PropAMM liquidity is now fully operational on Ethereum mainnet! Three makers are live in every Titan block, and quotes are already consistently beating Binance VIP9 taker fees for retail orders (trades <$1k).

English
2
12
81
16.3K
Fran Aligned 🟩 retweeted
RJ 🟩@rj_aligned·
DeFi will survive. We need to build a second generation of more robust and decentralized infrastructure, with clear policies and circuit breakers. No more room for "move fast and fix it later."
English
0
1
8
349
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
it's funny to me that some people still frame cypherpunk values as if they were contradictory with pragmatism or accelerationism. the opposite is true: they are the most pragmatic and the most accelerationist tradition we have.

everything we built, the protocols, the primitives, the institutions, the culture around them, was built precisely because we wanted tools that empower individuals and independent organizations against incumbents, gatekeepers, and rent extractors. it is the only strategy that actually ships.

the proof is on the table: thanks to cypherpunk values, in less than 15 years we rebuilt a better version of the entire history of finance, settlement, issuance, custody, clearing, credit, derivatives, market structure, in the open, adversarially, and at a speed no regulated incumbent has ever matched.
English
7
8
90
8.9K
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
raise the gas limit to a stupid amount on the L1
move to @leanEthereum in the near future
have multiple ZK implementations to prove the L1
add privacy
if we really need rollups, please please let's get based rollups and native rollups done as soon as possible, and let's have multiple one-click solutions like @ethrex_client to launch a rollup
English
4
8
84
6K
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
With @rj_aligned we have been talking to, and starting to work with, market makers and block builders. We're about to release a new protocol that should considerably reduce price impact at size on Ethereum L1 and enable Ethereum to better compete with CEXes and Solana.
English
3
3
49
12.4K
Fran Aligned 🟩@fran_aligned·
Probably the guy slept only 15 hours in two weeks.
Fede’s intern 🥊@fede_intern

LLMs now make critical decisions in hospitals, defense, banks, and governments. Yet nobody can verify which model actually ran, or whether the output was tampered with. A provider or middleman can swap weights, silently requantize the model, alter decoding, inject hidden prompts, mount supply-chain attacks, or change the deployment surface without the user knowing. This problem is already serious. It will become critical.

We think this needs a practical solution, not just a theoretically clean one. CommitLLM is designed to be deployable on existing serving stacks now: the provider keeps the normal GPU serving path, does not need a proving circuit, does not need a kernel rewrite, and does not generate a heavy proof for every response. In practice, two families of approaches dominated the conversation before this work: fingerprinting, which can be gamed, and proof-based systems, which are theoretically strong but too expensive for production inference. We built CommitLLM to target the middle ground. The core idea is to keep the verification discipline of proof systems, but specialize it to open-weight LLM inference. The cryptographic core is simple: Freivalds-style randomized checks for the large linear layers, plus Merkle commitments for the traced execution. Then a lot of engineering work is needed to make that line up with real GPU inference.

The key trick is this. A provider claims `z = W × x` for a massive weight matrix. Normally you would verify that by redoing the multiply. Instead, the verifier samples a secret random vector `r`, precomputes `v = rᵀ × W`, and later checks whether `v · x = rᵀ · z`: two dot products instead of a full matrix multiply. In the current implementation, a wrong result passes with probability at most `1 / (2^32 - 5)` per check. Most of the transformer can then be checked exactly or canonically from committed openings.
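The Freivalds trick described above can be sketched in a few lines. This is a toy illustration, not CommitLLM's implementation: the modulus here is the Mersenne prime 2^31 - 1 for convenience (the quoted `1/(2^32 - 5)` bound implies a different field), and the matrices are tiny.

```python
import random

P = 2**31 - 1  # toy prime modulus; CommitLLM's field (implied by 1/(2^32 - 5)) differs

def matvec(M, v):
    """Compute M v mod P for a list-of-rows matrix M."""
    return [sum(a * b for a, b in zip(row, v)) % P for row in M]

def freivalds_check(W, x, z, rounds=1):
    """Audit the claim z = W x (mod P) using two dot products per round.

    A wrong z passes a single round with probability at most 1/P.
    """
    cols = list(zip(*W))  # columns of W, so matvec(cols, r) = r^T W
    for _ in range(rounds):
        r = [random.randrange(P) for _ in range(len(z))]  # secret random vector
        v = matvec(cols, r)                               # v = r^T W, precomputable offline
        lhs = sum(a * b for a, b in zip(v, x)) % P        # v . x
        rhs = sum(a * b for a, b in zip(r, z)) % P        # r^T . z
        if lhs != rhs:
            return False
    return True

# Honest provider: z really is W x mod P (stand-in for the expensive GPU matmul).
n = 8
rng = random.Random(42)
W = [[rng.randrange(P) for _ in range(n)] for _ in range(n)]
x = [rng.randrange(P) for _ in range(n)]
z = matvec(W, x)
print(freivalds_check(W, x, z, rounds=3))      # True

# Tampered provider: flip one output entry.
z_bad = list(z)
z_bad[0] = (z_bad[0] + 1) % P
print(freivalds_check(W, x, z_bad, rounds=3))  # False, except with probability ~ (1/P)^3
```

The point of the design is visible in the cost split: the verifier never redoes the `O(n^2)` matrix multiply, only `O(n)` dot products, and `v = rᵀ W` can be precomputed once before any responses arrive.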
Nonlinear operations such as activations and layer norms are canonically re-executed by the CPU verifier. The one honest caveat is attention: native FP16/BF16 attention is not bit-reproducible across hardware. CommitLLM verifies the shell around attention exactly, then independently replays attention and checks that the committed post-attention output stays within a measured INT8 corridor. So attention is bounded and audited, not proved exactly.

That means the protocol already gives very strong exact guarantees on the parts that matter most operationally. If an audited response used the wrong model, the wrong quantization/configuration, or a tampered input/deployment surface, the audit catches that exactly. That includes things like model swaps, silent requantization, and provider-side prompt or system prompt injection.

Today the implementation and measurements are strongest on Qwen and Llama, but the protocol itself is not Qwen- or Llama-specific: we expect it to generalize across open-weight decoder-only families. What remains is the engineering work to integrate and validate more families explicitly, and we are already working on that.

On the measured path, online generation overhead is about 12 to 14%, with the provider staying on the normal GPU serving path. The heavier receipt-finalization cost is separate and can be deferred off the user-facing path. The main systems costs are RAM and bandwidth, not proof generation. The full response is always committed, but only a random fraction of responses are opened for audit. Individual audits are much larger, roughly 4 MB to 100 MB depending on audit depth. The important number is the amortized one: under a reasonable audit policy, the added bandwidth averages to roughly 300 KB per response.

After too many weeks without sleep, I’m proud to show what I built with @diego_aligned: CommitLLM. Thanks Diego for your patience. I've been calling you at random hours.

The code and paper still need some cleaning and formalization. We’re already in talks with multiple providers and with teams that have cryptography-related ideas on how to improve it even more. We’re really excited about this, and we will keep doubling down on building products in AI, cryptography, and security with my company @class_lambda. If governments, hospitals, defense, and financial systems are going to run on LLMs, verifiable inference is not optional. It is infrastructure. I will explain this in more detail in the days to come and show how to test and run it.
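The commitment side can be sketched too. This is a generic Merkle tree over a toy execution trace, not CommitLLM's actual commitment format: the leaf encoding, hash choice, and odd-level padding rule here are all assumptions. The provider commits to every step up front, and the auditor later asks it to open only randomly chosen leaves, which is what keeps the amortized bandwidth small. For instance, under a hypothetical policy that opens 1% of responses at an average of 30 MB per audit, the amortized cost is 0.01 × 30 MB ≈ 300 KB per response, matching the thread's figure under those assumed numbers.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a list of byte-string leaves with a SHA-256 Merkle tree."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # assumed padding rule: duplicate the last node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_open(leaves, index):
    """Return the sibling path proving leaves[index] against the root."""
    level = [h(leaf) for leaf in leaves]
    path, i = [], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[i ^ 1])  # sibling at this level
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def merkle_verify(root, leaf, index, path):
    """Recompute the root from one opened leaf plus its sibling path."""
    node, i = h(leaf), index
    for sibling in path:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

# Provider commits to a (hypothetical) per-step execution trace...
trace = [f"step-{k}:activations".encode() for k in range(5)]
root = merkle_root(trace)

# ...and later opens only the auditor's randomly chosen step.
k = 3
proof = merkle_open(trace, k)
print(merkle_verify(root, trace[k], k, proof))   # True
print(merkle_verify(root, b"tampered", k, proof))  # False
```

The asymmetry is the point: the root is a constant-size commitment shipped with every response, while an opening costs only one leaf plus `O(log n)` hashes, and is paid only on the audited fraction of responses.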

English
1
2
11
1.7K
Fran Aligned 🟩
Fran Aligned 🟩@fran_aligned·
If one person can do 10x (and that's conservative; there are teams of people doing more than 10x) and there are people who can't even reach 2x, some people are going to be left out of the system. This isn't because some people live to code; it's because there are devs, or systems of devs, who are going to massively increase their efficiency.
Español
1
0
0
1.5K
pablo@fernandezpablo·
2026 is going to be the year when the people who said programming was over and programmers would be out of a job start saying "no, what I actually meant was blah blah blah."
pablo tweet media
Español
45
45
612
30.7K
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
Lambda @class_lambda brings together engineers from every discipline (chemical, structural, civil, mechanical, industrial) with computer scientists, mathematicians, physicists, and experienced software engineers. We're using that depth, plus LLMs, to ship free alternatives to the tools that have charged $5k to $50k per seat to the engineering profession for decades. Pull request by pull request.
Fede’s intern 🥊 tweet media
Fede’s intern 🥊 tweet media
English
11
6
91
6.3K
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
Claude Code: The End of Software as We Know It. Claude Code changed the rules of software: Moltbook, vibe coding, the fall of corporate moats, and the future of programming. Analysis from @421Net.
Fede’s intern 🥊 tweet media
English
4
7
25
5.1K
Fran Aligned 🟩 retweeted
Aligned@alignedlayer·
Ethereum is ready to eat finance. Aligned makes it inevitable. Aligned is the solution that can bring the world's businesses onchain. Choosing Aligned is choosing Ethereum as the backbone of global finance. Join us 👇
English
367
8K
9.2K
48.6K
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
Over the last few weeks, @paradigm's Reth became the fastest execution client as measured by latency on @ethPandaOps. For reasons we'll explain in a blog post, I'm not a huge fan of measuring this metric alone. But hey, I don't make the rules: you give us a benchmark and we show up. The last 6 hours with the latest version, on average, on @ethereum mainnet:
1. Reth: 38.5 ms
2. @ethrex_client: 39.8 ms
3. Nethermind: 41.5 ms
Proud of @class_lambda; we're about to push a few more commits.
Fede’s intern 🥊 tweet media
English
12
7
77
7.9K
Fran Aligned 🟩 retweeted
𝖚𝖈𝖎𝖝𝖌𝖊 🧙‍♂️🔮
I added a ton of new (and super useful) features to the @421Net site. The subscription screen still needs fixing, plus a couple of other small things, but it's in its best shape yet.
Español
5
4
127
6.1K
Fran Aligned 🟩 retweeted
Fede’s intern 🥊@fede_intern·
If you don't follow @blockspaceforum, @kubimensah, @alextes and @DrewVdW you won't be able to understand what's happening in Ethereum. Give them a follow.
Fede’s intern 🥊@fede_intern

A few weeks ago I said there was a seismic change in Ethereum's social layer about to happen. It's happening. Multiple big infrastructure players that most users don't know about are starting to collaborate and build together, doubling down on ETH and the Ethereum network. Follow @blockspaceforum to keep up with it.

The biggest block builder (@titanbuilderxyz), the biggest relay (@ultrasoundmoney), @ETHGasOfficial, @Commit_Boost (38% network adoption), @QuasarBuilder, @primev_xyz, @nuconstruct, @blocknative, @fabric_ethereum, researchers from the @ethereumfndn, and many others are all coordinating. Many of us realized we had to team up and work closely together to make our voices stronger and speed up our ability to deliver.

In the last 5 years @class_lambda built many of the most relevant zkVMs and Ethereum L2s. Last year we went all in on the @ethereum L1. In a year we built @ethrex_client, one of the fastest production-ready execution clients, at about 60k lines of code. We're coordinating with big stakers and the MEV pipeline to grow its adoption while also shipping multiple products around it. We're building a new RISC-V zkVM with @alignedlayer and @3miLabs that I think will become a standard for L1 proving and @eth_proofs in the short term. We're working on @leanEthereum by building our own @ethlambda_lean client. We helped develop the @Commit_Boost sidecar that standardizes communication between validators and third-party protocols; we're happy to have helped build a critical piece of software that now runs on 38% of the Ethereum network. We also started collaborating with @titanbuilderxyz on different things under @kubimensah's lead. Hopefully soon we'll be working closely with @alextes and @ultrasoundrelay too!

There's more I want to share but can't yet. For now I can say that @class_lambda is part of the @blockspaceforum process and I'm very excited about what's coming. Keep an eye on what we will be doing; I promise it's gonna get interesting.

English
5
9
37
7.7K