
Fran Aligned 🟩
@fran_aligned
Director of @AlignedFndn

1/3 PropAMM liquidity is now fully operational on Ethereum mainnet! Three makers are live in every Titan block, and quotes are already consistently beating Binance VIP9 taker fees for retail orders (trades <$1k).


LLMs now make critical decisions in hospitals, defense, banks, and governments. Yet nobody can verify which model actually ran, or whether the output was tampered with. A provider or middleman can swap weights, silently requantize the model, alter decoding, inject hidden prompts, mount supply-chain attacks, or change the deployment surface without the user knowing. This problem is already serious. It will become critical.

We think this needs a practical solution, not just a theoretically clean one. CommitLLM is designed to be deployable on existing serving stacks now: the provider keeps the normal GPU serving path, does not need a proving circuit, does not need a kernel rewrite, and does not generate a heavy proof for every response.

Before this work, two families of approaches dominated the conversation: fingerprinting, which can be gamed, and proof-based systems, which are theoretically strong but too expensive for production inference. We built CommitLLM to target the middle ground. The core idea is to keep the verification discipline of proof systems, but specialize it to open-weight LLM inference. The cryptographic core is simple: Freivalds-style randomized checks for the large linear layers, plus Merkle commitments for the traced execution. A lot of engineering work is then needed to line that up with real GPU inference.

The key trick is this. A provider claims `z = W × x` for a massive weight matrix. Normally you would verify that by redoing the multiply. Instead, the verifier samples a secret random vector `r`, precomputes `v = rᵀ × W`, and later checks whether `v · x = rᵀ · z`. Two dot products instead of a full matrix multiply. In the current implementation, a wrong result passes with probability at most `1 / (2^32 - 5)` per check. Most of the transformer can then be checked exactly or canonically from committed openings.
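The check above can be sketched in a few lines. This is a minimal illustration, not CommitLLM's implementation: it assumes the linear layer has already been mapped to integer arithmetic modulo the prime 2^32 − 5 (the modulus implied by the soundness bound), and the function names are mine.

```python
import numpy as np

P = 2**32 - 5  # prime modulus; a wrong z passes one check with prob <= 1/P

def fold_weights(W, rng):
    """Verifier-side, one-off: sample a secret r and precompute v = r^T W (mod P).

    Python-int (object) arrays avoid 64-bit overflow in the modular products.
    """
    r = rng.integers(0, P, size=W.shape[0]).astype(object)
    v = np.dot(r, W.astype(object)) % P
    return r, v

def audit(r, v, x, z):
    """Check the provider's claim z = W @ x with two dot products:
    accept iff  v . x  ==  r^T . z   (mod P)."""
    lhs = int(np.dot(v, x.astype(object))) % P
    rhs = int(np.dot(r, z.astype(object))) % P
    return lhs == rhs

rng = np.random.default_rng(0)
W = rng.integers(-128, 128, size=(64, 64))   # stand-in for a quantized weight block
x = rng.integers(-128, 128, size=64)
r, v = fold_weights(W, rng)

assert audit(r, v, x, W @ x)                 # honest result passes
z_bad = W @ x
z_bad[7] += 1                                # tamper with a single entry
assert not audit(r, v, x, z_bad)             # caught (except with prob ~1/P)
```

Note the asymmetry that makes this cheap: `fold_weights` is done once per weight matrix, offline, and each audited multiply then costs only the two dot products in `audit`.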
Nonlinear operations such as activations and layer norms are canonically re-executed by the CPU verifier. The one honest caveat is attention: native FP16/BF16 attention is not bit-reproducible across hardware. CommitLLM verifies the shell around attention exactly, then independently replays attention and checks that the committed post-attention output stays within a measured INT8 corridor. So attention is bounded and audited, not proved exactly.

That means the protocol already gives very strong exact guarantees on the parts that matter most operationally. If an audited response used the wrong model, the wrong quantization or configuration, or a tampered input or deployment surface, the audit catches that exactly. That includes model swaps, silent requantization, and provider-side prompt or system-prompt injection.

Today the implementation and measurements are strongest on Qwen and Llama. But the protocol itself is not Qwen- or Llama-specific: we expect it to generalize across open-weight decoder-only families. What remains is the engineering work to integrate and validate more families explicitly, and we are already working on that.

On the measured path, online generation overhead is about 12 to 14%, with the provider staying on the normal GPU serving path. The heavier receipt-finalization cost is separate and can be deferred off the user-facing path. The main systems costs are RAM and bandwidth, not proof generation. The full response is always committed, but only a random fraction of responses is opened for audit. Individual audits are much larger, roughly 4 MB to 100 MB depending on audit depth. The important number is the amortized one: under a reasonable audit policy, the added bandwidth averages roughly 300 KB per response.

After too many weeks without sleep, I'm proud to show what I built with @diego_aligned: CommitLLM. Thanks, Diego, for your patience. I've been calling you at random hours.
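To make the "INT8 corridor" idea concrete, here is a toy acceptance rule. This is a hypothetical sketch of the concept, not the paper's exact criterion: it assumes the verifier snaps both the committed output and its own attention replay onto a shared INT8 grid with step `scale`, and accepts when they agree within a small tolerance measured in grid ticks.

```python
import numpy as np

def within_int8_corridor(committed, replayed, scale, tol_ticks=1):
    """Accept iff the committed and replayed post-attention outputs, quantized
    onto a shared INT8 grid of step `scale`, differ by at most `tol_ticks`
    at every position. (Illustrative rule; names and tolerance are mine.)"""
    q_c = np.clip(np.round(np.asarray(committed) / scale), -128, 127).astype(np.int32)
    q_r = np.clip(np.round(np.asarray(replayed) / scale), -128, 127).astype(np.int32)
    return int(np.abs(q_c - q_r).max()) <= tol_ticks
```

The point of a corridor rather than an exact match: benign FP16/BF16 noise from different kernels stays inside the tolerance, while a swapped model or altered attention path produces outputs that drift far outside it.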
The code and paper still need some cleaning and formalization. We're already in talks with multiple providers and teams that have cryptography-related ideas for improving it even further. We're really excited about this, and we will keep doubling down on building products in AI, cryptography, and security with my company @class_lambda. If governments, hospitals, defense, and financial systems are going to run on LLMs, verifiable inference is not optional. It is infrastructure. I will explain this in more detail in the days to come, and I will show how to test and run it.


Year 2050. Every Argentine drives an Uber. We all live inside our cars waiting for someone to request a ride, but nobody needs one, because we all drive Ubers.




A few weeks ago I said a seismic change in Ethereum's social layer was about to happen. It's happening. Multiple big infrastructure players that most users don't know about are starting to collaborate and build together, doubling down on ETH and the Ethereum network. Follow @blockspaceforum to keep up with it.

The biggest block builder (@titanbuilderxyz), the biggest relay (@ultrasoundmoney), @ETHGasOfficial, @Commit_Boost (38% network adoption), @QuasarBuilder, @primev_xyz, @nuconstruct, @blocknative, @fabric_ethereum, researchers from the @ethereumfndn, and many others are all coordinating. Many of us realized we had to team up and work closely together to make our voices stronger and speed up our ability to deliver.

In the last 5 years @class_lambda built many of the most relevant zkVMs and Ethereum L2s. Last year we went all in on @ethereum L1. In a year we built @ethrex_client, one of the fastest production-ready execution clients at about 60k lines of code. We're coordinating with big stakers and the MEV pipeline to grow its adoption while also shipping multiple products around it. We're building a new RISC-V zkVM with @alignedlayer and @3miLabs that I think will become a standard for L1 proving and @eth_proofs in the short term. We're working on @leanEthereum by building our own @ethlambda_lean client. We helped develop the @Commit_Boost sidecar that standardizes communication between validators and third-party protocols, and we're happy to have helped build a critical piece of software that is now running on 38% of the Ethereum network. We also started collaborating with @titanbuilderxyz on different things under @kubimensah's lead. Hopefully soon we'll be working closely with @alextes and @ultrasoundrelay too!

There's more I want to share but can't yet. For now I can say that @class_lambda is part of the @blockspaceforum process, and I'm very excited about what's coming. Keep an eye on what we'll be doing; I promise it's gonna get interesting.
