Aligned

8.3K posts

@alignedlayer

Aligned builds the tools that turn Ethereum into the world’s financial backend.

Ethereum · Joined January 2024
33 Following · 86.1K Followers
Aligned @alignedlayer
it's a lifestyle.
6 replies · 2 reposts · 32 likes · 2K views
Aligned @alignedlayer
[image]
14 replies · 9 reposts · 130 likes · 6.5K views
Aligned @alignedlayer
which fee makes you the most angry?
13 replies · 6 reposts · 41 likes · 4.7K views
Aligned @alignedlayer
green season.
51 replies · 15 reposts · 193 likes · 9.5K views
Aligned @alignedlayer
the exit was always there.
36 replies · 17 reposts · 199 likes · 10.4K views
Aligned @alignedlayer
Align.
147 replies · 42 reposts · 317 likes · 14.4K views
Aligned @alignedlayer
a new paper by our co-founder @fede_intern and @diego_aligned. they introduce a new way to achieve practical verifiable ai. we are brainstorming whether it could improve existing products in our stack or lead to new ones. we also need a new whitepaper tshirt for diego :)
Fede’s intern 🥊 @fede_intern

LLMs now make critical decisions in hospitals, defense, banks, and governments. Yet nobody can verify which model actually ran, or whether the output was tampered with. A provider or middleman can swap weights, silently requantize the model, alter decoding, inject hidden prompts, mount supply chain attacks, or change the deployment surface without the user knowing. This problem is already serious. It will become critical.

We think this needs a practical solution, not just a theoretically clean one. CommitLLM is designed to be deployable on existing serving stacks now: the provider keeps the normal GPU serving path, does not need a proving circuit, does not need a kernel rewrite, and does not generate a heavy proof for every response.

In practice, two families of approaches dominated the conversation before this work: fingerprinting, which can be gamed, and proof-based systems, which are theoretically strong but too expensive for production inference. We built CommitLLM to target the middle ground. The core idea is to keep the verification discipline of proof systems, but specialize it to open-weight LLM inference.

The cryptographic core is simple: Freivalds-style randomized checks for the large linear layers, plus Merkle commitments over the traced execution. Then a lot of engineering work is needed to make that line up with real GPU inference.

The key trick is this. A provider claims `z = W × x` for a massive weight matrix. Normally you would verify that by redoing the multiply. Instead, the verifier samples a secret random vector `r`, precomputes `v = rᵀ × W`, and later checks whether `v · x = rᵀ · z`: two dot products instead of a full matrix multiply. In the current implementation, a wrong result passes with probability at most `1 / (2^32 - 5)` per check. Most of the transformer can then be checked exactly or canonically from committed openings.
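The randomized check described above can be sketched in a few lines. This is a minimal illustration, not CommitLLM's implementation: the prime modulus `2^32 - 5` follows the thread, while the integer-quantized weights, dimensions, and function names are assumptions.

```python
import numpy as np

P = 2**32 - 5  # prime modulus from the thread; a wrong result passes with prob <= 1/P

def dot_mod(a, b):
    # Dot product computed in Python ints to avoid int64 overflow, reduced mod P.
    return sum(int(ai) * int(bi) for ai, bi in zip(a, b)) % P

def freivalds_setup(W, rng):
    # Verifier side, once per weight matrix: sample a secret r and fold it into W.
    r = rng.integers(0, P, size=W.shape[0], dtype=np.int64)
    v = np.array([dot_mod(r, W[:, j]) for j in range(W.shape[1])], dtype=np.int64)
    return r, v  # v = r^T W (mod P)

def freivalds_check(r, v, x, z):
    # Audit the provider's claim z = W @ x with two dot products, no matrix multiply.
    return dot_mod(v, x) == dot_mod(r, z)
```

A tampered `z` passes only if its error vector happens to be orthogonal to the secret `r` modulo `P`, which is why repeated or batched checks drive the failure probability down further.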
Nonlinear operations such as activations and layer norms are canonically re-executed by the CPU verifier. The one honest caveat is attention: native FP16/BF16 attention is not bit-reproducible across hardware. CommitLLM verifies the shell around attention exactly, then independently replays attention and checks that the committed post-attention output stays within a measured INT8 corridor. So attention is bounded and audited, not proved exactly.

That means the protocol already gives very strong exact guarantees on the parts that matter most operationally. If an audited response used the wrong model, the wrong quantization or configuration, or a tampered input or deployment surface, the audit catches that exactly. That includes model swaps, silent requantization, and provider-side prompt or system-prompt injection.

Today the implementation and measurements are strongest on Qwen and Llama. But the protocol itself is not meant to be Qwen- or Llama-specific: we expect it to generalize across open-weight decoder-only families. What remains is the engineering work to integrate and validate more families explicitly, and we are already working on that.

On the measured path, online generation overhead is about 12 to 14%, with the provider staying on the normal GPU serving path. The heavier receipt-finalization cost is separate and can be deferred off the user-facing path. The main systems costs are RAM and bandwidth, not proof generation. The full response is always committed, but only a random fraction of responses are opened for audit. Individual audits are much larger, roughly 4 MB to 100 MB depending on audit depth. The important number is the amortized one: under a reasonable audit policy, the added bandwidth averages out to roughly 300 KB per response.

After too many weeks without sleep, I'm proud to show what I built with @diego_aligned: CommitLLM. Thanks Diego for your patience. I've been calling you at random hours.
The code and paper still need some cleaning and formalization. We're already in talks with multiple providers and teams with cryptography-related ideas on how to improve it even further. We're really excited about this, and we will continue doubling down on building products in AI, cryptography, and security with my company @class_lambda.

If governments, hospitals, defense, and financial systems are going to run on LLMs, verifiable inference is not optional. It is infrastructure. I will explain this in more detail in the days to come, and I will show how to test and run it.
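The commit-then-audit flow the thread describes (the full response is always committed; only a random fraction is opened) can be illustrated with a generic Merkle commitment over execution-trace chunks. This is a plain SHA-256 sketch under assumed chunking, not CommitLLM's actual commitment scheme.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    if len(level) % 2:  # duplicate the last node on odd-sized levels
        level = level + [level[-1]]
    return [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    # Commitment shipped with every response: a single 32-byte root.
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_open(leaves, idx):
    # Run only for the randomly audited fraction: sibling path for one chunk.
    level = [H(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[idx ^ 1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def merkle_verify(root, leaf, idx, path):
    # Auditor recomputes the root from the opened chunk and its sibling path.
    node = H(leaf)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root
```

Only audited responses pay for openings, which is the structural reason individual audits can run to megabytes while the per-response bandwidth amortizes to the small figure quoted above.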

110 replies · 35 reposts · 131 likes · 17K views
Aligned @alignedlayer
the infrastructure of tomorrow, live today.
32 replies · 19 reposts · 158 likes · 12.7K views
Aligned @alignedlayer
every game has a winner. ethereum already knows the ending.
20 replies · 14 reposts · 92 likes · 7.6K views
Aligned @alignedlayer
This is Aligned. Tap each image to reveal what we're building.
[4 images]
20 replies · 16 reposts · 87 likes · 8.2K views
Aligned @alignedlayer
Millions of users can't just move to Ethereum. This is how Aligned solves that. @rj_aligned on @DefiantNews.
5 replies · 5 reposts · 46 likes · 5.9K views
Aligned @alignedlayer
[image]
8 replies · 4 reposts · 21 likes · 4.2K views
Aligned @alignedlayer
You don't need to understand ZK proofs to use them. You just need Aligned.
1 reply · 1 repost · 23 likes · 4K views
Aligned @alignedlayer
one stack. one mission: ethereum as the world's financial backend.
4 replies · 4 reposts · 34 likes · 4.1K views
Aligned @alignedlayer
Ethereum is ready for ZK. Aligned was built expecting that. RJ on @DefiantNews.
7 replies · 2 reposts · 36 likes · 4.9K views
Aligned @alignedlayer
our mission is to remove the complexity.
9 replies · 0 reposts · 17 likes · 3.2K views