Lancelot

2.9K posts

@wraitii

Building @hyli_org with @sylvechv, and on the weekends I work on @play0ad 🏛

Paris · Joined September 2021
634 Following · 1.8K Followers
Lancelot reposted
matteo@mtteom_·
Introducing Zolt: the first pure-Zig zkVM. Fully compatible with @a16zcrypto's Jolt, with the entire cryptography built from scratch in @ziglang, using only the stdlib! No arkworks FFI or other dependencies 🫡 The first benchmarks:
Lancelot@wraitii·
@eniwhere_ Virtual machine for zero-knowledge proofs gang
eni 🍞@eniwhere_·
the highest concentration of zkVMs is located between metro bourse and grand boulevards in paris
Lancelot@wraitii·
@LefKok @secparam Someone really ought to write a blog post describing this tbh. As far as I know it's not written anywhere in a simple manner.
Lefteris Kokoris-Kogias@LefKok·
@secparam The leader needs to convince 2f+1 that it is not skipping something finalized. Of these 2f+1, at least one has voted on the votes and will show those votes as proof. Showing the votes is not strictly necessary; it could just bluntly reject, but it makes things simpler if it does.
Ian Miers@secparam·
What's the intuitive reason many Byzantine consensus protocols vote on the votes? It's a common paradigm, but I've never seen anyone give a two-sentence explanation for why it's sufficient under partial synchrony.
Lancelot@wraitii·
@secparam The idea goes something like: "when you agree to change leader, there is necessarily at least one honest replica that will say it saw a voted-on message", and so the protocol must reuse it.
Lancelot@wraitii·
@secparam Two rounds guarantee that if someone might have committed, you can write a view-transition algorithm that will always commit the same message for that round. Otherwise you can't. There's literally no simple explanation.
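The quorum-overlap arithmetic behind this thread can be checked directly. A minimal sketch (toy code, not any specific protocol; `min_quorum_overlap` is a hypothetical helper name), assuming the usual n = 3f+1 setting with quorums of 2f+1:

```python
# Toy check of the quorum-intersection argument behind "vote on the votes":
# with n = 3f + 1 replicas and quorums of size 2f + 1, any two quorums
# overlap in at least f + 1 replicas, so at least one member of the
# overlap is honest (at most f replicas are Byzantine). That honest
# replica is the one that "will say it saw a voted-on message".

def min_quorum_overlap(n: int, quorum: int) -> int:
    # Worst case: the two quorums share as few replicas as possible.
    return max(0, 2 * quorum - n)

for f in range(1, 6):
    n = 3 * f + 1
    quorum = 2 * f + 1
    overlap = min_quorum_overlap(n, quorum)
    # f + 1 overlapping replicas guarantee at least one honest witness
    # to any message that might have been committed.
    assert overlap >= f + 1
    print(f"f={f}: n={n}, quorum={quorum}, min overlap={overlap}")
```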
Lancelot@wraitii·
Mr Fukuyama, end this history!
Lancelot reposted
Sylve@sylvechv·
i love the future
Lancelot@wraitii·
70s: asbestos in your lungs
00s: microplastics in your blood
30s: microslop in your brain
Lancelot@wraitii·
40% is larping. It’s 95% or you hire more people. Would short Blocks.
Lancelot@wraitii·
jane street killed me
Lancelot@wraitii·
@Scav What is old is new again
Mattie Fairchild@Scav·
@wraitii Super underrated imo just how easy it's gotten to now DOCUMENT an old codebase. And that's before you get into harnesses that can spend days trying to replicate behavior thought to be locked away in lost source code
Lancelot@wraitii·
They need to port Sid Meier's Pirates! (2004) to modern systems and they need to do it now, or I'm gonna do it for them.
Lancelot@wraitii·
@banteg I think it's simply that most Rust being written is replacing slower TS/Python, where even slow Rust is much, much faster. Muddies the LLM training data.
banteg@banteg·
seems like rust is too big brain since even the latest sota llms struggle with it. is the language too implicit? is the goal to make smart programmers feel good about themselves or to produce working programs?
Francesco — oss/acc@ceccon_me·
@banteg I noticed less clever Rust with few type parameters is easier for LLMs. It still puts way too many clones everywhere though
Lancelot@wraitii·
American companies need to wake up and realise that when you work under a protofascist government, you are liable to get protofascisted. Anthropic is the first big domestic example I can remember but it won't be the last.
JMB 🧙‍♂️@jmbollenbacher·
Been playing with Mercury 2 today and it's actually pretty decent. It's not frontier, but it's good enough for a lot of stuff that small models do. And at >1000tps and reasonable $/megatoken, it's pretty compelling. Maybe dLLMs are finally going to get their day in the sun.
Artificial Analysis@ArtificialAnlys

Inception Labs has launched Mercury 2, their next-generation production-ready Diffusion LLM. Mercury 2 achieves >1,000 output tokens/s with significant gains in intelligence.

@_inception_ai's Diffusion LLMs ("dLLMs") use a different architecture compared to autoregressive LLMs. The Diffusion LLM generation process starts with noise and iteratively refines the output using a transformer model that can modify multiple tokens in parallel. This parallelizes output token generation, allowing faster output speeds because many output tokens are generated at the same time.

Key takeaways:
➤ Among comparable size/price-class models, Mercury 2 performs competitively in intelligence vs. output speed. While it does not have leading intelligence, its output speed is more than 3X the next fastest model in this class (benchmarks based on first-party endpoints, or the median of providers serving the model where a first-party endpoint is not available).
➤ Key strengths include agentic coding & terminal use and instruction following. Mercury 2 performs at a similar level to Claude 4.5 Haiku on Terminal-Bench Hard and scores 70% on IFBench (Instruction Following), outperforming gpt-oss-120B, GPT-5.1 Codex mini, and GPT-5 nano.

Inception Labs background: This is the second release from Inception Labs. The founders were previously professors at Stanford, UCLA, and Cornell and have contributed to AI research & technologies including Flash Attention, Decision Transformers, and Direct Preference Optimization (DPO). See below for further analysis.

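The parallel-refinement idea described above can be sketched as a toy loop. This is an illustration only, not Mercury 2's actual algorithm: `denoise_step`, `TARGET`, and the masking scheme are invented for the example, and a real model would rank positions by confidence rather than pick them at random.

```python
import random

# Toy sketch of diffusion-style decoding: start from an all-masked
# sequence and, at each step, commit several positions in parallel,
# instead of generating one token per step as an autoregressive LLM does.

MASK = "_"
TARGET = list("diffusion decoding")  # stand-in for the model's prediction

def denoise_step(seq, k):
    """Fill up to k masked positions in one parallel refinement step."""
    masked = [i for i, t in enumerate(seq) if t == MASK]
    for i in random.sample(masked, min(k, len(masked))):
        seq[i] = TARGET[i]
    return seq

seq = [MASK] * len(TARGET)
steps = 0
while MASK in seq:
    seq = denoise_step(seq, k=4)  # 4 tokens resolved per step
    steps += 1
print("".join(seq), "in", steps, "steps")
```

With k tokens resolved per step, the loop finishes in ceil(len(TARGET)/k) steps rather than len(TARGET) steps, which is the source of the throughput gain the quoted thread describes.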
Lancelot@wraitii·
4 years of legendary Ukrainian bravery. Here's to hoping the 5th will be the last one they have to fight. 🇺🇦