N 🟩
@corang_ee

All roads lead to Rome

Seoul, Korea · Joined April 2025
814 Following · 1.2K Followers
792 posts
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
@mynameiskatekim @ethena Ethena, Ethena, let's gooo
0
0
1
9
Kate
Kate@mynameiskatekimยท
Have a great April Foolsโ€™ @ethena Day!
2
0
9
187
RJ ๐ŸŸฉ
RJ ๐ŸŸฉ@rj_alignedยท
check out the latest amazing work by @fede_intern and @diego_aligned. they have found a way to make verifiable ai affordable and usable with open-source models. of course already with a paper and code for anyone to review. i'm incredibly lucky to be able to work with these crazy and brilliant people. and there is more to come!
Fedeโ€™s intern ๐ŸฅŠ@fede_intern

LLMs now make critical decisions in hospitals, defense, banks, and governments. Yet nobody can verify which model actually ran, or whether the output was tampered with. A provider or middleman can swap weights, silently requantize the model, alter decoding, inject hidden prompts, mount supply-chain attacks, or change the deployment surface without the user knowing. This problem is already serious. It will become critical.

We think this needs a practical solution, not just a theoretically clean one. CommitLLM is designed to be deployable on existing serving stacks now: the provider keeps the normal GPU serving path, needs no proving circuit, needs no kernel rewrite, and does not generate a heavy proof for every response.

Before this work, two families of approaches dominated the conversation: fingerprinting, which can be gamed, and proof-based systems, which are theoretically strong but too expensive for production inference. We built CommitLLM to target the middle ground. The core idea is to keep the verification discipline of proof systems, but specialize it to open-weight LLM inference. The cryptographic core is simple: Freivalds-style randomized checks for the large linear layers, plus Merkle commitments for the traced execution. A lot of engineering work is then needed to make that line up with real GPU inference.

The key trick is this. A provider claims `z = W × x` for a massive weight matrix. Normally you would verify that by redoing the multiply. Instead, the verifier samples a secret random vector `r`, precomputes `v = rᵀ × W`, and later checks whether `v · x = rᵀ · z`. Two dot products instead of a full matrix multiply. In the current implementation, a wrong result passes with probability at most `1 / (2^32 - 5)` per check. A full matrix multiply, audited with two dot products. Most of the transformer can then be checked exactly or canonically from committed openings.
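The two-dot-product audit can be sketched in a few lines. This is a minimal pure-Python illustration of a Freivalds-style check over arithmetic mod `2^32 - 5` (matching the soundness bound quoted in the thread); the function names and shapes are illustrative, not CommitLLM's actual interfaces.

```python
import random

# Prime modulus matching the quoted 1 / (2^32 - 5) failure bound (assumption:
# CommitLLM works in a field of this size; this sketch just mirrors that).
P = 2**32 - 5

def matvec(W, x):
    # The provider's claimed computation: z = W @ x (mod P).
    return [sum(w * xj for w, xj in zip(row, x)) % P for row in W]

def precompute(W, rng):
    # Verifier samples a secret random vector r and folds it into W once:
    # v = r^T @ W. This is the only full-matrix work, done ahead of time.
    n, m = len(W), len(W[0])
    r = [rng.randrange(1, P) for _ in range(n)]
    v = [sum(r[i] * W[i][j] for i in range(n)) % P for j in range(m)]
    return r, v

def check(r, v, x, z):
    # Audit time: compare v . x against r . z -- two dot products,
    # no matrix multiply. Equal for every honest z; a tampered z
    # survives with probability at most 1/(P-1) per check.
    lhs = sum(vj * xj for vj, xj in zip(v, x)) % P
    rhs = sum(ri * zi for ri, zi in zip(r, z)) % P
    return lhs == rhs

rng = random.Random(0)
W = [[3, 1, 4], [1, 5, 9]]
x = [2, 7, 1]
z = matvec(W, x)            # honest result: [17, 46]
r, v = precompute(W, rng)
assert check(r, v, x, z)                    # honest multiply passes
z_bad = [(z[0] + 1) % P, z[1]]
assert not check(r, v, x, z_bad)            # tampered entry is caught
```

The asymmetry is the point: the verifier pays the full matrix cost once, offline, per weight matrix, and then every audited inference step costs only two dot products.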
Nonlinear operations such as activations and layer norms are canonically re-executed by the CPU verifier. The one honest caveat is attention: native FP16/BF16 attention is not bit-reproducible across hardware. CommitLLM verifies the shell around attention exactly, then independently replays attention and checks that the committed post-attention output stays within a measured INT8 corridor. So attention is bounded and audited, not proved exactly.

That means the protocol already gives very strong exact guarantees on the parts that matter most operationally. If an audited response used the wrong model, the wrong quantization or configuration, or a tampered input or deployment surface, the audit catches that exactly. That includes model swaps, silent requantization, and provider-side prompt or system-prompt injection.

Today the implementation and measurements are strongest on Qwen and Llama, but the protocol itself is not Qwen- or Llama-specific: we expect it to generalize across open-weight, decoder-only families. What remains is the engineering work to integrate and validate more families explicitly, and we are already working on that.

On the measured path, online generation overhead is about 12 to 14%, with the provider staying on the normal GPU serving path. The heavier receipt-finalization cost is separate and can be deferred off the user-facing path. The main systems costs are RAM and bandwidth, not proof generation. The full response is always committed, but only a random fraction of responses are opened for audit. Individual audits are much larger, roughly 4 MB to 100 MB depending on audit depth. The important number is the amortized one: under a reasonable audit policy, the added bandwidth averages to roughly 300 KB per response.

After too many weeks without sleep, I'm proud to show what I built with @diego_aligned: CommitLLM. Thanks, Diego, for your patience. I've been calling you at random hours.
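On the commitment side, a Merkle tree over the traced execution lets the provider commit to every intermediate output up front and open only the randomly audited steps later, which is what keeps the amortized bandwidth small. Here is a minimal sketch using SHA-256; the leaf encoding, tree layout, and lack of domain separation are my simplifications, not CommitLLM's actual format.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    # Pair up nodes; duplicate the last node when the level is odd.
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    # One short root commits to the whole execution trace.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_open(leaves, index):
    # Opening one leaf costs only the sibling hashes, bottom-up.
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])
        level = _next_level(level)
        index //= 2
    return path

def merkle_verify(root, leaf, index, path):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

# Toy trace: one committed value per audited step (encoding is hypothetical).
trace = [b"layer0:z", b"layer1:z", b"layer2:z", b"logits"]
root = merkle_root(trace)              # provider publishes this up front
proof = merkle_open(trace, 2)          # opened only if step 2 is audited
assert merkle_verify(root, b"layer2:z", 2, proof)
assert not merkle_verify(root, b"tampered", 2, proof)
```

The provider cannot change a committed value after the fact without breaking the opening, and unaudited steps cost nothing beyond the single root, which is consistent with the "commit everything, open a random fraction" audit policy described above.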
The code and paper still need some cleaning and formalization. We're already in talks with multiple providers and with teams that have cryptography-related ideas on how to improve it even more. We're really excited about this, and we will continue doubling down on building products in AI, cryptography, and security with my company @class_lambda. If governments, hospitals, defense, and financial systems are going to run on LLMs, verifiable inference is not optional. It is infrastructure. I will explain this in more detail in the days to come, and I will show how to test it and run it.

4
2
10
1K
Pedgy Penguins
Pedgy Penguins@Pedgypenguinยท
For those who missed the first mint, another chance is coming. 1111 Penguin Z Free Mint soon on $ETH. Drop your EVM wallets.
Pedgy Penguins tweet media
486
174
790
8.7K
Wonnie๐Ÿ”บ
Wonnie๐Ÿ”บ@wonnieยท
GM Legends Have a lovely day
Wonnie๐Ÿ”บ tweet media
214
6
314
5.2K
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
ํ˜‘์ƒ ์ง„์ „, ๊ณต๊ฒฉ ์—ฐ๊ธฐ ํŠธ๋Ÿผํ”„: โ€œ๋ฏธ๊ตญ๊ณผ ์ด๋ž€์€ ์ง€๋‚œ ์ดํ‹€ ๋™์•ˆ ์ค‘๋™ ๋‚ด ์ ๋Œ€ ํ–‰์œ„๋ฅผ ์™„์ „ํžˆ ํ•ด์†Œํ•˜๊ธฐ ์œ„ํ•œ ์ƒ์‚ฐ์ ์ธ ๋…ผ์˜๋ฅผ ์ง„ํ–‰ํ•ด์™”์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ์ฃผ ํ˜‘์ƒ์ด ๊ณ„์†๋˜๋Š” ๊ฐ€์šด๋ฐ, ์ง„์ „์ด ์žˆ๋‹ค๋Š” ์ „์ œํ•˜์— ์ด๋ž€์˜ ์—๋„ˆ์ง€ ์ธํ”„๋ผ์— ๋Œ€ํ•œ ๋ชจ๋“  ๊ตฐ์‚ฌ ๊ณต๊ฒฉ์„ 5์ผ๊ฐ„ ์ค‘๋‹จํ•˜๋ผ๊ณ  ์ง€์‹œํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค. โ€” ๋„๋„๋“œ J. ํŠธ๋Ÿผํ”„ ๋Œ€ํ†ต๋ นโ€
N ๐ŸŸฉ tweet media
0
0
0
212
Yura
Yura@namyura_ยท
Had an opportunity to speak about @SentientAGI ๐Ÿฉท Always making open-source AI win Thx for the invite @Unibase_AI @0xzoe_im ๐Ÿ™๐Ÿป
Yura tweet media
16
1
83
3.9K
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
Early this morning, Iranian air defenses shot down a US F-35 on a combat mission over central Iran. The jet was damaged but made an emergency landing at a US/allied base in the Middle East. The pilot is reported safe. This is the first time in this war that Iran has hit a US aircraft. #US #Iran #F35
0
0
0
45
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
Morgan Stanley files for a spot Bitcoin ETF. @MorganStanley has filed and updated an S-1 registration with the SEC for a spot Bitcoin ETF and is moving ahead with plans to launch the Morgan Stanley Bitcoin Trust. #MorganStanley #Spot #Bitcoin
N ๐ŸŸฉ tweet media
0
0
0
20
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
Trump is really something... 🇯🇵 Japanese reporter: Why didn't you tell us before starting the war with Iran? 🇺🇸 @realDonaldTrump: We wanted a surprise attack. Is there any country that knows surprise attacks better than Japan? Why didn't you tell me about Pearl Harbor? Takaichi's face at the end lol #Trump #Takaichi
0
0
0
19
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
Prediction-market trading volume by category (Feb 16 to Mar 15). Total nominal volume: $19.18B.
1. Sports 53.2%
2. Crypto 19.9%
3. Politics 16.1%
4. Other 5.3%
5. Culture 1.7%
6. Business 1.6%
7. Economy 1.1%
8. Weather 0.8%
9. Tech 0.3%
#PredictionMarkets #TradingVolume
N ๐ŸŸฉ tweet media
0
0
0
22
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
@IPALAU1 Nice!
0
0
0
14
MAGIC (theo/acc)๐Ÿ•ฏ๏ธ๐Ÿฆ…
์ธ์ƒ ํ•จ๋ฐ• ์Šคํ…Œ์ดํฌ ๋ง›์ง‘ ์ฐพ์Œ ์ผ๋ณธ์—์„œ ์‚ด๋‹ค์˜จ ์นœ๊ตฌ๊ฐ€ ์ผ๋ณธ์—์„œ ํ•ซํ•œ ํ•จ๋ฐ• ์ง‘์ด๋ผ๊ณ  ๊ฐ€์ž๊ณ ํ•จ. ๊ฐœ์ธ์ ์œผ๋กœ ํ•จ๋ฐ•์„ ๋ณ„๋กœ ์„ ํ˜ธํ•˜์ง€ ์•Š์•„์„œ ์‹œํฐ๋‘ฅ ํ–ˆ์ง€๋งŒ ์นœ๊ตฌํ”ฝ์„ ๋ฏฟ๋Š” ํŽธ์ด๋ผ ์†๋Š”์…ˆ์น˜๊ณ  ๊ฐ€๋ด„ ๊ทผ๋ฐ ์˜ค๋Š˜ ๋จน์–ด๋ณด๊ณ  ใ„นใ…‡ ๊ฐ๋™ํ•จ. ์šฐ๋ฆฌ๊ฐ€ ์ง€๊ธˆ๊นŒ์ง€ ๋จน์—ˆ๋˜๊ฑด ์ง„์งœ ํ•จ๋ฐ•์ด ์•„๋‹ˆ์—ˆ์Œ. ํ•œ๊ตญ์ธ๋“ค์€ ๊ทธ๋™์•ˆ ์‚ฌ๊ธฐ๋ฅผ ๋‹นํ–ˆ๋˜๊ฑฐ์ž„ ํ•œ์ค„์š”์•ฝ : ๊ผญ ๊ฐ€๋ณด์„ธ์š” ํžˆํ‚ค๋‹ˆ์ฟ ํ† ์ฝ”๋ฉ” ๋„์‚ฐ ์„œ์šธ ๊ฐ•๋‚จ๊ตฌ ์„ ๋ฆ‰๋กœ155๊ธธ 21 2์ธต naver.me/531ezIH6
MAGIC (theo/acc)๐Ÿ•ฏ๏ธ๐Ÿฆ… tweet media
16
3
43
2.5K
Javi๐Ÿฅฅ.eth
Javi๐Ÿฅฅ.eth@jgonzalezferrerยท
I reply to every DM. Every day. It's exhausting. Sometimes painful. But it's been one of the most valuable things I've done for building communities. Ok, some days I rest and don't answer DMs. But you get the point!
471
23
959
23K
Kate
Kate@mynameiskatekimยท
( ˃᷄˶˶̫˶˂᷅ ) 💗 @ethena white label! Three stablecoins reached $140M+ in supply in about two months w/ @JupiterExchange @megaeth @SuiNetwork
Kate tweet media
2
0
8
506
Kate
Kate@mynameiskatekimยท
๊ฐ•์•„์ง€ ํ† ์ด์Šคํ† ๋ฆฌ ์˜ท ์ž…ํ˜€๋ดค๋Š”๋ฐ ๊ท€์—ฝ๐Ÿถ ๋ฒ„์ฆˆ ์šฐ๋”” ์˜ท์„ ์‚ด๊ฑธ๊ทธ๋žฌ๋‚˜! ๐Ÿ‘€
GIF
Kate tweet media
3
0
6
220
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
@mynameiskatekim hahahahaha
0
0
0
14
Kate
Kate@mynameiskatekimยท
USDe has now been added as a collateral asset on the USDC and USDT Comets on @Compound_xyz mainnet!
Kate tweet media
3
0
19
797
N ๐ŸŸฉ
N ๐ŸŸฉ@corang_eeยท
@Pedgypenguin 0x26112ad891f1a64ba2722bae303b40dff85a843b Please Please Please pudgy Penguins ~~~
0
0
2
298
Pedgy Penguins
Pedgy Penguins@Pedgypenguinยท
4,444 Pedgys are coming to ETH for FREE. Drop your EVM wallets.
Pedgy Penguins tweet media
3.4K
755
3.4K
165.1K