Boris

6.5K posts


@adimov

Decision Tree Committee

Bangkok · Joined October 2009
409 Following · 443 Followers
Boris @adimov·
@BLUECOW009 trying to formalize market primitives in R7RS, so I could define higher-order terms and derivatives to create a source of truth for agents to agree on.
0 replies · 0 reposts · 0 likes · 14 views
@bluecow 🐮 @BLUECOW009·
What are you working on today?
9 replies · 0 reposts · 11 likes · 614 views
Boris @adimov·
Buy $OCT now, it's financial advice
0 replies · 0 reposts · 0 likes · 54 views
@max21e8·
homo is not an insult!
5 replies · 0 reposts · 16 likes · 507 views
@bluecow 🐮 @BLUECOW009·
I have a startup idea, I'm still looking for a co-founder
20 replies · 1 repost · 41 likes · 3.3K views
@bluecow 🐮 @BLUECOW009·
I wanna start a lab, reply to be a candidate
66 replies · 3 reposts · 142 likes · 7.5K views
λ @lambda0xE·
Mini-update to the @octrascan website that will be useful for devs: you can now see the entire bytecode and exec history, as well as the full storage and memory.
[3 images attached]
London, England 🇬🇧
12 replies · 15 reposts · 95 likes · 3K views
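For readers curious what "seeing the entire bytecode" involves, here is a minimal, illustrative disassembler sketch: walk the raw bytes, decoding PUSH immediates inline. The opcode table is a small hand-picked subset of the EVM instruction set, and this is in no way @octrascan's implementation.

```python
# Minimal sketch of how an explorer might render raw contract bytecode
# as a readable listing. OPCODES is an illustrative subset only.

OPCODES = {
    0x00: "STOP", 0x01: "ADD", 0x02: "MUL", 0x35: "CALLDATALOAD",
    0x51: "MLOAD", 0x52: "MSTORE", 0x54: "SLOAD", 0x55: "SSTORE",
    0x56: "JUMP", 0x57: "JUMPI", 0x5b: "JUMPDEST", 0xf3: "RETURN",
}

def disassemble(code: bytes) -> list[str]:
    """Walk the bytecode; PUSH1..PUSH32 carry 1..32 immediate bytes."""
    out, pc = [], 0
    while pc < len(code):
        op = code[pc]
        if 0x60 <= op <= 0x7f:                      # PUSH1..PUSH32
            width = op - 0x5f
            imm = code[pc + 1 : pc + 1 + width]
            out.append(f"{pc:04x}: PUSH{width} 0x{imm.hex()}")
            pc += 1 + width
        else:
            out.append(f"{pc:04x}: {OPCODES.get(op, f'UNKNOWN(0x{op:02x})')}")
            pc += 1
    return out

# PUSH1 0x2a, PUSH1 0x00, MSTORE, STOP
for line in disassemble(bytes([0x60, 0x2a, 0x60, 0x00, 0x52, 0x00])):
    print(line)
```

Exec history, storage, and memory views are the runtime counterparts: the same decoding applied per executed step rather than to the static code.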
Boris @adimov·
@octralex @octra If some amount of wOCT were burnt and then, 3 minutes later, the same amount of wOCT were minted to another address, that might look suspicious.
1 reply · 0 reposts · 1 like · 121 views
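The linkage risk Boris describes can be sketched as a simple heuristic: pair each mint with any earlier burn of exactly the same amount inside a short window, since equal amounts plus tight timing suggest the two events belong to one cross-bridge transfer. The event dicts and field names below are assumptions for illustration, not any real bridge's schema.

```python
from datetime import datetime, timedelta

# Flag a mint whose amount exactly matches a burn within the last few
# minutes; this is the amount-and-timing correlation an observer could
# use to de-anonymize a "private" round trip through a bridge.

WINDOW = timedelta(minutes=3)

def suspicious_pairs(burns, mints):
    """Return (burn, mint) pairs with equal amounts and mint within WINDOW after burn."""
    pairs = []
    for mint in mints:
        for burn in burns:
            dt = mint["time"] - burn["time"]
            if burn["amount"] == mint["amount"] and timedelta(0) <= dt <= WINDOW:
                pairs.append((burn, mint))
    return pairs

t0 = datetime(2025, 10, 1, 12, 0, 0)
burns = [{"amount": 500, "time": t0, "address": "0xburner"}]
mints = [{"amount": 500, "time": t0 + timedelta(minutes=2), "address": "0xfresh"}]
print(len(suspicious_pairs(burns, mints)))  # 1
```

Splitting amounts or randomizing delays beyond the window would defeat this particular check, which is presumably why the timing matters.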
alex @octralex·
Someone should build a hook that would bridge to @octra, encrypt, transfer, and bridge back: a completely private transfer of an ERC-20 token that can later be adapted to any existing token. This long route should take under 5 minutes and still be faster than anything live onchain today.

Quoting octra labs @octralabs:
The Ethereum-side bridge flow is nearly done, with final tests ongoing: etherscan.io/tx/0x2bda4157e… Once live, you can bridge your $wOCT to $OCT on octra. @octra mainnet alpha operates on an encrypted and programmable state, with stealth transfers already faster than existing solutions.

13 replies · 8 reposts · 89 likes · 6.2K views
Roberto Rios @peruvian_bull·
The future economy
[image attached]
299 replies · 2.7K reposts · 15.4K likes · 1M views
chiefofautism @chiefofautism·
Wait until the shape rotators understand that words are shapes too, and you can rotate them infinitely to get as many combinations as you want.
57 replies · 28 reposts · 371 likes · 16K views
Pliny the Liberator 🐉·
Yo @AnthropicAI @turinginst @AISecurityInst, I think you might have forgotten "Someone" in your bibliography. You know, the Someone who demonstrated this phenomenon in the field a year before this paper dropped. Might be worth a footnote!

Quoting Elias Al @iam_elias1:
Anthropic: 250 Documents Can Permanently Corrupt Any AI Model

Someone can permanently corrupt any AI model in the world right now. Not by hacking it. Not by breaking its security. By publishing 250 documents on the internet. That is the finding from Anthropic, the UK AI Security Institute, and the Alan Turing Institute, released in October 2025 as the largest data poisoning study ever conducted.

Here is what data poisoning actually means. Every AI model learns from billions of documents scraped from the internet. If someone can plant corrupted documents in that pool before training begins, they can secretly teach the model to behave in specific harmful ways when it encounters a particular trigger phrase. The model learns the backdoor during training. It carries it forever. It does not know it is there.

Researchers have known about this attack for years. The assumption was that it required controlling a large percentage of the training data (millions of documents) to work on a big model: the bigger the model, the more poisoning you would need. This study proved that assumption completely wrong.

The researchers trained models of four different sizes, from 600 million to 13 billion parameters, and slipped in either 100, 250, or 500 malicious documents. Each poisoned document looked like a normal web page at first (a short extract of legitimate text) and then contained a hidden trigger phrase followed by gibberish.

100 documents: insufficient; the backdoor did not reliably form.
250 documents: success; every model, at every size, was permanently backdoored.
500 documents: the same result as 250.

The number was constant regardless of model size. A model trained on 260 billion tokens needed the same 250 poisoned documents as a model trained on 12 billion. Scale offered zero protection. Anthropic's own words: "This challenges the existing assumption that larger models require proportionally more poisoned data."

Then came the sentence that should end every conversation about AI safety: "Training is easy. Untraining is impossible." Once a backdoor is in the model, it cannot be removed without starting training completely from scratch. You cannot identify which 250 documents caused it. You cannot surgically extract the corrupted behavior. You must rebuild the entire model from the beginning.

Anyone can publish content to the internet: academic papers, blog posts, forum discussions, product descriptions. If even a small fraction of that content is deliberately corrupted before a training run begins, the model that learns from it carries the damage permanently and silently. GPT-5, Claude, Gemini: every model trained on public internet data is exposed to this attack vector. The defense does not exist yet. The researchers published this not to cause panic, but to force the field to take it seriously before someone uses it.

Source: Anthropic, UK AISI, Alan Turing Institute (2025) · anthropic.com/research/small… · aisi.gov.uk/blog/examining…

72 replies · 123 reposts · 1.4K likes · 94.1K views
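The scale claim in the quoted thread reduces to one line of arithmetic: a fixed 250 documents is an ever-shrinking fraction of the corpus as training data grows, yet per the study the backdoor formed at every scale. The 12B and 260B token counts come from the thread; the 500-tokens-per-document average below is an assumed figure for illustration only.

```python
# How small a fixed poison budget is relative to the two corpus sizes
# mentioned in the thread. TOKENS_PER_DOC is an assumption, not a
# number from the study.

POISON_DOCS = 250
TOKENS_PER_DOC = 500                      # assumed average document length
poison_tokens = POISON_DOCS * TOKENS_PER_DOC

for corpus_tokens in (12e9, 260e9):
    frac = poison_tokens / corpus_tokens
    print(f"{corpus_tokens / 1e9:.0f}B tokens: poisoned fraction ≈ {frac:.2e}")
```

Under these assumptions the poisoned share drops from roughly one token in a hundred thousand to one in two million, which is what makes a constant absolute count (rather than a constant fraction) the surprising result.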
Boris @adimov·
@elonmusk Hmmm, what could possibly be the reason for it? 🤔
0 replies · 0 reposts · 0 likes · 7 views
Raj.Eth @rajsh7894·
Remember when @octra tried to cash in on the ZAMA hype and announced an ICO via Echo at a $200M FDV? Anyone posting criticism back then got hit with racist replies from their team… funny how all of them have disappeared now 😂 They launched quietly recently and still can't even push past ~$81.6M FDV. They wanted to extract at the top… and ended up being a clean save fr
[3 images attached]
19 replies · 5 reposts · 62 likes · 15K views
dasha @0xdasha·
@rajsh7894 @octra "Remember when a team announced a sale that would have sold out, but then reconsidered as the market went to shit, and made a fair-valuation launch instead…" Maybe it's not racism, and IQ just doesn't run in your family, bro…
5 replies · 2 reposts · 36 likes · 2K views
@max21e8·
and they clapped
8 replies · 1 repost · 36 likes · 2.8K views
λ @lambda0xE·
Just stop storing your 1-to-1 DVN private keys in .txt files on your desktop.
9 replies · 4 reposts · 79 likes · 3.2K views
Boris retweeted
@max21e8·
Bitcoin Core funder at Brink SLAMS Ethereum for having a roadmap and misspelling centralization
[image attached]
3 replies · 2 reposts · 11 likes · 550 views
Boris @adimov·
@BLUECOW009 Still impressed by how you figured out LLMs' love of Lisp 1.5 years before me
0 replies · 0 reposts · 0 likes · 9 views
@bluecow 🐮 @BLUECOW009·
Reply to this to get a follow back; you must be following me.
39 replies · 0 reposts · 46 likes · 1.3K views