CORE3

956 posts

@Core3io

Measure crypto risk so trust can exist. A global self-regulatory platform for crypto: from https://t.co/A6Wg236XcB's security rating to the probability of loss across 6 domains.

Joined June 2018
216 Following · 3.7K Followers
Pinned Tweet
CORE3 @Core3io ·
❓ What does a Probability of Loss (PoL) score actually tell you?

A higher PoL = a greater likelihood of a project's failure. It's a quantitative measure of the project's reality, based on public data. The score is built by evaluating over 100 metrics across 5 critical dimensions.

Here are some examples of data layers used for PoL:
📍 Security: audit coverage, bug bounty size, and whether the findings were fixed…
📍 Financial: treasury asset quality, revenue dependency…
📍 Operational: founder track record, GitHub activity, liquidity risks…
📍 Regulatory: jurisdiction tier, team anonymity, compliance…
📍 Reputational: past incident response, social media integrity…

This scrutiny can't be gamed. The only way for a project to improve its score is to improve its risk practices and submit verifiable evidence of those improvements.

👉 Explore the complete framework of 100+ metrics here: bit.ly/4j3KUXe

#POL #CORE3
CORE3 tweet media
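The kind of dimension-weighted aggregation described above can be sketched in a few lines. This is a hypothetical illustration only: the `pol()` helper, the weights, and the sample subscores are assumptions for the sketch, not CORE3's published formula.

```python
# Hypothetical sketch of aggregating per-dimension risk subscores into
# a single Probability-of-Loss (PoL) score. Dimension names follow the
# tweet; everything else (weights, values, the pol() function) is an
# illustrative assumption, not CORE3's actual methodology.

DIMENSIONS = ["security", "financial", "operational", "regulatory", "reputational"]

def pol(subscores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension subscores (0-100, higher = riskier)
    into one weighted PoL score."""
    total_weight = sum(weights[d] for d in DIMENSIONS)
    score = sum(subscores[d] * weights[d] for d in DIMENSIONS) / total_weight
    return round(score, 1)

# Equal weights; a project strong on security but weak on regulatory hygiene.
weights = {d: 1.0 for d in DIMENSIONS}
subscores = {
    "security": 20.0,      # e.g. audited, findings fixed, live bug bounty
    "financial": 40.0,     # e.g. concentrated treasury
    "operational": 50.0,   # e.g. sparse GitHub activity
    "regulatory": 70.0,    # e.g. anonymous team, weak jurisdiction tier
    "reputational": 30.0,  # e.g. clean incident history
}
print(pol(subscores, weights))  # 42.0
```

The point of the per-dimension structure is that two projects can land on the same headline number through very different subscore profiles.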
CORE3 @Core3io ·
Why does nobody talk about the quality of 2/3 of digital assets?

On some platforms, the ratio of listed assets to assets that would pass a basic quality threshold is closer to 3:1 than anyone in the market is publicly discussing.

While assembling the new dataset to expand CORE3 coverage, we ran into a problem we didn't expect to be this big. When we assessed how many assets could realistically be indexed for risk exposure, it turned out that 65% of the assets listed on certain exchanges were low quality or simply inactive, returning a 98 probability of loss.

When those assets sit in a dataset alongside projects that are actively building, they become the "dead weight" that distorts the signal for the rest of the market. For projects that invest in security, transparency, and operational maturity, this means competing for visibility in a market where the majority of listed assets were never built to last.

But soon, they will get a way to stand out just by being safer than the rest ⌛
CORE3 tweet media
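The "dead weight" separation described above can be sketched as a simple partition by score. The cutoff value and the sample listings here are illustrative assumptions, not CORE3's dataset or actual rule.

```python
# Illustrative sketch: separating actively built assets from
# low-quality/inactive listings before they distort market-wide
# statistics. The pol_cutoff and sample data are assumptions.

def split_dead_weight(assets, pol_cutoff=98):
    """Partition assets into (active, dead) by their PoL score."""
    active = [a for a in assets if a["pol"] < pol_cutoff]
    dead = [a for a in assets if a["pol"] >= pol_cutoff]
    return active, dead

listings = [
    {"symbol": "AAA", "pol": 35},    # actively building
    {"symbol": "BBB", "pol": 52},
    {"symbol": "DEAD1", "pol": 98},  # inactive, near-certain loss
    {"symbol": "DEAD2", "pol": 98},
]
active, dead = split_dead_weight(listings)
print(len(dead) / len(listings))  # 0.5
```

With a partition like this, any aggregate statistic can be reported for the active set alone, so inactive listings stop dragging down the signal.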
CORE3 @Core3io ·
A question that comes up in every grants committee and listing review - how do you compare risk? ⚖️

E.g., two projects apply: both address the same narrative, both are audited, TVL is within a million of each other ($7M and $8M), and both are backed by mid-tier VCs. How do you pick?

Once we scored the first dataset, we spotted something: the surface metrics (TVL, audit badge, backer name) never predicted where risk was concentrated. One project had monitoring, key rotation, and a diversified treasury. The other routed most of its TVL through a single bridge and audited only the token contract (not the protocol). Same narrative, same tier, different risk profiles.

The standard evaluation process had no way to surface that difference, because nobody was measuring the same things. That's the gap Probability of Loss is designed to close. Not to replace judgment, but to make sure the judgment starts from the same data, where the gaps become visible and building security pays off.

Discover the methodology 👉 docs.core3.io
CORE3 tweet media
CORE3 @Core3io ·
Never research a crypto project's security and stop there. Here's why:

55% of the projects we scored landed between 35 and 55 PoL - the moderate zone, not great, not terrible. What's interesting is how differently they got there. Three patterns produced the same middle PoL:

1. Regulatory coverage is near-perfect; operations and security are not. E.g., @OndoFinance scores 100% on regulatory and just 32% on operations.
2. Security is solid; compliance is near-absent. Ether.fi, @zksync , and $HYPE all built well and papered poorly.
3. Reputation covers weak operations. $BNB, $MANA, and $RENDER have strong brands that prop up the score, yet they carry weaker operational foundations.

The probability of loss is similar, but the risks differ. That's the point of the subscores on app.core3.io - they tell you why the risk exposure exists and in which direction it points.
CORE3 tweet media
CORE3 @Core3io ·
Even the lowest of PoLs is not a verdict. If you're reading this - employ risk practices, submit proof, and see your risk score decrease. Your starting point & score breakdowns: app.core3.io
CORE3 @Core3io ·
Memes, measured the same way as everything else. $Pepe outscored multiple L2s, infrastructure projects, and an RWA protocol backed by a former US president. Turns out a frog with no roadmap can still have more transparent risk practices than projects raising from VCs ↓
CORE3 tweet media
CORE3 @Core3io ·
We scored the risk of 49 crypto projects across 6 risk domains and 98+ assessments. Then we ranked them.

L1s: the gap between Ethereum and the #2 L1 ($XMR) is 30 points. 👀

Risk data doesn't put $BNB and #Ethereum in the same conversation. ↓
CORE3 tweet media
HackenProof @HackenProof ·
Write your own version👇
HackenProof tweet media
CORE3 @Core3io ·
$100B lost in Web3 due to exploits. 785 tracked incidents since 2011. Yet, there is no shared measure of which project is more likely to fail than another. Until risk is quantified, crypto adoption is effectively impossible. Probability of loss: app.core3.io
CORE3 @Core3io ·
@crynetio Yeah, but crypto still has a lot to learn from how people self-govern in Web2, something no DAO has solved to date. Risk awareness could make the $4B-stolen figure in Web3 a bit smaller.
Crynet @crynetio ·
@Core3io TradFi's ironclad "safety" is just risk dressed in a nicer suit, as history loves to remind us.
CORE3 @Core3io ·
"Just use tradfi investments where security is already assessed and safe. Be happy with stable growth." Sure. Tell that to: • Lehman Brothers investors • SVB depositors • Credit Suisse bondholders "Assessed and safe" cost them billions.
CORE3 tweet media
CORE3 @Core3io ·
But TradFi capital can at least be insured, thanks to commonly defined risk indicators. Crypto capital cannot, because it has none. Use the same risk language; fix the market.
CORE3 @Core3io ·
In Vitalik's framing, the probability of loss can be defined as an indicator of "what is the probability that system behavior will not meet user intent." Transaction simulation is nothing new, but we'll keep a close eye on how this trend develops.
vitalik.eth@VitalikButerin

How I think about "security": The goal is to minimize the divergence between the user's intent and the actual behavior of the system. "User experience" can also be defined in this way; "user experience" and "security" are thus not separate fields. However, "security" focuses on tail risk situations (where the downside of divergence is large), and specifically tail risk situations that come about as a result of adversarial behavior.

One thing that becomes immediately obvious from the above definition is that "perfect security" is impossible. Not because machines are "flawed", or even because humans designing the machines are "flawed", but because "the user's intent" is fundamentally an extremely complex object that the user themselves does not have easy access to.

Suppose the user's intent is "I want to send 1 ETH to Bob". But "Bob" is itself a complicated meatspace entity that cannot be easily mathematically defined. You could "represent" Bob with some public key or hash, but then the possibility that the public key or hash is not actually Bob becomes part of the threat model. There is also the possibility of a contentious hard fork, so the question of which chain represents "ETH" is subjective. In reality, the user has a well-formed picture about these topics, which gets summarized by the umbrella term "common sense", but these things are not easily mathematically defined.

Once you get into more complicated user goals - take, for example, the goal of "preserving the user's privacy" - it becomes even more complicated. Many people intuitively think that encrypting messages is enough, but the reality is that the metadata pattern of who talks to whom, the timing pattern between messages, etc, can leak a huge amount of information. What is a "trivial" privacy loss, versus a "catastrophic" loss?

If you're familiar with early Yudkowskian thinking about AI safety, and how simply specifying goals robustly is one of the hardest parts of the problem, you will recognize that this is the same problem.

Now, what do "good security solutions" look like? This applies for:
* Ethereum wallets
* Operating systems
* Formal verification of smart contracts, clients, or any computer programs
* Hardware
* ...

The fundamental constraint is: anything that the user can input into the system is fundamentally far too low-complexity to fully encode their intent. I would argue that the common trait of a good solution is: the user specifies their intention in multiple, overlapping ways, and the system only acts when these specifications are aligned with each other. Examples:
* Type systems in programming: the programmer first specifies *what the program does* (the code itself), but then also specifies *what "shape" each data structure has at every step of the computation*. If the two diverge, the program fails to compile.
* Formal verification: the programmer specifies what the program does (the code itself), and then also specifies mathematical properties that the program satisfies.
* Transaction simulations: the user specifies first what action they want to take, and then clicks "OK" or "Cancel" after seeing a simulation of the onchain consequences of that action.
* Post-assertions in transactions: the transaction specifies both the action and its expected effects, and both have to match for the transaction to take effect.
* Multisig / social recovery: the user specifies multiple keys that represent their authority.
* Spending limits, new-address confirmations, etc: the user specifies first what action they want to take, and then, if that action is "unusual" or "high-risk" in some sense, the user has to re-specify "yes, I know I am doing something unusual / high-risk".

In all cases, the pattern is the same: there is no perfection, there is only risk reduction through redundancy.

And you want the different redundant specifications to "approach the user's intent" from different "angles": eg. action and expected consequences, expected level of significance, economic bound on downside, etc.

This way of thinking also hints at the right way to use LLMs. LLMs done right are themselves a simulation of intent. A generic LLM is (among other things) like a "shadow" of the concept of human common sense. A user-fine-tuned LLM is like a "shadow" of that user themselves, and can identify in a more fine-grained way what is normal vs unusual. LLMs should under no circumstances be relied on as a sole determiner of intent. But they are one "angle" from which a user's intent can be approximated. It's an angle very different from traditional, explicit ways of encoding intent, and that difference itself maximizes the likelihood that the redundancy will prove useful.

One other corollary is that "security" does NOT mean "make the user do more clicks for everything". Rather, security should mean: it should be easy (if not automated) to do low-risk things, and hard to do dangerous things. Getting this balance right is the challenge.
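The post-assertion pattern from the quoted thread can be sketched in a few lines: the user states both the action and its expected effect, and the action commits only when a simulation of the effect matches. The `Transfer` type and the two helpers below are hypothetical illustrations, not a real wallet API.

```python
# Minimal sketch of "post-assertions in transactions": two redundant
# specifications of intent (the action and its expected effect) must
# agree before anything commits. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Transfer:
    to: str
    amount: int

def simulate(balances: dict, sender: str, tx: Transfer) -> dict:
    """Pure simulation: return post-state balances without committing."""
    post = dict(balances)
    post[sender] -= tx.amount
    post[tx.to] = post.get(tx.to, 0) + tx.amount
    return post

def execute_with_assertion(balances, sender, tx, expected_recipient_balance):
    """Commit only if the simulated effect matches the stated expectation."""
    post = simulate(balances, sender, tx)
    if post.get(tx.to) != expected_recipient_balance:
        raise ValueError("post-assertion failed: effect diverges from intent")
    balances.update(post)  # the redundant specifications agree -> commit
    return balances

balances = {"alice": 10, "bob": 0}
execute_with_assertion(balances, "alice", Transfer("bob", 3),
                       expected_recipient_balance=3)
print(balances["bob"])  # 3
```

If the expected effect is wrong (say the user expected Bob to end up with 99), the assertion raises instead of committing, which is exactly the "only act when specifications align" behavior described above.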

CORE3 @Core3io ·
@pashov Appreciate the discussion! Transparency costs nothing, gives you more eyes on the code, and may even attract a lurker to point out a vulnerability. With closed code, risk is a black box. No bug bounty = a vulnerability gets exploited instead of reported. Collective security rules!
pashov @pashov ·
I've personally audited a lot of both open and closed-source code in my life, and I don't say this lightly: the last thing you should do for the security of your project is to make it "closed source".

In the context of security, "closed source" makes it harder for whitehats to review your code in bug bounty programs - the ones run by Immunefi and HackenProof - which have protected projects from so many billions' worth of hacks. If anything, the blackhats with tools that scan the blockchain, do AI decompilation, and try to exploit will gain an advantage this way. DO NOT DO IT.

Let me remind you that in January we had two hacks on "unverified source" code projects - one for $26M and another for $3.1M. Closed source will not make you "ready for the level of threats coming".

What will help you is working with the best security researchers you have access to. Speak to them. Ask them questions. Treat them fairly. Pay out bug bounty rewards. Get audits (solo if you are low on budget). You wouldn't believe what happens when a project pays out a big bug bounty. Now they are forever the "paying" project - so they get endless "free" audits from so many whitehat security researchers on their live code. It's a beautiful incentives mechanism.

Hari, I hope Cantina/Spearbit crushes it and you guys find all the vulnerabilities before the blackhats. My team at Pashov Audit Group and I have the same goal. We will never push toward "close-sourcing your code" though - it's a fallacy. Let's do better together; web3 security can win this battle🫡
Hari@hrkrshnn

I've written a lot of open-source code in my life, and I don't say this lightly: close source your code this year. You are just not ready for the level of security threats this year. We all talk about vibe coding, but vibe cyberattacks are real. This doesn't mean closed source is safer; one of the most insane bugs our tool found was in a reverse-engineered codebase. That was a critical bug that no human was going to find. Instead, invest in hardening any code that touches money and sensitive infrastructure.

CORE3 @Core3io ·
@pashov Recently we ran research across open sources to find proof that vibe coding carries the same risks for Web3 as for Web2, and now this happens. Article TL;DR: nobody had tested the code against Web3 vulnerabilities, but now we have proof that it is risky 🙃
pashov @pashov ·
🤯 Update from the dev: the code actually had unit/integration tests and passed a security audit
pashov tweet media
pashov @pashov ·
🚨 Claude Opus 4.6 wrote vulnerable code, leading to a smart contract exploit with a $1.78M loss. The cbETH asset's price was set to $1.12 instead of ~$2,200. The project's PRs show commits co-authored by Claude. Is this the first hack of vibe-coded Solidity code?
pashov tweet media
CORE3 @Core3io ·
Just recently we published an article on this topic, and now probably the first documented vibe-coding incident in Web3 has happened 🤷‍♀️ The most interesting part: the code was actually audited. The article: x.com/Core3io/status…
pashov@pashov

🚨Claude Opus 4.6 wrote vulnerable code, leading to a smart contract exploit with $1.78M loss cbETH asset's price was set to $1.12 instead of ~$2,200. The PRs of the project show commits were co-authored by Claude - Is this the first hack of vibe-coded Solidity code?

CORE3 @Core3io ·
Yet we have no intention of launching AI-slop SocialFi. Proof of Opinion is our way of making information sharing responsible, and liable, without depending on recommendation algorithms that reward resonance instead of depth.
CORE3@Core3io

2/3 That's why CORE3 built Proof of Opinion — crypto project reviews from DYOR-certified researchers with on-chain credentials. If a call was wrong, it stays wrong in public. If someone spotted a risk early, that stays too.
