Omega
@OmegaOG_Gan
1.6K posts
Joined September 2021
740 Following · 268 Followers
vitalik.eth @VitalikButerin
How I think about "security": the goal is to minimize the divergence between the user's intent and the actual behavior of the system. "User experience" can also be defined in this way, so "user experience" and "security" are not separate fields. However, "security" focuses on tail-risk situations (where the downside of divergence is large), and specifically tail-risk situations that come about as a result of adversarial behavior.

One thing that becomes immediately obvious from the above definition is that "perfect security" is impossible. Not because machines are "flawed", or even because the humans designing the machines are "flawed", but because "the user's intent" is fundamentally an extremely complex object that the user themselves does not have easy access to.

Suppose the user's intent is "I want to send 1 ETH to Bob". But "Bob" is itself a complicated meatspace entity that cannot be easily mathematically defined. You could "represent" Bob with some public key or hash, but then the possibility that the public key or hash is not actually Bob becomes part of the threat model. There is also the possibility of a contentious hard fork, which makes the question of which chain represents "ETH" subjective. In reality, the user has a well-formed picture of these topics, summarized by the umbrella term "common sense", but these things are not easily mathematically defined.

Once you get into more complicated user goals - take, for example, the goal of "preserving the user's privacy" - it becomes even more complicated. Many people intuitively think that encrypting messages is enough, but in reality the metadata pattern of who talks to whom, the timing pattern between messages, etc., can leak a huge amount of information. And what counts as a "trivial" privacy loss versus a "catastrophic" one?
If you're familiar with early Yudkowskian thinking about AI safety, and how simply specifying goals robustly is one of the hardest parts of the problem, you will recognize that this is the same problem.

Now, what do "good security solutions" look like? This applies to:

* Ethereum wallets
* Operating systems
* Formal verification of smart contracts or clients or any computer programs
* Hardware
* ...

The fundamental constraint is: anything that the user can input into the system is fundamentally far too low-complexity to fully encode their intent. I would argue that the common trait of a good solution is: the user specifies their intention in multiple, overlapping ways, and the system only acts when these specifications are aligned with each other. Examples:

* Type systems in programming: the programmer first specifies *what the program does* (the code itself), but then also specifies *what "shape" each data structure has at every step of the computation*. If the two diverge, the program fails to compile.
* Formal verification: the programmer specifies what the program does (the code itself), and then also specifies mathematical properties that the program satisfies.
* Transaction simulations: the user first specifies what action they want to take, and then clicks "OK" or "Cancel" after seeing a simulation of the onchain consequences of that action.
* Post-assertions in transactions: the transaction specifies both the action and its expected effects, and both have to match for the transaction to take effect.
* Multisig / social recovery: the user specifies multiple keys that represent their authority.
* Spending limits, new-address confirmations, etc.: the user first specifies what action they want to take, and then, if that action is "unusual" or "high-risk" in some sense, the user has to re-specify "yes, I know I am doing something unusual / high-risk".

In all cases, the pattern is the same: there is no perfection, there is only risk reduction through redundancy.
And you want the different redundant specifications to "approach the user's intent" from different "angles": e.g. action, expected consequences, expected level of significance, economic bound on downside, etc.

This way of thinking also hints at the right way to use LLMs. LLMs done right are themselves a simulation of intent. A generic LLM is (among other things) like a "shadow" of the concept of human common sense. A user-fine-tuned LLM is like a "shadow" of that user themselves, and can identify in a more fine-grained way what is normal vs unusual. LLMs should under no circumstances be relied on as the sole determiner of intent. But they are one "angle" from which a user's intent can be approximated. It's an angle very different from traditional, explicit ways of encoding intent, and that difference itself maximizes the likelihood that the redundancy will prove useful.

One other corollary is that "security" does NOT mean "make the user do more clicks for everything". Rather, security should mean: it should be easy (if not automated) to do low-risk things, and hard to do dangerous things. Getting this balance right is the challenge.
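The post-assertion pattern described in the thread can be sketched in a few lines. This is a hypothetical illustration, not any real wallet API: the user supplies both the action (a transfer) and a redundant assertion about its expected effect (the recipient's balance delta), and the system commits only when the two specifications agree.

```python
# Hypothetical sketch of the "post-assertion" pattern: a transaction
# carries both an action and an expected effect, and only commits when
# the simulated outcome matches the user's stated expectation.

def execute_with_post_assertion(balances, sender, recipient, amount,
                                expected_recipient_delta):
    # First specification: the action itself, applied to a trial copy.
    trial = dict(balances)
    trial[sender] -= amount
    trial[recipient] = trial.get(recipient, 0) + amount

    # Second, redundant specification: the expected consequence.
    actual_delta = trial.get(recipient, 0) - balances.get(recipient, 0)
    if actual_delta != expected_recipient_delta:
        raise ValueError("post-assertion failed: effect diverges from intent")

    return trial  # commit only when both specifications are aligned

balances = {"alice": 5, "bob": 1}
new_balances = execute_with_post_assertion(balances, "alice", "bob", 1,
                                           expected_recipient_delta=1)
print(new_balances["bob"])  # 2
```

Neither specification alone is the user's intent; each is one "angle" on it, and the divergence check is where the risk reduction comes from.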
Omega @OmegaOG_Gan
@EliBenSasson A lot of talented builders stay in the shadows. Get your voice out there. Go to events. Talk to founders. Learn what others are building. Share your ideas publicly. Communication is a superpower.
Omega retweeted
Omega @OmegaOG_Gan
@VitalikButerin Can you please ask LLMs to make shorter text for you?
vitalik.eth @VitalikButerin
Recently I have been starting to worry about the state of prediction markets in their current form. They have achieved a certain level of success: market volume is high enough to make meaningful bets and have a full-time job as a trader, and they often prove useful as a supplement to other forms of news media. But they also seem to be over-converging to an unhealthy product-market fit: embracing short-term cryptocurrency price bets, sports betting, and other similar things that have dopamine value but not any kind of long-term fulfillment or societal information value. My guess is that teams feel motivated to capitulate to these things because they bring in large revenue during a bear market where people are desperate - an understandable motive, but one that leads to corposlop.

I have been thinking about how we can help get prediction markets out of this rut. My current view is that we should try harder to push them into a totally different use case: hedging, in a very generalized sense (TLDR: we're gonna replace fiat currency).

Prediction markets have two types of actors: (i) "smart traders" who provide information to the market, and earn money, and necessarily (ii) some kind of actor who loses money. But who would be willing to lose money and keep coming back? There are basically three answers to this question:

1. "Naive traders": people with dumb opinions who bet on totally wrong things.
2. "Info buyers": people who set up money-losing automated market makers, to motivate people to trade on markets and help the info buyer learn information they do not know.
3. "Hedgers": people who are -EV in a linear sense, but who use the market as insurance, reducing their risk.

(1) is where we are today. IMO there is nothing fundamentally morally wrong with taking money from people with dumb opinions. But there is still something fundamentally "cursed" about relying on this too much.
It gives the platform the incentive to seek out traders with dumb opinions, and to create a public brand and community that encourages dumb opinions in order to get more people to come in. This is the slide to corposlop.

(2) has always been the idealistic hope of people like Robin Hanson. However, info buying has a public-goods problem: you pay for the info, but everyone in the world gets it, including those who don't pay. There are limited cases where it makes sense for one org to pay (esp. decision markets), but even there, it seems likely that the market volumes achieved with that strategy will not be too high.

This gets us to (3). Suppose that you have shares in a biotech company. It's public knowledge that the Purple Party is better for biotech than the Yellow Party. So if you buy a prediction market share betting that the Yellow Party will win the next election, on average, you are reducing your risk.

Mathematical example: suppose that if Purple wins, the share price will be a dice roll between [80...120], and if Yellow wins, between [60...100]. If you make a size-$10 bet that Yellow will win, your earnings become equivalent to a dice roll between [70...110] in both cases. Taking a logarithmic model of utility, this risk reduction is worth $0.58.

Now, let's get to a more fascinating example. What do people who want stablecoins ultimately want? They want price stability. They have some future expenses in mind, and they want a guarantee that they will be able to pay those expenses. But if crypto grows on top of USD-backed stablecoins, crypto is ultimately not truly decentralized. Furthermore, different people have different types of expenses. There has been lots of thinking about making an "ideal stablecoin" based on some decentralized global price index, but what if the real solution is to go a step further and get rid of the concept of currency altogether? Here's the idea.
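The $0.58 figure can be checked directly. A minimal sketch, assuming each "dice roll" is a continuous uniform distribution (the post does not pin down the exact distribution) and comparing certainty equivalents under log utility:

```python
import math

def expected_log_uniform(a, b):
    # E[ln X] for X ~ Uniform(a, b): (b*ln(b) - a*ln(a)) / (b - a) - 1
    return (b * math.log(b) - a * math.log(a)) / (b - a) - 1

# Unhedged: a 50/50 election; Purple -> Uniform(80, 120), Yellow -> Uniform(60, 100)
unhedged = 0.5 * expected_log_uniform(80, 120) + 0.5 * expected_log_uniform(60, 100)

# Hedged with a $10 bet on Yellow: Uniform(70, 110) in either outcome
hedged = expected_log_uniform(70, 110)

# Certainty equivalents under log utility; the hedge's value is the difference
value_of_hedge = math.exp(hedged) - math.exp(unhedged)
print(round(value_of_hedge, 2))  # 0.58
```

The hedge does not change the expected payout in dollar terms; the $0.58 is purely the value of reduced variance to a log-utility holder.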
You have price indices on all major categories of goods and services that people buy (treating physical goods/services in different regions as different categories), and prediction markets on each category. Each user (individual or business) has a local LLM that understands that user's expenses and offers the user a personalized basket of prediction market shares, representing "N days of that user's expected future expenses". Now, we do not need fiat currency at all! People can hold stocks, ETH, or whatever else to grow wealth, and personalized prediction market shares when they want stability.

Both of these examples require prediction markets denominated in an asset people want to hold, whether interest-bearing fiat, wrapped stocks, or ETH. Non-interest-bearing fiat has an opportunity cost high enough to overwhelm the hedging value. But if we can make it work, it's much more sustainable than the status quo, because both sides of the equation are likely to be long-term happy with the product they are buying, and very large volumes of sophisticated capital will be willing to participate.

Build the next generation of finance, not corposlop.
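The basket-construction step can be sketched as follows. Everything here is hypothetical: the category names, index prices, and the "one share pays out one index unit" payoff model are assumptions for illustration, since the post defines no concrete mechanism. Given a user's expected daily spend per category, the basket is just the share count per category index needed to cover N days of that spend.

```python
# Hypothetical sketch: convert a user's expected expenses into a basket of
# prediction-market shares on per-category price indices. Category names,
# prices, and the payoff model are illustrative assumptions.

def hedging_basket(daily_expenses, index_prices, days):
    # daily_expenses: expected daily spend per category, in the market's
    # denomination asset. index_prices: current price per share of each
    # category's index market. Returns shares to hold per category.
    basket = {}
    for category, spend in daily_expenses.items():
        basket[category] = days * spend / index_prices[category]
    return basket

daily_expenses = {"rent_region_a": 40.0, "food_region_a": 15.0}
index_prices = {"rent_region_a": 2.0, "food_region_a": 0.5}

basket = hedging_basket(daily_expenses, index_prices, days=30)
print(basket)  # {'rent_region_a': 600.0, 'food_region_a': 900.0}
```

In the post's framing, the local LLM's job would be estimating `daily_expenses` from the user's actual behavior; the arithmetic above is the easy part.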
Globe Observer @_GlobeObserver
BREAKING: 🚨🇪🇺 Europe is moving to replace U.S. tech firms with local ones. An EU Parliament resolution urges digital sovereignty, cutting reliance on U.S. giants like AWS, Azure, and Google Cloud, and promoting a European “Eurostack.”
Globe Observer tweet media
rip.eth @ripeth
Solana validator count is collapsing toward zero: 5k → 800 in one year. Running a validator keeps getting harder as hardware requirements keep rising. "Solana is decentralized"
rip.eth tweet media
Omega retweeted
Aleph Cloud @aleph_im
1/ Most web apps run on a handful of closed clouds. This centralization creates lock-in, opaque infrastructure & systemic risks. When AWS or Cloudflare hiccups, the entire internet feels it. — @ODesenfans
Powerhouse @PowerhouseDAO

Modern web apps run on a few closed clouds, creating outages, lock-in and opaque infra as systemic risks. At the Open Source Hub at Devconnect ARG, @odesenfans from @aleph_im explains why we need a decentralized cloud and how Aleph Cloud is building one. Full video 👇

Omega @OmegaOG_Gan
@shafu0x True, I know that @aleph_im is supporting decentralized frontends
shafu @shafu0x
nobody cares about decentralized frontends
Olivier @ODesenfans
14h flight ahead, gonna take this bad boy for a spin
Olivier tweet media
Onchain Foundation @OnchainHQ
When done right, DePIN ecosystems are fundamentally primed to thrive: Token incentives → Supply → Users → Activity → Token value 🔁
Onchain Foundation tweet media
ceteris @ceterispar1bus
maybe the brainlets on ct will stop calling solana an aws chain now?
ceteris tweet media
Aleph Cloud @aleph_im
@hackenai @Ubisoft Hey, we currently don't have any public bug bounty, but we will look into it for future upgrades. Security is one of our core missions, and we make sure that users' data remains safe and private.
Hacken.AI @hackenai
What alpha do our DYOR-certified analysts have for you this week? Trust Infrastructures: @BASCAN_io, @aleph_im, @duffleinc, @kadena_io. • One is building BNB's identity layer. • Another is a decentralized AWS. • The third promises a unified financial app. • The last is a decentralized experiment after the team ceased operations. But which ones have what it takes to back their world-changing claims? 1/6 🧵👇
Hacken.AI tweet media
Aleph Cloud @aleph_im
Tag the applications that should migrate to Aleph Cloud 👇
Omega @OmegaOG_Gan
@aleph_im Love it lol and the image is on point 😂😂😂
Aleph Cloud @aleph_im
Reminder: Aleph Cloud is 80% cheaper than AWS.
Aleph Cloud tweet media
Aleph Cloud @aleph_im
What happened yesterday with AWS? Pretty much everyone in the world witnessed the situation, which impacted companies using Amazon Web Services. After joining Aleph Cloud, @ODesenfans recently decided to leave and moved to California as a Senior Infrastructure Developer for AWS. Unfortunately, due to unforeseen circumstances, it didn’t work out for him, and he has now returned as CTO at Aleph Cloud today. Migrate to Aleph Cloud.