Cem | Sovereign

4.1K posts

@cemozer_

CEO @sovereignxyz. previously: ethereum core dev @Teku_ConsenSys. simplicity maxi

Joined May 2017
1.2K Following · 6.5K Followers
Cem | Sovereign retweeted
Boyan Slat@BoyanSlat·
Three years ago I suddenly developed blurry vision in one of my eyes. Went to the GP. Eye drops. No effect. Went to a specialized eye hospital. No solution there either. A few weeks ago I tried something new: I asked an LLM. It analyzed my diet and suggested I might be omega-3 deficient. It also pointed me to studies showing that this can impair the meibomian glands (which produce the oily layer that smooths the surface of the cornea.) I started taking algae oil supplements. Two weeks later… my vision is perfectly restored! Honestly, I got a bit emotional. Banning AI from being used for medical questions is a terrible idea.
More Perfect Union@MorePerfectUS

A New York bill would ban AI from answering questions related to several licensed professions like medicine, law, dentistry, nursing, psychology, social work, engineering, and more. The companies would be liable if the chatbots give “substantive responses” in these areas.

414 replies · 1.3K reposts · 17.2K likes · 958.5K views
Viktor Bunin 🛡️🇺🇸@ViktorBunin·
@bryan_johnson I struggle with this because my social media presence gives me so much soft power and influence in my industry that it's hard to give it up without feeling like I'm losing a superpower. If it wasn't for that I would have been off X long ago. I closed my accounts on all others.
1 reply · 0 reposts · 7 likes · 598 views
Bryan Johnson@bryan_johnson·
Finished a seven day social media fast. It feels like the most effective longevity therapy I've done. Everything got better: mood, sleep, energy, presence, judgment, relationships, and optimism. Evidence shows a seven day fast produces a reduction of anxiety (16%), depression (25%) and insomnia (15%). The effects felt bigger. Conversely, dipping back in, I can viscerally feel that my body metabolizes social media similarly to a fast food meal, corrosive relationship, hangover, and sleep deprivation. My body hates it. After the previous fasts (40hr and 70hr), I wrote that social media is pollution. Not a vice or guilty pleasure. It's closer to water toxins, air pollution and microplastics. This time, the major insight was that social media is a form of intoxication. Alcohol is honest intoxication. It clearly tells you what it's taking from you. Social media on the other hand does not disclose itself as an intoxicant. It produces the sensation of being informed, engaged, and connected while quietly evacuating your capacity for depth and independent thought. You don't feel drunk, you feel current. But evidence shows that it causes your brain to shrink. The impairment is real but you can't feel it, making it the more dangerous type. If you haven't tried it, I strongly encourage you to try a social media fast. Even if for one day.
416 replies · 520 reposts · 7.5K likes · 1.2M views
Cem | Sovereign retweeted
Rob Bensinger ⏹️@robbensinger·
Hundreds of scientists, including 3/4 of the most cited living AI scientists, have said that AI poses a very real chance of killing us all. We're in uncharted waters, which makes the risk level hard to assess; but a pretty normal estimate is Jan Leike's "10-90%" of extinction-level outcomes. Leike heads Anthropic's alignment research team, and previously headed OpenAI's. This actually seems pretty straightforward. There's literally no reason for us to sleepwalk into disaster here. No normal engineering discipline, building a bridge or designing a house, would accept a 25% chance of killing a person; yet somehow AI's engineering culture has corroded enough that no one bats an eye when Anthropic's CEO talks about a 25% chance of research efforts killing every person. A minority of leading labs are dismissive of the risk (mainly Meta), but even the fact that “will we kill everyone if we keep moving forward?” is hotly debated among researchers seems very obviously like more than enough grounds for governments to internationally halt the race to build superintelligent AI. Like, this would be beyond straightforward in any field other than AI. Obvious question: How would that even work? Like, I get the argument in principle: “smarter-than-human AI is more dangerous than nukes, so we need to treat it similarly.” But with nukes, we have a detailed understanding of what’s required to build them, and it involves huge easily-detected infrastructure projects and rare materials. Response: The same is true for AI, as it’s built today. The most powerful AIs today rely on extremely specialized and costly hardware, cost hundreds of millions of dollars to build,¹ and rely on massive data centers² that are relatively easy to detect using satellite and drone imagery, including infrared imaging.³ Q: But wouldn’t people just respond by building data centers in secret locations, like deep underground? 
Response: Only a few firms can fabricate AI chips — primarily the Taiwanese company TSMC — and one of the key machines used in high-end chips is only produced by the Dutch company ASML. This is the extreme ultraviolet lithography machine, which is the size of a school bus, weighs 200 tons, and costs hundreds of millions of dollars.⁴ Many key components are similarly bottlenecked.⁵ This supply chain is the result of decades of innovation and investment, and replicating it is expected to be very difficult — likely taking over a decade, even for technologically advanced countries.⁶ This essential supply chain, largely located in countries allied to the US, provides a really clear point of leverage. If the international community wanted to, it could easily monitor where all the chips are going, build in kill switches, and put in place a monitoring regime to ensure chips aren’t being used to build toward superintelligence. (Focusing more efforts on the chip supply chain is also a more robust long-term solution than focusing purely on data centers, since it can solve the problem of developers using distributed training to attempt to evade international regulations.⁷) Q: But won’t AI become cheaper to build in the future? Response: Yes, but — (a) It isn’t likely to suddenly become dramatically cheaper overnight. If it becomes cheaper gradually, regulations can build in safety margin and adjust thresholds over time to match the technology. Efforts to bring preexisting chips under monitoring will progress over time, and chips have a limited lifespan, so the total quantity of unmonitored chips will decrease as well. (b) If we actually treated superintelligent AI like nuclear weapons, we wouldn’t be publishing random advances to arXiv, so the development of more efficient algorithms and more optimized compute would happen more slowly. Some amount of expected algorithmic progress would also be hampered by reduced access to chips. 
(c) You don’t need to ban superintelligence forever; you just need to ban it until it’s clear that we can build it without destroying ourselves or doing something similarly terrible. A ban could buy the world many decades of time. Q: But wouldn’t this treaty devastate the economy? A: It would mean forgoing some future economic gains, because the race to superintelligence comes with greater and greater profits until it kills you. But it’s not as though those profits are worth anything if we’re dead; this seems obvious enough. There’s the separate issue that lots of investments are currently flowing into building bigger and bigger data centers, in anticipation that the race to smarter-than-human AI will continue. A ban could cause a shock to the economy as that investment dries up. However, this is relatively easy to avoid via the Fed lowering its rates, so that a high volume of money continues to flow through the larger economy.⁸ Q: But wouldn’t regulating chips have lots of spillover effects on other parts of the economy that use those chips? A: NVIDIA’s H100 chip costs around $30,000 per chip and, due to its cooling and power requirements, is designed to be run in a data center.⁹ Regulating AI-specialized chips like this would have very few spillover effects, particularly if regulations only apply to chips used for AI training and not for inference.¹⁰ But also, again, an economy isn’t worth much if you’re dead. This whole discussion seems to be severely missing the forest for the trees, if it’s not just in outright denial about the situation we find ourselves in. Some of the infrastructure used to produce AI chips is also used in making other advanced computer chips, such as cell phone chips; but there are notable differences between these chips. If advanced AI chip production is shut down, it wouldn’t actually be difficult to monitor production and ensure that chip production is only creating non-AI-specialized chips. 
At the same time, existing AI chips could be monitored to ensure that they’re used to run existing AIs, and aren’t being used to train ever-more-capable models.¹¹ This wouldn't be trivial to do, but it's pretty easy relative to many of the tasks the world's superpowers have achieved when they faced a national security threat. The question is whether the US, China, and other key actors wake up in time, not whether they have good options for addressing the threat. Q: Isn't this totalitarian? A: Governments regulate thousands of technologies. Adding one more to the list won’t suddenly tip the world over into a totalitarian dystopia, any more than banning chemical or biological weapons did. The typical consumer wouldn’t even necessarily see any difference, since the typical consumer doesn’t run a data center. They just wouldn’t see dramatic improvements to the chatbots they use. Q: But isn’t this politically infeasible? A: It will require science communicators to alert policymakers to the current situation, and it will require policymakers to come together to craft a solution. But it doesn’t seem at all infeasible. Building superintelligence is unpopular with the voting public,¹² and hundreds of elected officials have already named this issue as a serious priority. The UN Secretary-General and major heads of state are routinely talking about AI loss-of-control scenarios and human extinction. At that point, the cat has already firmly left the bag. (And it's not as though there's anything unusual about governments heavily regulating powerful new technologies.) What's left is to dial up the volume on that talk, translate that talk into planning and fast action, and recognize that "there's uncertainty how much time we have left" makes this a more urgent problem, not less. Q: But if the US halts, isn’t that just ceding the race to authoritarian regimes? A: The US shouldn’t halt unilaterally; that would just drive AI research to other countries. 
Rather, the US should broker an international agreement where everyone agrees to halt simultaneously. (Some templates of agreements that would do the job have already been drafted.¹³) Governments can create a deterrence regime by articulating clear limits and enforcement actions. It’s in no country’s interest to race to its own destruction, and a deterrence regime like this provides an alternative path. Q: But surely there will be countries that end up defecting from such an agreement. Even if you’re right that it’s in no one’s interest to race once they understand the situation, plenty of people won’t understand the situation, and will just see superintelligent AI as a way to get rich quick. A: It’s very rare for countries (or companies!) to deliberately violate international law. It’s rare for countries to take actions that are widely seen as serious threats to other nations’ security. (If it weren't rare, it wouldn't be a big news story when it does happen!) If the whole world is racing to build superintelligence as fast as possible, then we’re very likely dead. Even if you think there's a chance that cautious devs could stay in control as AI starts to vastly exceed the intelligence of the human race (and no, I don't think this is realistic in the current landscape), that chance increasingly goes out the window as the race heats up, because prioritizing safety will mean sacrificing your competitive edge. If instead a tiny fraction of the world is trying to find sneaky ways to build a small researcher-starved frontier AI project here and there, while dealing with enormous international pressure and censure, then that seems like a much more survivable situation. By analogy, nuclear nonproliferation efforts haven’t been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9. 
But this is a much more survivable state of affairs than if we hadn’t tried to limit proliferation at all, and were instead facing a world where dozens or hundreds of nations possess nuclear weapons. When it comes to superintelligence, anyone building "god-like AI" is likely to get us all killed — whether the developer is a military or a company, and whether their intentions are good or ill. Going from "zero superintelligences" to "one superintelligence" is already lethally dangerous. The challenge is to block the construction of ASI while there's still time, not to limit proliferation after it already exists, when it's far too late to take the steering wheel. So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have enormous power. Q: But what about China? Surely they’d never agree to an arrangement like this. A: The CCP has already expressed interest in international coordination and regulation on AI. E.g., Reuters reported that Chinese Premier Li Qiang said, "We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible."¹⁴ And, quoting The Economist:¹⁵ "But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants. "The influence of such arguments is increasingly on display. 
In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. A short time later the risks posed by AI, and how to control them, became a subject of study sessions for party leaders. A state body that funds scientific research has begun offering grants to researchers who study how to align AI with human values. [...] "In July, at a meeting of the party’s central committee called the 'third plenum', Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities. "More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should 'abandon uninhibited growth that comes at the cost of sacrificing safety', says the guide. Since AI will determine 'the fate of all mankind', it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive." The CCP is a US adversary. That doesn't mean they're idiots who will destroy their own country in order to thumb their nose at the US. If a policy is Good, that doesn't mean that everyone Bad will automatically oppose it. Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so clueful people on all sides will endorse those policies. The question, again, is just whether people will clue in to what's happening soon enough to matter. My hope, in writing this, is to wake people up a bit faster. If you share that hope, maybe share this post, or join the conversation about it; or write your own, better version of a "wake-up" warning. 
Don't give up on the world so easily.
76 replies · 190 reposts · 655 likes · 85.5K views
Cem | Sovereign retweeted
Sovereign@sovereignxyz·
Yes, you read that right. 250 microsecond execution latency @ 30k orders/sec. Unheard of performance for a DEX. Only possible on Sovereign SDK.
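A quick Little's-law sanity check on those numbers (my arithmetic, not Sovereign's benchmark methodology): at that throughput and latency, the matching engine only ever holds a handful of orders in flight at once.

```python
# Little's law: average in-flight work L = arrival rate λ × latency W.
# Figures from the post: 30,000 orders/sec at 250 µs execution latency.
arrival_rate = 30_000   # orders per second (λ)
latency_us = 250        # execution latency in microseconds (W)

in_flight = arrival_rate * latency_us / 1_000_000
print(in_flight)  # 7.5 — roughly 8 orders resident in the engine on average
```

A small in-flight count like this is what makes single-digit-microsecond queueing overheads plausible at that load.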
Tristan@Tristan0x

After two years of blood, sweat and pull requests @Bulletxyz mainnet is finally here. Absolutely stoked with what our team achieved. Getting down to an unmatched 250us order execution latency @ 30k orders/sec, unheard of for a perps DEX. Bullet's law seems to be playing out (order of magnitude speed increase every year), and the tech will only continue to get better. Admittedly mainnet took longer than I expected, but good things take time and there are no shortcuts for building on the cutting edge. We'll be slowly rolling out in private beta to a select group of whitelisted traders, and continue making our way through the waitlist (DM me if you are interested in getting in early, high quality feedback appreciated and welcome). Job's not done, now we go hard on liquidity, expanding the product suite and user growth. If you see what we've achieved in just two years, imagine what we can achieve in the next two. Bullet Time.

3 replies · 3 reposts · 16 likes · 1.7K views
Cem | Sovereign retweeted
Tristan@Tristan0x·
After two years of blood, sweat and pull requests @Bulletxyz mainnet is finally here. Absolutely stoked with what our team achieved. Getting down to an unmatched 250us order execution latency @ 30k orders/sec, unheard of for a perps DEX. Bullet's law seems to be playing out (order of magnitude speed increase every year), and the tech will only continue to get better. Admittedly mainnet took longer than I expected, but good things take time and there are no shortcuts for building on the cutting edge. We'll be slowly rolling out in private beta to a select group of whitelisted traders, and continue making our way through the waitlist (DM me if you are interested in getting in early, high quality feedback appreciated and welcome). Job's not done, now we go hard on liquidity, expanding the product suite and user growth. If you see what we've achieved in just two years, imagine what we can achieve in the next two. Bullet Time.
Bullet@Bulletxyz

Perps Trading on Solana is about to change for the better. Mainnet is live. Here's how you can get in early 👇

44 replies · 23 reposts · 205 likes · 21.7K views
Cem | Sovereign@cemozer_·
we seem so close to automated AI researchers that would soon take us to superintelligence. no words.
0 replies · 0 reposts · 5 likes · 1.3K views
Cem | Sovereign retweeted
Bullet@Bulletxyz·
Perps Trading on Solana is about to change for the better. Mainnet is live. Here's how you can get in early 👇
46 replies · 59 reposts · 292 likes · 89.7K views
Cem | Sovereign@cemozer_·
Reading code is about to be like reading assembly. You might still need to in very particular cases, but it's no longer a generally useful skill.
2 replies · 0 reposts · 9 likes · 587 views
Cem | Sovereign retweeted
Dwarkesh Patel@dwarkesh_sp·
.@collision and I interviewed @elonmusk. 0:00:00 - Orbital data centers 0:36:46 - Grok and alignment 0:59:56 - xAI’s business plan 1:17:21 - Optimus and humanoid manufacturing 1:30:22 - Does China win by default? 1:44:16 - Lessons from running SpaceX 2:20:08 - DOGE 2:38:28 - TeraFab
1K replies · 2.9K reposts · 17K likes · 3.5M views
Cem | Sovereign@cemozer_·
rollup research flourished because Ethereum hoped it would fix scaling after sharded execution was deemed too complex to ship. but after 4+ years building in this space, it’s clear to me: rollups aren’t primarily useful for "scaling" an ecosystem, unless scaling means running identical chains competing for the same use cases. they’re a new primitive for building new blockchains easily and cheaply. spinning up and maintaining a validator set is hard and expensive. if you’re building anything high-performance, each replica has high operational costs. and since it’s likely new software, you’ll need operators willing to deal with ongoing bugs. they’ll demand a premium. all of this costs money and attention. resources startups should be extremely careful with. rollups let you skip the validator set entirely. pay for data as you go. if any party wants full security or transparency on your chain’s operations, they just run a full node pointed at the same DA layer namespace. that’s it. if you’re building a new chain, let’s talk.
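The "run a full node pointed at the same DA layer namespace" step above can be sketched as a deterministic replay loop. This is only an illustration with hypothetical names (`Blob`, `apply_stf`, `sync`), not the Sovereign SDK's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Blob:
    """A batch of rollup transactions posted to the DA layer namespace."""
    height: int
    data: bytes

def apply_stf(state: dict, blob: Blob) -> dict:
    """Hypothetical state transition function; the real one is the chain's
    own business logic. It must be deterministic."""
    new_state = dict(state)
    new_state[blob.height] = blob.data
    return new_state

def sync(namespace_blobs: list[Blob], genesis: dict) -> dict:
    """Replay every blob in DA order. Any party running this against the
    same namespace derives the same state — that is the security and
    transparency guarantee the post describes, with no validator set."""
    state = genesis
    for blob in sorted(namespace_blobs, key=lambda b: b.height):
        state = apply_stf(state, blob)
    return state
```

Because the state is a pure function of the ordered blobs, two independent observers who fetch the same namespace always agree on the result.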
vitalik.eth@VitalikButerin

There have recently been some discussions on the ongoing role of L2s in the Ethereum ecosystem, especially in the face of two facts:
* L2s' progress to stage 2 (and, secondarily, on interop) has been far slower and more difficult than originally expected
* L1 itself is scaling, fees are very low, and gas limits are projected to increase greatly in 2026

Both of these facts, for their own separate reasons, mean that the original vision of L2s and their role in Ethereum no longer makes sense, and we need a new path. First, let us recap the original vision. Ethereum needs to scale. The definition of "Ethereum scaling" is the existence of large quantities of block space that is backed by the full faith and credit of Ethereum - that is, block space where, if you do things (including with ETH) inside that block space, your activities are guaranteed to be valid, uncensored, unreverted, untouched, as long as Ethereum itself functions. If you create a 10000 TPS EVM where its connection to L1 is mediated by a multisig bridge, then you are not scaling Ethereum. This vision no longer makes sense. L1 does not need L2s to be "branded shards", because L1 is itself scaling. And L2s are not able or willing to satisfy the properties that a true "branded shard" would require. I've even seen at least one explicitly saying that they may never want to go beyond stage 1, not just for technical reasons around ZK-EVM safety, but also because their customers' regulatory needs require them to have ultimate control. This may be doing the right thing for your customers. But it should be obvious that if you are doing this, then you are not "scaling Ethereum" in the sense meant by the rollup-centric roadmap. But that's fine! It's fine because Ethereum itself is now scaling directly on L1, with large planned increases to its gas limit this year and the years ahead. 
We should stop thinking about L2s as literally being "branded shards" of Ethereum, with the social status and responsibilities that this entails. Instead, we can think of L2s as being a full spectrum, which includes both chains backed by the full faith and credit of Ethereum with various unique properties (eg. not just EVM), as well as a whole array of options at different levels of connection to Ethereum, that each person (or bot) is free to care about or not care about depending on their needs.

What would I do today if I were an L2?
* Identify a value add other than "scaling". Examples: (i) non-EVM specialized features/VMs around privacy, (ii) efficiency specialized around a particular application, (iii) truly extreme levels of scaling that even a greatly expanded L1 will not do, (iv) a totally different design for non-financial applications, eg. social, identity, AI, (v) ultra-low-latency and other sequencing properties, (vi) maybe built-in oracles or decentralized dispute resolution or other "non-computationally-verifiable" features
* Be stage 1 at the minimum (otherwise you really are just a separate L1 with a bridge, and you should just call yourself that) if you're doing things with ETH or other Ethereum-issued assets
* Support maximum interoperability with Ethereum, though this will differ for each one (eg. what if you're not EVM, or even not financial?)

From Ethereum's side, over the past few months I've become more convinced of the value of the native rollup precompile, particularly once we have enshrined ZK-EVM proofs that we need anyway to scale L1. This is a precompile that verifies a ZK-EVM proof, and it's "part of Ethereum", so (i) it auto-upgrades along with Ethereum, and (ii) if the precompile has a bug, Ethereum will hard-fork to fix the bug. The native rollup precompile would make full, security-council-free, EVM verification accessible. 
We should spend much more time working out how to design it in such a way that if your L2 is "EVM plus other stuff", then the native rollup precompile would verify the EVM, and you only have to bring your own prover for the "other stuff" (eg. Stylus). This might involve a canonical way of exposing a lookup table between contract call inputs and outputs, and letting you provide your own values to the lookup table (that you would prove separately). This would make it easy to have safe, strong, trustless interoperability with Ethereum. It also enables synchronous composability (see: ethresear.ch/t/combining-pr… and ethresear.ch/t/synchronous-… ). And from there, it's each L2's choice exactly what they want to build. Don't just "extend L1", figure out something new to add. This of course means that some will add things that are trust-dependent, or backdoored, or otherwise insecure; this is unavoidable in a permissionless ecosystem where developers have freedom. Our job should be to make it clear to users what guarantees they have, and to build up the strongest Ethereum that we can.
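The "EVM plus other stuff" split described above can be sketched as two independent checks joined by the shared lookup table. All names here are hypothetical placeholders, not a real precompile interface:

```python
def verify_l2_block(evm_proof: bytes, extra_proof: bytes, lookup_table: dict,
                    verify_evm, verify_extra) -> bool:
    """Split verification sketch: verify_evm stands in for the enshrined
    native rollup precompile, checking EVM execution with non-EVM contract
    calls resolved through lookup_table; verify_extra is the L2's own
    prover, which only has to prove the "other stuff" that produced the
    table's entries. The block is valid only if both checks pass."""
    return (verify_evm(evm_proof, lookup_table)
            and verify_extra(extra_proof, lookup_table))

# Usage with stub verifiers (real ones would check ZK proofs):
table = {("call_input",): "call_output"}
ok = verify_l2_block(b"p1", b"p2", table,
                     verify_evm=lambda p, t: True,
                     verify_extra=lambda p, t: True)
print(ok)  # True
```

The design point is that the expensive, consensus-critical part (EVM verification) is shared and hard-forkable with Ethereum, while each L2 only maintains a prover for its own additions.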

8 replies · 19 reposts · 96 likes · 25.6K views
Ansem@blknoiz06·
would truly be a gift if the overspending on capex somehow aligns w/ a brief period where the ai advancements aren't as noticeable for a couple years & we get a pullback before re-acceleration of trends towards agi
Just Another Pod Guy@TMTLongShort

I talk a lot about the bull case re AI because I am fundamentally a maxi and recognize that Street has failed to frame the abstractions properly… mostly due to intellectual shortcomings around non-linearity. Let me play the bear for a second. Normally mgmt is incentivized to take rational risks around ROIC, and when the marginal capex doesn't pencil you return it to shareholders. But what is different this time is the variance of return has gone up dramatically as you start making preemptive bets on infra well ahead of demand. We haven't reached this point yet. The math still pencils even if scaling laws cease holding after the current model runs. Wall Street doesn't understand this yet but every dollar of existing spend is "money good" solely based on the progress made so far. That's why the maxi trade to date has been a cakewalk. But what's next is a series of upsizings of capex numbers for 2027 then 2028 that will be A LOT HIGHER than current consensus. And that's when you start approaching the point of risk. Because as the scramble to lock in supply further intensifies, we will start running out of "stuff" and that will push hyperscalers further out on the risk curve. They will start betting beyond the knowable horizon… which for them is roughly the duration where you know scaling laws still hold x utility of that specific level of peak intelligence x decay rate of cost-per-token at that point. Anything beyond that is mostly speculative. Currently they are only building to demand as if scaling laws ceased holding sometime next year. But soon they'll be betting multiple years out due to game theory. And normally the rational thing to do is to pull back the way MSFT tried to do last year. But the problem is there are too many irrational actors incentivized to push beyond the bounds of traditional IRR. Musk is a zealot. Dario is a zealot. Sundar told you he'd rather go bankrupt than lose the AGI race. Larry and Sergey are zealots. 
Larry is in his 80s and knows this is his one shot at immortality. If they bet wrong and spend too much, their stocks may crater on overbuild and excess capacity, but each one of these people has enough money to live happily ever after. Hell, if I had a billion dollars I'd honestly give away 90% of it in exchange for a 20% chance to see ASI in my lifetime. It's Pascal's Wager. So the bear case isn't that hyperscalers are going to spend too much in 2026. It's that we are at the start of a rapid increase in capex estimates for 2028, and when shareholders revolt mgmt is gonna tell them to go fuck themselves. 🫡

60 replies · 3 reposts · 148 likes · 36.1K views
Cem | Sovereign retweeted
Dante@CamutoDante·
I'm stepping down from @ezklxyz. For the past 3.5 years I've had the honor of working with the most principled, hard-working, interesting people. I'm incredibly grateful to everyone that has believed in me. I have some new projects cooking :) I'll see all of y'all around <3
6 replies · 1 repost · 63 likes · 4.2K views
Cem | Sovereign@cemozer_·
@tio_bera if you’d like to reduce inflation further by removing the need for a validator set, would be happy to chat!
0 replies · 0 reposts · 8 likes · 691 views
tio bera 🐻⛓@tio_bera·
Every chain has a balance sheet. Ours is emissions. Today we’re announcing upcoming changes to PoL: • BGT inflation ↓ (8% → 5%) • Rewards Vault consolidation • Clearer criteria for Rewards Vaults Bera builds businesses. Full details 👇 forum.berachain.com/t/pol-update-i…
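For scale (my arithmetic, not a figure from the announcement), dropping BGT inflation from 8% to 5% cuts annual emissions by more than a third:

```python
# Relative reduction in yearly BGT emissions from the announced change.
old_rate, new_rate = 0.08, 0.05   # 8% -> 5% annual inflation
cut = (old_rate - new_rate) / old_rate
print(f"{cut:.1%}")  # 37.5% fewer BGT emitted per year, all else equal
```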
48 replies · 26 reposts · 165 likes · 28.5K views
Cem | Sovereign retweeted
Peter | Relay@ptrwtts·
Blockchains and stablecoins are powerful primitives. But there’s a need for an interop layer that makes them *just work*. Relay is that layer. This is one of the most interesting challenges in crypto right now. Real demand, huge impact. Come work with us!
Relay@RelayProtocol

Announcing Relay's $17M Series B, led by @archetypevc & @USV. And we're launching the Relay Chain — purpose-built infrastructure for instant crosschain settlement ⛓️ Any asset, any chain, instantly.

14 replies · 2 reposts · 100 likes · 6.9K views
Cem | Sovereign@cemozer_·
$20B in volume. 100M+ requests. 85 chains. OpenSea, Phantom, MetaMask as customers. Relay is the crosschain payments layer you've been using without knowing it. Now they're building Relay Chain, a dedicated settlement layer for instant crosschain payments, using Sovereign SDK.
Relay@RelayProtocol

Announcing Relay's $17M Series B, led by @archetypevc & @USV. And we're launching the Relay Chain — purpose-built infrastructure for instant crosschain settlement ⛓️ Any asset, any chain, instantly.

11 replies · 15 reposts · 91 likes · 7.5K views