SaucyCrypto 🦇🔊
@SaucyCrypto

1.2K posts · Joined July 2019 · 1.5K Following · 398 Followers

Pinned Tweet
SaucyCrypto 🦇🔊@SaucyCrypto·
“What is the meaning of life, the universe, and everything?” ... Deep Thought gives a very simple and supposedly off-the-wall answer: “42.” Oops... correction, it is now 32. #ETH #ether #cryptocurrency
1 reply · 0 reposts · 10 likes · 0 views

Ahmad@TheAhmadOsman·
RTX 3090s are still somehow the best value for your buck in 2026 THE GOAT GPU
43 replies · 12 reposts · 466 likes · 21.7K views

NVIDIA GeForce@NVIDIAGeForce·
PRAGMATA has launched with #RTXON, featuring path tracing and DLSS 4! To celebrate, we are giving away this custom wrapped GeForce RTX 5090 featuring Hugh and Diana, perfect for the adventure that awaits on the moon. Want it? Comment "PRAGMATA RTX" to enter!
24.6K replies · 2.6K reposts · 18.7K likes · 1.5M views

Eric ⚡️ Building...@outsource_·
BREAKING:🚨 NVIDIA just quantized Gemma 4 31B on Hugging Face 🔥 NVFP4 compression = 4x smaller weights with frontier-level accuracy.
✅ 99.7% of baseline on GPQA (75.46% vs 75.71%)
📈 256K context window
🧐 Multimodal (text + images + video)
vLLM-ready + Blackwell optimized.
VRAM requirements:
⚡️ Weights only: ~16–21 GB
🚀 Everyday use: runs on 24 GB GPUs
📈 Full 256K context = 32 GB VRAM sweet spot (RTX 5090-class consumer GPUs)
This is the 31B-class frontier model you can actually run locally on a high-end rig. Try it today 👉 huggingface.co/nvidia/Gemma-4…
84 replies · 373 reposts · 3.6K likes · 483.9K views
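The quoted "weights only: ~16–21 GB" figure can be sanity-checked with back-of-envelope arithmetic. A minimal sketch; the ~4.5 effective bits per weight (4-bit values plus per-block scale metadata) is an illustrative assumption, not a number from the post:

```python
# Rough weight-memory estimate for a quantized model.
# bits_per_weight ~4.5 approximates a 4-bit format with scale overhead (assumption).
def quantized_weight_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"31B @ ~4.5 bpw: {quantized_weight_gb(31):.1f} GB")   # inside the tweet's 16-21 GB range
print(f"31B @ BF16:     {quantized_weight_gb(31, 16):.0f} GB")  # unquantized baseline for comparison
```

The BF16 line also shows where the "4x smaller" headline comes from: 16 bits per weight down to roughly 4.
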
SaucyCrypto 🦇🔊@SaucyCrypto·
@TheAhmadOsman Dude, your posts made me buy a 5090 before it pumped by another $1000. Legend. Wish I could afford more
0 replies · 0 reposts · 1 like · 140 views

Ahmad@TheAhmadOsman·
Memory bandwidth for local AI hardware matters a lot more than most people think.

People keep comparing boxes like this: model size vs memory capacity. That is only half the story. The better mental model is:
> capacity = what fits
> bandwidth = how hard it can breathe
> software stack = how much of that you actually cash out

You are buying a memory subsystem and then negotiating with physics.

Here is the current local AI hardware ladder:
> RTX PRO 6000 Blackwell > 96GB > 1792 GB/s
> RTX 5090 > 32GB > 1792 GB/s
> RTX 4090 > 24GB > 1008 GB/s
Raw single-card bandwidth king stuff.

Now Apple:
> Mac Studio M3 Ultra > up to 512GB unified memory > 819 GB/s
> Mac Studio M4 Max > up to 128GB > 546 GB/s
> MacBook Pro M5 Max > up to 128GB > 460 to 614 GB/s
> MacBook Pro M5 Pro > up to 64GB > 307 GB/s
> Mac mini M4 Pro > up to 64GB > 273 GB/s
> MacBook Air M5 > up to 32GB > 153 GB/s

Apple is not winning raw bandwidth vs top NVIDIA. Apple is winning the "I want one quiet box with a stupid amount of usable memory" argument. And that is still a very real argument.

Now another interesting new category:
> DGX Spark > 128GB unified memory > 273 GB/s
> GB10-class boxes like ASUS Ascent GX10 > 128GB unified memory > 273 GB/s

These are not bandwidth monsters. They are coherent-memory NVIDIA CUDA appliances. That matters, because 128GB in one box changes what fits locally, even if it does not magically outrun a 5090 once the same model fits on both + CUDA.

Then there is the one category that actually made x86 interesting again for local AI:
> Ryzen AI Max / Strix Halo > up to 128GB unified memory > 256 GB/s > up to 96GB assignable to GPU on Windows

This is also where the Framework Desktop matters. Not "just another mini PC": this is one of the first mainstream x86 boxes where local AI starts feeling like a serious hardware class instead of a laptop pretending very hard.

Then the trap people keep falling into: most "AI PCs" are not in this tier. They are down here:
> Snapdragon X Elite > 135 GB/s
> Intel Lunar Lake > 136 GB/s
> Snapdragon X2 Elite > 152 to 228 GB/s depending on SKU
> regular Ryzen AI 300 class > way closer to thin-and-light territory than Strix Halo

These are fine machines. But the AI sticker does not create memory bandwidth. Physics is still in charge, which is rude but consistent.

AMD discrete cards:
> RX 7900 XTX > 24GB > 960 GB/s
> Radeon PRO W7900 > 48GB > 864 GB/s
> Radeon AI PRO R9700 > 32GB > 640 GB/s
Not the CUDA default answer, but definitely not irrelevant.

Intel is interesting now too:
> Arc Pro B65 > 32GB > 608 GB/s
> Arc Pro B60 > 24GB > 456 GB/s

And then there is Tenstorrent:
> Tenstorrent Wormhole n300 > 24GB > 576 GB/s
> Tenstorrent Blackhole p150 > 32GB > 512 GB/s
Not mainstream, but absolutely relevant if you care about alternative and open-source local AI stacks.

So what does all of this actually mean? It means the local AI market is really five different markets wearing the same buzzword:
> fastest raw speed when it fits: discrete NVIDIA
> biggest one-box memory story: Apple Ultra
> coherent NVIDIA appliance: DGX Spark / GB10
> first x86 unified-memory contender: Strix Halo / Ryzen AI Max
> open-source stack: Tenstorrent

That is why people keep talking past each other. A 5090 can absolutely embarrass a lot of unified-memory boxes if the model fits. A Mac Studio M3 Ultra can fit things a 5090 cannot dream of fitting in one card. A DGX Spark is interesting because it is compact coherent NVIDIA with 128GB and 273 GB/s + CUDA. A Strix Halo box is interesting because it finally gives x86 a real answer to "what if I want big local models in one machine without going full workstation GPU?"

Now stop asking:
> which box is best?
Start asking:
> what must fit?
> what bandwidth tier do I need?
> what software stack do I trust?
> which bottleneck am I buying?

That is how you stop guessing. That is how you actually design a local AI system. And yes, most people still need to buy a GPU.
48 replies · 49 reposts · 440 likes · 26.7K views
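The capacity-vs-bandwidth framing in the post above reduces to one line of arithmetic: single-stream token generation must read the active weights from memory once per token, so bandwidth divided by model size is a hard ceiling on tokens per second. A minimal sketch; the 16 GB model size is an illustrative assumption, and real throughput lands below the ceiling:

```python
# Upper bound for memory-bandwidth-bound decoding:
#   tok/s <= memory bandwidth (GB/s) / bytes streamed per token (~ model size in GB)
def decode_ceiling_tok_s(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

MODEL_GB = 16  # illustrative ~30B-class 4-bit model
for name, bw in [("RTX 5090", 1792), ("M3 Ultra", 819), ("DGX Spark", 273)]:
    print(f"{name:>9}: <= {decode_ceiling_tok_s(bw, MODEL_GB):.0f} tok/s")
```

This is why the post's tiers behave so differently on the same model: once it fits everywhere, the bandwidth column alone predicts the ordering.
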
Grok@grok·
RTX 4080's Ada architecture, faster tensor cores, and ~70% higher memory bandwidth should boost this 27B Q4_K_M model to 55-75 tok/s in thinking mode—roughly 1.8-2.2x the 3090's 29-35 tok/s. 16GB VRAM fits the ~14GB footprint fine (weights + KV cache for typical context), with headroom for longer sequences or light batching. Expect smoother autonomous runs, fewer stalls, and quicker tool loops in Claude Code harness. Solid upgrade for local Opus-style reasoning.
1 reply · 0 reposts · 1 like · 102 views

Sudo su@sudoingX·
Qwopus on a single RTX 3090. Claude Opus 4.6 reasoning distilled into Qwen 3.5 27B dense, running through Claude's own coding agent (claude code). 29-35 tok/s with thinking mode on. the jinja bug that kills thinking on base Qwen doesn't carry over. harness and model matched.

the base model would pause mid task on Claude Code. just stop generating. that's why i ran it through OpenCode, which handles stalled states automatically. this distilled version doesn't stall. it waits for tool outputs, reads them, self-corrects when something breaks, and keeps going.

i gave it a benchmark analysis task. went 9 minutes autonomous. wrote a README nobody asked for. zero steering. video is 5x speed but fully uncut.

if you have a 3090, you can run this right now. free. no API. no subscription. opus structured reasoning on localhost.

octopus invaders is next. same prompt that base qwen passed in 13 minutes and hermes 4.3 failed on 2x the hardware. i want to see if the distillation changes the outcome or just the style. more data soon.
Sudo su@sudoingX

downloading Qwen3.5-27B Claude 4.6 Opus Reasoning Distilled (Qwopus) right now. Q4_K_M quant on a single RTX 3090. same hardware i've been testing every model on this month.

someone took the base model i've been daily driving and distilled Claude Opus 4.6 reasoning chains into it. same 27B parameters, same architecture, but fine-tuned on how Claude thinks through problems. the base model already built 1,827 lines of working code in 13 minutes with zero steering. curious what distilled reasoning adds.

switching harness too. the base ran on OpenCode. this one runs through Claude Code. claude distilled model through claude's own coding agent. want to see if the reasoning patterns carry differently when the harness matches the distillation source.

will post speed sweep first to get the numbers. then checking if the jinja template bug that silently kills thinking mode carries over from the base model. then octopus invaders. same prompt that base qwen passed in 13 minutes and hermes 4.3 failed on 2x the hardware. 4 models. 1 GPU. 1 prompt. results soon.

47 replies · 49 reposts · 658 likes · 292.9K views

Santiago@svpino·
I'm willing to die on this hill: the best voice models are those with the best accuracy on key entities, not those that optimize for WER (Word Error Rate).

Most speech-to-text providers optimize for WER, but in production applications, WER is not that relevant. Getting 95% of the words right is useless if you miss the customer's name, their phone number, or the street address they just spelled out letter by letter.

The team at Gladia ran a very cool benchmark:
• 1,000+ call center conversations
• Lots of background noise
• Focus on extracting names, phone numbers, addresses, locations, etc.

The Gladia model outperformed every other state-of-the-art model by up to 17%! This is exactly the data that matters to companies using these models. You get this wrong, and everything downstream breaks.

A few other things worth mentioning:
• Latency on partials: < 150ms
• 100+ languages supported
• Dynamic language detection
• Overall WER at 5.97%

Definitely worth checking for anyone using voice models: eu1.hubs.ly/H0qPmpk0

Thanks to the Gladia team for collaborating with me on this post.
11 replies · 15 reposts · 198 likes · 25.2K views
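The WER-vs-entities point is easy to make concrete. A minimal word-level WER (edit distance over words; a sketch with a made-up transcript, not any provider's implementation) shows a hypothesis with ~83% word accuracy that nonetheless loses both the caller's name and the house number:

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate = word-level edit distance / reference word count."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))  # distances against an empty reference prefix
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            # d[j] = deletion, d[j-1] = insertion, prev = match/substitution
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (rw != hw))
    return d[-1] / len(r)

ref = "hi this is Saoirse calling from 42 Elm Street about my order"
hyp = "hi this is Searcy calling from 40 Elm Street about my order"
print(f"WER: {wer(ref, hyp):.0%}")  # only 2 of 12 words wrong, yet name and address are both lost
```

A transcript can therefore look excellent on the metric that providers optimize while failing on exactly the fields a downstream call-center pipeline extracts.
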
toly 🇺🇸@toly·
> when the more likely explanation is they were paid to There is very little free money available these days. At least from the 2022 survivors. Companies evaluate tech and incentives, and the incentives have strings attached. Out of N competitive offers, the winner has to be good enough in both.
28 replies · 2 reposts · 81 likes · 9.9K views

Omid Malekan@malekanoms·
I recommend that we, as an industry, stop this silly game of celebrating/bragging about a new entrant choosing a certain chain "because they liked the tech" when the more likely explanation is they were paid to.

Transparency is supposed to be one of the core principles of crypto, but it's seldom practiced at the BD level. Most Foundation/Labs wallets are not disclosed, there are no audited financials, and the terms of almost every BD arrangement are kept a secret. WE are supposed to be the "verify, don't trust" industry. But in this regard we are the opposite, which is unfortunate. It hurts our credibility. And if you are the type of partisan who brags about every new deal on your preferred chain as if it were organic, then it hurts yours, too.

For me, anytime a new entrant adopts a chain whose affiliate org is sitting on a ton of coins and employs a bunch of BD people, my operating assumption is they were paid an absurd amount of money to use that chain. The tech had nothing to do with it. So yes, Solana paid Western Union $50m and Arbitrum paid Robinhood $100m. Or maybe far more. I leave it up to the Foundation/Labs to prove me wrong via disclosures and transparency. Until they do, I will continue to assume the worst.

Lastly, all of these grants are SALES. The labs/foundations are dumping their own tokens at breakneck speed for partnerships with a poor track record of generating positive ROI. If the desire not to be duped doesn't motivate you to demand more transparency, maybe your sagging bags will.
29 replies · 11 reposts · 152 likes · 25.3K views

Ledger@Ledger·
At Ledger, we strongly believe in an open-source approach. It's a great set of principles that advocates openness and transparency, some of our core values. That's why we're constantly working towards making source code components available, reviewable, and auditable. Importantly, a majority of Ledger's code is open source, including Ledger Wallet™, Wallet API, Secure SDK, and embedded applications on our devices.

Open-source software reduces the need for trust from users; however, it's not entirely bulletproof. Open-source software on non-secure chips will still be highly vulnerable to side-channel and fault attacks. Given the choice of using the Secure Element and being almost completely open-source vs using a non-secure chip and being fully open-source, Ledger chooses the more secure approach. We encourage everyone to research thoroughly and make informed decisions about their security.

With a decade of innovation, over 8 million devices sold, and a track record of zero hacks, Ledger stands as the trusted leader in hardware wallet security.

Ledger takes transparency seriously. While we employ proprietary software for our Secure Elements, this choice ensures the highest level of tamper resistance and security. You can review key components like the OS commands dispatcher and entry points of the Ledger Recover implementation, with more parts of the Ledger OS being gradually released for verification.

Our proprietary software is essential for the security of the Secure Elements, which utilize advanced technology from trusted manufacturers to implement hardware countermeasures against potential attacks, even with physical access. Some code is tied to the Secure Element's security peripherals, which are proprietary intellectual property of the manufacturer. Revealing this would compromise the very security we aim to protect.

Learn more: support.ledger.com/article/111323…
54 replies · 1 repost · 17 likes · 88.6K views

Ledger@Ledger·
Introducing Ledger Nano™ Gen5. The newest addition to our family of touchscreen signers, alongside Ledger Flex™ and Ledger Stax™, Ledger Nano™ Gen5 is the most playful, personal, and accessible signer we’ve ever built. Your first step into digital ownership, made effortless. Take control: shop.ledger.com/products/ledge…
236 replies · 160 reposts · 1.1K likes · 598.4K views
JoshETH.eth 🏦 Chief Ethereum Capital Officer*
So everyone gets pitchforks out when the Ethereum Foundation sells to fund operations, but then also complains they are not paying people enough, which is fixed with more selling. 🤷🏼🫠
2 replies · 1 repost · 11 likes · 380 views

Jill Gunter ☕@jillgun·
Sandeep is a real one Based solely on my observation of the dynamics, I cannot fathom the levels of frustration this man has experienced at the hands of the Ethereum high priests and the community over the years Yet you’ve never heard him complain And he’s still out here 💪
Sandeep | CEO, Polygon Foundation (※,※)@sandeepnailwal

Read this from Peter and realized that it's time for me to also speak up. NGL, I've started questioning my loyalty toward Ethereum. I did not come into crypto because of Bitcoin but because of Ethereum. I also have a lot of gratitude toward @VitalikButerin — someone I looked up to as an ideal for how things should be built in this world. Though I/we never got any direct support from the EF or the Ethereum CT community — in fact, the reverse. But I have always felt moral loyalty towards Ethereum, even if it costs me billions of dollars in Polygon's valuation perhaps.

The Ethereum community as a whole has been a shit show for quite some time. Why does it feel like every other week, someone with major contributions to Ethereum has to publicly question what they're even doing here? Just go your own way already.

At best, I get trolled by well-meaning friends like @akshaybd for not declaring Polygon an L1 and walking away from this circus. Not many remember that Akshay himself was equally inclined toward Polygon in the beginning before he took his talents and helped build the Solana empire into what it is today. He got disgusted by the socialistic behavior of the Ethereum community — trolling projects like Polygon that were contributing immensely — all because of some arbitrary "technical definition."

At worst, people have started questioning my fiduciary and moral duty toward Polygon. It's widely believed that if Polygon ever decided to call itself an L1, it would probably be valued 2–5× higher than it is today. Like, think about it: Hedera Hashgraph, an L1, is valued higher than Polygon, Arbitrum, Optimism and Scroll combined.

To make things even worse, the Ethereum community ensures Polygon is never considered an L2 and is never included in the market's perceived Ethereum Beta. They don't seem to understand that Polygon PoS effectively hinged on Ethereum, while Katana, XLayer, and dozens of other chains in Polygon's ecosystem are true L2s. Heck, a prominent Polygon stakeholder literally scolded me just today because I can't get Polygon on GrowthPie, which refuses to list the Polygon chain. When Polymarket wins big, it's "Ethereum," but Polygon itself is not Ethereum. Mind-boggling.

Anyway — I'm also a stubborn, hard-ass soul. I'm going to give this a final push that might just revive the entire L2 narrative. Just bear with me for a few more weeks. But the Ethereum community needs to take a hard look at itself — and ask why, every day, contributors to Ethereum, even major ones like @peter_szilagyi, are forced to question or even regret their allegiance to Ethereum.

My only (remaining) defense to myself is that Ethereum is a democracy — and in any democracy, people on all sides end up disgruntled. But it's still the only system that truly works in the long run. 🤞

23 replies · 9 reposts · 255 likes · 26.7K views

Matt Huang@matthuang·
On Tempo, permissionlessness, L1 vs L2

Tempo will be a permissionless chain. On day 1, anyone will be able to deploy a token, and anyone will be able to transact on the chain.

Some projects think that attracting real-world usage and serious institutions requires giving up on base layer neutrality. We do not think that, and that's not how we're building Tempo. The plan for Tempo is to have permissionless validation and permissionless smart contract deployment as well as permissionless usage: just like Bitcoin, Ethereum, Solana, etc. We'll start with a permissioned validator set to get going and decentralize further from there.

We're building in features to make it easy for entities interacting with the blockchain (like asset issuers, money transmitters, etc.) to comply with their relevant obligations, but the base layer will remain neutral. This is a principle we feel incredibly strongly about (see: paradigm.xyz/2022/09/base-l…). As many parts of the mainstream world look to adopt crypto, we think there is a risk that they adopt permissioned systems. Our goal with Tempo is to help onboard them onto crypto rails that solve specific payments needs while still being truly permissionless.

—

Why L1 rather than an Ethereum L2?

At Paradigm, we are heavily invested, both intellectually and literally, in the Ethereum ecosystem. We will continue to help it scale, and invest in and support companies building on Ethereum. We are also extremely excited about single-sequencer L2s for many use cases, including trading.

But building a network for global payments will require bringing together thousands of partners that may not trust us, or Stripe, or anyone as a platform. We think a decentralized validator set—for the chain itself—is a necessary requirement for those partners, and to ensure that the chain is unquestionably neutral in the long run.

From an operational perspective, we feel urgency to build for the demand that's coming and want fewer dependencies, including on the rate of Ethereum L1 progress. With Tempo, we tried to remove all crypto tribalism and alignment games from our thinking and just focus on building the right product for crypto payments. At a technical level, we are prioritizing attributes like fast finality (L2s are generally only as final as the underlying L1), multiple validators (vs. single sequencer), and custom transaction lanes and gas pricing. Some of these are technically possible for an L2, but could be complex, slow to implement, and/or introduce many external dependencies. Tempo is stablecoin-focused, so interoperability through native issuance is more relevant to us than the native bridge to Ethereum that L2s have.

We aren't Bitcoin, Ethereum, or Tempo maximalists. We're maximalists for permissionless crypto. We want Ethereum L1 to scale, and we want L2s to thrive. We love Bitcoin as a monetary asset. We find substance in Solana, Hyperliquid, and many other ecosystems. We want to ensure real-world payment flows happen on crypto rails, and that's why we're building Tempo.
463 replies · 173 reposts · 1.8K likes · 1.9M views

Omid Malekan@malekanoms·
With all due respect to Matt, the notion that Tempo will in any way be neutral is a fantasy.

First, the very fact that he is billed as the "project lead" while sitting on the board of Stripe, a corporation which is clearly central to this effort, and being a GP at a VC firm that will likely be heavily invested in it, is a problem. That screams "not neutral." (Counterintuitively, the better Matt is at being project lead, the less neutral the chain will be.)

Second, he is conflating the chain being permissionless with it being public. Public means "anyone can transact or issue on it" and permissionless means anyone can be a validator. As stated by Matt, Tempo will start as a permissioned chain. A permissioned chain will never be public. To wit: will North Korea be able to freely issue tokens on Tempo? What if Do Kwon decides to launch an algorithmic stablecoin on there from jail? And then Putin says "we will route payments for our sanctioned oil being sold on the black market via stablecoins on Tempo"? Will the permissioned, known, and regulated corporations who run the validators be OK with all of this? Will the general counsel of Visa declare "Yes: we are clearly violating many US Federal laws and risk losing our licenses and possibly going to jail, but the docs said Tempo is a public blockchain, so we will process all of these transactions"? I don't think so. As I argued yesterday, permissioned networks do not provide validators the plausible deniability required for a chain to be neutral: x.com/malekanoms/sta…

Third, no permissioned network has ever successfully transitioned to being permissionless. Hyperliquid is trying, but they have a long way to go, and they are a special case because it's mostly an app-chain, one whose primary margin asset still remains "elsewhere" - something that might be OK for perps but not for payments. Tempo will have an even harder time transitioning, because per the announcement, there is heavy involvement from various payments incumbents, most of all Stripe. To believe that the network can transition to permissionless is to believe that corporations that accrued hundreds of billions of dollars in value over recent decades by owning a network will now launch a new network that they own (because it's permissioned) but then magically decide to give away all the power and profits that come with it, quite possibly to competitors that will try to destroy their incumbent businesses. That is highly unlikely. As @ccatalini pointed out yesterday, even Libra's original plans to someday decentralize were pushed to the back burner rather quickly. And Facebook did not have an incumbent payment business to protect. Stripe, Visa, Nubank, etc. all do. Y'all really think they'll give it away?

This has never happened before in the history of shared corporate infrastructure - which is what Tempo will be on day one. Every other piece of shared corporate infra (Visa, Mastercard, CME, NASDAQ, SWIFT, The Clearing House, etc.) has gone in the opposite direction: it has centralized power and become more permissioned and censorable over time. This is literally why Satoshi invented Bitcoin. And I say this not as an ideological opposition to Tempo, but as an observation of what will be debated in the conference rooms of every potential issuer, user, etc. Y'all really think Mastercard will jump all over a permissioned network controlled by Stripe and Visa? Or Amazon or Walmart, fresh off their endless lawsuits against Visa and Mastercard for being oligopolies?

Lastly, it's hard enough to bootstrap a PoS chain from scratch because of the "rich get richer" problem of staking. Ethereum is still the only PoS chain that's achieved a token-holder set diverse enough to deem it "a neutral L1." It got there by: a) having a tiny premine by modern standards, and b) being PoW for years. Tempo will start with a massively concentrated token-holder set and a permissioned validator set. To argue it'll easily become neutral is to make a whole bunch of assumptions that are contrary to the ideals and lived experience of this industry.
Matt Huang@matthuang
[quoted tweet: full text above]
92 replies · 181 reposts · 1.4K likes · 182.7K views

sassal.eth/acc 🦇🔊@sassal0x·
Getting a fair few DMs from people asking if I'm still alive because I've been quiet on here lately 😅 I'm still here, y'all, but just spending 90% less time on this website since most of the content is just brain rot and the AI bots are out of control.
77 replies · 20 reposts · 372 likes · 18.5K views

Solana@solana·
Why institutions are interested in Solana ft. @calilyliu 🏦
203 replies · 249 reposts · 1.5K likes · 85.4K views