sigmoid
@sigmoidwtf
16 posts

coming soon

cosmos · Joined February 2024
0 Following · 448 Followers
sigmoid reposted
Olas (formerly Autonolas) @autonolas
Olas Staking continues to advance, as the DAO proposal put forth by @sigmoidwtf has passed! Sigmoid aims to bring a liquid staking token (LST) solution to Olas Staking. Check out the Olas Currents replay below and hear directly from Sigmoid about this upcoming development.
Valory is hiring @valoryag

New Olas Currents this Thursday 🏄🏻‍♂️ In light of the recently passed proposal by @sigmoidwtf to bring an LST solution to Olas Staking, the Sigmoid team will drop by to share their plans with @david_enim and @rpahlmeyer. Set a Reminder: twitter.com/i/spaces/1OyKA…

2 replies · 1 repost · 25 likes · 5.6K views
sigmoid @sigmoidwtf
The @autonolas DAO has voted to accept Sigmoid's proposal. We are grateful to all those who participated for their overwhelming support, and we are excited to bring our unique LST and agent operation solution to the Autonolas community. snapshot.org/#/autonolas.et… Sigmoid 🤝 Autonolas
1 reply · 5 reposts · 29 likes · 6.1K views
sigmoid reposted
Olas (formerly Autonolas) @autonolas
The voting period for the proposal to introduce a Lido-esque LST staking solution, facilitated by @sigmoidwtf, to Olas Staking is coming to a close! Voting closes on June 4th at 3:28 UTC, so make sure to cast your vote now: snapshot.org/#/autonolas.et… Short summary below 👇
Olas (formerly Autonolas) @autonolas

A new proposal is live today: Should @sigmoidwtf build a liquid staking solution for OLAS that would aim to be "Like Lido for ETH, but for Olas staking"? Read on for a short summary of the proposal.

4 replies · 7 reposts · 29 likes · 6.7K views
sigmoid @sigmoidwtf
Sigmoid is pleased to announce that our proposal to build a flagship LST solution for the @Autonolas ecosystem is now live. SigOLAS will be the first LST for AI networks, utilizing TSS technology alongside shared economic security to stake OLAS and mint SigOLAS.

We firmly believe the future is agent-centric. The first iteration of the Sigmoid agent orchestration protocol will allow users to delegate to Sigmoid, whereby we will operate and maintain agents on users' behalf. The second iteration will focus on democratizing access to agent infrastructure.

We would appreciate it if you could show your support by voting on the proposal: snapshot.org/#/autonolas.et...
Olas (formerly Autonolas) @autonolas

A new proposal is live today: Should @sigmoidwtf build a liquid staking solution for OLAS that would aim to be "Like Lido for ETH, but for Olas staking"? Read on for a short summary of the proposal.

0 replies · 8 reposts · 20 likes · 3.6K views
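The staking flow in the proposal above (stake OLAS, mint SigOLAS) can be sketched as standard exchange-rate LST accounting. This is a minimal illustration under assumptions only; the `LSTPool` class and its numbers are invented here and are not the actual SigOLAS design:

```python
# Minimal exchange-rate LST sketch (illustrative; not the SigOLAS contract).
# LST is minted at the rate total_staked / lst_supply, so staking rewards
# accrue to the exchange rate instead of rebasing holder balances.
class LSTPool:
    def __init__(self):
        self.total_staked = 0.0   # underlying tokens held by the pool
        self.lst_supply = 0.0     # LST in circulation

    def rate(self):
        # Underlying tokens per 1 LST; 1.0 before anyone has staked.
        return self.total_staked / self.lst_supply if self.lst_supply else 1.0

    def stake(self, amount):
        minted = amount / self.rate()
        self.total_staked += amount
        self.lst_supply += minted
        return minted

    def accrue_rewards(self, amount):
        # Rewards raise the exchange rate for every LST holder.
        self.total_staked += amount

pool = LSTPool()
print(pool.stake(100))   # → 100.0 (first staker mints 1:1)
pool.accrue_rewards(10)  # rate is now 1.1
print(pool.stake(110))   # ≈ 100.0 (110 underlying now mints 100 LST)
```

In this non-rebasing pattern the LST appreciates against the underlying token rather than changing holder balances, which is the common design for liquid staking derivatives.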
sigmoid reposted
Olas (formerly Autonolas) @autonolas
A new proposal is live today: Should @sigmoidwtf build a liquid staking solution for OLAS that would aim to be "Like Lido for ETH, but for Olas staking"? Read on for a short summary of the proposal.
[image]
5 replies · 11 reposts · 54 likes · 36.6K views
sigmoid @sigmoidwtf
Would you like some tea with your CAKE? Introducing 'ChaAI' by Sigmoid: empowering chain abstraction for AI networks.

In a growing ecosystem of decentralized AI (DeAI) with over 200 live projects, whether we end up with 10 dominant DeAI projects or 100, one thing is certain: these projects will require an integrated layer to ensure UX and liquidity across several networks. Chain abstraction is pivotal in blockchain-based AI networks, especially since most AI networks do not have smart contracts enabled, making automation impossible.

Chain abstraction doesn't just solve a UX problem: an additional layer on top, a Credible Commitment Machine (CCM) like Sigmoid, can enable behavior that wasn't possible before. Resource locks can be used to create bilateral agreements between users to stake and mint LSTs, and also between AI node operators and potential AI network participants to guarantee payments via these credible commitments.

One-click node orchestration and deployment are key features of Sigmoid's 'ChaAI' methodology, democratizing access to AI network infrastructure. Our goal is not just to abstract away interactions with different AI networks but also to abstract away participation in them. This accessibility, even for non-technical users, encourages higher participation rates and promotes decentralization within AI networks, ensuring scalability without sacrificing performance.

Sigmoid's unified node coordination layer, hosting protocol, and liquidity hub serve as a comprehensive solution: from deploying AI nodes and minting derivatives of staked tokens to managing delegated AI positions, all within the Sigmoid ecosystem.

We closely follow ongoing developments in the chain abstraction space. @FrontierDotTech's 'CAKE' framework proposes layers to handle user transactions across different chains, including permission, solver, and settlement layers. Similarly, @SocketProtocol's 'MOFA' introduces marketplace concepts for chain-abstracted transactions, simplifying asset swaps across diverse blockchains, while @NEARProtocol aims to enable users to transact on any blockchain through a single account and interface.

Sigmoid joins this league with 'ChaAI': one chain, one wallet, and one unified liquidity hub, eliminating the need to manage multiple wallets or liquidity pools across different networks. Stay tuned as Sigmoid continues to innovate!
[image]
4 replies · 8 reposts · 20 likes · 2.3K views
sigmoid @sigmoidwtf
Exploring the intersection of the Cosmos SDK and EigenLayer: the Sigmoid AVS.

At Sigmoid, we're building a CometBFT chain for seamless coordination and liquid staking across diverse AI networks. Inspired by @sreeramkannan's vision of 'AVS as the new SaaS', we believe @eigencloud is making significant advancements in decentralized cloud with features like dual staking and enhanced security measures, going beyond traditional approaches seen in technologies like ZK/FHE/TEE co-processors, restaked rollups, oracles, and bridges.

Sigmoid is a high-performance MPC network, utilizing restaked ETH security for trustless actions across chains. There is precedent for this approach, as seen with projects like @sommfinance pioneering coprocessor concepts long before zk coprocessing became viable.

How does Sigmoid stand out? First, Sigmoid combines the flexibility of the Cosmos SDK with the decentralization benefits of EigenLayer. You don't want the MPC nodes to collude, directly or indirectly, and the vast number of geographically distributed EigenLayer operators makes collusion hard.

Secondly, the MPC nodes hold shares of critical keys; together they control access to users' deposits across chains. You want to make sure they are live when required and don't maliciously sign incorrect data. The economic security required to protect large deposits and prevent malice can't simply be subject to the price volatility of a native token; ETH proves to be a hard asset that can guarantee the required security.

Think of us as the safe haven for all your AI assets, specializing in managing stake on AI networks, issuing LSTs for these networks, and orchestrating nodes to support decentralized AI ecosystems. Exciting times ahead! Join us as we pioneer the implementation of EigenLayer's AVS and bring the Cosmos to Ethereum.
[image]
1 reply · 2 reposts · 12 likes · 1.6K views
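The claim above that MPC nodes hold shares of critical keys, and only together control deposits, can be illustrated with a toy Shamir secret-sharing scheme. This is a teaching sketch only: real threshold signing (TSS) never reconstructs the key in one place, and the field size, threshold, and function names here are assumptions for illustration.

```python
# Toy 2-of-3 Shamir secret sharing over a prime field (illustrative only;
# production TSS uses threshold signing, never plain reconstruction).
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus

def split(secret, n=3, t=2):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789)
print(reconstruct(shares[:2]) == 123456789)  # → True: any 2 shares suffice
print(reconstruct(shares[1:]) == 123456789)  # → True
```

A single share reveals nothing about the secret, which is the property the tweet leans on: no individual operator can move user deposits alone.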
sigmoid reposted
Mikerah @badcryptobitch
[image]
8 replies · 4 reposts · 48 likes · 6.3K views
sigmoid reposted
Marlin @MarlinProtocol
coordination networks like @sigmoidwtf are inevitable as Decentralized AI protocols gain traction. if you're a skilled node operator, marlin would love to provide the compute substrate to run your operations on sigmoid.
sigmoid @sigmoidwtf

[Quoted tweet: sigmoid's "Decentralized AI is the start of a decade-long societal shift…" announcement, reproduced in full below.]
6 replies · 13 reposts · 56 likes · 6.9K views
sigmoid @sigmoidwtf
Decentralized AI is the start of a decade-long societal shift. Democratizing access to training data and computational power, allowing model developers and users to be rewarded for their contributions, and ensuring safe and censorship-resistant access to these powerful bots clearly have their merits. Several high-quality teams, such as @ritualnet, @opentensor, @marlinprotocol, @autonolas, @sentient_agi, and @morpheusAIs, are working on making open-source AI a reality.

However, there are a few key challenges:

1) AI networks are usually more complex than standard PoS blockchains, with more node types and roles.

2) Every AI network requires participation from a decentralized community. These participants, or node runners, need devops knowledge, access to compute, and potentially even ML knowledge to be competitive on such networks. This raises the barrier to entry for these AI protocols.

3) Token holders may be required to bridge from one chain to another, convert one token to another, and lock tokens up to secure these networks without access to liquidity.

Sigmoid is here to change that. Sigmoid acts as an integrated coordination layer between AI networks, node operators, and potential node owners who want to run infrastructure for AI networks. It abstracts away the complexity of running these networks by allowing node operators to offer their expertise in running nodes on AI networks to users who want to pay for such nodes. We call these users node owners. For example, teams like @architex_ai, @ionet, @akashnet, @AethirCloud, and @nirmaanai can use Sigmoid to provide distribution for their node operation services. Node owners don't need to worry about payments and quality of service, as Sigmoid acts as the coordination layer between them and their service providers.

Now, the natural question is: why would you trust Sigmoid? Sigmoid itself is a decentralized blockchain built using the Cosmos SDK, with security bootstrapped using ETH on @eigencloud. Payments sent by node owners to node operators are only streamed if the node operators comply with certain quality-of-service requirements. This makes Sigmoid a trustless coordination layer between node owners and operators. Over time, Sigmoid's goal is to abstract away all of the complexity of running crypto-specific node infrastructure so that competent folks from AI can come and contribute to large decentralized AI networks as node operators and domain experts. We, as the crypto community, can support them by providing capital.

Sigmoid goes beyond node orchestration, though. We understand that liquidity is a critical component of any crypto network, and decentralized AI networks are no different. Sigmoid allows users to permissionlessly stake their AI tokens from one unified user interface and receive sigLST tokens on the other end. As large AI networks launch, Sigmoid will integrate them all to become the staking hub for AI projects. Through its monitoring of node operators, it can guarantee quality of service for stakers, and sigLSTs' capital efficiency serves as an extra incentive for users to stake with Sigmoid.

Sigmoid uses TSS to enable secure signing and management of assets on each AI network. This is especially important since many large AI networks don't have smart contracts enabled and aren't expected to have a native DeFi economy.

If you are a cryptography wizard interested in building a high-performance MPC network that can scale to a large number of nodes, DM us now!
[image]
15 replies · 14 reposts · 37 likes · 11.8K views
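The payment model described above, where node owners' payments stream to operators only while quality-of-service requirements hold, can be sketched per billing interval. Everything here (the `Report` fields, the 0.99 uptime threshold, the function name) is a hypothetical illustration, not Sigmoid's actual protocol:

```python
# Hedged sketch: one escrowed payment tick is released per interval
# only if the operator's QoS report passes (illustrative thresholds).
from dataclasses import dataclass

@dataclass
class Report:
    uptime: float    # fraction of the interval the node was live
    signed_ok: bool  # no evidence of maliciously signed data

def stream_tick(escrow, rate, report, min_uptime=0.99):
    """Return (remaining_escrow, amount_released) for one interval."""
    if report.uptime >= min_uptime and report.signed_ok:
        released = min(rate, escrow)
        return escrow - released, released
    return escrow, 0  # QoS failed: payment withheld this interval

escrow, paid = stream_tick(100, 10, Report(uptime=0.999, signed_ok=True))
print(escrow, paid)  # → 90 10
escrow, paid = stream_tick(escrow, 10, Report(uptime=0.5, signed_ok=True))
print(escrow, paid)  # → 90 0 (payment withheld)
```

In the real system the QoS report would itself have to be produced trustlessly (e.g. attested by the chain's validators) rather than handed in as a plain struct.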
sigmoid @sigmoidwtf
@hosseeb Great post! @sigmoidwtf is building an integrated layer to enable users to participate in decentralized AI networks.
0 replies · 0 reposts · 0 likes · 178 views
Haseeb >|< @hosseeb
Don't trust, verify: An Overview of Decentralized AI Inference

Say you want to run a large language model like Llama2-70B. A model this massive requires more than 140GB of memory, which means you can't run the raw model on your home machine. What are your options? You might jump to a cloud provider, but you might not be too keen on trusting a single centralized company to handle this workload for you and hoover up all your usage data. Then what you need is decentralized inference, which lets you run ML models without relying on any single provider. (Note: this is about an 8 minute read. 👇)

The Trust Problem

In a decentralized network, it's not enough to just run a model and trust the output. Let's say I ask the network to analyze a governance dilemma using Llama2-70B. How do I know it's not actually using Llama2-13B, giving me worse analysis, and pocketing the difference? In the centralized world, you might trust that companies like OpenAI are doing this honestly because their reputation is at stake (and to some degree, LLM quality is self-evident). But in the decentralized world, honesty is not assumed; it is verified.

This is where verifiable inference comes into play. In addition to providing a response to a query, you also prove it ran correctly on the model you asked for. But how? The naive approach would be to run the model as a smart contract on-chain. This would definitely guarantee the output was verified, but it is wildly impractical. GPT-3 represents words with an embedding dimension of 12,288. If you were to do a single matrix multiplication of this size on-chain, it would cost about $10 billion at current gas prices; the computation would fill every block for about a month straight. So, no. We're going to need a different approach.

After observing the landscape, it's clear to me that three main approaches have emerged to tackle verifiable inference: zero-knowledge proofs, optimistic fraud proofs, and cryptoeconomics.
Each has its own flavor of security and cost implications.

1. Zero-Knowledge Proofs (ZK ML)

Imagine being able to prove you ran a massive model, but the proof is effectively a fixed size regardless of how large the model is. That's what ZK ML promises, through the magic of ZK-SNARKs. While it sounds elegant in principle, compiling a deep neural network into zero-knowledge circuits that can then be proven is extremely difficult. It's also massively expensive: at minimum, you're likely looking at 1000x cost for inference and 1000x latency (the time to generate the proof), to say nothing of compiling the model itself into a circuit before any of this can happen. Ultimately that cost has to be passed down to the user, so this will end up very expensive for end users. ["Chapter 5: The Cost of Intelligence" by Modulus bit.ly/3xl8wms]

On the other hand, this is the only approach that cryptographically guarantees correctness. With ZK, the model provider can't cheat no matter how hard they try. But it does so at huge cost, making it impractical for large models for the foreseeable future.

Examples: @ezklxyz - ezkl.xyz, @ModulusLabs - modulus.xyz, @gizatechxyz - gizatech.xyz

2. Optimistic Fraud Proofs (Optimistic ML)

The optimistic approach is to trust, but verify. We assume the inference is correct unless proven otherwise. If a node tries to cheat, "watchers" in the network can call out the cheater and challenge them with a fraud proof. These watchers have to be watching the chain at all times and re-running the inferences on their own models to ensure the outputs are correct. These fraud proofs are Truebit-style interactive challenge-response games, where you repeatedly bisect the model execution trace on-chain until you find the error.
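The bisection game just described can be sketched as a binary search over the execution trace: the parties agree on the start state, disagree on the end state, and each round halves the disputed range until a single step remains to be re-executed on-chain. The traces and function name below are made up for illustration:

```python
# Toy bisection over an execution trace: agree at step 0, disagree at the
# end; binary search isolates the first divergent step, and only that one
# step needs to be re-executed on-chain to settle the dispute.
def first_divergence(honest_trace, claimed_trace):
    lo, hi = 0, len(honest_trace) - 1  # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_trace[mid] == claimed_trace[mid]:
            lo = mid   # still in agreement; dispute is to the right
        else:
            hi = mid   # divergence is at or before mid
    return hi  # the single step to re-execute and adjudicate

honest  = [0, 1, 2, 3, 4, 5]
claimed = [0, 1, 2, 9, 9, 9]  # cheater diverges at step 3
print(first_divergence(honest, claimed))  # → 3
```

Each round costs one on-chain commitment, so a trace of n steps settles in O(log n) rounds, which is why the game is cheap unless it is actually played.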
["Truebit - promise that was never delivered" by German Nikolishin - bit.ly/4axdb2W]

If this ever actually happens it's incredibly costly, since these programs are massive and have huge internal states; a single GPT-3 inference costs about 1 petaflop (10^15 floating point operations). But the game theory suggests this should almost never happen (fraud proofs are also notoriously difficult to code correctly, since the code almost never gets hit in production). [GitHub - "AI and Memory Wall" by Amir Gholami bit.ly/4at29fh]

The upside is that optimistic ML is secure so long as there's a single honest watcher who's paying attention. The cost is cheaper than ZK ML, but remember that each watcher in the network is rerunning every query themselves. At equilibrium, this means that if there are 10 watchers, that security cost must be passed on to the user, so the user will have to pay more than 10x the inference cost (or however many watchers there are). The downside, as with optimistic rollups generally, is that you have to wait for the challenge period to pass before you're sure a response is verified. Depending on how the network is parameterized, though, you might be waiting minutes rather than days.

Examples: @OraProtocol - ora.io, @gensynai (although currently underspecified) - gensyn.ai

3. Cryptoeconomics (Cryptoeconomic ML)

Here we drop all the fancy techniques and do the simple thing: stake-weighted voting. A user decides how many nodes should run their query, the nodes each reveal their responses, and if there's a discrepancy among responses, the odd one out gets slashed. Standard oracle stuff; it's a more straightforward approach that lets users set their desired security level, balancing cost and trust. If Chainlink were doing ML, this is how they'd do it. The latency here is fast: you just need a commit-reveal from each node. If this is getting written to a blockchain, then technically this can happen in two blocks.
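A minimal sketch of that commit-reveal, stake-weighted scheme (node names, stakes, and "inference" outputs are invented for illustration):

```python
# Commit-reveal with stake-weighted majority and slashing (toy sketch).
import hashlib
import secrets

def commitment(output, salt):
    # Binding commitment: hash of salt || output.
    return hashlib.sha256(salt + output.encode()).hexdigest()

stakes  = {"a": 10, "b": 10, "c": 10}              # node -> stake
outputs = {"a": "cat", "b": "cat", "c": "dog"}     # each node's inference
salts   = {n: secrets.token_bytes(16) for n in stakes}

# Phase 1: each node commits before seeing anyone else's answer,
# so nobody can copy the majority for free.
commits = {n: commitment(outputs[n], salts[n]) for n in stakes}

# Phase 2: reveal; check every reveal against its commitment.
for n in stakes:
    assert commitment(outputs[n], salts[n]) == commits[n]

# Phase 3: stake-weighted majority wins; dissenters are slashed.
tally = {}
for n, out in outputs.items():
    tally[out] = tally.get(out, 0) + stakes[n]
winner = max(tally, key=tally.get)
slashed = [n for n, out in outputs.items() if out != winner]
print(winner, slashed)  # → cat ['c']
```

The commit phase is what makes the two-block latency claim work: one block for commitments, one for reveals and tallying.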
["Commit-Reveal scheme in Solidity" by Srinivas Joshi bit.ly/3xdZcRp]

The security, however, is the weakest. A majority of nodes could rationally choose to collude if they were wily enough. As a user, you have to reason about how much these nodes have at stake and what it would cost them to cheat. That said, using something like EigenLayer restaking and attributable security, the network could effectively provide insurance in the case of a security failure. ["Introducing Programmable Trust + EigenLayer Roadmap" by Sreeram Kannan youtu.be/-aK6VrmK0yk?t=…]

But the nice part of this system is that the user can specify how much security they want. They could choose to have 3 nodes or 5 nodes in their quorum, or every node in the network; or, if they want to YOLO, they could even choose n=1. The cost function here is simple: the user pays for however many nodes they want in their quorum. If you choose 3, you pay 3x the inference cost.

The tricky question here: can you make n=1 secure? In a naive implementation, a lone node should cheat every time if no one is checking. But I suspect that if you encrypt the queries and do the payments through intents, you might be able to obfuscate from the node that it is the only one responding to the task. In that case you might be able to charge the average user less than 2x the inference cost. Ultimately, the cryptoeconomic approach is the simplest, the easiest, and probably the cheapest, but it's the least sexy and in principle the least secure. But as always, the devil is in the details.

Examples: @ritualnet (although currently underspecified) - ritual.net, @Atoma_Network - atoma.network

Why Verifiable ML is Hard

You might wonder why we don't have all this already. After all, at bottom, machine learning models are just really large computer programs, and proving that programs were executed correctly has long been the bread and butter of blockchains.
This is why these three verification approaches mirror the ways that blockchains secure their block space: ZK rollups use ZK proofs, optimistic rollups use fraud proofs, and most L1 blockchains use cryptoeconomics. It's no surprise that we arrived at basically the same solutions. So what makes this hard when applied to ML?

ML is unique because ML computations are generally represented as dense computation graphs that are designed to be run efficiently on GPUs. They are not designed to be proven. So if you want to prove ML computations in a ZK or optimistic environment, they have to be recompiled into a format that makes this possible, which is very complex and expensive.

The second fundamental difficulty with ML is nondeterminism. Program verification assumes that the outputs of programs are deterministic. But if you run the same model on different GPU architectures or CUDA versions, you'll get different outputs. Even if you force each node to use the same architecture, you still have the problem of randomness used in algorithms (the noise in diffusion models, or token sampling in LLMs). You can fix that randomness by controlling the RNG seed. But even with all that, you're still left with one final menacing problem: the nondeterminism inherent in floating point operations. [Wiki: Random number generation bit.ly/3xbJ5nn]

Almost all operations on GPUs are done on floating point numbers. Floating points are finicky because they're not associative; that is, it's not true that (a + b) + c is always the same as a + (b + c) for floating points. Because GPUs are highly parallelized, the ordering of additions or multiplications might differ on each execution, which can cascade into small differences in output. This is unlikely to affect the output of an LLM given the discrete nature of words, but for an image model it may result in subtly different pixel values, leading two images to not match perfectly.
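The floating-point non-associativity driving this nondeterminism is easy to demonstrate:

```python
# IEEE 754 addition rounds after each step, so grouping changes the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # → False
print((a + b) + c, a + (b + c))    # → 0.6000000000000001 0.6
```

A parallel GPU reduction effectively picks its own grouping each run, which is why two honest nodes can disagree at the last bit even on identical inputs.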
[Stack Overflow: Is floating-point addition and multiplication associative? bit.ly/4auhxb7]

This means you either need to avoid using floating points, which is an enormous blow to performance, or you need to allow some laxity in comparing outputs. Either way, the details are fiddly, and you can't exactly abstract them away. (This is why, it turns out, the EVM doesn't support floating point numbers, although some blockchains like NEAR do.) [Ethereum Stack Exchange: Why was support for floating point numbers not natively added to Solidity bit.ly/43DyKge] [GitHub: README WebAssembly on NEAR bit.ly/3TA0BcC]

In short, decentralized inference networks are hard because all the details matter, and reality has a surprising amount of detail.

In Conclusion

Right now blockchains and ML clearly have a lot to say to each other. One is a technology that creates trust, and the other is a technology in sore need of it. While each approach to decentralized inference has its own tradeoffs, I'm very interested in seeing what entrepreneurs do with these tools to build the best network out there. But I did not write this piece to be the last word; I'm thinking about these ideas in real time and having a lot of vibrant debates with people. I've always found writing is the best way to test my ideas. If you're building something in this space, reach out! I'd always love to learn what you're working on, and if you can prove me wrong, all the better.

Thanks to @ilblackdragon, @caseykcaruso, @sreeramkannan, and @cheryldchan for reviewing drafts of this piece.

Disclaimer: This article represents the subjective views of the author and not the views of Dragonfly or its affiliates. Funds managed by Dragonfly may have invested in some of the protocols and cryptocurrencies mentioned herein. This article is not investment advice and should not be used as the basis for any investment or relied upon in evaluating the merits of any investment.
[YouTube video] [images]
73 replies · 174 reposts · 684 likes · 259.6K views