The Interfold (formerly Enclave)

881 posts


@theInterfold

Private Inputs. Collective Outcomes. A distributed network for Confidential Coordination.

EVM · Joined August 2024
139 Following · 1.8K Followers
Pinned Tweet
The Interfold (formerly Enclave)
Enclave is now The Interfold. What we built isn’t a hardware enclave, but a distributed network for confidential coordination. The Interfold names that network. 🌐
1 reply · 14 reposts · 39 likes · 3K views
The Interfold (formerly Enclave)
Private voting fails if you can still prove your choice. The Interfold uses vote masking to break that proof. Read the technical deep-dive on vote masking and receipt-freeness by @ctrlc03
1 reply · 4 reposts · 11 likes · 301 views
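The masking idea can be sketched with a toy rerandomizable, additively homomorphic encryption scheme. This is illustrative only (tiny parameters, no ballot-validity proofs, and not The Interfold's actual construction): a voter who keeps their encryption randomness as a "receipt" can no longer match it against the rerandomized ciphertext the network actually tallies, which is the essence of receipt-freeness.

```python
# Toy "exponential ElGamal" over a tiny prime-order group -- illustrative
# only. Real systems use elliptic curves plus zero-knowledge proofs.
import secrets

P = 1019   # safe prime: P = 2*Q + 1
Q = 509    # prime order of the subgroup of squares
G = 4      # generator of that subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1          # secret key
    return x, pow(G, x, P)                    # (sk, pk)

def encrypt(pk, m, r=None):
    """Enc(m; r) = (G^r, G^m * pk^r). Revealing r acts as a 'receipt'."""
    if r is None:
        r = secrets.randbelow(Q - 1) + 1
    return (pow(G, r, P), (pow(G, m, P) * pow(pk, r, P)) % P), r

def rerandomize(pk, ct):
    """Multiply in a fresh encryption of 0: same plaintext, new randomness.
    The voter's receipt r no longer matches the published ciphertext."""
    (a, b), _ = encrypt(pk, 0)
    return (ct[0] * a % P, ct[1] * b % P)

def add(ct1, ct2):
    """Homomorphic addition: Enc(m1) * Enc(m2) = Enc(m1 + m2)."""
    return (ct1[0] * ct2[0] % P, ct1[1] * ct2[1] % P)

def decrypt(sk, ct, max_m=100):
    """Recover G^m, then brute-force the small discrete log."""
    gm = ct[1] * pow(ct[0], Q - sk, P) % P    # ct[0]^(-sk) in the subgroup
    for m in range(max_m + 1):
        if pow(G, m, P) == gm:
            return m
    raise ValueError("tally out of range")

sk, pk = keygen()
votes = [1, 0, 1, 1]
ballots = [rerandomize(pk, encrypt(pk, v)[0]) for v in votes]
tally = ballots[0]
for ct in ballots[1:]:
    tally = add(tally, ct)
print(decrypt(sk, tally))   # 3
```

Note that only the aggregate is ever decrypted; individual rerandomized ballots stay opaque even to the voter who cast them.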
The Interfold (formerly Enclave)
Why is cross-institutional data sharing so hard? Because sharing data = losing control over it. So organizations don’t collaborate.

Interfold's infrastructure flips the model:
→ Data stays private
→ Computation is shared
→ Results are provably correct
0 replies · 0 reposts · 3 likes · 104 views
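The "data stays private, computation is shared" model can be illustrated with plain additive secret sharing. A minimal sketch with hypothetical numbers, not the protocol's real machinery:

```python
# Toy additive secret sharing: each party splits its private value into
# random shares. Servers only ever see shares, yet the sum of the
# per-server totals equals the sum of all private inputs.
import secrets

MOD = 2**61 - 1   # any large modulus works for this sketch

def share(value, n=3):
    """Split `value` into n shares that sum to it mod MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Three hospitals with private patient counts (hypothetical numbers).
private_inputs = [120, 347, 98]

# Each input is sharded across three non-colluding servers.
server_shares = [[], [], []]
for v in private_inputs:
    for server, s in zip(server_shares, share(v)):
        server.append(s)

# Each server sums what it holds; no server learns any individual input.
partial = [sum(s) % MOD for s in server_shares]

# Only the recombined result is public.
total = sum(partial) % MOD
print(total)   # 565
```

Each share is uniformly random on its own, so any single server's view is statistically independent of the inputs; only the final recombination reveals the aggregate.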
The Interfold (formerly Enclave)
Privacy isn’t a feature at the Interfold - it’s the execution environment. 🖥️💫🖥️
0 replies · 1 repost · 6 likes · 127 views
The Interfold (formerly Enclave)
We talk about privacy as exposure. But execution concentration may be the bigger issue. Change execution, change coordination.
0 replies · 1 repost · 7 likes · 249 views
The Interfold (formerly Enclave) retweeted
The Interfold (formerly Enclave)
Enclave Team Reads 📘 "Laws of (New) Media" by @amicusadastra Reminder that every technology enhances something, obsolesces something, retrieves something from the past, and eventually reverses into its opposite. A useful lens for thinking about encrypted execution and how confidential coordination reshapes trust.
Andrew McLuhan@amicusadastra

many focusing on ai amplifying and not on what it obsolesces, retrieves, reverses. a16z.news/p/laws-of-new-…

0 replies · 0 reposts · 7 likes · 234 views
The Interfold (formerly Enclave)
Interesting architecture for what AI-mediated governance could look like. Additional thing worth highlighting: private aggregation infrastructure doesn't just enable voting. The same pattern supports sealed auctions, data pools, collaborative AI training, and other forms of confidential coordination.
Auryn@auryn_macmillan

## AI voting proxies require two layers: private inference and private aggregation.

Vitalik recently wrote about AI "shadows" — LLM proxies trained on your own corpus that vote on your behalf with near-100% participation. He's right that cryptography is the key enabler. But I want to make that concrete by separating it into these two layers.

But first, a diagnosis from personal experience.

## The DAO governance trap

I'm a DAO advocate. I've spent more than a decade building and participating in them. But I've also watched the same failure mode play out repeatedly — and it stems not from the DAO model itself, but from a specific dogma within it: the wholesale rejection of hierarchy.

DAOs are structurally resistant to hierarchy. The ethos is flat: everyone votes on everything. In theory this is beautiful. In practice it's paralyzing. What actually happens: DAOs end up voting only on large budget items and protocol upgrades, while deferring entirely to labs and core teams for day-to-day decisions. The org atrophies into a rubber stamp with a treasury.

The deeper irony is that rejecting hierarchy doesn't eliminate it. It just makes it informal and unaccountable. Power concentrates anyway, without the transparency that explicit structure provides.

"Everyone votes on everything" isn't a bad governance philosophy. It was always a hardware problem. Human bandwidth is finite. Attention is scarce. Flat structures collapsed back into de facto hierarchies because they had no other choice.

AI shadows change this calculus.

## Continuous governance

If your vote is cast by a proxy that is trained or tuned on your values, your priorities, and your evolving perspective, and casting it costs you nothing, then the participation constraint is radically changed. Votes don't need to be reserved for high-stakes moments. They can be granular, frequent, and fast.

Imagine a DAO where operational decisions resolve in minutes, not weeks. Where the organisation's direction is a continuous live aggregate of its members' preferences, not a series of discrete referendums that most people ignore. Where "everyone votes on everything" is not an aspirational gesture but a literal description of how the org runs day to day.

This connects to something Karl Popper and David Deutsch identified as the core measure of good governance: not who holds power, but how easily bad policy can be corrected and bad leadership removed. Good governance is a process of conjecture and the elimination of error. The question is not "did we get the right answer?" but "can we fix it quickly when we get the wrong one?"

Frequent, fine-grained, direct participation is the most natural expression of this principle. The faster errors surface and the lower the cost of correcting them, the healthier the system. A governance layer where votes resolve in minutes, on granular day-to-day decisions, is structurally more Popperian than one where correction requires waiting four years for an election or months for a DAO proposal to pass quorum.

This is the Open Society, operationalised.

## The cryptographic stack that makes it private

For this to work at scale, and especially on sensitive decisions, inference and aggregation have to be private. Here's what that architecture looks like.

### Phase 1: Private inference

Your AI shadow needs to run somewhere. You probably don't have the hardware to run a frontier model locally. Several options exist, each with a different trust model:

* **"Trust-me-bro" remote models.** You send your data to a provider and trust them not to look. Simple, fast, available today. The weakest privacy guarantee.
* **Local models.** You run the model yourself. Personal privacy, no third-party trust required. Constrained by your hardware and the quality of models that fit on it.
* **TEEs (Trusted Execution Environments).** The provider runs the model inside a hardware enclave. You trust the chip manufacturer rather than the provider. Stronger than pure trust, weaker than cryptographic guarantees.
* **MPC (Multi-Party Computation).** Inference is split across multiple parties who collectively compute the result without any single party seeing the full picture. Strong guarantees, significant coordination overhead.
* **FHE (Fully Homomorphic Encryption).** Your data stays encrypted throughout inference. The provider computes over ciphertext and returns a result only you can read. The strongest cryptographic guarantee, and the most computationally expensive.

These approaches will compete. The right choice depends on the user's trust model, hardware constraints, and tolerance for latency and cost. That choice belongs entirely to the user and has no bearing on the aggregation layer.

The decrypt moment is where periodic human review naturally lives. You're not approving every vote — that would defeat the purpose. Rather, you're occasionally checking in with your shadow, discussing how your priorities, perspectives, and values have evolved, correcting it where its actions have diverged from your preferences, and letting those corrections inform its future behaviour. It's less a checkpoint and more a feedback loop. The proxy itself improves through error correction over time.

### Phase 2: Private aggregation

Your proxy encrypts your vote and contributes it to a collective computation. This is where something like [The Interfold](theinterfold.com) comes in. The Interfold performs the computation over encrypted inputs and produces publicly verifiable outputs. All inputs are encrypted. The only valid output proof requires consuming every encrypted input and running the expected computation. A bonded threshold committee of node operators coordinates to decrypt the final output.

The result: mathematically guaranteed correct aggregate output. No party — not the inference provider, not other voters, not the aggregation layer — ever sees individual inputs.

The two phases are cryptographically decoupled. The handoff is simple: decrypt locally, re-encrypt under the aggregation scheme. The plaintext moment is brief and intentional. You are the trusted party in possession of your own data.

User
↓ AI shadow inference
↓ human audit loop
↓ encrypted preference signal
↓ collective computation
↓ public result

## Why this matters for Vitalik's chaotic era

Vitalik argues that in a chaotic era, democratic tools shouldn't try to bind decisions. They should find consensus and give distributed groups a voice that hard-power actors can listen to.

AI shadows with private aggregation serve this goal. But they go further. They don't just make participation possible at scale. They make granular, continuous participation possible. The bandwidth constraint that has always forced democratic systems toward blunt, infrequent, high-stakes votes is lifted.

## Parallel societies

It would be naive to expect existing institutions to adopt any of this soon. Nation-states, corporations, and legacy DAOs all have strong incentives to preserve existing power structures. Waiting for them to change is not a strategy.

But there is no reason to wait. These tools can be deployed in parallel, now, within communities that choose to use them. The history of institutional change is largely a history of parallel structures that proved their worth and were eventually copied or absorbed. The goal is not to replace existing institutions overnight. It is to demonstrate, at small scale, that a more participatory and error-correcting form of governance is not only possible but practical.

At @web3privacy's CC2 and @EFDevcon last year, @ArnaudS proposed his personal litmus test for Ethereum, and both @jarradhope_ and @satorinakamoto spoke about the promise of parallel societies. These two ideas have stayed with me.

My litmus test for Ethereum is this: its real-world capacity to bring about the flourishing of parallel societies, ultimately in pursuit of the Open Society. Private, direct, participatory democratic systems seem like a significant step in that direction.

The cryptographic foundation exists today. The question is whether we build and use it.

theinterfold.com | docs.theinterfold.com

0 replies · 0 reposts · 6 likes · 228 views
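The "bonded threshold committee" in Phase 2 rests on threshold cryptography. A toy Shamir t-of-n sketch (illustrative only: a real committee holds shares of a decryption key rather than a bare integer, and adds bonding and verifiability on top):

```python
# Toy Shamir t-of-n secret sharing over a prime field, sketching how a
# threshold committee can jointly hold a secret: any t operators can
# reconstruct it, while fewer than t learn nothing.
import secrets

P = 2**127 - 1   # a Mersenne prime, used as the field modulus

def split(secret, n, t):
    """Shares are points on a random degree-(t-1) polynomial f with
    f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

key = 123456789
shares = split(key, n=5, t=3)
assert reconstruct(shares[:3]) == key    # any 3 of 5 operators suffice
assert reconstruct(shares[2:]) == key    # a different quorum works too
```

With fewer than t shares the polynomial is undetermined, so no strict minority of the committee can decrypt on its own; that is the property that lets decryption of the final output be a deliberate, collective act.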
The Interfold (formerly Enclave) retweeted
Web3Privacy Now@web3privacy·
Manifest yourself for a public good
Auryn@auryn_macmillan

[Thread text identical to the @auryn_macmillan thread quoted above.]

0 replies · 1 repost · 7 likes · 698 views
The Interfold (formerly Enclave) retweeted
Auryn@auryn_macmillan·
[Thread text identical to the @auryn_macmillan thread quoted above.]
vitalik.eth@VitalikButerin

## Egalitarianism and pluralism

One underlying ideology of democratic things is a strong version of egalitarianism: the idea that we are all equal, not just in some Christian metaphysical sense of having equal dignity under God, but in some more concrete sense of all having equally valuable things to say and deserving an equal voice in the world. It is sometimes considered impolite to disagree with this directly. But at the same time, all major political tribes have their rhetoric for rejecting it. Some believe not in egalitarianism, but in _meritocracy_: inequality that comes from differences in performance, effort and skill is acceptable, inequality that comes from inherited title is not. Others believe in "expertise" and denounce "populism". Years ago, there was an abortive trend to re-embrace credentialism (eg. I remember the attempt to push people to call Jill Biden "Dr. Jill Biden"). And still others don't give a crap about even the pretense of egalitarianism, and seek to build 150-meter statues to ancient Roman and Greek gods and express pride in unbridled domineering masculinity.

I think it is true that some have more expertise than others, and this expertise should be listened to. And even the "second line of defense" comfortable fiction - that people who are higher on some skills and virtues might be lower on other more subtle and immeasurable ones - is on average false. But democratic things are very valuable despite this, for two reasons:

* **Egalitarianism as a floor, not as an absolute.** If you take the above arguments too seriously, you run into the problem that you leave many people with no voice at all. This is a dangerous position: it means that there is no disincentive at all to impose ruinous outcomes on them. Chickens are far stupider than humans. But if I could give each chicken even 0.01 votes on agriculture law, in some way that effectively captures their preferences, would I? Hell yes.
* **Pluralism.** Democratic things (as well as eg. ideas such as free speech) are not just about providing a floor at the bottom, they are also about diversifying the top. A goal is to create space for alternative groups of elites, that are able to challenge existing elites. This is where pluralistic voting models, that focus on finding "consensus across difference", are so valuable: they inherently empower diverse viewpoints, and prevent an intellectual or decision-making ecosystem from being overly dominated by monoculture.

See also vitalik.eth.limo/general/2021/0… , where I argue that the Gini is a bad inequality index because it ultimately conflates two very different problems (floor too low, top too concentrated), and actually we need to treat both separately. Also see this piece from Ruxandra: writingruxandrabio.com/p/equality-as-…

## AI

The main challenge in building new institutions of any type is that people are lazy to change their habits. Even existing nation-state voting only survives because (i) it's only one bit of info per four years, and (ii) it has hundreds of years of historical legitimacy. This makes a lot of work more difficult. For example, an alternative approach to dealing with the chaotic era is to find "islands of stability", and build more holistic institutions at smaller scales, with the goal of copying or adapting them to larger contexts later. The problem is, even there, getting these institutions to succeed enough that others want to copy them takes too long, compared to a fast-moving world. So what do we do?

One benefit of AI is that it potentially allows us to make much higher-bandwidth provision of input literally zero-cost. LLM "shadows" of ourselves, fine-tuned on our corpus of both public and private actions, can make decisions on our behalf. This opens the door not just to higher-bandwidth feedback with near-100% participation rates (if done as a software default), but also fundamentally new possibilities.

For example, a weakness of distributed decision making is that it cannot take into account secret information. This is often a justification for centralizing key decisions. In a chaotic era, the set of such situations is magnified. But LLM shadows of ourselves *could* make votes based off of private information, thanks to the magic of cryptography.

## Conclusions

Today's disillusionment with democratic things is real. But what is also becoming real very rapidly is disillusionment with the alternative, where various groups of elites visibly and openly don't care about the effects of their actions on regular people. Where "you can just do things" slides into "you can just bomb people", or "you can just openly talk proudly about how superintelligent AI you are building will bring unemployment to everyone", or...

And so we need to start the next round of this cycle sooner. It needs to start on realistic principles, learning from the failures of the previous era, but it does need to happen.

2 replies · 4 reposts · 25 likes · 4.2K views
The Interfold (formerly Enclave)
Join us for 𝙼𝚞𝚕𝚝𝚒𝚙𝚕𝚊𝚢𝚎𝚛 𝙿𝚛𝚒𝚟𝚊𝚌𝚢 - a conversation on privacy-first infrastructure

We’ll discuss:
- messaging without metadata leakage
- analytics without tracking users
- infrastructure built for privacy from the start

with @Cryptic_cm, co-founder of Session @session_app and @auryn_macmillan from The Interfold

Tune in tomorrow 🎙️ x.com/i/spaces/1AxRn…
2 replies · 6 reposts · 21 likes · 5.5K views
The Interfold (formerly Enclave) retweeted
Auryn@auryn_macmillan·
̶E̶n̶c̶l̶a̶v̶e̶ ⟶ The Interfold

When explaining to people what Enclave is, we kept running into this issue where the word "enclave" is used more or less interchangeably with TEE and HSM, which led many people to misunderstand what our protocol actually does and how it works. The Interfold offers fundamentally different trust properties from hardware-based privacy solutions, rooting trust in cryptography and economics rather than trusted vendors.

The new name draws from ideas of folded space and hidden dimensionality. We took inspiration from Dune's "foldspace", the "pocket universes" in Death’s End, and from manifold geometry.

The core idea: The Interfold allows us to co-create and collectively utilize ephemeral spaces for encrypted execution; it allows many independent parties to collaboratively compute outputs from confidential inputs, with strong guarantees that the inputs and intermediate states remain private.
The Interfold (formerly Enclave)@theInterfold

Enclave is now The Interfold. What we built isn’t a hardware enclave, but a distributed network for confidential coordination. The Interfold names that network. 🌐

2 replies · 5 reposts · 19 likes · 1.1K views