Sabitlenmiş Tweet
Nesa
379 posts

Nesa
@nesaorg
The World's AI Blockchain https://t.co/5Yp4GWD32j
Katılım Ocak 2024
2 Takip Edilen331.1K Takipçiler

Developers building on Nesa now have access to over 100k AI models to build on and experiment with.
From testing in the Playground to production-ready applications, developers have all the tools they need to explore, compare, and deploy across a massive AI landscape.
One ecosystem. Endless opportunities.
English

In early 2023, a group of Samsung engineers accidentally leaked sensitive internal information by pasting code and technical documents into ChatGPT while troubleshooting problems. The engineers were simply trying to work faster by using the model to review source code, summarize internal documents, and help debug issues. But in doing so, proprietary data was entered into an external system that the company did not fully control, and that did not guarantee total encryption of user data.
At first glance this looked like a simple user mistake. But this incident exposes a deeper structural issue with how most AI systems operate today. Modern AI systems typically require data to be visible in plaintext during execution. Even when data is encrypted in storage and during transmission, it is usually decrypted when the model processes it.
In many consumer AI tools, those environments are operated by third parties. The system may log inputs for debugging, retain data for improvement, or process prompts within shared infrastructure. Even when strong policies exist, organizations must ultimately trust that sensitive information will not be exposed or reused.
For enterprises handling proprietary code, research data, or confidential documents, that trust boundary is difficult to accept.
This is why incidents like the Samsung case happen. The problem is not simply that employees used AI tools. It is that the underlying architecture requires sensitive data to become readable during execution. This trust and security problem is why AI has not reached its maximum potential, particularly within enterprise settings.
Nesa was built to solve this.
On Nesa AI, privacy is enforced at the execution layer through Equivariant Encryption. Computation can occur on encrypted data, reducing the visibility surface during runtime. Sensitive inputs and models do not need to be exposed in plain text to infrastructure operators in order for inference to occur. Therefore, instead of relying entirely on privacy policies and user behavior, the architecture itself eliminates the risk of exposure.

English

AI capability is advancing faster than ever.
Models are becoming more capable and efficient every day, benchmarks continue to improve, and new applications are emerging constantly.
But despite this, organizations are still cautious about deploying AI at scale.
Not because the models aren’t capable, but because questions remain around whether they can be trusted with sensitive information and user data.
Enterprises need systems that guarantee security, privacy, and governance before AI can truely deploy at scale.
That guarantee is what Nesa provides.
English

Exploring the Playground? Running nodes? Building new DAIs?
If you’re interested in what’s happening across the Nesa ecosystem, join the Discord to get all the latest info👇
discord.com/invite/nesa

English

The best way to understand a new ecosystem is to explore it.
The Nesa Playground gives you access to 100,000+ models and hundreds of DAIs built on our infrastructure.
Test prompts, compare outputs and experiment with different AI systems, all protected by our proprietary Equivariant Encryption technology.
Start exploring the Nesa ecosystem today 👉 beta.nesa.ai
English

In traditional AI stacks, sensitive user data becomes readable during inference, therefore introducing the potential for widespread exposure from a single infrastructure level breach.
On Nesa, Equivariant Encryption keeps data protected even during execution, therefore significantly reducing the potential blast radius of a breach by eliminating the risk of widespread exposure.
This is the future of security.

English

In 2023, Okta, a widely used enterprise identity and access management platform, suffered a significant breach when attackers accessed user data through a compromised support account.
From there, the attackers were able to view customer-uploaded troubleshooting files, some of which contained sensitive session tokens that could be used to impersonate users and move downstream into other accounts and systems.
The incident exposed an uncomfortable reality that exists across much of modern enterprise infrastructure.
Sensitive data may appear protected from the outside, but it often becomes visible inside operational environments. Support teams, monitoring tools, and debugging systems are granted access so they can diagnose issues and maintain uptime. That visibility is often operationally necessary, but it also introduces meaningful risk.
As seen in the Okta incident, when a privileged operational account is compromised, the exposure surface can be massive. Because sensitive information is often readable during runtime or within support environments, the breach can extend beyond the compromised account itself and expose data across the broader ecosystem.
This pattern appears across industries. Encryption at rest and in transit is standard practice. But once data enters an execution or support environment in plaintext, it becomes subject to the access controls and trust assumptions of that layer.
The risk is not simply poor password hygiene or a single compromised credential. It is architectural. If sensitive information must be readable in order for systems to function, then privileged access becomes a systemic boundary to true privacy and security.
At Nesa, we're working to eliminate that boundary. Privacy is enforced at the execution layer through Equivariant Encryption, which allows computation to occur on encrypted data. This reduces the need for sensitive information to be exposed in plaintext during runtime. Platforms can retain expected uptime guarantees and performance, but operators do not automatically gain unilateral visibility into the underlying data.
This does not eliminate credential compromise or misconfiguration. No system can prevent every breach scenario. What it can do is reduce trust concentration and significantly narrow the surface where sensitive data is readable.
In practical terms, this means that even if an operational account were compromised, the amount of accessible plaintext data would be materially reduced. The potential blast radius becomes smaller because confidentiality is enforced structurally, not solely through policy or access controls.

English

AI capability is not the bottleneck anymore.
Models are improving at an unprecedented rate, performance benchmarks are broken every few months, and new tools ship faster than most organizations can evaluate them.
Technical progress is not what’s slowing universal AI adoption.
Security and data privacy are.
Enterprises are not asking whether AI can generate insights. They are asking whether sensitive data will remain protected during execution. They are asking who can see what. They are asking what happens under audit. They are asking how risk is contained when systems fail.
Until those questions have structural answers, AI will remain confined to pilots and low-risk environments.
Universal adoption requires stronger guarantees and confidence, not more blind power.
That is why the next phase of AI will not be defined by capability alone, but by infrastructure that can enforce privacy and control by design.
On Nesa**,** privacy is enforced at the execution layer through Equivariant Encryption, meaning computation can occur on encrypted data, hence reducing exposure during runtime and lowering the trust placed on operators.
This, is the next phase of AI adoption.
English

To all the builders, operators, and early adopters.
The Nesa community is growing.
Join our Discord and get involved 👇
discord.com/invite/nesa

English

In 2025, the AgentFlayer exploit highlighted a new category of risk in AI systems.
It was not a traditional breach involving stolen credentials or broken encryption. Instead, it demonstrated how an autonomous AI agent could be manipulated into executing unintended actions by processing malicious instructions embedded inside content it automatically processes.
The incident did not expose a flaw in one specific integration. It revealed a structural weakness in how many modern AI agents are built.
Today’s agents are no longer passive language models. They read documents automatically, scan emails, connect to SaaS tools, access cloud storage, and execute actions across multiple systems. To be useful, they are granted meaningful permissions. That capability creates value, but it also expands the attack surface.
Most agent environments operate in a trusted, plaintext execution model. Data is encrypted at rest and in transit, but it is typically decrypted during inference so the model can process it. That runtime visibility is where potential risk lies.
In a zero-click scenario like AgentFlayer, an attacker can embed hidden instructions inside a document that the AI processes automatically. Because the agent may have access to connected systems such as Google Drive, Slack, or GitHub, it can potentially be influenced to retrieve sensitive information or perform unintended actions. The user does not need to click a malicious link or approve a suspicious request. Therefore, the core issue is that during execution, the system may have access to sensitive data and broad privileges, meaning whoever controls the execution environment ultimately controls access to that data.
Now consider a different architectural approach.
If a system is designed so that data remains protected during execution, the risk profile changes. On Nesa, privacy is enforced at the execution layer through Equivariant Encryption. Computation can occur on encrypted data, reducing the visibility surface during runtime. Sensitive inputs and models do not need to be exposed in plain text to infrastructure operators for inference to occur.
This does not eliminate prompt injection, logic manipulation, or tool misuse. Encryption alone cannot prevent an agent from being instructed to take an unintended action if it has been granted that permission.
What it does do is materially reduce confidentiality risk. By limiting access to readable sensitive data during execution and reducing unilateral visibility at the infrastructure layer, the potential blast radius of a successful manipulation attempt is constrained.
As AI agents become more autonomous and embedded into enterprise workflows, security must move deeper into architecture. The goal is not to claim invulnerability. It is to reduce trust concentration and contain systemic exposure when failures occur.
AgentFlayer was not simply a one-off exploit. It was a reminder that in autonomous systems, execution-layer design determines how risk propagates.
English

Want to get more out of Nesa?
Why not try Nesa Pro?
Enjoy unrestricted access to the playground, including:
⬟ Access to all models
⬟ Ability to create your own models
⬟ Increased testnet faucet claim
Try it today or join the waitlist at beta.nesa.ai
English


