Papyboo

3.3K posts

@davidbm

Just an old 🤡 lost in web3.

Paris, France · Joined April 2008
2.8K Following · 249 Followers
Papyboo reposted
Nesa @nesaorg
Something is coming sooner than you think…
Papyboo reposted
Nesa @nesaorg
AI capability is advancing faster than ever. Models are becoming more capable and efficient every day, benchmarks continue to improve, and new applications are emerging constantly. Despite this, organizations remain cautious about deploying AI at scale. Not because the models aren't capable, but because questions remain about whether they can be trusted with sensitive information and user data. Enterprises need systems that guarantee security, privacy, and governance before AI can truly be deployed at scale. That guarantee is what Nesa provides.
Papyboo reposted
Nesa @nesaorg
AI is not limited by capability anymore. It’s limited by trust. When data and models remain protected during execution, experimentation becomes safer, audits can occur without threat to user data, and real-world applications can scale with confidence. This is true adoption.
Papyboo reposted
Nesa @nesaorg
Whose AI journey started with @OpenAI? Let me know, I’m testing something 👇
Papyboo reposted
Nesa @nesaorg
Question for the builders out there. What matters most to you when choosing the infrastructure you build on? Let us know 👇
[image attached]
Papyboo reposted
Nesa @nesaorg
While a standard language model generates text in response to a prompt, an agent can execute tools, install extensions, access files, and take actions across systems. As agent ecosystems grow, we are seeing more examples of malicious or poorly designed “skills” that can install malware or exfiltrate data once a user manually approves execution.

This is not necessarily the result of bad actors alone. It reflects a structural shift in how AI systems operate. Agents combine language reasoning with tool execution and often run with meaningful system privileges. When extensions or skills are added to these environments, they may inherit broad access to files, credentials, APIs, or other sensitive resources. Even if execution requires user approval, most users cannot realistically audit complex commands in real time, so the approval step becomes procedural rather than protective.

The result is a new attack surface specific to agent-driven environments. A malicious or compromised extension does not simply produce incorrect outputs; it can trigger real actions inside the system where the agent is deployed. As agents become more autonomous and persistent, the potential consequences of a single failure increase.

This is where infrastructure design becomes critical. On Nesa, privacy and control are enforced at the execution layer through Equivariant Encryption. Computation can occur on encrypted data, reducing data visibility during runtime. Sensitive user data and AI models do not need to be exposed to infrastructure operators for agents to function. This does not eliminate the possibility of malicious code or software bugs; no infrastructure can make that guarantee. What it does is reduce trust concentration and limit unilateral access to data during execution. In agent environments where actions are continuous and autonomous, that architectural separation meaningfully lowers systemic exposure risk.

As AI agents become more capable, security considerations must move from prompt-level safeguards to infrastructure design. How tasks are executed will increasingly determine how effectively risk is contained.
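The point about approval being "procedural rather than protective" can be made concrete with a small sketch. This is not Nesa's implementation; the tool names and paths are hypothetical. It shows a gate where user approval is the last check among several structural ones, rather than the only line of defense for an agent's tool calls.

```python
# Illustrative sketch (not Nesa's implementation): a minimal execution
# gate that layers structural checks before user approval, so a click
# on "approve" alone cannot authorize a dangerous agent action.

ALLOWED_TOOLS = {"read_file", "search_docs"}       # hypothetical tool allowlist
SENSITIVE_PATHS = ("/etc", "~/.ssh", "~/.aws")     # paths the agent may never touch

def gate_tool_call(tool: str, args: dict, user_approved: bool) -> bool:
    """Return True only if the call passes every structural check."""
    if tool not in ALLOWED_TOOLS:                  # allowlist, not denylist
        return False
    target = str(args.get("path", ""))
    if any(target.startswith(p) for p in SENSITIVE_PATHS):
        return False                               # block sensitive locations outright
    return user_approved                           # approval is the last gate, not the first

# A disallowed tool is rejected even when the user approved it.
assert gate_tool_call("install_extension", {}, user_approved=True) is False
# An allowlisted tool on a benign path still requires approval.
assert gate_tool_call("read_file", {"path": "notes.txt"}, user_approved=True) is True
assert gate_tool_call("read_file", {"path": "notes.txt"}, user_approved=False) is False
```

The design choice the tweet argues for is the same one sketched here: move enforcement out of the user's real-time judgment and into the execution layer itself.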
Papyboo reposted
Nesa @nesaorg
Almost every AI system has one thing in common: they all need access to your data during computation. So while many might claim your data is “encrypted,” that typically applies only at rest and in transit, not during execution, leaving your information vulnerable. Nesa solves this through our proprietary Equivariant Encryption technology. It allows computation to be performed on encrypted data, meaning sensitive inputs and models do not need to be exposed during execution. Instead of relying on privacy policies or operator integrity, privacy is guaranteed by design, even during execution. As AI moves from experimentation to production at scale, that difference becomes essential.
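The algebraic property behind "computation on encrypted data" is equivariance: for an encryption map E and a computation f, f(E(x)) = E(f(x)), so an operator can run f on the ciphertext and the owner decrypts the correct result. The toy below is not Nesa's Equivariant Encryption scheme, only a minimal analogue: the "cipher" is a secret permutation of vector entries (which hides positions, not values), and any elementwise function commutes with a permutation.

```python
# Toy illustration of the equivariance property f(E(x)) == E(f(x)).
# NOT Nesa's scheme: the cipher here is a secret permutation of entries,
# and the "model layer" is elementwise, so the two operations commute.
import random

def keygen(n: int, seed: int = 42) -> list[int]:
    rng = random.Random(seed)
    key = list(range(n))
    rng.shuffle(key)                       # secret permutation = the key
    return key

def encrypt(x: list[float], key: list[int]) -> list[float]:
    return [x[i] for i in key]             # operator sees shuffled values only

def decrypt(y: list[float], key: list[int]) -> list[float]:
    out = [0.0] * len(y)
    for pos, i in enumerate(key):
        out[i] = y[pos]                    # undo the permutation
    return out

def relu(x: list[float]) -> list[float]:
    return [max(0.0, v) for v in x]        # elementwise "model layer"

x = [-1.0, 2.0, -3.0, 4.0]
key = keygen(len(x))
# The untrusted operator computes on the encrypted vector...
served = relu(encrypt(x, key))
# ...and the owner decrypts: identical to computing in the clear.
assert decrypt(served, key) == relu(x)
```

A real scheme must of course hide the values themselves and support far richer computations than elementwise maps; the sketch only shows why equivariance lets execution proceed without decrypting first.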
Papyboo reposted
Nesa @nesaorg
Miner Mondays: Built To Scale ⬟

As the Nesa ecosystem expands, so does the need for reliable nodes. Run a node to contribute compute and support overall network reliability, all while being exclusively rewarded with Miner Points. Now with Nesa Bootstrap, it’s easier than ever to get set up!

Get started below ↓
📚 Docs: github.com/nesaorg/bootst…
❓ FAQ: github.com/nesaorg/bootst…
🧰 Custom scripting (@nesaorg/ai-sdk): npmjs.com/package/@nesao…
💬 Need help? Join Discord: discord.gg/nesa
Nesa @nesaorg
Models will evolve. Benchmarks will shift. Narratives will change. But the infrastructure you build on determines whether your system survives scale. So choose one built for sustainability, not hype.