Tweet épinglé
𝓚𝓸𝓻ę𝔁 📊 🎯
5.7K posts

𝓚𝓸𝓻ę𝔁 📊 🎯
@fun2hate
Forex × Web3 |Charts by day on-chain by night__Risk manager | Alpha hunter | Liquidity Chaser…Tweets #NFA
Inscrit le Ocak 2024
670 Abonnements786 Abonnés
𝓚𝓸𝓻ę𝔁 📊 🎯 retweeté

@GloriousGod01 @spoooder_m Sefanru,waptrick,wapdan didn’t see google play store coming 🤭
English

Twitter didn't see X coming.
Boomplay didn't see Audiomack coming.
Audiomack didn't see Spotify coming.
Jumia didn't see Temu coming.
PDP didn't see APC coming.
G-Wagon didn't see the Cybertruck coming.
Android didn't see iPhone coming.
2go didn't see WhatsApp coming.
Blackberry didn't see Android coming.
Yahoo mail didn't see Google coming.
What's next?
Above all, love God.
English
𝓚𝓸𝓻ę𝔁 📊 🎯 retweeté
𝓚𝓸𝓻ę𝔁 📊 🎯 retweeté

This is either ignorance or deliberate deception.
The Barbary slave system and American chattel slavery are not remotely comparable, and pretending they are is intellectual fraud.
In North Africa, many captives were ransomed, absorbed, or eventually reintegrated into society. Their children were not automatically born into permanent, inheritable slavery. They never lost their legal personhood in the same way.
In the United States, slavery was industrialized, racialized, and hereditary. If you were enslaved, your children were enslaved. Their children were enslaved. Generation after generation, treated legally as property, not people. No exit. No reset. No humanity restored. That system lasted for over 200 years.
It functioned like livestock ownership: you own a dog, the dog gives birth, the puppies belong to you. They breed, their offspring belong to you. Ten generations later, long after you are gone, that entire bloodline still belongs to your descendants. That is how American slavery worked.
That is the difference between captivity and chattel slavery.
One was a brutal system of forced labor and ransom.
The other was a permanent, multigenerational economic machine built on dehumanization.
So no, you cannot casually equate them to dismiss the historical and economic consequences. That comparison collapses under even basic historical scrutiny.
GIF
English
𝓚𝓸𝓻ę𝔁 📊 🎯 retweeté

Traditional AI risk was about inaccurate outputs.
Agent-based AI introduces real operational risk.
Permissions, integrations, and system access significantly increase the potential impact of a security failure.
Nesa addresses this at the infrastructure level by eliminating all visibility opportunities via our Equivariant Encryption technology, therefore enabling the continued development of agents that can scale without systemic privacy risks.
English
𝓚𝓸𝓻ę𝔁 📊 🎯 retweeté

One of the most significant AI security incidents in the past year did not involve stolen passwords, malware, or even user error. It involved an AI assistant doing exactly what it was designed to do.
In 2025, researchers showed that Microsoft Copilot could be manipulated into leaking sensitive enterprise data without any user interaction. A malicious email or document contained hidden instructions, which Copilot automatically processed. Instead of simply summarising the content, it was guided to retrieve additional information from connected systems and include it in its output.
This included emails, internal documents, and broader organisational context. There was no breach of the network or compromise of credentials. The system behaved as intended, but the trust model itself became the vulnerability.
This is what makes this vulnerability so significant. It highlights a new class of risk that does not exist in traditional software systems. Modern AI agents are deeply integrated into workflows and have access to multiple data sources. The more useful they become and the more context and permissions they are given, the larger the attack surface becomes.
The core issue is not just access, but what happens once access is granted. Most AI systems operate on data in plaintext during execution. When the model is running, it can read sensitive information directly. If it can read that data, it can be manipulated into sharing it.
Therefore, this vulnerability is not just a prompt injection problem. It is a problem of user and system data being unencrypted during plaintext execution.
Now consider how this changes under a different architecture. If an AI system is designed so that data remains encrypted end to end, including during computation, the risk profile shifts. The model can still process inputs and generate outputs, but it does not have access to raw, readable data. Decryption keys are not exposed within the runtime environment, and sensitive information is never available in plain form.
In this context, an attack may still attempt to influence the model’s behaviour, but it cannot extract meaningful data. The output remains constrained by the underlying encryption, and what could have been a silent data exfiltration event becomes a contained issue.
This is the shift that Nesa is built to enable. Rather than assuming systems will never be attacked, the focus is on limiting the consequences when they are.
English









