Kayli Lewis @ MailSPEC

53.3K posts

@mailspec

Director of compliance strategy here at MailSPEC. We provide AI governance of communications for regulated industries. Posts on compliance and data privacy.

Internet · Joined August 2021
522 Following · 5.8K Followers
Kayli Lewis @ MailSPEC
Every AI system you interact with was trained on data. Some of that data was yours: your messages, your behavior, your preferences, your personal disclosures made to systems you thought were just helping you.

In April 2025, the EDPB confirmed that personal data used to train AI models requires a separate lawful basis for the training activity itself. That means consent to use a service does not constitute consent to have your personal data used as training input, and legitimate interest assessments must specifically address the training purpose rather than the original collection purpose.

Most organizations using cloud AI services have never completed that assessment; they accepted the cloud provider's terms of service and assumed the compliance obligations were covered. The regulator who asks what legal basis covered using your data to train the model won't accept "the cloud provider's terms" as an answer.

When you use an AI tool, do you feel like the customer, or the raw material used to make it smarter?
Kayli Lewis @ MailSPEC
Every time you typed a message to a company's support team, you thought you were asking for help. But you were also training their AI. Your words, your tone, and the personal circumstances you disclosed in a moment of frustration all fed into a machine learning model hosted on a cloud platform, used to make the AI smarter and retained in training datasets that the company's own legal team has never formally reviewed for GDPR compliance. You didn't consent to being a training data point, but the privacy policy you never read probably said you did.

The EDPB's April 2025 report explicitly confirmed that organizations deploying cloud-based AI services must conduct comprehensive legitimate interest assessments for every processing activity involved in AI training, and that using personal data for AI model training is a distinct processing purpose requiring its own documented legal basis, separate from the purpose for which the data was originally collected. Using your customer service messages to train an AI is not covered by the legal basis that justified collecting those messages in the first place. It is a new purpose. It requires a new legal basis. Most organizations have never documented one, because their cloud AI service agreement never told them they needed to.

When you chat with a support bot, do you assume your data is being used to train the AI?
Reclaim The Net @ReclaimTheNetHQ
Families suing OpenAI want a court to force ChatGPT to verify every user's ID, build a dedicated team to refer customers to police, and retain every chat as evidence. Edelson, their own lawyer, admits this needs a full-time referral squad. Think about what that infrastructure does to everyone else. reclaimthenet.org/openai-lawsuit…
Reclaim The Net @ReclaimTheNetHQ
Hawley's GUARD Act just passed committee 22-0. Every American would have to upload a government ID or submit to a face scan to use an AI chatbot. Even for asking for algebra help or fixing a billing issue. The framing is child safety but the result is a national ID system for talking to a computer. reclaimthenet.org/senate-panel-b…
Reclaim The Net @ReclaimTheNetHQ
Roblox lost 20 million daily users since it started demanding facial scans and ID uploads to access basic features. Half the platform is now stuck in a degraded version where the path back runs through biometric data. People are voting with their absence. reclaimthenet.org/roblox-loses-1…
Reclaim The Net @ReclaimTheNetHQ
Australia banned teenagers from social media. Four months later, 73% are still on it, often with a parent's help signing them back up. The popular kids stayed. The quiet ones obeyed. Albanese built a law that sorts children by status and punishes the ones who do as they're told. reclaimthenet.org/australias-und…
Privacy International @privacyint
The prominent role of data and tech in elections can chill political participation as well as raise privacy concerns, particularly for minoritised groups. Find out more by reading our latest piece on disenfranchisement & privacy: privacyinternational.org/long-read/5762…
Proton @ProtonPrivacy
A free press is the cornerstone of a democracy. We believe the best way to protect press freedom is to give journalists tools that make them harder to target & easier to trust. To all those shining a light on necessary truths, we salute you 🫡 Happy World Press Freedom Day.
Proton @ProtonPrivacy
It takes surprisingly little data to create a deepfake. Sometimes just a few photos or a short video. Your online footprint matters more than you think. Here’s how to limit what AI can use to clone your identity 👇
Tuta @TutaPrivacy
On Press Freedom Day, let's remind everyone that journalists need protection online, too. 📰 🌐 Share this & spread the word! #PressFreedomDay26
Tuta @TutaPrivacy
Countries that already have ID checks for social media in 2026: 1. Australia 2. Greece 3. Brazil 4. Turkey…
Naomi Brockwell priv/acc @naomibrockwell
How did we get to the egregious surveillance landscape we have today? Because of things like the Databroker Loophole and the 3rd-Party Doctrine. The Surveillance Accountability Act closes both. Call your reps. SurveillanceAccountability.com
Luiza Jarovsky, PhD @LuizaJarovsky
Unpopular opinion: to reduce AI anthropomorphism, the law should require AI companies to give their AI models descriptive technical names. "Claude" is a human name and would NOT be acceptable. Anthropic has a serious anthropomorphism problem and should be held accountable.
Luiza Jarovsky, PhD @LuizaJarovsky

If you scroll through the 1,000+ replies to my post, you'll see that a strange AI cult is emerging. Many people believe today's AI is conscious. Sometimes it feels like collective AI psychosis. In a few years, this will be a MAJOR issue. It should be dealt with today. Read:

Luiza Jarovsky, PhD @LuizaJarovsky
Everybody: AI will transform the internet. The most popular AI search engine in 2026:
Luiza Jarovsky, PhD @LuizaJarovsky
These constant comparisons between AI and aliens are a DISSERVICE to AI literacy. They also lead more people to believe things like "AI is conscious." AI is not like an alien, and it's not conscious. It's a computer system: a regulated product that should be properly governed.