Sonomos
40 posts

Sonomos @sonomos_ai
Privacy at the point of creation.
San Diego, CA · Joined September 2025
64 Following · 31 Followers

Pinned Tweet
Sonomos @sonomos_ai
Sonomos is now a @Clio Partner! 🤝 Clio didn’t just build practice management software — they built the operating system for the modern law firm. But AI adoption inside those firms has opened a new frontier: what happens when someone on your team pastes a client file into an AI tool? Sonomos is the answer. Real-time PII detection. Local-only. Pre-send masking. No cloud. No exposure. No tradeoff. The modern law firm runs on Clio, and it stays protected with Sonomos. 🔗 sonomos.ai #LegalTech #Clio #DataPrivacy #AICompliance #PrivacyTech #LawFirms
Sonomos tweet media
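The "real-time PII detection, pre-send masking" pipeline the pinned tweet describes can be sketched roughly as follows. This is a hypothetical illustration only, not Sonomos's actual implementation: the detection patterns, placeholder tokens, and function names are all assumptions made for the sketch.

```python
import re

# Hypothetical local pre-send masking sketch. These patterns and
# placeholder tokens are illustrative assumptions, not Sonomos's
# actual detection rules.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text
    leaves the device. Runs entirely locally: no network calls."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client John Doe, SSN 123-45-6789, reachable at jdoe@example.com"
print(mask_pii(prompt))
# → Client John Doe, SSN [SSN], reachable at [EMAIL]
```

The key design point is that masking happens before the prompt reaches any cloud endpoint, so a downstream platform breach has nothing sensitive to expose. A production tool would need far more robust detection (names, matter numbers, credentials) than these three regexes.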
Sonomos @sonomos_ai
3/ Sonomos keeps data off the wire. Local detection. Local masking. Nothing sensitive reaches a cloud tool, so nothing can leak from one. Vercel isn't the exception. It's the template.
Sonomos @sonomos_ai
2/ Same attack, different payload for your firm. Privileged comms. SSNs. Matter notes. Sitting inside whatever cloud AI your associate signed up for last Tuesday. ABA Model Rule 1.6(c). SEC Reg S-P. HIPAA. All three treat this as foreseeable now.
Sonomos @sonomos_ai
1/ One click. $2M on the dark web. Plaintext API keys. Database strings. Signing keys. A Vercel employee connected an AI tool to Google Workspace. The tool got popped. The OAuth token did the rest.
Sonomos @sonomos_ai
3/ Every cloud AI tool has the same exposure shape. What you paste in is what leaks when the platform fails. Sonomos keeps it off the wire. Local detection. Local masking. If it never reaches the tool, the tool can't leak it. Vercel, and now Lovable. This is a pattern, not coincidence. Full story: theregister.com/2026/04/20/lov…
Sonomos @sonomos_ai
2/ One API endpoint skipped the "do you own this?" check. Every project before Nov 2025 is exposed. Reported 48 days ago. Still unpatched. The worst part is what's in those chats: emails, DOBs, Stripe IDs, API keys, credentials. All pasted mid-prompt. All readable.
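The flaw described in this thread, an endpoint that skips the "do you own this?" check, is a classic broken object-level authorization bug. A minimal sketch of the missing check, under assumed names (`Project`, `get_chat_history`), not Lovable's actual code:

```python
# Hypothetical sketch of the ownership check the exposed endpoint
# reportedly skipped. All names here are illustrative assumptions.
class AuthorizationError(Exception):
    pass

class Project:
    def __init__(self, project_id: str, owner_id: str, chat_history: list):
        self.project_id = project_id
        self.owner_id = owner_id
        self.chat_history = chat_history

# Toy in-memory store standing in for the platform's database.
PROJECTS = {
    "p1": Project("p1", owner_id="alice", chat_history=["ssn 123-45-6789"]),
}

def get_chat_history(project_id: str, requesting_user: str) -> list:
    project = PROJECTS[project_id]
    # The "do you own this?" check. Without this comparison, any
    # authenticated account (including a free one) can read any
    # project's chats, credentials, and pasted PII.
    if project.owner_id != requesting_user:
        raise AuthorizationError("not the project owner")
    return project.chat_history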
Sonomos @sonomos_ai
1/ One account. Every chat. Every credential. A researcher made a free Lovable account and read other users' source code, database credentials, AI chat histories, and customer data. Not stolen. Readable. By any free account.
Sonomos @sonomos_ai
Our co-founder, Nathaniel Shalev, has been named to the Power 20: Twenty Outstanding Business Leaders in Their Twenties (2026) by the San Diego Business Journal (@SDbusiness)!

This recognition highlights young leaders driving real impact through innovation, leadership, and community involvement. At Sonomos, Nate leads from the front by turning complex challenges into practical, forward-looking solutions, while bringing an integrity and thoughtfulness that elevate everyone around him. This reflects not just what he’s accomplished, but how he leads every day.

Proud to be building alongside him. Congratulations, Nate.

🔗 Link to the SDBJ's Power 20: sdbj.com/issues/leaders…

#Startups #Entrepreneurship #Leadership #Founders #Business #Innovation #Tech #AI #FutureOfWork #SanDiego #YoungLeaders #NextGenLeaders #BusinessLeadership #EntrepreneurLife #TechLeaders
Sonomos tweet media
Sonomos @sonomos_ai
@peaxe001 @Sigma_Browser Totally agree! Local uncensored AI in-browser is huge for real privacy & control. At @sonomos_ai, we extend that protection even when you're outside fully local setups or using other browsers and apps. Check us out! 🔗 sonomos.ai
Sonomos @sonomos_ai
@Sigma_Browser Love this! At @sonomos_ai, we're extending this principle beyond local setups to other browsers and apps, keeping your sensitive data safe by masking it locally at the input stage. Zero exposure, anywhere. Complements Sigma perfectly. 🔗 sonomos.ai #LocalAI #Privacy #AI
Sigma Browser @Sigma_Browser
Uncensored AI. Running locally. Inside your browser. In Sigma you can chat with your own local LLM directly in the browser. The model runs entirely on the user's machine and is fully open-source. No external APIs, no cloud processing - all interactions stay local. Both censored and uncensored versions are available. Try it out now sigmabrowser.com/local-ai-chat
Omri Dan @OmriBuilds
Pitch your startup in 3 words
Sonomos @sonomos_ai
@BitGrateful Or you can use sonomos.ai to automatically mask all your sensitive data... which also runs completely locally, on your device.
Abhijit @abhijitwt
Drop your project URL. Let’s drive some traffic.
Sonomos @sonomos_ai
Every AI DLP tool requires a compliance officer to babysit it. We're the one that writes its own reports. sonomos.ai
Sonomos @sonomos_ai
@alex_prompter The two-tier system is the real story here. If you're a solo practitioner, a small law firm, or a startup, you get consumer-tier privacy with enterprise-tier risk. That's the gap nobody's filling yet.
Alex Prompter @alex_prompter
🚨 Holy shit… Stanford just exposed that every major AI company is using your private conversations to train their models by default.

They analyzed the privacy policies of OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon. Reviewed 28 separate documents across all 6 companies. The findings are worrisome.

Every prompt you type. Every file you upload. Every personal detail you share. All of it feeds directly into model training the moment you hit send. That health question you asked ChatGPT at 2am? Training data. Legal situation you described to Claude? Training data. The photo you uploaded to Gemini? Training data.

Some companies retain your conversations INDEFINITELY. Amazon, Meta, and OpenAI have no confirmed deletion timeline for certain chat data. Your most private conversations could sit on their servers forever.

It gets worse for kids. Four out of six companies allow children aged 13-18 to use their chatbots, and most don’t treat children’s data any differently. Kids’ conversations are likely getting fed into model training by default. Kids who can’t legally consent to it.

Here’s something most people missed: enterprise customers are opted OUT of training by default. You, the consumer paying $20/month? Opted IN. Companies paying thousands? Protected automatically. There’s a two-tiered privacy system and you’re on the wrong side of it.

OpenAI even frames the opt-in with guilt. Their settings page says “Improve the model for everyone.” Stanford’s researchers flagged this as a textbook dark pattern designed to make you feel bad for protecting your own data.

Meta’s contractors told reporters they routinely see identifiable personal information in the chat data they review. Journalists were able to positively identify at least one real person from chat transcripts shared with them.

The privacy policies themselves? Stanford had to dig through 6 separate documents just for OpenAI alone. Most real disclosures were buried in sub-policies no normal person would ever find. The researchers said it was challenging for THEM to piece it together. For consumers? “Practically impossible.”

Only Microsoft explicitly stated they try to remove personal data like names, phone numbers, and addresses before training. The rest are either vague about it or completely silent.
Alex Prompter tweet media
Sonomos @sonomos_ai
@heygurisingh "You are not the customer. You are the curriculum." That's the line. And the enterprise/consumer two-tier system is the quiet part out loud. If you pay enough, your data stays private. If you don't, it's training data. The opt-out maze isn't a bug, it's a retention strategy.
Guri Singh @heygurisingh
🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. Amazon. Anthropic. Google. Meta. Microsoft. OpenAI. All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies: not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States. The results are worse than you think.

Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For companies like Google, Meta, Microsoft, and Amazon, companies that also run search engines, social media platforms, e-commerce sites, and cloud services, your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that researchers, people whose job is reading these documents, found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.
Guri Singh tweet media