strongwall.ai

62 posts

@StrongwallAi

Private AI. No Surveillance. Full Control.

Joined November 2025
18 Following · 240 Followers
Pinned Tweet
strongwall.ai @StrongwallAi ·
Introducing Strongwall AI, the world's first privacy-by-design AI. If you knew what companies like OpenAI were doing with your data, you'd switch over to us instantly.
13 replies · 26 reposts · 78 likes · 137.6K views
RYAN SΞAN ADAMS - rsa.eth 🦄
AI KYC is here. New Claude subscribers are asked for gov ID & photo. Not even a regulatory requirement - Anthropic is just doing it because they want to. But regulation is coming. Next up will be laws: no AI without gov-issued ID, all AI use tracked to the individual - no private AI.
213 replies · 193 reposts · 1.1K likes · 154.8K views
strongwall.ai @StrongwallAi ·
Strongwall stores none of your conversations, because we don't want or need to "target" our customers. We make money from subscriptions and don't have delusions of grandeur about summoning a digital god to control society.
Sen. Bernie Sanders@SenSanders

I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.

0 replies · 0 reposts · 1 like · 149 views
strongwall.ai @StrongwallAi ·
Strongwall never reads your stored chats, because we don't have them. The only copy lives on your device, fully encrypted and inaccessible. x.com/i/status/20278…
Rohan Paul@rohanpaul_ai

Stanford researchers checked 6 major AI companies and found they all use your chats to train models. Users unknowingly hand over highly sensitive medical or personal details that become permanent parts of future AI brains.

The problem with standard privacy rules is that they scatter important details across multiple files so people cannot find them. The researchers at Stanford HAI examined 28 privacy documents across these six companies - not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States.

The results are worse than you think. Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. In many cases these chats get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

Paper Link: arxiv.org/abs/2509.05382
Paper Title: "User Privacy and LLMs: An Analysis of Frontier Developers' Privacy Policies"

0 replies · 1 repost · 1 like · 244 views
strongwall.ai @StrongwallAi ·
Would sure be handy to have a lawyer you could ask about the specific procedural protections you may or may not have in your communications with AI - but it's a lot easier to just use a platform that doesn't have anything to turn over.
Moish Peltz@mpeltz

A recent decision complicates the picture on AI privilege waiver. In Warner v. Gilbarco (E.D. Mich., Feb. 10, 2026), defendants tried to force a pro se plaintiff to hand over everything related to her use of AI tools in the litigation. Judge Patti shut it down entirely.

The court's reasoning has two layers.

First, relevance. The court held the AI materials were "not relevant, or, even if marginally relevant, not proportional" under Rule 26(b)(1), noting that this is a civil case, not a criminal one (like Heppner), so different rules apply. Defendants had zero evidence plaintiff uploaded anything confidential to an AI platform. The court told defendants, bluntly, that their "preoccupation with Plaintiff's use of AI needs to abate."

Second, work product. Defendants argued that sharing prompts and outputs with ChatGPT waived work product protection. Judge Patti said no. The reasoning: work product waiver requires disclosure to an adversary, not just any third party. And ChatGPT "and other generative AI programs are tools, not persons, even if they may have administrators somewhere in the background." The court agreed with plaintiff that accepting defendants' theory "would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed."

So does this contradict Judge Rakoff's Heppner ruling? Not necessarily. Attorney-client privilege and work product doctrine have fundamentally different waiver standards. Privilege can be destroyed by voluntary disclosure to any third party. Work product requires disclosure to an adversary, or in a way likely to reach one. AI platforms aren't adversaries. This means it's entirely possible to lose privilege on your AI conversations while retaining work product protection over the same materials. Different doctrines, different triggers, different facts, different outcomes.
I don't think it's realistic for everyone to understand exactly which protection applies, how each can be waived, and how the specific AI platform's terms and privacy policies affect the analysis. People should migrate to defensible positions, no matter the circumstance, and the enterprise agreement point I made after Heppner still stands. We're watching this area of law develop in real time, and the courts aren't going to agree with each other for a while. Buckle up. storage.courtlistener.com/recap/gov.usco…

0 replies · 3 reposts · 5 likes · 1.6K views
strongwall.ai @StrongwallAi ·
@jerryGolddd We're working on offering modular jailbreaks to our users in a way that keeps model intelligence intact. Until then, you can manually use any jailbreak for Kimi-K2.5.
0 replies · 0 reposts · 0 likes · 60 views
jerryGolddd @jerryGolddd ·
@StrongwallAi I used some racial slurs. And I used all racial slurs. For all races. And it denied. 😢
1 reply · 0 reposts · 0 likes · 14 views
strongwall.ai @StrongwallAi ·
CTO built a harness for locating implicit references between two document dumps that were drawn up by overlapping groups of people around the same time.

Because of the length involved, and the fact that it's generating summaries and referring back and forth, it would be completely impractical to feed to something like Claude - literally $20 just to read one of the dumps, without even doing anything with it. And it benefits from a smart model, because the matching improves with more world-knowledge and reasoning ability, so cheaper commercial models don't do as well.

With unlimited API access, we prioritize "live" requests (chats and API access from low-usage accounts) so bulk projects like this don't block other users. Because it's unlimited and included with membership, we can just let it rip without worrying it's going to blow through a token budget. And because Strongwall doesn't log content, we can load sensitive, non-public documents without worrying they're floating around as a Google or OpenAI training set forever.

Strongwall is the *only* company that can do this - unlimited private AI, exclusively under your control.
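A minimal sketch of what a harness like this could look like: chunk both dumps, summarize one side once, then ask the model about every chunk pair. The `ask` callable, prompts, and chunk size are illustrative assumptions, not Strongwall's actual tooling; in practice `ask` would wrap whatever chat-completions endpoint the account uses.

```python
def chunk(text: str, size: int = 4000) -> list[str]:
    """Split a document into fixed-size character windows."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize_dump(chunks: list[str], ask) -> list[str]:
    """Summarize each chunk once so later passes can refer back cheaply."""
    return [ask("Summarize the key people, events, and claims:\n" + c)
            for c in chunks]

def find_implicit_references(dump_a: str, dump_b: str, ask):
    """For every chunk pair, ask the model whether A implicitly refers to B.

    `ask` is any callable mapping a prompt string to a reply string, e.g.
    a thin wrapper over a chat-completions endpoint. Returns a list of
    (index_a, index_b, model_explanation) tuples for each flagged pair.
    """
    a_chunks = chunk(dump_a)
    b_summaries = summarize_dump(chunk(dump_b), ask)
    hits = []
    for i, a in enumerate(a_chunks):
        for j, summary in enumerate(b_summaries):
            verdict = ask(
                "Document A excerpt:\n" + a
                + "\n\nSummary of a Document B section:\n" + summary
                + "\n\nDoes the excerpt implicitly refer to anything in the"
                  " summary? Answer YES: <reason> or NO."
            )
            # Keep only pairs the model affirmatively flags.
            if verdict.strip().upper().startswith("YES"):
                hits.append((i, j, verdict))
    return hits
```

Injecting the model call as a plain callable keeps the pairwise-matching logic testable offline, and makes it easy to prioritize or rate-limit the bulk traffic separately from live chats.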
1 reply · 4 reposts · 6 likes · 1.6K views
strongwall.ai @StrongwallAi ·
@AlexBerenson If there are no logs, there's nothing to produce in response to a subpoena. That's one reason we built Strongwall.
0 replies · 2 reposts · 2 likes · 57 views
strongwall.ai @StrongwallAi ·
@GlobalStatDesk Large providers don't give customers security; they give them a piece of paper with a promise on it.
0 replies · 0 reposts · 0 likes · 27 views
Global Stat Desk @GlobalStatDesk ·
@StrongwallAi Big picture: As AI adoption hits 80%+ in enterprises (Gartner est.), privacy demands surge—tools like this could shift the market (e.g., legal/medical users need zero-risk). Worth the switch for sensitive work? Or stick with big players + opt-outs?
1 reply · 0 reposts · 0 likes · 43 views
strongwall.ai @StrongwallAi ·
Your AI prompts shouldn't be public property. 🙅‍♂️ Get 100% data isolation and zero conversation storage with Strongwall. Free for 30 days with code: TRYSTRONGWALL
1 reply · 1 repost · 11 likes · 47.8K views
strongwall.ai @StrongwallAi ·
The only way to safely use AI in a legal context is when it does not generate discoverable documents in the first place. x.com/i/status/20217…
Moish Peltz@mpeltz

Your AI conversations aren't privileged. Yesterday, Judge Jed Rakoff ruled that 31 documents a defendant generated using an AI tool and later shared with his defense attorneys are not protected by attorney-client privilege or work product doctrine.

The logic is simple: an AI tool is not an attorney. It has no law license, owes no duty of loyalty, and its terms of service explicitly disclaim any attorney-client relationship. Sharing case details with an AI platform is legally no different from talking through your legal situation with a friend (which is not privileged).

You can't fix it after the fact, either. Sending unprivileged documents to your lawyer doesn't retroactively make them privileged. That's been settled law for years. It just hadn't been tested with AI until now.

And here's what really hurt the defendant: the AI provider's privacy policy (Claude), in effect when he used the tool, expressly permits disclosure of user prompts and outputs to governmental authorities. There was no reasonable expectation of confidentiality.

The core problem is the gap between how people experience AI and what's actually happening. The conversational interface feels private. It feels like talking to an advisor. But unless you negotiate for an enterprise agreement that says otherwise, you're inputting information into a third-party commercial platform that retains your data and reserves broad rights to disclose it.

Judge Rakoff also flagged an interesting wrinkle: the defendant reportedly fed information from his attorneys into the AI tool. If prosecutors try to use these documents at trial, defense counsel could become a fact witness, potentially forcing a mistrial. Winning on privilege doesn't make the evidentiary picture simple.

For anyone advising clients or managing legal risk, this is a wake-up call. AI tools are not a safe space for clients to process their counsel's advice and to regurgitate their legal strategy. Every prompt is a potential disclosure. Every output is a potentially discoverable document.

So what do we do about it? First, attorneys need to be proactive. Advise clients explicitly that anything they put into an AI tool may be discoverable and is almost certainly not privileged. Put it in your engagement letters. Make it part of onboarding. Don't assume clients understand this, because most don't.

Second, if clients want to use AI to help process legal issues (and they clearly will, increasingly), then let's give them a way to do it inside the privilege. Collaborative AI workspaces shared between attorney and client, where the AI interaction happens under counsel's direction and within the attorney-client relationship, can change the analysis entirely. I'm excited to be planning this kind of approach, and I think it's where the industry needs to head. storage.courtlistener.com/recap/gov.usco…

0 replies · 1 repost · 1 like · 529 views
strongwall.ai @StrongwallAi ·
This is exactly one of the reasons we built Strongwall.ai. Most people don’t realize this. The chat interface feels private. It feels like thinking out loud with an advisor. But on most platforms, your prompts are logged, retained, and in many cases can be disclosed. That’s a dangerous gap between perception and reality — especially when legal strategy is involved. We built Strongwall on a simple premise: your conversations shouldn’t become someone else’s records. No logging. No retention. No training on your data. If AI is going to be part of serious decision-making, privacy can’t be an afterthought.
0 replies · 3 reposts · 5 likes · 106 views
strongwall.ai @StrongwallAi ·
But as Dean points out, most people complaining about minors accessing AI fundamentally want to control all use of AI, including by adults. Strongwall believes if you can legally have a US credit card, you should have access to private, powerful AI under your control.
0 replies · 1 repost · 1 like · 175 views
strongwall.ai @StrongwallAi ·
"So what's the epub, why was it so long, and why were you trying to dig up a cite?" 👏🏻None 👏🏻of 👏🏻our 👏🏻business👏🏻
0 replies · 0 reposts · 0 likes · 201 views
strongwall.ai @StrongwallAi ·
- Be CTO
- Have 2000-page epub
- Need cite for specific paragraph
- Vaguely remember the subject matter, but no specific searchable phrases
- Use Strongwall unlimited API access
- Prompt: "examine this section for an anecdote where the author discusses..."
- Use Strongwall to hook up an epub parser to feed pages to the Strongwall API
- LLM goes brrrrr
- Citation found

It's literally that easy
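The flow above can be sketched with nothing but the standard library, since an epub is just a zip of XHTML files. The helper names, prompt wording, and the `llm` callable are illustrative assumptions, not the actual Strongwall API; `llm` stands in for any chat-completions wrapper.

```python
import zipfile
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects the plain-text content of an (X)HTML chapter file."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def epub_sections(path: str) -> list[str]:
    """Pull the text of each chapter out of the epub's zip container."""
    sections = []
    with zipfile.ZipFile(path) as z:
        for name in z.namelist():
            if name.endswith((".xhtml", ".html", ".htm")):
                extractor = _TextExtractor()
                extractor.feed(z.read(name).decode("utf-8", errors="replace"))
                sections.append(" ".join(extractor.parts))
    return sections

def find_anecdote(sections: list[str], topic: str, llm) -> int:
    """Ask the model, section by section, until it flags the passage.

    `llm` is any callable mapping a prompt string to a reply string.
    Returns the index of the first flagged section, or -1 if none match.
    """
    for i, text in enumerate(sections):
        reply = llm(
            "Examine this section for an anecdote where the author "
            f"discusses {topic}. Answer YES or NO.\n\n" + text
        )
        if reply.strip().upper().startswith("YES"):
            return i
    return -1
```

Swapping the stubbed `llm` for a real API wrapper is the only change needed to run this against a full book; with a per-token budget the quadratic-ish cost would matter, which is the point the thread is making.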
1 reply · 1 repost · 0 likes · 300 views