
Covenant Labs
72 posts

@Covenantlabsai
AI Research Lab building verifiably private AI systems // Own your models, own your mind

Sigh…

This is baaaaad.



Ohh well here's a novel form of regulatory capture! Use your personal ChatGPT sub to get advice on a lawsuit? Unprivileged, other side can subpoena. Your lawyer uses their sub to ask the exact same questions, and forwards you the answers? Privileged, inadmissible in court!



Your AI conversations aren't privileged. Yesterday, Judge Jed Rakoff ruled that 31 documents a defendant generated using an AI tool and later shared with his defense attorneys are not protected by attorney-client privilege or the work product doctrine.

The logic is simple: an AI tool is not an attorney. It has no law license, owes no duty of loyalty, and its terms of service explicitly disclaim any attorney-client relationship. Sharing case details with an AI platform is legally no different from talking through your legal situation with a friend (which is not privileged).

You can't fix it after the fact, either. Sending unprivileged documents to your lawyer doesn't retroactively make them privileged. That's been settled law for years; it just hadn't been tested with AI until now.

And here's what really hurt the defendant: the privacy policy of the AI provider (Claude), in effect when he used the tool, expressly permits disclosure of user prompts and outputs to governmental authorities. There was no reasonable expectation of confidentiality.

The core problem is the gap between how people experience AI and what's actually happening. The conversational interface feels private. It feels like talking to an advisor. But unless you negotiate an enterprise agreement that says otherwise, you're inputting information into a third-party commercial platform that retains your data and reserves broad rights to disclose it.

Judge Rakoff also flagged an interesting wrinkle: the defendant reportedly fed information from his attorneys into the AI tool. If prosecutors try to use these documents at trial, defense counsel could become a fact witness, potentially forcing a mistrial. Winning on privilege doesn't make the evidentiary picture simple.

For anyone advising clients or managing legal risk, this is a wake-up call. AI tools are not a safe space for clients to process their counsel's advice or rehash their legal strategy. Every prompt is a potential disclosure. Every output is a potentially discoverable document.

So what do we do about it? First, attorneys need to be proactive. Advise clients explicitly that anything they put into an AI tool may be discoverable and is almost certainly not privileged. Put it in your engagement letters. Make it part of onboarding. Don't assume clients understand this, because most don't.

Second, if clients want to use AI to help process legal issues (and they clearly will, increasingly), let's give them a way to do it inside the privilege. Collaborative AI workspaces shared between attorney and client, where the AI interaction happens under counsel's direction and within the attorney-client relationship, can change the analysis entirely. I'm excited to be planning this kind of approach, and I think it's where the industry needs to head. storage.courtlistener.com/recap/gov.usco…

It has been ruled that conversations with a major LLM provider are NOT considered privileged. Over time, cases like this will increase demand for solutions like @covenantlabsai (encrypted LLMs) and @runanywhereai (run LLMs locally) that do NOT expose your AI chat data to entities who can turn it over to the government.



Ads are coming to AI. But not to Claude. Keep thinking.





All the analysts forever writing about OpenAI vs Anthropic vs Google are missing the real story that already happened. 80% of startups pitching Andreessen Horowitz are running on Chinese open-source models. Not OpenAI. Not Anthropic. Chinese models like DeepSeek that cost 214x less per token.

The math here breaks everything. DeepSeek trained its model for $5 million. OpenAI spent $500 million per six-month training cycle for GPT-5. That gap translates directly to API pricing, where startups pay $0.14 per million tokens versus $30 for GPT-4. For a startup burning through 10 billion tokens monthly, that's $1,400 versus $300,000. The difference between 18 months of runway and 3 months.

This tells you the real constraint in AI was never capability. Chinese models are matching GPT-4 on coding benchmarks while costing 2% as much. The constraint was always burn rate, and China solved it first by optimizing for efficiency instead of chasing AGI.

The second-order effect gets interesting. When your infrastructure costs drop 98%, you can actually afford to fine-tune models for your specific use case. American startups paying OpenAI's API rates are stuck with generic models. Chinese open-source users are building specialized variants.

Silicon Valley thought the moat was model quality. Turns out the moat was cost structure, and they built it backwards. When a16z partner Anjney Midha says "it's really China's game right now" in open-source, he's not talking about benchmarks. He's talking about who controls the default foundation layer.

Now look at where this goes. American AI labs are optimizing for AGI and superintelligence, raising billions to chase the theoretical ceiling. China optimized for distribution and adoption, making AI cheap enough to become infrastructure. All 16 top-ranked open-source models are Chinese. DeepSeek, Qwen, Yi. The models actually being deployed at scale. While OpenAI charges premium rates for exclusive access, Chinese labs are flooding the zone with free alternatives that work.

The third-order cascade is what changes everything. Every startup that survives the next funding winter will have optimized around Chinese open-source as default infrastructure. Not as a China strategy. As a survival strategy. That 80% number at a16z only goes one direction. When you're a seed-stage founder choosing between 18 months of runway and 3 months, economics beats nationalism every time.

America is still competing to build the best model. China already won the race to build the one everyone uses.
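The per-token arithmetic in the post above is easy to sanity-check: the quoted monthly bills ($1,400 vs. $300,000) imply roughly 10 billion tokens a month at the quoted rates. A minimal sketch, using the post's per-million-token figures (the author's numbers, not official price sheets):

```python
# Sanity-check the API cost arithmetic from the post above.
# Rates are the post's figures ($ per 1M tokens), not official pricing.
DEEPSEEK_RATE = 0.14   # post's DeepSeek-class rate
GPT4_RATE = 30.00      # post's GPT-4-class rate

def monthly_cost(tokens: int, rate_per_million: float) -> float:
    """Monthly API spend for a given token volume and $/1M-token rate."""
    return tokens / 1_000_000 * rate_per_million

tokens = 10_000_000_000  # ~10B tokens/month, implied by the post's dollar figures
cheap = monthly_cost(tokens, DEEPSEEK_RATE)
pricey = monthly_cost(tokens, GPT4_RATE)

print(f"DeepSeek-class: ${cheap:,.0f}/mo")   # ~$1,400/mo
print(f"GPT-4-class:    ${pricey:,.0f}/mo")  # $300,000/mo
print(f"Ratio: {pricey / cheap:.0f}x")       # ~214x
```

At these rates the ratio works out to about 214x, matching the post's headline multiple; the absolute dollar gap, not the ratio, is what drives the runway argument.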


we are grateful to the more than 1 million business customers building with us openai.com/index/1-millio…