Covenant Labs

72 posts

@Covenantlabsai

AI Research Lab building verifiably private AI systems // Own your models, own your mind

Joined July 2025
303 Following · 676 Followers
Covenant Labs@Covenantlabsai·
You could keep giving away all of your most intimate thoughts… all of your core IP… to AI companies that, at the very least, are using it to train their next models (at worst, it's being weaponized against you, eroding your competitive edge, etc.) Or… you could just use Covenant (very soon!)
Chamath Palihapitiya@chamath

Sigh…

Replies: 0 · Reposts: 0 · Likes: 1 · Views: 261
Covenant Labs reposted
Will Preble@kingwillxm·
Agent frameworks need better compartmentalization. The monolithic plugin-system model is outdated. We are building the @Covenantlabsai version of this, but I'd love to connect with any other builders who are innovating in agent security
Bojan Tunguz@tunguz

This is baaaaad.

Replies: 0 · Reposts: 1 · Likes: 1 · Views: 261
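The compartmentalization idea above can be sketched as a capability broker that mediates every plugin call against an explicit allow-list, rather than a monolithic bus where every plugin sees every capability. This is a hypothetical toy (the `Plugin` and `CapabilityBroker` names are invented here), not Covenant's actual framework:

```python
# Hypothetical sketch of capability-scoped agent plugins: each plugin is
# registered with an explicit allow-list, and a central broker denies
# everything else. Illustrative only; not Covenant's actual framework.
from dataclasses import dataclass


@dataclass
class Plugin:
    name: str
    allowed: frozenset  # capabilities this plugin may invoke


class CapabilityBroker:
    def __init__(self):
        self._caps = {}  # capability name -> handler

    def register_capability(self, name, handler):
        self._caps[name] = handler

    def invoke(self, plugin: Plugin, capability: str, *args):
        # Compartmentalization: the check lives in the broker, not the
        # plugin, so a compromised plugin cannot escalate by simply
        # calling a different handler directly.
        if capability not in plugin.allowed:
            raise PermissionError(f"{plugin.name} may not use {capability}")
        return self._caps[capability](*args)


broker = CapabilityBroker()
broker.register_capability("read_file", lambda path: f"<contents of {path}>")
broker.register_capability("send_email", lambda to, body: "sent")

# A summarizer plugin only ever needs to read; it cannot exfiltrate.
summarizer = Plugin("summarizer", frozenset({"read_file"}))
print(broker.invoke(summarizer, "read_file", "notes.txt"))
try:
    broker.invoke(summarizer, "send_email", "x@y.com", "exfil")
except PermissionError as e:
    print("blocked:", e)
```

The design choice this illustrates: least privilege enforced at a single choke point, so adding a new plugin never silently widens what existing plugins can do.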
Covenant Labs@Covenantlabsai·
@ThisWeeknAI @Jason @theallinpod On-prem has its place (and we support local models in our open source framework!) But end-to-end encrypted inference in the cloud unlocks a whole different level of scale/usability/composability
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 8
This Week in AI@ThisWeeknAI·
"It's cost savings + do I want to give all of the secrets of my organization, every piece of IP to Sam Altman who needs to make $1B a year to keep up with spend?!" - @Jason Source: @theallinpod
Replies: 33 · Reposts: 31 · Likes: 332 · Views: 47.3K
Yohei@yoheinakajima·
it has been ruled that conversations with major LLM providers are NOT considered privileged. over time, cases like this will increase demand for solutions like @covenantlabsai (encrypted LLMs) and @runanywhereai (run LLMs locally) that do NOT expose your AI chat data to entities who can turn it over to the government
Moish Peltz@mpeltz

Your AI conversations aren't privileged.

Yesterday, Judge Jed Rakoff ruled that 31 documents a defendant generated using an AI tool and later shared with his defense attorneys are not protected by attorney-client privilege or work product doctrine.

The logic is simple: an AI tool is not an attorney. It has no law license, owes no duty of loyalty, and its terms of service explicitly disclaim any attorney-client relationship. Sharing case details with an AI platform is legally no different from talking through your legal situation with a friend (which is not privileged).

You can't fix it after the fact, either. Sending unprivileged documents to your lawyer doesn't retroactively make them privileged. That's been settled law for years. It just hadn't been tested with AI until now.

And here's what really hurt the defendant: the AI provider's privacy policy (Claude), in effect when he used the tool, expressly permits disclosure of user prompts and outputs to governmental authorities. There was no reasonable expectation of confidentiality.

The core problem is the gap between how people experience AI and what's actually happening. The conversational interface feels private. It feels like talking to an advisor. But unless you negotiate for an enterprise agreement that says otherwise, you're inputting information into a third-party commercial platform that retains your data and reserves broad rights to disclose it.

Judge Rakoff also flagged an interesting wrinkle: the defendant reportedly fed information from his attorneys into the AI tool. If prosecutors try to use these documents at trial, defense counsel could become a fact witness, potentially forcing a mistrial. Winning on privilege doesn't make the evidentiary picture simple.

For anyone advising clients or managing legal risk, this is a wake-up call. AI tools are not a safe space for clients to process their counsel's advice and to regurgitate their legal strategy. Every prompt is a potential disclosure. Every output is a potentially discoverable document.

So what do we do about it? First, attorneys need to be proactive. Advise clients explicitly that anything they put into an AI tool may be discoverable and is almost certainly not privileged. Put it in your engagement letters. Make it part of onboarding. Don't assume clients understand this, because most don't.

Second, if clients want to use AI to help process legal issues (and they clearly will, increasingly), then let's give them a way to do it inside the privilege. Collaborative AI workspaces shared between attorney and client, where the AI interaction happens under counsel's direction and within the attorney-client relationship, can change the analysis entirely. I'm excited to be planning this kind of approach, and I think it's where the industry needs to head.

storage.courtlistener.com/recap/gov.usco…

Replies: 13 · Reposts: 9 · Likes: 43 · Views: 11.5K
Zoë Hitzig@zhitzig·
I resigned from OpenAI on Monday. The same day, they started testing ads in ChatGPT. OpenAI has the most detailed record of private human thought ever assembled. Can we trust them to resist the tidal forces pushing them to abuse it? I wrote about better options for @nytopinion
Replies: 301 · Reposts: 1.9K · Likes: 8.8K · Views: 1.6M
Covenant Labs reposted
Will Preble@kingwillxm·
Putting ads in our AI models is the definition of AI MIS-Alignment (and pretty clearly dystopian). We have been talking about this moment for years at @Covenantlabsai, but it's wild to see it reach mainstream consciousness like this.

Props to @AnthropicAI for resisting the urge to fully sell their soul here. AI weaponized against you and/or your firm, even from a seemingly innocuous thing like ads, can go very wrong, very fast.

I am heavily betting that the ability to trust your AI models and agents will become extremely important in the coming months and years (and of course valuable to the market by extension). Anthropic may be better than the alternative hyperscalers, but it's important to remember they can still see all of your data + IP.

Any AI model, or agent, that is not verifiably private will always have the potential to be weaponized against you. AI Privacy is the precursor to AI Alignment
Claude@claudeai

Ads are coming to AI. But not to Claude. Keep thinking.

Replies: 0 · Reposts: 1 · Likes: 2 · Views: 188
Covenant Labs@Covenantlabsai·
@TheAhmadOsman Definitely a great option. Our forthcoming secure cloud will allow you to keep your models and data fully encrypted in the cloud, but our free, open source dev framework also supports a local provider for those who prefer it
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 77
Ahmad@TheAhmadOsman·
i have fully dropped Claude Code for OpenCode. i don't use Opus 4.5, i use GLM-4.7 and MiniMax-M2.1. they're open source and can be self-hosted. nobody can nerf my models or rug-pull me. nobody should be able to do that to your intelligence. p.s. buy a GPU and run your LLMs locally
Replies: 342 · Reposts: 209 · Likes: 3.8K · Views: 356.7K
Covenant Labs@Covenantlabsai·
"In practice, sovereignty matters because competitive advantage increasingly lives inside prompts, workflows, fine-tuning, and proprietary context. When intelligence is rented, so is leverage." When your edge, your industry data, and your core IP live in your AI models, AI privacy is not a "nice to have," it is the moat!
Yohei@yoheinakajima

x.com/i/article/2008…

Replies: 0 · Reposts: 3 · Likes: 9 · Views: 3.1K
Arjun Kalsy@ArjunKalsy·
@Covenantlabsai This is the kind of design that wins long term because people eventually choose what protects them
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 93
Covenant Labs@Covenantlabsai·
we're building an AI stack where:
– your questions go in encrypted
– the thinking stays encrypted
– the answers leave encrypted
Privacy is not a promise, it's a design choice. If they can't prove it, you shouldn't trust it.
Replies: 8 · Reposts: 7 · Likes: 42 · Views: 9.3K
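The "stays encrypted the whole way through" claim above is easiest to see with additively homomorphic encryption, where a server can combine ciphertexts without ever decrypting them. Below is a toy Paillier sketch in pure Python; it is purely illustrative (demo-sized primes, no security hardening) and says nothing about the scheme Covenant actually uses:

```python
# Toy Paillier cryptosystem demonstrating "compute on data while it
# stays encrypted". NOT secure as written (tiny primes, no padding)
# and NOT Covenant's actual protocol; an illustration only.
import random
from math import gcd

p, q = 293, 433            # demo-sized primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                  # standard simplified Paillier generator
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)       # modular inverse; valid because g = n + 1

def encrypt(m):
    """Encrypt plaintext m < n under the public key (n, g)."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext with the private key (lam, mu)."""
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

a, b = encrypt(20), encrypt(22)
# The "server" multiplies ciphertexts; the plaintexts add underneath,
# and the server never sees 20, 22, or 42 in the clear.
assert decrypt((a * b) % n2) == 20 + 22
```

Fully homomorphic schemes extend this idea from addition to arbitrary computation, which is what would be needed for encrypted LLM inference; the principle of "server operates on ciphertexts only" is the same.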
Ares Labs AI@AreslabsAI·
New project: @Covenantlabsai (735 followers). Bio: AI Research Lab building verifiably private AI systems // Own your models, own your mind. Alert follower: @yoheinakajima
Replies: 2 · Reposts: 2 · Likes: 8 · Views: 662
Covenant Labs@Covenantlabsai·
Private AI must become a Public Good. It is not a "nice to have." It is the foundation for human flourishing in the AI era.
Replies: 1 · Reposts: 0 · Likes: 4 · Views: 339
Covenant Labs reposted
Will Preble@kingwillxm·
This is one reason I am so bullish on the shift to open source, fine-tuned models. The economics just make sense. Add in verifiable privacy by encrypting your models, and now you have a product that is better in 98% of use cases than calling GPT-5. Our internal research at @Covenantlabsai these last few months has gone heavily into a developer framework for tuning and deploying pipelines of encrypted models. The easier we make it to shift your stack to fine-tuned open source, the quicker companies will realize that they no longer have to compromise on cost or quality to actually own their models and data
Aakash Gupta@aakashgupta

All the analysts forever writing about OpenAI vs Anthropic vs Google are missing the real story that already happened. 80% of startups pitching Andreessen Horowitz are running on Chinese open-source models. Not OpenAI. Not Anthropic. Chinese models like DeepSeek that cost 214x less per token.

The math here breaks everything. DeepSeek trained its model for $5 million. OpenAI spent $500 million per six-month training cycle for GPT-5. That gap translates directly to API pricing where startups pay $0.14 per million tokens versus $30 for GPT-4. For a startup burning through 100 million tokens monthly, that's $1,400 versus $300,000. The difference between 18 months of runway and 3 months.

This tells you the real constraint in AI was never capability. Chinese models are matching GPT-4 on coding benchmarks while costing 2% as much. The constraint was always burn rate, and China solved it first by optimizing for efficiency instead of chasing AGI.

The second-order effect gets interesting. When your infrastructure costs drop 98%, you can actually afford to fine-tune models for your specific use case. American startups paying OpenAI's API rates are stuck with generic models. Chinese open-source users are building specialized variants.

Silicon Valley thought the moat was model quality. Turns out the moat was cost structure, and they built it backwards. When a16z partner Anjney Midha says "it's really China's game right now" in open-source, he's not talking about benchmarks. He's talking about who controls the default foundation layer.

Now look at where this goes. American AI labs are optimizing for AGI and superintelligence. Raising billions to chase the theoretical ceiling. China optimized for distribution and adoption. Making AI cheap enough to become infrastructure. All 16 top-ranked open-source models are Chinese. DeepSeek, Qwen, Yi. The models actually being deployed at scale. While OpenAI charges premium rates for exclusive access, Chinese labs are flooding the zone with free alternatives that work.

The third-order cascade is what changes everything. Every startup that survives the next funding winter will have optimized around Chinese open-source as default infrastructure. Not as a China strategy. As a survival strategy. That 80% number at a16z only goes one direction. When you're a seed-stage founder choosing between 18 months of runway or 3 months, economics beats nationalism every time.

America is still competing to build the best model. China already won the race to build the one everyone uses.

Replies: 1 · Reposts: 2 · Likes: 6 · Views: 1.4K
Covenant Labs@Covenantlabsai·
@Jason we couldn't agree more. happy to help any business leaders who are making the transition to a private AI stack
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 10
@jason@Jason·
Just a fair warning to founders: like Facebook and Microsoft, I predict OpenAI will study their platform partners and compete with many of them in the coming years. This isn't abnormal, mind you… Microsoft built Windows, then destroyed their top partners Lotus 1-2-3 and WordPerfect with Excel and Word. Stand up an open source LLM or use pure platform plays who don't have to hit $200B in revenue to break even on their $1.4T build-outs. Every dollar you spend with OpenAI and every job you send to their API might create the demise of your business
Brad Lightcap@bradlightcap

we are grateful to the more than 1 million business customers building with us openai.com/index/1-millio…

Replies: 212 · Reposts: 259 · Likes: 2.9K · Views: 593.3K
Proton Mail@ProtonMail·
Comment if you really really really really really really really really really really really really really really really really really really really really really love encryption
Replies: 506 · Reposts: 135 · Likes: 3.7K · Views: 87.1K
Covenant Labs reposted
Proton@ProtonPrivacy·
"Use Perplexity, we spy on you better than Google"
Replies: 128 · Reposts: 315 · Likes: 5.8K · Views: 298K
Covenant Labs@Covenantlabsai·
Every time you paste proprietary code into ChatGPT, you're essentially adding your company's IP to OpenAI's training data.
Replies: 0 · Reposts: 3 · Likes: 5 · Views: 480
Covenant Labs@Covenantlabsai·
90% of business use cases are better served by pipelines of fine-tuned small models than a single API call to GPT-5 or Claude Sonnet.
Replies: 3 · Reposts: 1 · Likes: 3 · Views: 219
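The "pipelines of fine-tuned small models" claim above boils down to cheap routing plus specialized backends. A minimal sketch, assuming a keyword router stands in for what would really be a tiny fine-tuned classifier (all model names here are hypothetical):

```python
# Hypothetical sketch of a pipeline of small specialized models vs one
# general API call: a cheap router dispatches each request to a
# task-specific model. Model names and routing logic are invented.
def route(task: str) -> str:
    # In practice this would itself be a small fine-tuned classifier;
    # a keyword stub keeps the sketch self-contained.
    if "contract" in task or "clause" in task:
        return "legal-7b-ft"
    if "sql" in task or "query" in task:
        return "sql-3b-ft"
    return "general-8b"

def run_pipeline(task: str):
    """Route the task, then invoke the chosen specialist (stubbed here)."""
    model = route(task)
    # A real pipeline would call the fine-tuned model over an inference
    # API; the stubbed answer just records which specialist was chosen.
    return model, f"[{model}] answer to: {task}"

model, answer = run_pipeline("extract the indemnity clause from this contract")
print(model, "->", answer)
```

The design choice this illustrates: each specialist stays small and cheap because it only has to be good at one task, and the router, not the caller, decides which one runs.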