Confidential AI

37 posts

@Confi_AI

Confidential AI makes it trivial to deploy AI workloads inside TEEs: end-to-end private inference, training, and agents.

San Francisco · Joined June 2025
77 Following · 28 Followers
Pinned Tweet
Confidential AI @Confi_AI ·
🌖 Lunal is the fastest way to deploy code in Trusted Execution Environments (TEEs) - with zero configuration, guaranteed privacy, full verifiability, and automatic scaling. TEEs are game-changing but hard to use. We make them easy. Here's how:
Confidential AI @Confi_AI ·
By 2029, Gartner predicts more than 75% of operations processed in untrusted infrastructure will be secured in-use by confidential computing. Only 75%? We're building for 100%.
Confidential AI @Confi_AI ·
We are in SF this week around HumanX. If you're thinking about private inference, secure model deployment, or confidential computing, let's grab a coffee.
Confidential AI @Confi_AI ·
Trusted Execution Environments, explained. A useful analogy is the transition from HTTP to HTTPS.

With HTTP, you sent your data in plaintext and couldn't confirm who you were talking to. HTTPS lets you confirm who you're talking to and encrypts your data in transit. But once your data arrives at the server, it's decrypted and processed in plaintext, which means the system administrator can see it.

TEEs go a step further. Your data stays encrypted by hardware during computation. You can verify who you're talking to, what software they're running, and that your data stays private throughout. The system administrator can't access the data. No one can.

All of this, like HTTPS, adds a negligible performance cost.
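The "verify what software they're running" step is remote attestation. A minimal sketch of the client-side logic, in Python; the measurement constant, `verify_attestation`, and `send_prompt` are all hypothetical illustration, not Lunal's API, and a real client would check a hardware-signed report rather than a bare hash:

```python
import hashlib

# Hypothetical measurement (code hash) of the approved server build.
# In a real TEE this value comes from the hardware's signed attestation report.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-inference-server-v1").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Check that the remote TEE is running exactly the software we expect."""
    return reported_measurement == EXPECTED_MEASUREMENT

def send_prompt(prompt: str, reported_measurement: str) -> str:
    """Refuse to hand over data unless attestation succeeds."""
    if not verify_attestation(reported_measurement):
        raise ConnectionRefusedError("attestation failed: unknown software stack")
    # A real client would now encrypt the prompt to a key bound to the
    # attested enclave, so only that enclave can decrypt it.
    return f"delivered to attested TEE: {prompt}"
```

The point of the analogy: as with an HTTPS certificate check, the client refuses to talk before any sensitive data leaves its hands.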
Confidential AI @Confi_AI ·
Confidential AI is the most important infrastructure problem of today. Exposing our data and identity to the tech giants was bad. The same is happening with the frontier labs, in fast-forward mode. As AI infiltrates every workflow, confidentiality will necessarily become table stakes.

The labs want this too: they want to protect their model weights. Privacy is essential from all perspectives.

Confidential computing is hitting an inflection point. It is crossing the chasm from theory and research into production. But this problem is not yet solved. That's why today we decided to rename ourselves Confidential AI. This new name describes exactly what gets the team out of bed every day.

Lunal as a name has worked for us until now: an engineering team solely focused on shipping by daylight or moonlight. Until a few weeks ago, our domain lunal.dev just pointed to our GitHub page, a signal that we cut out the fluff and focus on the load-bearing work.

The name has changed but the approach hasn't. Same team, same roadmap, same hellbent desire to ship high-quality, privacy-preserving products. We've shipped confidential inference, training, fine-tuning, and agentic flows. And now, finally, we got round to shipping a new name and a website.

We are Confidential AI and we do Confidential AI. Find us at confidential.ai. If you're building with AI and care about who sees your data or model weights, we should talk.
Confidential AI @Confi_AI ·
Huge thanks to @Initialized for having Lunal's CEO @ansgargg on stage last night at the 3rd annual Initialized Talent Summit. We really enjoyed the event.
Confidential AI @Confi_AI ·
We built a simulator for c8s, our confidential Kubernetes product with autoscaling.
Confidential AI @Confi_AI ·
Your agent's API keys must be accessible in plaintext to be used. They are only truly safe inside a TEE. Even a compromised host can't extract them. Hardware-enforced credential isolation. That's what agent security should look like.
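The isolation model described above can be sketched as a toy in Python. Here name mangling merely stands in for hardware memory protection, and `EnclaveCredential` and its methods are invented for illustration; real isolation is enforced by the TEE, not by the language:

```python
import hashlib
import hmac

class EnclaveCredential:
    """Toy model of hardware-enforced credential isolation: the host can
    *use* the key (ask for an authenticated request) but is given no
    interface to *read* it. In a real TEE the key lives in encrypted
    enclave memory that the host OS cannot map."""

    def __init__(self, api_key: bytes):
        self.__api_key = api_key  # stands in for enclave-sealed memory

    def sign_request(self, payload: bytes) -> str:
        # The host hands in a request; only the HMAC signature comes back out.
        return hmac.new(self.__api_key, payload, hashlib.sha256).hexdigest()
```

The design point is the narrow interface: a compromised host can observe signatures flowing out, but the plaintext key is decrypted for use only inside the protected region.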
Confidential AI @Confi_AI ·
@BatsouElef By listening to that inner voice whenever we send prompts into the frontier labs. That prompt should be private.
Eleftheria Batsou @BatsouElef ·
Time to promote your product. 🚀 Share that product URL!!
Confidential AI @Confi_AI ·
You heard it from the great @ameanasad: Confidential Computing is available in the latest GPUs - H100s, B-Series, Vera Rubins and more. At Lunal, we make it trivial to set up your AI flows in CC mode.
aminoacid @ameanasad

@wyatt_benno @LouisThibault87 The CC functionality in GPUs is already available in H100s, B200s, B300s, and the new Vera Rubins. So they cost the same as the GPU itself; you just have to turn it on and pair the GPU with a CPU that also supports CC, which is the case for most modern data-center CPUs.

Confidential AI @Confi_AI ·
The first AI infra provider to offer end-to-end confidential inference will break the market open.

The history of the internet is full of moments where one player raised the floor on privacy or security, and the entire market had to follow.

In 2018, Chrome started marking HTTP sites as "Not Secure." Web admins who had been putting off HTTPS for years suddenly had a deadline. One browser's UI change became an industry mandate. Not because of regulation, but because users saw the warning and stopped trusting the site.

Apple shipped Touch ID in 2013. Within two years, every major Android offering had fingerprint auth. Passwords as the default auth method never recovered.

The pattern is always the same: one player makes the privacy feature the default, and everyone else scrambles. AI inference is next.
Peter Steinberger 🦞 @steipete ·
Been so much fun cooking OpenShell and NemoClaw with the @NVIDIAAI folks! 🙏🦞 Huge step towards secure agents you can trust. What’s your OpenClaw strategy?
Confidential AI @Confi_AI ·
The AI market breaks open the moment someone makes confidential inference the default.