J. Alex Halderman

635 posts

J. Alex Halderman
@jhalderm

Bredt Family Professor of Computer Science and Engineering, @UMich: Security and privacy, election security, and Internet freedom. Co-founded @LetsEncrypt

Ann Arbor, MI · Joined December 2009
608 Following · 12.4K Followers
J. Alex Halderman retweeted
Percy Liang
Percy Liang@percyliang·
I stopped using ChatGPT a few months ago. Since then, I have been only using oa-chat. All chat history is stored locally. Each query is sent to OpenAI under a temporary key which is unlinkable to any other query. I’m not a privacy nut, but oa-chat is such a convenient drop-in replacement for your favorite AI assistant that there’s no reason not to try it out.
Ken Liu@kenziyuliu

Can we build a blind, *unlinkable inference* layer where ChatGPT/Claude/Gemini can't tell which call came from which user, like a "VPN for AI inference"? Yes! Blog post below + we built it into an open-source infra/chat app and have served >15k prompts at Stanford so far. How it helps with AI user privacy:

# The AI user privacy problem

If you ask AI to analyze your ChatGPT history today, it's surprisingly easy to infer your demographics, health, immigration status, and political beliefs. Every prompt we send accumulates into an (identity-linked) profile that the AI lab controls completely and indefinitely. At a minimum this is a goldmine for ads (as we know now). A bigger issue is the concentration of power: AI labs can easily become (or be asked to become) a Cambridge Analytica, whistleblow your immigration status, or work with health insurers to adjust your premium if they so choose. This is a uniquely worse problem than search engines because your average query is now more revealing (not just keywords), interactive, and intelligence is now cheap. Despite this, most of us still want these remote models; they're just too good and convenient! (This is aka the "privacy paradox.")

# Unlinkable inference as a user privacy architecture

The idea of unlinkable inference is to add privacy while preserving access to remote models controlled by someone else. A "privacy wrapper" or "VPN for AI inference," so to speak. Concretely, it's a blind inference middle layer that:

(1) consists of decentralized proxies that anyone can operate;
(2) blindly authenticates requests (via blind signatures / RFC 9474, RFC 9578) so requests are provably sandboxed from each other and from user identity;
(3) relays prompts over randomly chosen proxies that don't see or log traffic (via client-side ephemeral keys or hosting in TEEs); and
(4) the provider simply sees a mixed pool of anonymous prompts from the proxies.

No state, pseudonyms, or linkable metadata.

If you squint, an unlinkable inference layer is essentially a vendor for per-request, anonymous, ephemeral AI access credentials (for users and agents alike). It partitions your context so that user tracking is drastically harder. Obviously, unlinkability isn't a silver bullet: the prompt itself still goes to the remote model and can leak privacy (so don't use our chat app for a therapy session!). It aims to combat *longitudinal tracking* as a major threat to user privacy, and its statistical power increases quickly as more users and requests are mixed in. Unlinkability can be applied at any granularity. For an AI chat app, you can unlinkably request a fresh ephemeral key for every session so tracking is virtually impossible.

# The Open Anonymity Project

We started this project with the belief that intelligence should be a truly public utility. Like water and electricity, providers should be compensated by usage, not by who you are or what you do with it. We think unlinkable inference is a first step toward this "intelligence neutrality."

# Try it out! It's quite practical

- Chat app "oa-chat": chat.openanonymity.ai (<20 seconds to get going)
- Blog post that should be a fun read: openanonymity.ai/blog/unlinkabl…
- Project page: openanonymity.ai
- GitHub: github.com/OpenAnonymity
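The blind-authentication step the thread mentions (RFC 9474 RSA Blind Signatures) can be sketched in a few lines. This is a toy textbook-RSA illustration with tiny parameters, not the actual Open Anonymity implementation or a spec-compliant RFC 9474 scheme; all names and key sizes here are illustrative assumptions. It shows the core property: the issuer authenticates the user and signs a *masked* token, so the unblinded (token, signature) pair later presented to a proxy cannot be linked back to that signing session.

```python
# Toy sketch of a blind-signature credential flow (NOT secure:
# tiny primes, no padding; RFC 9474 uses RSA-PSS with 2048+ bit keys).
import hashlib
import secrets
from math import gcd

# Hypothetical issuer keypair (illustrative small known primes).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # issuer's private exponent

def fdh(msg: bytes) -> int:
    """Toy full-domain hash of a token into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def blind(m: int):
    """Client: mask m with a random r, so the issuer never sees m."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            return (m * pow(r, e, n)) % n, r

def blind_sign(blinded: int) -> int:
    """Issuer: authenticate the *user*, then sign the masked value."""
    return pow(blinded, d, n)

def unblind(s: int, r: int) -> int:
    """Client: strip the mask, leaving an ordinary signature on m."""
    return (s * pow(r, -1, n)) % n

# One per-request ephemeral credential: signed at auth time,
# presented later to a proxy with no linkable state in between.
token = secrets.token_bytes(16)
m = fdh(token)
blinded, r = blind(m)
sig = unblind(blind_sign(blinded), r)
assert pow(sig, e, n) == m          # proxy-side verification passes
```

Because the issuer only ever saw `blinded` (a uniformly masked value), it cannot match the signature it later verifies to any particular signing session, which is exactly the unlinkability property the middle layer relies on.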

22 replies · 79 reposts · 900 likes · 146.4K views
J. Alex Halderman
J. Alex Halderman@jhalderm·
How can users enjoy the benefits of frontier AI without falling into a privacy dystopia? Ken and my PhD student Erik Chi are charting a path to a solution, based on ideas of unlinkable inference and private personal intelligence. Try what they've built at openanonymity.ai.
Ken Liu@kenziyuliu

[Quoted tweet: Ken Liu's unlinkable inference thread, shown in full above]

0 replies · 4 reposts · 7 likes · 830 views
J. Alex Halderman retweeted
Ken Liu
Ken Liu@kenziyuliu·
[Thread: Ken Liu's unlinkable inference announcement, shown in full above]
[image]
62 replies · 157 reposts · 828 likes · 373.2K views
J. Alex Halderman
J. Alex Halderman@jhalderm·
I'm in DC the rest of this week for the Election Verification Network conference. Ping me if you want to say 👋
7 replies · 1 repost · 5 likes · 1.1K views
J. Alex Halderman
J. Alex Halderman@jhalderm·
I don't know whether the President actually has the authority to order these changes, but if not, Congress should see them as a starting point for long-needed election security reforms. Here's the full order: whitehouse.gov/presidential-a…
2 replies · 5 reposts · 21 likes · 1K views
J. Alex Halderman
J. Alex Halderman@jhalderm·
A new executive order just dropped that, if implemented, promises to significantly strengthen federal certification of voting machines.
[image]
9 replies · 18 reposts · 59 likes · 24.1K views
J. Alex Halderman
J. Alex Halderman@jhalderm·
@Curiousityfirst There's a lot that's broken about the whole EAC certification process, but I still think these changes would be positive.
2 replies · 0 reposts · 2 likes · 143 views
J. Alex Halderman retweeted
Miles O'Brien
Miles O'Brien@milesobrien·
🎧 NEW PODCAST: Protecting Democracy: My Journey Covering Election Security w/ @jhalderm! From Georgia’s voting machines to myths about mail-in ballots, we explore how to safeguard elections in a digital world. 🔗 Listen: milestogo.libsyn.com/episode-38-not…
1 reply · 5 reposts · 10 likes · 1.6K views
J. Alex Halderman
J. Alex Halderman@jhalderm·
@robertgraham The optimist in me hopes that time is now. The pessimist worries that the issue will fall completely off the radar.
3 replies · 1 repost · 6 likes · 252 views
Robert Graham
Robert Graham@robertgraham·
@jhalderm I don't think we get security progress when it's the aggrieved losers chasing conspiracy theories, using them as an excuse to overturn an election. I think we only get progress when there's no agenda other than sincerely wanting to improve security.
2 replies · 0 reposts · 6 likes · 314 views
J. Alex Halderman
J. Alex Halderman@jhalderm·
From a security perspective, elections are much safer when there's a decisive outcome than when they hinge on razor-thin margins in a few states. But what about next time? How many voters will continue to call for necessary election security improvements?
11 replies · 6 reposts · 26 likes · 2K views
J. Alex Halderman retweeted
INFORMS
INFORMS@INFORMS·
How can #optimization help with #electionsecurity? Read the latest article from Operations Research. "Improving the Security of United States Elections with Robust Optimization" bit.ly/3YKslim
[image]
4 replies · 6 reposts · 15 likes · 1.9K views
J. Alex Halderman
J. Alex Halderman@jhalderm·
@elonmusk Hey Elon, glad you're aware of my work. I'd be happy to talk if you'd like to learn more about the current state of play.
5 replies · 7 reposts · 25 likes · 1.1K views
Elon Musk
Elon Musk@elonmusk·
Electronic voting machines and anything mailed in is too risky. We should mandate paper ballots and in-person voting only.
[4 images]
17.8K replies · 63.3K reposts · 281.6K likes · 27.9M views
J. Alex Halderman retweeted
Duncan Campbell
Duncan Campbell@duncan_2qq·
@rossjanderson Professor Ross Anderson, FRS, FREng. Dear friend and treasured long-term campaigner for privacy and security, Professor of Security Engineering at Cambridge University and Edinburgh University, and Lovelace Medal winner, has died suddenly at home in Cambridge.
[image]
74 replies · 295 reposts · 807 likes · 487.5K views
J. Alex Halderman retweeted
Let's Encrypt
Let's Encrypt@letsencrypt·
That amazing feeling when you look something up on @wikipedia and there is your TLS cert, looking back at you.
[image]
5 replies · 10 reposts · 143 likes · 22.7K views