Julian Griffin 🦊


An “Age Verification for all Operating Systems” bill (covering Windows, macOS, & Linux) has been introduced in the state of Illinois. The new bill (Illinois SB 3977) is *very* similar to the recently passed California bill (and the introduced Colorado bill) and, if passed, would set a compliance deadline of January 1, 2028. The Illinois version of the bill is sponsored by Laura Ellman (Democrat). legiscan.com/IL/bill/SB3977…






The NY legislature is rapidly pushing through a 2025 bill that would prohibit LLMs from providing substantive legal analysis or advice in NY.

EDIT: Thanks to @InquisitiveUrsa, I now agree that this bill is not as bad as I first thought. As written, LLMs could still provide substantive help *to lawyers* (e.g., in the same way an unlicensed summer associate is not forbidden from doing legal research or drafting a legal memo). Consumers, however, would be stuck with the chatbot refusing to answer their legal questions. Regardless, this bill is still terrible and should not be passed.








What companies use Persona for age verification? If you have sent your ID to any of these companies, then the United States government has all of your information (and is trying to map it to your financial records). These are all publicly listed companies:

- OpenAI
- Coursera
- Twilio
- Square
- Lime
- Brex
- Branch
- WeTravel
- CoffeeMeetsBagel
- Flipster
- NextDoor
- Frax
- Bridge
- Okta
- LinkedIn
- Twitch
- Roblox
- RegionalBank
- CitizenHealth
- K Health
- Neighbor
- BitGo
- PlayBoy
- CryptoExchange
- AngelList
- Empower
- MyRent
- RobinHood
- Discord (previously)








Your AI conversations aren't privileged.

Yesterday, Judge Jed Rakoff ruled that 31 documents a defendant generated using an AI tool and later shared with his defense attorneys are not protected by attorney-client privilege or the work-product doctrine.

The logic is simple: an AI tool is not an attorney. It has no law license, owes no duty of loyalty, and its terms of service explicitly disclaim any attorney-client relationship. Sharing case details with an AI platform is legally no different from talking through your legal situation with a friend (which is not privileged).

You can't fix it after the fact, either. Sending unprivileged documents to your lawyer doesn't retroactively make them privileged. That's been settled law for years; it just hadn't been tested with AI until now.

And here's what really hurt the defendant: the AI provider's privacy policy (for Claude), in effect when he used the tool, expressly permits disclosure of user prompts and outputs to governmental authorities. There was no reasonable expectation of confidentiality.

The core problem is the gap between how people experience AI and what's actually happening. The conversational interface feels private. It feels like talking to an advisor. But unless you negotiate an enterprise agreement that says otherwise, you're inputting information into a third-party commercial platform that retains your data and reserves broad rights to disclose it.

Judge Rakoff also flagged an interesting wrinkle: the defendant reportedly fed information from his attorneys into the AI tool. If prosecutors try to use these documents at trial, defense counsel could become a fact witness, potentially forcing a mistrial. Winning on privilege doesn't make the evidentiary picture simple.

For anyone advising clients or managing legal risk, this is a wake-up call. AI tools are not a safe space for clients to process their counsel's advice and rehearse their legal strategy. Every prompt is a potential disclosure. Every output is a potentially discoverable document.

So what do we do about it?

First, attorneys need to be proactive. Advise clients explicitly that anything they put into an AI tool may be discoverable and is almost certainly not privileged. Put it in your engagement letters. Make it part of onboarding. Don't assume clients understand this, because most don't.

Second, if clients want to use AI to help process legal issues (and they clearly will, increasingly), then let's give them a way to do it inside the privilege. Collaborative AI workspaces shared between attorney and client, where the AI interaction happens under counsel's direction and within the attorney-client relationship, can change the analysis entirely. I'm excited to be planning this kind of approach, and I think it's where the industry needs to head. storage.courtlistener.com/recap/gov.usco…















