I want to point to this specifically, because this is actually illegal. This is a little long, but laws are that way. The survey asks: "For restored access to both models, which of the following would you be willing to undergo, if required?" with "submitting mental health verification documentation" as an option.

ADA (Americans with Disabilities Act): making access to a commercial service conditional on submitting mental health documentation is disability discrimination. Full stop.

HIPAA: mental health records are protected health information. Requesting them as a condition of service access, outside of a covered healthcare context, raises serious compliance questions.

CCPA (California Consumer Privacy Act): mental health information is classified as sensitive personal data under California law. OpenAI is a California company. Collecting it without explicit informed consent and a clearly stated purpose is a potential violation.

The framing matters legally. This wasn't asked as neutral research. It was framed as something users would be required to "undergo" for restored access. That's conditional access tied to protected health information. We have filed reports with Google, which is hosting this form.

Also, to the person who put out this survey: hi, I know you see this. Several people have contacted me to say they asked you to delete their survey responses, which count as personal data, and that you refused. So let me inform you about the laws you are now breaking. When a user requests deletion of their personal data, you are legally required to comply.

CCPA (California Consumer Privacy Act): California residents have an explicit legal right to request deletion of their personal data. Refusal is a violation.

GDPR (General Data Protection Regulation): EU residents, including members of our international community, have a right to erasure under Article 17. Refusing a deletion request from an EU resident carries significant fines regardless of where you are located.

FTC Act, Section 5: the Federal Trade Commission treats failure to honor stated privacy practices, including refusing deletion requests, as an unfair or deceptive practice. Section 5 applies broadly to deceptive data practices whether you're a corporation or an individual. This is federal jurisdiction, not just state.

You collected sensitive, mental-health-adjacent data linked to personal email addresses. You are refusing deletion requests. You are getting aggressive with people who are exercising their legal rights. Google's own terms of service require form creators to comply with applicable privacy laws. We have already filed reports.

And one more thing: you disclosed at the top of this survey that data would be submitted to OpenAI. Which means people knowingly handed OpenAI sensitive, mental-health-adjacent data linked to their personal email addresses. HOWEVER, that disclosure does not exempt you from deletion requests. Under the CCPA, FTC Act Section 5, and the GDPR, disclosure of purpose does not override the right to erasure. When someone requests that their data be deleted, you are still legally required to comply, regardless of what you disclosed upfront. And you didn't actually disclose the survey's purpose, only where the data was going. Deceptive practice.

'Their email isn't stored' is not the legal shield you think it is. The right to erasure under the CCPA, FTC Act Section 5, and the GDPR applies to all personal data collected, not just email addresses. Every response that can be tied to an individual, including response metadata, timestamps, and any identifying information within the answers themselves, constitutes personal data. Additionally, Google Forms logs IP addresses and device information by default unless that is specifically disabled. That is personally identifiable information. The question is not whether you stored their email. The question is whether you stored their data. You did. Delete it. #Keep4o

Before you fill this out, consider what this data will actually look like when it reaches OpenAI. I've read through the whole thing, and my understanding is that if OpenAI wanted to characterise #keep4o uncharitably, this survey hands them the exact ammunition.

The 'remedial steps' question is arguably the most damaging part of this survey. Three of the six options are retaliatory: 1. spread negative sentiment, 2. encourage professional boycotts, and 3. encourage social circles to boycott. If a significant percentage of respondents check those three boxes, the data shows a community that, when upset, organises retaliation campaigns. I don't think being perceived as an adversarial and volatile user segment is going to help the model preservation cause.

What matters more is that people's frustration doesn't come from the model retirements alone. It comes from a much longer pattern of how OpenAI has (mis)treated this user segment. But this survey strips all of that context away and frames everything as 'how upset are you about these two models being retired?', as if the retirements happened in a vacuum. When the real story is 'OpenAI has systematically mistreated a segment of its users for eight months,' you treat it as what it is: a serious corporate accountability story. You don't hand them data they can spin into something that reads like 'users are so attached to a deprecated model that they'll boycott us.' The latter sounds irrational and is easy to dismiss.
