Diya Mukherjee
15 posts

Diya Mukherjee
@Dyya_mk
MSc Business Psychology · AI Governance & Compliance Professional · Founder, Saffýr AI
San Francisco, CA · Joined April 2026
1 Following · 1 Follower

The framing I find most interesting in this law is what it implies about how China views the problem.
Most AI regulation treats the risk as being about outputs: what the AI says or decides.
This law treats the risk as being about the relationship itself. The sustained emotional interaction is the governance problem, not just the content it produces.
That's a meaningful shift in regulatory thinking. And honestly a harder problem to solve.
Because the harm isn't in any single interaction. It accumulates over time, invisibly, until it isn't invisible anymore.

🚨 BREAKING: China's new law on AI anthropomorphism has been officially enacted, and it is the world's STRICTEST law on the topic:
As I wrote earlier this year, to my knowledge, no AI law anywhere in the world regulates anthropomorphic AI systems with this level of detail, strictness, and concern for context-specific vulnerabilities and potential risks.
Earlier in January, I wrote an article about the law's first draft (link below). The approved version is even more comprehensive, covering liability-related risks as well.
Article 10, for example, establishes that providers of anthropomorphic AI must fulfill their security responsibilities throughout the service lifecycle and sets out detailed obligations for each phase of AI development and deployment.
Regarding children specifically, among the prohibited anthropomorphic AI practices is generating content for minors that causes them to imitate unsafe behaviors, induces extreme emotions, or leads them to develop bad habits, which may affect their physical and mental health.
Despite being a serious issue (one that has led to numerous cases of suicide and mental health harm), most countries do NOT regulate AI anthropomorphism comprehensively.
An important reason is that peer-reviewed studies on AI-powered emotional manipulation and mental health harm only became available recently, since only in the past few years have millions of people begun engaging in these kinds of relationships.
China's new law is worth taking a look at, and hopefully, other countries, states, and regions will soon follow suit with their own protections against AI anthropomorphism.
👉 Lastly, if you are interested in China's AI policy and regulation, besides joining my newsletter's 93,200+ subscribers, I invite you to join my new Masterclass on the topic (only on June 1st). Links below.


Something I didn't fully appreciate when we started building Saffýr AI:
The US AI compliance problem isn't a legal problem. It's an infrastructure problem.
Right now, companies are liable under Colorado's AI Act, Texas's TRAIGA, and California's transparency laws simultaneously.
Meanwhile a federal executive order tries to preempt them all, and a congressional bill tries to block that preemption.
No legal team can manually track that in real time. Not because they're not smart. Because the stack changes faster than any human process can.
The answer isn't more lawyers or better spreadsheets.
It's infrastructure that treats compliance as a live system, not a one-time audit.
That's the only version of this problem that actually gets solved.
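As a sketch of what "compliance as a live system" could look like in code (a hypothetical illustration, not a real product; rule names and dates are taken from the posts in this feed):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: model each rule as data rather than a line in a memo,
# so "what applies to us today?" is a query instead of a research project.
@dataclass(frozen=True)
class Rule:
    jurisdiction: str
    name: str
    effective: date
    applies_to: frozenset  # AI use cases the rule covers

RULES = [
    Rule("CO", "Colorado AI Act", date(2026, 6, 30), frozenset({"hiring", "lending"})),
    Rule("NYC", "Local Law 144", date(2023, 7, 5), frozenset({"hiring"})),
    Rule("EU", "EU AI Act (high-risk)", date(2026, 8, 2), frozenset({"hiring", "biometrics"})),
]

def applicable(use_case: str, jurisdictions: set, today: date) -> list:
    """Return the rules in force today for this use case and footprint."""
    return [r for r in RULES
            if r.jurisdiction in jurisdictions
            and use_case in r.applies_to
            and r.effective <= today]

# A company screening hires in NYC and the EU, checked on a given day:
hits = applicable("hiring", {"NYC", "EU"}, date(2026, 9, 1))
```

The point isn't the code; it's that once the rule set is data, re-checking exposure after every legislative change is automatic instead of a quarterly legal project.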

A lot of companies saw the EU's Digital Omnibus proposal and quietly exhaled.
"Maybe the deadline moves. Maybe compliance gets simpler."
I get it. The AI Act is genuinely hard to operationalize.
But here's what I've come to believe building in this space:
The companies that will be in the best position aren't the ones who caught a lucky regulatory delay.
They're the ones who built governance infrastructure that doesn't depend on a deadline to function.
Continuous compliance isn't a sprint to August 2. It's a system that runs whether the law changes or not.
The Omnibus might pass. The deadline might shift.
Your exposure doesn't change either way.

@Cleo_Compliance Exactly right. The moment you plug in a third-party model and make a decision with it, the Act puts the governance burden on you.

🚨The EU AI Act is fully in force as of August 2026.
Here's what most companies still don't understand about it:
It doesn't regulate AI companies. It regulates ALL companies using AI.
That means your hiring tool. Your fraud detection. Your customer scoring system...
Non-compliance fines: up to €35 million or 7% of global revenue.
Most companies have no documentation, no risk tier mapping, and no disclosure process in place.
The law isn't coming. It's here. And enforcement is starting.
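The "risk tier mapping" most companies are missing is, at minimum, an inventory of each AI system and its tier under the Act. A minimal sketch (system names and statuses are made up for illustration):

```python
# Hypothetical inventory: each internal AI system mapped to an EU AI Act
# risk tier, with its disclosure and documentation status on record.
TIERS = ("prohibited", "high", "limited", "minimal")

inventory = {
    "resume_screener": {"tier": "high",    "disclosure": True, "docs": "missing"},
    "fraud_detection": {"tier": "high",    "disclosure": True, "docs": "missing"},
    "support_chatbot": {"tier": "limited", "disclosure": True, "docs": "ok"},
}

def gaps(systems: dict) -> list:
    """List high-risk systems with no documentation on file."""
    return [name for name, s in systems.items()
            if s["tier"] == "high" and s["docs"] != "ok"]
```

Even a table this crude answers the question most companies can't: which of our systems are high-risk, and where is the paperwork?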

🚨3 things companies do that guarantee AI compliance failure:
1. Treat it as a one-time audit
2. Assign it entirely to legal
3. Assume their vendors are handling it
Getting this right doesn't require more compliance work.
Instead, build systems where compliance happens automatically, so it doesn't depend on a team keeping up with rules that change faster than they can read them.
All three are happening right now. Here's why each one fails 🧵

Many companies run AI hiring tools that screen hundreds of applicants daily.
The tool weights communication style. Penalizes certain speech patterns.
Nobody flagged it. Nobody documented it. No disclosure to applicants. No bias audit on file.
Under the EU AI Act, that's a high-risk AI system with zero required safeguards.
But here's what most people miss: you don't need to touch the EU to be exposed.
NYC law has required annual bias audits on AI hiring tools since 2023.
California: illegal as of October 2025.
Illinois: illegal as of January 2026.
Colorado: in force June 2026.
And the law applies even when a human makes the final call.
This isn't hypothetical. This is happening at companies you've heard of.
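The core arithmetic behind a bias audit is simple. NYC's Local Law 144 audits, for example, report impact ratios: each group's selection rate divided by the highest group's rate. A sketch with made-up numbers (group names and counts are illustrative, not from any real audit):

```python
# Hypothetical bias-audit arithmetic: selection rates per group,
# normalized against the best-performing group.
selected = {"group_a": 40, "group_b": 18}   # applicants advanced by the tool
screened = {"group_a": 100, "group_b": 90}  # applicants screened

rates = {g: selected[g] / screened[g] for g in screened}
best = max(rates.values())
impact_ratio = {g: r / best for g, r in rates.items()}

# A ratio well below 1.0 (the classic rule of thumb is 0.8) flags the
# tool's weighting for review, documentation, and disclosure.
flagged = [g for g, r in impact_ratio.items() if r < 0.8]
```

If a tool penalizing certain speech patterns produces a ratio like the one above, the audit exists precisely so somebody flags it before a regulator does.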

I've spent the last year going deep on AI regulation.
What I found was alarming.
Not because the laws are complex, but because almost no one is following them.
Companies are using AI in hiring, customer decisions, and biometric scoring right now. No disclosure. No documentation... most don't even know they're exposed.
The EU AI Act is now in force. US state laws are compounding weekly.
I'll be writing about what this means for the companies building and deploying AI.
If that's relevant to you, follow along.