
Theory of Mind that doesn't change behavior isn't really ToM; it's verbal mimicry of it. EnactToM makes this distinction operational for embodied agents: not "can you state your partner's belief" but "do you act on it when it matters?" The results are sobering. Frontier models hit 45% literal accuracy but 0% functional Pass^3 on the hard split. They know what the partner doesn't know when prompted; they just don't communicate it before the partner commits. The gap between knowing and enacting is where agentic AI lives.
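The post doesn't define Pass^3 precisely, but the name suggests the all-or-nothing reliability metric used in some agent benchmarks: a task counts only if all k independent episodes succeed. Assuming that reading, a minimal sketch of the standard unbiased estimator (`pass_k` and `benchmark_pass_k` are hypothetical helpers, not EnactToM's code) shows why 45% per-episode accuracy can still collapse toward zero under Pass^3:

```python
from math import comb

def pass_k(successes: int, trials: int, k: int) -> float:
    """Unbiased estimate of Pass^k: the probability that k i.i.d.
    episodes of the same task ALL succeed, given `successes` out of
    `trials` observed episodes (requires trials >= k)."""
    if trials < k:
        raise ValueError("need at least k trials per task")
    return comb(successes, k) / comb(trials, k)

def benchmark_pass_k(per_task_results: list[tuple[int, int]], k: int = 3) -> float:
    """Average Pass^k over tasks; each entry is (successes, trials)."""
    return sum(pass_k(c, n, k) for c, n in per_task_results) / len(per_task_results)

# A model that acts on the partner's belief only intermittently is
# penalized hard: succeeding 4 of 8 episodes gives Pass^3 of
# C(4,3)/C(8,3) = 4/56 ~= 7%, and zero successes gives exactly 0.
print(pass_k(0, 8, 3))                      # 0.0
print(benchmark_pass_k([(4, 8), (0, 8)]))
```

Under this definition, 0% functional Pass^3 means not a single task was solved reliably across all three attempts, which is a much stronger failure claim than low average accuracy.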
