
What we know about the OpenAI-DoW deal:
OpenAI agreed to the terms Anthropic rejected.
The terms include an "all lawful use" clause.
The contract "references certain existing legal authorities" which, the government claims, already make domestic mass surveillance illegal.
There are "certain mutually agreed upon safety mechanisms" — we don't know the precise details, but they appear to include a "monitoring harness" and Forward Deployed Engineers, which, per one OpenAI employee, "seem to offer broad powers for openai to interpret the law."
Perhaps! But ultimately OpenAI isn't in charge of how this model is used, and it has signed a contract that allows the Pentagon to use the model for all lawful uses.
The Pentagon may claim that domestic mass surveillance is illegal, but only under a very narrow definition of domestic mass surveillance — the Snowden files showed that the Pentagon is perfectly willing to play word games to pretend that domestic mass surveillance is something else.
I think the default assumption should be that OpenAI's "safety mechanisms" won't actually do anything, because OpenAI has ultimately agreed that its models can be used for anything lawful — which, in effect, is the same as letting the Pentagon use them for domestic mass surveillance.
Shakeel (@ShakeelHashim): "It is incredibly depressing to see OpenAI employees trying to spin the OpenAI-DoW deal as a good thing. OpenAI caved to the Pentagon's demands. Your technology can now be used for authoritarian ends. Own up to it."