Microsoft has changed its policy to ban US police departments from using generative AI for facial recognition through Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI technologies.
Language added Wednesday to the terms of service for the Azure OpenAI Service prohibits integrations with the Azure OpenAI Service from being used “by or for” US police departments for facial recognition, including integrations with OpenAI’s text- and speech-analyzing models.
A separate new bullet covers “any law enforcement worldwide,” and expressly prohibits the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.
The changes to the terms come a week after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 text generation model to summarize audio from body cameras. Critics were quick to point out potential pitfalls, such as hallucinations (even the best generative AI models today invent facts) and racial biases introduced by the training data (which is especially troubling given that people of color are far more likely to be stopped by police than their white peers).
It is unclear whether Axon was using GPT-4 through the Azure OpenAI Service and, if so, whether the updated policy was in response to Axon’s product release. OpenAI had previously limited the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft, and OpenAI and will update this post if we hear back.
The new terms leave wiggle room for Microsoft, however.
The complete ban on using the Azure OpenAI Service applies only to US police, not police internationally. And it doesn’t cover facial recognition performed with stationary cameras in controlled environments, such as a back office (although the terms prohibit any use of facial recognition by US police).
This tracks with the recent approach of Microsoft and its close AI partner OpenAI to law enforcement and defense contracts.
In January, a Bloomberg report revealed that OpenAI is working with the Pentagon on a number of projects, including cybersecurity capabilities, a departure from the startup’s previous ban on providing its AI to militaries. Elsewhere, Microsoft has pitched the use of OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software to run military operations, per The Intercept.
The Azure OpenAI Service became available in Microsoft’s Azure Government product in February, adding compliance and management features geared toward government agencies, including law enforcement. In an announcement, Candice Ling, SVP of Microsoft’s government-focused division, Microsoft Federal, pledged that the Azure OpenAI Service will “submit for additional authorization” to the Department of Defense for workloads that support DoD missions.
Update: After publication, Microsoft said its original change to the terms of service contained an error; the ban applies only to facial recognition in the US and is not a blanket ban on police departments using the service.