Microsoft is turning its back on its scientifically dubious and ethically questionable emotion recognition technology. At least for now.
In a significant win for privacy advocates alarmed by undertested and invasive biometric technology, Microsoft has announced plans to retire its so-called “emotion recognition” systems from its Azure Face facial recognition service. The company will also phase out capabilities that attempt to use A.I. to infer identity attributes such as gender and age.
Microsoft’s decision to pump the brakes on the controversial technology comes amid a larger overhaul of its ethics policies. Natasha Crampton, Microsoft’s Chief Responsible AI Officer, said the reversal came in response to experts who pointed to a lack of scientific consensus on the definition of “emotions” and raised concerns about overgeneralizing how A.I. systems interpret them.
“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the trade-offs,” Sarah Bird, Principal Group Product Manager for Azure AI, said in a separate statement. “API access to capabilities that predict sensitive attributes also opens up a wide variety of ways they can be abused, including exposing people to stereotyping, discrimination, or unfair denial of service,” Bird added.
Bird said the company will move away from the general-purpose system in the Azure Face API that attempts to measure these attributes in order to “reduce risks.” Starting Tuesday, new Azure customers will no longer have access to the detection system, and existing customers will have until 2023 to end their use. Notably, though the API will no longer be available for general-purpose use, Bird said the company may continue to explore the technology in limited use cases, particularly as a tool to support people with disabilities.