In a blog post today, Microsoft announced an update to Face API that improves the facial recognition platform’s ability to recognize gender across different skin tones, a longstanding challenge for computer vision platforms.
With the improvements, the Redmond company said, it was able to reduce error rates for men and women with darker skin by as much as 20 times, and for all women by 9 times.
For years, researchers have demonstrated facial ID systems’ susceptibility to ethnic bias. A 2011 study found that algorithms developed in China, Japan, and South Korea had more trouble distinguishing between Caucasian faces than between East Asian faces, and a separate study showed that widely deployed facial recognition tech from security vendors performed 5 to 10 percent worse on African American faces.
To tackle the problem, researchers at Microsoft revised and expanded Face API’s training and benchmark datasets and collected new data across skin tones, genders, and ages. They also worked with experts in artificial intelligence (AI) fairness to improve the precision of the algorithm’s gender classifier.
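To see why disaggregated benchmarks matter, consider a minimal sketch of the kind of per-group evaluation such a benchmark enables. The group labels, data format, and classifier interface below are illustrative assumptions, not Microsoft's actual pipeline:

```python
# Hypothetical sketch: computing a gender classifier's error rate per demographic
# subgroup, so that disparities hidden by an overall accuracy number become visible.
from collections import defaultdict

def error_rates_by_group(examples, classifier):
    """examples: iterable of (image, true_gender, subgroup) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, true_gender, subgroup in examples:
        totals[subgroup] += 1
        if classifier(image) != true_gender:
            errors[subgroup] += 1
    # A large gap between subgroups (e.g. darker-skinned women vs. lighter-skinned men)
    # is exactly the disparity an expanded benchmark is meant to expose.
    return {g: errors[g] / totals[g] for g in totals}
```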
“We had conversations about different ways to detect bias and operationalize fairness,” Hanna Wallach, senior researcher at Microsoft’s New York research lab, said in a statement. “We talked about data collection efforts to diversify the training data. We talked about different strategies to internally test our systems before we deploy them.”
The enhanced Face API tech is just the start of a company-wide effort to minimize bias in AI. Microsoft is developing tools that help engineers identify blind spots in training data that might result in algorithms with high gender classification error rates. The company is also establishing best practices for detecting and mitigating unfairness in the course of AI systems development, according to the blog post.
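One simple form such a blind-spot check could take is flagging demographic subgroups that are badly underrepresented in the training set. The sketch below is a hypothetical illustration; the group labels and threshold are assumptions, not Microsoft's tooling:

```python
# Hypothetical sketch: flag training-data "blind spots" -- subgroups whose share
# of the data falls below a minimum target.
from collections import Counter

def find_blind_spots(labels, min_share=0.05):
    """labels: list of (gender, skin_tone) tuples, one per training example."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < min_share]

labels = [("female", "darker")] * 40 + [("male", "lighter")] * 960
print(find_blind_spots(labels))  # -> [('female', 'darker')], only 4% of the data
```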
More concretely, Microsoft’s Bing team is collaborating with ethics experts to explore ways to surface search results that reflect “the active discussion in boardrooms, throughout academia and on social media about the dearth of female CEOs.” Microsoft notes that less than 5 percent of Fortune 500 CEOs are women and that web search results for “CEO” largely turn up images of men.
“If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases,” Wallach said. “This is an opportunity to really think about what values we are reflecting in our systems, and whether they are the values we want to be reflecting in our systems.”
Microsoft isn’t the only company attempting to minimize algorithmic bias. In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Recent studies from IBM’s Watson and Cloud Platforms group have also focused on mitigating bias in AI models, specifically as they relate to facial recognition.
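In the same spirit as such fairness warnings, a minimal sketch might compare a model's positive-prediction rate across groups and raise a warning when the gap is large. This is an illustrative assumption about the general technique, not a description of Facebook's Fairness Flow internals:

```python
# Hypothetical sketch: warn when a model's positive-prediction rate differs
# too much between demographic groups.
def warn_on_disparity(predictions_by_group, max_gap=0.1):
    """predictions_by_group: dict mapping group name -> list of 0/1 predictions."""
    rates = {g: sum(p) / len(p) for g, p in predictions_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"Warning: prediction-rate gap of {gap:.2f} across groups: {rates}")
    return rates

warn_on_disparity({"group_a": [1, 1, 0, 1], "group_b": [0, 0, 1, 0]})  # gap of 0.50
```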