Bias in algorithms is more common than you might think. An academic paper in 2012 showed that facial recognition systems from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians, and researchers in 2011 found that models developed in China, Japan, and South Korea had difficulty distinguishing between Caucasians and East Asians. In another recent study, popular smart speakers made by Google and Amazon were found to be 30 percent less likely to understand non-American accents than those of native-born users. And a 2016 paper concluded that word embeddings in Google News articles tended to exhibit female and male gender stereotypes.
It’s a problem. The good news is, researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) are working toward a solution.
In a paper (“Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure”) scheduled to be presented at the Association for the Advancement of Artificial Intelligence’s conference on Artificial Intelligence, Ethics, and Society in Honolulu this week, MIT CSAIL scientists describe an AI system that can automatically “debias” data by resampling it to be more balanced. They claim that, when evaluated on a dataset specifically designed to test for biases in computer vision systems, it demonstrated both superior performance and “decreased categorical bias.”
“Facial classification in particular is a technology that’s often seen as solved, even as it’s become clear that the datasets being used often aren’t properly vetted,” Ph.D. student Alexander Amini, who was co-lead author on a related paper, said in a statement. “Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement, and other domains.”
Amini and fellow Ph.D. student Ava Soleimany contributed to the new paper, along with graduate student Wilko Schwarting and MIT professors Sangeeta Bhatia and Daniela Rus.
It’s not MIT CSAIL’s first pass at the problem — in a 2018 paper, professor David Sontag and colleagues described a method to reduce bias in AI without reducing the accuracy of predictive results. But the approach here features a novel, semisupervised end-to-end deep learning algorithm that simultaneously learns the desired task — for example, facial detection — and the underlying latent structure of the training data. That latter bit enables it to uncover hidden or implicit biases within the training data, and to automatically remove that bias during training without the need for data preprocessing or annotation.
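To make “learning the task and the latent structure at once” concrete, the snippet below is a minimal sketch of a combined training objective, assuming a PyTorch implementation and a binary face-detection task; the specific loss terms, their weighting, and the framework are illustrative assumptions, not the authors’ exact formulation.

```python
# Illustrative sketch (not the paper's code): a combined objective that trains
# a model on its supervised task while also learning a VAE-style latent
# structure over the training data.
import torch
import torch.nn.functional as F

def combined_loss(task_logits, labels, reconstruction, inputs, mu, logvar, alpha=1.0):
    # Supervised task loss, e.g. face detection framed as binary classification
    # (labels are floats in [0, 1]).
    task_loss = F.binary_cross_entropy_with_logits(task_logits, labels)
    # Reconstruction loss: how faithfully the decoder rebuilds the input.
    recon_loss = F.mse_loss(reconstruction, inputs)
    # KL divergence nudging the latent distribution toward a unit Gaussian.
    kl_loss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # alpha controls how strongly the latent (structure-learning) terms weigh in.
    return task_loss + alpha * (recon_loss + kl_loss)
```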
How the debiasing works
The beating heart of the researchers’ AI system is a variational autoencoder (VAE), a neural network — layers of mathematical functions modeled after neurons in the human brain — comprising an encoder, a decoder, and a loss function. The encoder maps raw inputs to latent feature representations, while the decoder takes those representations as input and uses them to generate an output. (The loss function measures how well the model fits the given data.)
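For readers who want to see the encoder/decoder split in code, here is a minimal VAE sketch in PyTorch; the layer sizes, the choice of framework, and the class name TinyVAE are assumptions for illustration, not the architecture used in the paper.

```python
# Minimal VAE sketch: an encoder that maps inputs to a latent Gaussian and a
# decoder that reconstructs the input from a latent sample.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: maps the raw input to the parameters of a latent Gaussian.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # Decoder: maps a latent sample back to a reconstruction of the input.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: draw a latent sample in a differentiable way.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar
```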
In the case of the proposed VAE, dubbed debiasing-VAE (or DB-VAE), the encoder portion learns an approximation of the true distribution of the latent variables given a data point, while the decoder reconstructs the input back from the latent space. The decoded reconstruction enables unsupervised learning of the latent variables during training.
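The debiasing itself comes from resampling: examples that fall in sparsely populated regions of the learned latent space are shown to the model more often during training. The snippet below is a simplified sketch of that idea, assuming per-dimension histograms over the encoder’s latent means; the bin count, smoothing value, and function name are illustrative choices, and the paper’s exact resampling procedure may differ.

```python
# Simplified sketch of latent-space resampling: examples in rare regions of
# the latent space receive larger sampling weights.
import numpy as np

def sampling_weights(latent_mu, num_bins=10, smoothing=1e-3):
    """latent_mu: (num_examples, latent_dim) array of encoder means."""
    weights = np.ones(len(latent_mu))
    for d in range(latent_mu.shape[1]):
        values = latent_mu[:, d]
        hist, edges = np.histogram(values, bins=num_bins, density=True)
        # Index of the histogram bin each example falls into.
        bin_idx = np.clip(np.digitize(values, edges[1:-1]), 0, num_bins - 1)
        # Examples in sparsely populated bins get proportionally larger weights.
        weights *= 1.0 / (hist[bin_idx] + smoothing)
    return weights / weights.sum()
```

In a training loop, weights like these could drive a weighted sampler so that underrepresented faces appear more often in each batch, which is the balancing effect the researchers describe.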
To validate the debiasing algorithm on a real-world problem with “significant social impact,” the researchers trained the DB-VAE model with a dataset of 400,000 images, split 80 percent and 20 percent into training and validation sets, respectively. They then evaluated it on the Pilot Parliaments Benchmark (PPB) test dataset, which consists of images of 1,270 male and female parliamentarians from various African and European countries.
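For reference, an 80/20 split of a training set that size might be produced along these lines; the file-path naming and the use of scikit-learn are placeholders, not details from the paper.

```python
# Hypothetical sketch of the 80/20 training/validation split described above.
from sklearn.model_selection import train_test_split

# Placeholder paths standing in for the ~400,000 collected face images.
image_paths = [f"images/face_{i:06d}.jpg" for i in range(400_000)]
train_paths, val_paths = train_test_split(image_paths, test_size=0.2, random_state=0)
print(len(train_paths), len(val_paths))  # 320000 80000
```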
The results were promising. According to the researchers, DB-VAE managed to learn not only facial characteristics such as skin tone and the presence of hair, but also features such as gender and age. Compared to models trained with and without debiasing on both individual demographics (race/gender) and the PPB dataset as a whole, DB-VAE showed increased classification accuracy and decreased categorical bias across race and gender — an important step, the team says, toward the development of fair and unbiased AI systems.
“The development and deployment of fair … systems is crucial to prevent unintended discrimination and to ensure the long-term acceptance of these algorithms,” the coauthors wrote. “We envision that the proposed approach will serve as an additional tool to promote systematic, algorithmic fairness of modern AI systems.”
Making progress
The past decade’s many blunders paint a depressing picture of AI’s potential for prejudice. But that’s not to suggest progress hasn’t been made toward more accurate, less biased systems.
In June, working with experts in AI fairness, Microsoft revised and expanded the datasets it uses to train Face API, a Microsoft Azure API that provides algorithms for detecting, recognizing, and analyzing human faces in images. With new data across skin tones, genders, and ages, it was able to reduce error rates by up to 20 times for men and women with darker skin, and by 9 times for all women.
An emerging class of algorithmic bias mitigation tools, meanwhile, promises to accelerate progress toward more impartial AI.
In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Startup Pymetrics open-sourced its bias detection tool Audit AI. Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias. And in September, Google debuted the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework, following the debut of Microsoft’s own solution in May.
IBM, not to be outdone, in the fall released AI Fairness 360, a cloud-based, fully automated suite that “continually provides [insights]” into how AI systems are making their decisions and recommends adjustments — such as algorithmic tweaks or counterbalancing data — that might lessen the impact of prejudice. And recent research from its Watson and Cloud Platforms group has focused on mitigating bias in AI models, specifically as they relate to facial recognition.
With any luck, those efforts — along with pioneering work like MIT CSAIL’s new algorithm — will drive change for the better.