With polls showing that more than 70 percent of people in the U.S. remain wary of autonomous machines, the amount of research going into transparency in artificial intelligence (AI) is no surprise. In February, Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias, and in May Microsoft launched a solution of its own. Now, Google is following suit.
The Mountain View company today debuted the What-If Tool, a new bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework. With no more than a trained model and a dataset, users can generate visualizations that explore how changes to data points and model settings affect predictions.
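In practice, the TensorBoard dashboard is pointed at a file of serialized tf.Example records plus a model served for inference. The snippet below is a minimal sketch, not taken from Google's announcement, of how a small dataset might be packaged into such a file; the feature names and output path are made up for illustration.

```python
# Sketch: convert dict-like rows into tf.Example protos and write a TFRecord
# file that a What-If Tool dashboard could load. Feature names, values, and
# the output path are hypothetical.
import tensorflow as tf

def row_to_example(row):
    """Convert one dict-like record into a tf.train.Example proto."""
    feats = {}
    for name, value in row.items():
        if isinstance(value, str):
            feats[name] = tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[value.encode("utf-8")]))
        elif isinstance(value, int):
            feats[name] = tf.train.Feature(
                int64_list=tf.train.Int64List(value=[value]))
        else:
            feats[name] = tf.train.Feature(
                float_list=tf.train.FloatList(value=[float(value)]))
    return tf.train.Example(features=tf.train.Features(feature=feats))

rows = [{"age": 39, "hours_per_week": 40.0, "occupation": "tech-support"}]
with tf.io.TFRecordWriter("examples.tfrecord") as writer:
    for row in rows:
        writer.write(row_to_example(row).SerializeToString())
```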
“Probing ‘what if’ scenarios [in AI] often means writing custom, one-off code to analyze a specific model,” Google AI software engineer James Wexler wrote in a blog post. “Not only is this process inefficient, it makes it hard for non-programmers to participate in the process of shaping and improving ML models.”

Above: Exploring scenarios on a data point within TensorBoard.
Using the What-If Tool, TensorBoard users can manually edit examples from datasets and see the effects of the changes in real time, or generate plots that illustrate how a model’s predictions correspond with any single feature.
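As a rough illustration of what those single-feature plots automate, the sketch below manually sweeps one feature of an example through a range of values and records the model's score at each step. The `model` object, feature index, and value range are hypothetical placeholders, not part of the tool's API.

```python
# Sketch: a hand-rolled single-feature probe. Sweep one feature while holding
# the rest of the example fixed, and collect the model's prediction each time.
import numpy as np

def single_feature_sweep(model, example, feature_index, values):
    """Return (value, score) pairs for one feature swept over `values`."""
    results = []
    for v in values:
        probe = np.array(example, dtype=float)
        probe[feature_index] = v                       # perturb one feature
        score = float(model.predict(probe[np.newaxis, :])[0])
        results.append((v, score))
    return results

# Example usage (assuming `model` and `example` exist):
# sweep = single_feature_sweep(model, example, feature_index=2,
#                              values=np.linspace(0.0, 80.0, 41))
```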
Key to this process are counterfactuals and algorithmic fairness analysis. With a button click, the What-If Tool compares a data point against the closest data point for which the model predicts a different result. Another click shows the effects of different classification thresholds, and a third has the tool automatically apply constraints that optimize for fairness.
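To make the counterfactual and threshold ideas concrete, here is a hand-rolled approximation written against a generic scikit-learn-style classifier; `clf`, `X`, the L1 distance metric, and the threshold value are illustrative assumptions, not the What-If Tool's own code.

```python
# Sketch: find the nearest data point (by L1 distance) that the model
# classifies differently from X[index] at a given decision threshold.
import numpy as np

def nearest_counterfactual(clf, X, index, threshold=0.5):
    """Return the index of the closest point with a different predicted class."""
    scores = clf.predict_proba(X)[:, 1]        # positive-class probabilities
    labels = scores >= threshold               # apply the decision threshold
    target = labels[index]
    candidates = np.where(labels != target)[0] # points classified differently
    if candidates.size == 0:
        return None
    distances = np.abs(X[candidates] - X[index]).sum(axis=1)
    return candidates[np.argmin(distances)]

# Re-running with a different `threshold` shows how the decision boundary,
# and therefore each point's nearest counterfactual, shifts.
```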
Wexler wrote that the What-If Tool has already been used internally to detect features of datasets that had previously been overlooked and to discover patterns in model outputs that led to improvements.
The What-If Tool is available as open source starting today. Alongside it, Google published three demos built on pretrained models that show off its capabilities.
“One focus … is making it easier for a broad set of people to examine, evaluate, and debug ML systems,” Wexler wrote. “We look forward to people inside and outside of Google using this tool to better understand ML models and to begin assessing fairness.”
One needn’t look far for examples of prejudicial AI.
The American Civil Liberties Union in July revealed that Amazon’s Rekognition facial recognition system could, when calibrated a certain way, misidentify 28 sitting members of Congress as criminals, with a strong bias against persons of color. Recent studies commissioned by the Washington Post, meanwhile, found that popular smart speakers made by Google and Amazon were 30 percent less likely to understand non-native accents than the speech of native-born users.