Bias is a serious problem in artificial intelligence (AI). Research shows that popular smart speakers are 30 percent less likely to understand non-native U.S. accents, for example, and that facial recognition systems such as those from Cognitec perform demonstrably worse on African American faces. In fact, according to a recent study commissioned by IBM, two-thirds of businesses are wary of adopting AI because of potential liability concerns.
In an effort to help enterprises address this problem, IBM today announced the launch of a cloud-based, fully automated service that “continually provides [insights]” into how AI systems are making their decisions. It also scans for signs of prejudice and recommends adjustments — such as algorithmic tweaks or counterbalancing data — that might lessen their impact.
The service explains which factors influenced a given machine learning model’s decision, plus its overall accuracy, performance, fairness, and lineage. In addition, it lays bare the confidence in its bias-mitigating recommendations and any factors contributing to that confidence.
IBM said the service works with popular machine learning frameworks and AI build environments, including IBM Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML, and that it can be tailored to individual enterprises’ workflows.
“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies,” said David Kenny, SVP of cognitive solutions at IBM. “It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making.”
Alongside the announcement, IBM open-sourced the AI Fairness 360 toolkit, a library of algorithms, code, and tutorials that demonstrates ways to implement bias detection in models.
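The article doesn’t detail the toolkit’s API, but a minimal sketch of the kind of workflow it supports, assuming the open-source aif360 Python package and a hypothetical toy dataset (the sex, score, and label columns are illustrative only), might look like this: compute a fairness metric such as disparate impact, then apply a reweighing step to counterbalance the data.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable outcome (1 = approved).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [70, 85, 60, 90, 75, 80, 65, 88],
    'label': [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=['label'],
    protected_attribute_names=['sex'],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Detect bias: disparate impact below 1 and a negative statistical parity
# difference both indicate the unprivileged group receives the favorable
# outcome less often.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Disparate impact:', metric.disparate_impact())
print('Statistical parity difference:', metric.statistical_parity_difference())

# Mitigate by counterbalancing the data: Reweighing assigns instance weights
# so the favorable outcome becomes equally likely across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighted = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(reweighted,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print('Parity difference after reweighing:',
      metric_after.statistical_parity_difference())
```

In this sketch, a disparate impact well below 1 flags the unprivileged group as receiving the favorable outcome less often; reweighing then adjusts instance weights so the weighted parity difference moves close to zero before any model is trained on the data.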
IBM’s moves come a month after the publication of a whitepaper in which several of its researchers proposed “factsheets” for AI systems. The voluntary factsheets, formally called a “Supplier’s Declaration of Conformity” (DoC), would answer questions ranging from system operation and training data to underlying algorithms, test setups, and results. More granular topics might include governance strategies used to track the AI service’s data workflow, the methodologies used in testing, and bias mitigations performed on the dataset.
“Like nutrition labels for foods or information sheets for appliances, factsheets for AI services would provide information about the product’s important characteristics,” Aleksandra Mojsilovic, head of AI foundations at IBM Research and co-director of the AI Science for Social Good program, wrote in a blog post introducing the paper. “The issue of trust in AI is top of mind for IBM and many other technology developers and providers. AI-powered systems hold enormous potential to transform the way we live and work but also exhibit some vulnerabilities, such as exposure to bias, lack of explainability, and susceptibility to adversarial attacks. These issues must be addressed in order for AI services to be trusted.”
IBM isn’t the only company developing platforms to mitigate algorithmic prejudice. At its F8 developer conference in May, Facebook announced Fairness Flow, an automated bias-catching service for data scientists. Microsoft and Accenture have released similar tools.