IBM formally announced the IBM Policy Lab — an initiative aimed at providing policymakers with recommendations for emerging problems in technology — ahead of a panel discussion to be held tomorrow at the World Economic Forum in Davos. The panel will be hosted by IBM CEO Ginni Rometty, with Siemens CEO Joe Kaeser, White House Deputy Chief of Staff for Policy Coordination Chris Liddell, and OECD Secretary General Angel Gurría. IBM also outlined a set of priorities for AI regulation, including several aimed at compliance and explainability.
The Policy Lab, which soft-launched in November 2019, serves as a forum for establishing a "vision" and actionable suggestions to "harness the benefits of innovation while ensuring trust," according to press materials published this morning. It is led by codirectors Ryan Hagemann, a former senior policy fellow at the International Center for Law and Economics and the Niskanen Center, and Jean-Marc Leclerc, who is vice-chair of the American Chamber of Commerce to the European Union's Digital Economy Committee and chair of the Software Alliance's Europe, Middle East, and Africa Policy Committee. To execute its mandate, the think tank convenes stakeholders and leaders in public policy, academia, civil society, and tech to formulate ideas for tackling global challenges.
The IBM Policy Lab will publish studies and research to support industry and government decision-making. It also plans to develop "bold" policy positions that "look forward to the opportunities of tomorrow" but are intended to be implemented relatively quickly. "Our approach is grounded in the belief that tech can continue to disrupt and improve civil society while protecting individual privacy," wrote Hagemann and Leclerc in a joint statement. "As technological innovation races ahead, our mission to raise the bar for a trustworthy digital future could not be more urgent."
On the subject of AI, the IBM Policy Lab calls for what it describes as “precision regulation” of AI, or laws that require companies to develop and operate “trustworthy” systems. IBM’s proposed framework takes into account whether companies are providers or owners of AI systems (or both), in addition to addressing the level of risk presented by particular products as determined by the potential for harm associated with the intended use, the level of automation and human involvement, and whether an end user is substantially reliant on the AI system.
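To make the triage idea concrete, here is a minimal Python sketch of how an organization might score a use case against the three factors IBM names. The scales, weights, and tier thresholds are illustrative assumptions, not part of IBM's published framework, which names the factors but does not prescribe an implementation.

```python
from dataclasses import dataclass

# Illustrative only: the 0-3 scales and tier cutoffs below are assumptions,
# not prescribed by IBM's "precision regulation" framework.
@dataclass
class AIUseCase:
    potential_for_harm: int   # 0 (negligible) to 3 (severe)
    automation_level: int     # 0 (human decides) to 3 (fully automated)
    end_user_reliance: int    # 0 (advisory only) to 3 (sole basis for decision)

def risk_tier(use_case: AIUseCase) -> str:
    """Map the three factors IBM names to a coarse risk tier."""
    score = (use_case.potential_for_harm
             + use_case.automation_level
             + use_case.end_user_reliance)
    if score >= 7:
        return "high"    # would trigger in-depth testing and auditable records
    if score >= 4:
        return "medium"
    return "low"         # might require only basic disclosure

# Example: a fully automated loan-approval model that users rely on directly
print(risk_tier(AIUseCase(potential_for_harm=3,
                          automation_level=3,
                          end_user_reliance=3)))  # -> "high"
```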
IBM advocates for the appointment of AI ethics officials to ensure compliance with the expectations placed on providers and owners. These watchdogs would be accountable for internal guidance and compliance mechanisms, such as AI ethics boards that oversee risk assessments and harm mitigation strategies, and for improving public acceptance and trust of AI systems while driving commitments to responsible development, deployment, and stewardship.
The IBM Policy Lab also proposes different rules for different levels of system risk: companies would conduct high-level assessments of their AI's potential for harm, followed by in-depth tests for high-risk applications. In the latter case, IBM says evaluations should be documented in auditable formats and retained for agreed-upon minimum periods of time.
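IBM leaves "auditable format" open. The sketch below shows one way an assessment might be serialized into a tamper-evident record with a retention marker; the field names, hashing scheme, and five-year retention period are assumptions for illustration.

```python
import json, hashlib, time

# Hedged sketch: field names and the retention period are illustrative
# assumptions, not requirements from IBM's proposal.
def write_assessment_record(system_id: str, findings: dict,
                            retention_years: int = 5) -> dict:
    record = {
        "system_id": system_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "findings": findings,
        "retain_until_year": time.gmtime().tm_year + retention_years,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # A content hash makes later tampering with the stored record detectable
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

record = write_assessment_record(
    "credit-scorer-v2",
    {"risk_tier": "high", "bias_test_passed": True, "notes": "quarterly review"},
)
print(json.dumps(record, indent=2))
```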
IBM recommends policies of transparency — that is, making the purpose of AI systems clear to consumers and businesses — while acknowledging that low-risk systems might not require exhaustive disclosures. That said, the company asserts that any AI system on the market making determinations or recommendations with “potentially significant implications” for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.
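For a sense of what such an explanation could contain, here is a minimal sketch using an invented linear scoring model, where each feature's contribution is reported alongside the decision. The model, feature names, and weights are made up for illustration; real explainability tooling is considerably more involved.

```python
# Invented linear model: weights and features are illustrative only.
FEATURES = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def explain_decision(applicant: dict, threshold: float = 0.0) -> dict:
    # Each feature's weighted contribution to the overall score
    contributions = {name: weight * applicant[name]
                     for name, weight in FEATURES.items()}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= threshold else "deny",
        "score": round(score, 3),
        # Ranked factors let a consumer see *why* the system decided as it did
        "top_factors": sorted(contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True),
    }

print(explain_decision({"income": 1.2, "debt_ratio": 0.9, "years_employed": 3}))
```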
Providers and owners of AI systems should maintain audit trails for their input and training data, according to the IBM Policy Lab, and operators of those systems should make available documentation that details essential information for consumers to be aware of (e.g., confidence measures, levels of procedural regularity, and error analysis). IBM also says companies should test their AI for fairness, bias, robustness, and security and take remedial actions before deployment and after their systems are operationalized. In addition, companies should retain responsibility for ensuring use of their systems is aligned with anti-discrimination laws, as well as statutes addressing safety, privacy, financial disclosure, consumer protection, employment, and other sensitive contexts.
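As one example of what a pre-deployment fairness test might look like, the sketch below computes a demographic parity gap (the difference in positive-outcome rates across groups). The choice of metric and the 0.1 threshold are illustrative assumptions, not IBM recommendations; real programs combine several metrics with human review.

```python
# Hedged sketch of a single pre-deployment fairness check. The metric
# (demographic parity) and the 0.1 threshold are illustrative choices.
def demographic_parity_gap(predictions: list, groups: list) -> float:
    rates = {}
    for pred, group in zip(predictions, groups):
        approved, total = rates.get(group, (0, 0))
        rates[group] = (approved + pred, total + 1)
    positive_rates = [approved / total for approved, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}")  # group a: 0.75, group b: 0.25 -> gap 0.50
if gap > 0.1:
    print("gap exceeds threshold; remediate before deployment")
```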
IBM suggests this might be achieved at the government level by designating existing bodies, like the National Institute of Standards and Technology (NIST) in the U.S., as co-regulatory mechanisms to identify definitions, frameworks, and benchmarks for AI standards. Supporting minority-serving organizations and impacted communities in efforts to engage with academia and industry could accelerate the development of these criteria, and providing various levels of liability safe harbor protections could incentivize the adoption of new standards and validation regimes.
Finally, IBM says any action or practice prohibited by anti-discrimination laws should likewise be prohibited when automated decision-making systems are involved. "Among companies building and deploying artificial intelligence, and the consumers making use of this technology, trust is of paramount importance," continued Hagemann and Leclerc. "Companies want the comfort of knowing how their AI systems are making determinations, and that they are in compliance with any relevant regulations, and consumers want to know when the technology is being used and how (or whether) it will impact their lives."
IBM’s announcements come a day after Google and parent company Alphabet CEO Sundar Pichai called for AI to be regulated with “international alignment,” and a week after it was revealed that the European Commission is considering a five-year ban on facial recognition technologies. The White House earlier this month published its own proposed regulatory principles and urged Europe to “avoid heavy-handed innovation-killing models.” In a comment to a reporter last September, Jeff Bezos said Amazon is drafting facial recognition regulation to pitch to lawmakers. Separately, Microsoft executives have called on lawmakers to investigate facial recognition and craft policies guiding its usage.
Around the world, government regulation of AI systems is at best piecemeal, and companies have struggled to adopt lasting policies around development of the technology. Notably, Google dissolved an external ethics board designed to monitor its use of AI after just one week.