Kay Firth-Butterfield is a busy person. She’s tasked with leading AI and machine learning efforts at the World Economic Forum (WEF) and the Centre for the Fourth Industrial Revolution. The center works with governments around the world, but many countries have yet to create an AI policy. Firth-Butterfield spoke with VentureBeat last week following a conversation with Irakli Beridze, head of the United Nations Center for Artificial Intelligence and Robotics, at the Applied AI conference in San Francisco.
Since the launch of its Centre for the Fourth Industrial Revolution two years ago, the World Economic Forum has spawned efforts in the U.S. (San Francisco), China, India, and now the United Arab Emirates, Colombia, and South Africa. Only 33 of 193 United Nations member states have adopted unified national AI plans, according to FutureGrasp, an organization working with the UN.
Firth-Butterfield recommends that businesses and governments recognize the unique data sets they have access to and create an AI policy that best serves their citizens or shareholders. Current examples include an effort to create a data marketplace for AI in India to help small and medium-sized businesses adopt the technology and an initiative underway in South Africa to supply AI practitioners with local data, rather than data from the United States or Europe.
“We need to grow indigenous data sets,” she said.
The value of AI ethics boards
In the months ahead, the WEF plans to ramp up initiatives to boost implementation of AI ethics.
Firth-Butterfield believes tech giants and other businesses should create advisory boards to help guide the ethical use of AI. The establishment of such boards at the likes of Microsoft, Facebook, and Google in recent years made the practice a quasi-established norm in the tech industry, but the dissolution of two AI ethics boards at Google in recent weeks has called into question the effectiveness of advisory boards that lack real power.
Even so, she said, “At the forum, we are very definite that having an ethics advisory panel around the use of AI in your company is a really good idea, so we support very much Google’s efforts to create one.”
Her insistence on the value of such bodies stems in part from her own experience: she established an AI ethics advisory panel at Lucid.ai in 2014. And though Google's DeepMind disbanded a health-related board last year, Firth-Butterfield thinks the DeepMind board structure was sound. Sources told the Wall Street Journal that the board was denied information it had requested for its oversight duties.
Transparency is essential if tech companies want to overcome the perception that they’re only interested in the appearance of doing good — sometimes called ethics theater or ethics washing. AI ethics boards should be independent, entitled to draw information from business practices, and allowed to go directly to a company’s board of directors or talk about their work publicly.
“[In that role,] I should have an observer role on the board so I can tell the board what I saw in the company if I saw something problematic and couldn’t negotiate it with C-suite officers — so that you have a way of talking to those people who have ultimate control of the company,” Firth-Butterfield said.
The establishment of an ethics board or appointment of a C-suite executive to oversee ethical use of AI systems can be part of a broader strategy that helps businesses protect human rights without stifling innovation, she added. “What we want to do is make sure they think about putting in either a chief AI officer or a chief technology ethics officer — Salesforce just created that position — or an advisory board.”
“We’re also advising that [companies] think about ethics at the beginning, so when you start having ideas for a product, that’s the time to bring in your ethics officer, because then you’re not going to spend a huge amount of money on the R&D,” she said.
Worldwide standards-making
On May 29, the WEF will host the first meeting of the Global AI Council to focus on creating international standards for artificial intelligence. The gathering will include stakeholders from business, civil society, academia, and government.
“That brings together all of our multi-stakeholders, [and] it brings together a lot of ministers of various countries around the world to think about ‘Okay, we can do these national things. But what can we also do internationally together?’ I think there’s a definite feeling that countries will probably [do] best to try and work together to solve some of these difficulties around AI,” she said.
Questions of U.S. leadership and international participation were also raised at an ethics gathering held this week by the U.S. Department of Defense. The Organisation for Economic Co-operation and Development (OECD) plans to publicly share AI policy recommendations, developed with participation from the United States, this summer.
Among individual nations working with the WEF, the United Kingdom will consider guidelines for acquiring AI systems for government use in July. That policy could be adopted this fall and is expected to include rules for ethics, governance, development and deployment, and operations. Other countries may adopt similar government procurement guidelines. “The idea is that we scale what we do with one country across the world,” Firth-Butterfield said.
The WEF also recently launched an initiative with the government of New Zealand to reimagine what an AI regulator would do.
“What does the regulator for AI in a modern world, where we don’t want to stifle innovation but we do want to protect the public, what does that person look like? Are there certification standards that we should put in place? We don’t know what the answer is at the moment,” she said.