
World Economic Forum launches toolkit to help corporate boards build AI-first companies

Image Credit: World Economic Forum

The value of building data-driven businesses with AI at their core is well understood today, and business executives are rushing to embed the technology in their operations to gain a competitive advantage. But doing so is not as simple as creating a data lake and crafting AI models.

A large number of companies attempting to implement AI models or build AI-first businesses have experienced challenges. A December 2018 PwC survey found that only 4% of businesses have successfully implemented AI.

That’s why today the World Economic Forum released the AI toolkit for Boards of Directors. The toolkit shares guidance on how AI alters things like branding, operations, and company culture.

The AI toolkit for Boards of Directors is being released ahead of the annual WEF meeting in Davos, Switzerland, where it will be formally debuted next week. Also next week, the Singapore government will release its own framework for companies using AI in Singapore, in conjunction with Microsoft president Brad Smith.


The toolkit also advises businesses to consider how AI can affect their competitive strategy. It contains a series of questions boards of directors should ask themselves, such as whether management anticipates compliance with upcoming laws and regulations and has weighed possible impacts on individuals’ rights, society, and ethical values.

“Scandals such as Cambridge Analytica have created a deepened resolve to hold management accountable for unethical uses of data and AI. By insisting that AI is fair, safe, reliable and secure, boards help companies build the trust needed to bond customers and partners,” the report reads.

The ethics portion of the kit includes tools to help create AI development principles or form ethics boards and advises businesses to consider the costs of ethical failures.

“Ethics codes may be about making informed choices within guardrails that set limits of acceptable behavior, and not simply about doing the right thing,” the report reads. “Any failure to consider and address these issues and concerns could drive away clients, partners, and employees.”

Each section of the toolkit also includes a curated selection of executive education programs, as well as books, reports, and news articles to read to learn more.

Kay Firth-Butterfield is director of the AI team at the World Economic Forum’s Center for the Fourth Industrial Revolution, an entity the WEF launched in 2017 to explore how AI is reshaping business and society. Firth-Butterfield spoke with VentureBeat last year about how to protect human rights without hampering innovation.

She said that in the WEF’s initiative to pull the toolkit together, the organization found that few boards of directors had an understanding of AI, perhaps in part due to a lack of age diversity.

The report resulted from the WEF studying 100 businesses over the span of a year, with help from fellows that companies including Accenture, BBVA, IBM, and Suntory sent to the center to carry out the study.

Governments also sent fellows to the center. The U.K. government, together with the World Economic Forum and fellows from companies like Salesforce, released a guide for government procurement officials last summer, and a fellow from New Zealand is examining how regulators should rethink their jobs in the age of AI.

Industry-specific AI toolkits will be released in the future, Firth-Butterfield said, likely starting with health care and retail, where businesses are beginning to use AI tools like facial recognition. An AI toolkit will also be made for C-suite executives. “Because again, in our research, we found a lot of companies that are going to be using AI but are not technical companies, so we think there is a need there as well,” Firth-Butterfield said.

Firth-Butterfield said the Center for the Fourth Industrial Revolution has a number of plans for the year ahead, like a workshop requested by the WEF’s Global AI Council that brings together economists, AI scientists, and sci-fi writers to talk about what our futures might look like.

The first in a series of workshops on the topic will be held at the Center for the Fourth Industrial Revolution’s headquarters in San Francisco.

“We hope to do one in South America, one in the Middle East, one in Africa, one in China, one in India, and one in Europe and then gradually bring them all together probably in April of 2021,” Firth-Butterfield said.

The WEF launched its Global AI Council in May 2019 together with top AI companies and organizations. The council is cochaired by former Google China president Dr. Kai-Fu Lee and Microsoft president Brad Smith.

Work on how governments should think about the use of facial recognition software for security and protection of civil liberties, completed in conjunction with the French government, is likely to be released this summer, Firth-Butterfield said.

The Center will also continue work with UNICEF on human rights guidelines for how AI’s interactions with children can comply with international law. A toy that stores data, for example, raises questions about who gives permission for that data to be stored, whether the data is used for educational purposes, and whether children’s data is being monetized, Firth-Butterfield said. “The reason that we do this isn’t to be worry-wusses about children and their toys, but what we want to do is make sure that the right standards and norms are being set in the industry so when we move to being able to educate our kids using AI, we have the right foundations for that.”