How a strong board of directors keeps AI companies on an ethical path

Following the corporate corruption scandals of the early 2000s, then-Securities and Exchange Commission chairman William Donaldson said that determining a company’s moral DNA “should be the foundation on which the Board builds a corporate culture based on a philosophy of high ethical standards and accountability.” Today’s crisis of confidence in technology companies, especially those controlling deep pools of data and developing and deploying artificial intelligence, demands not only more responsible engineers, entrepreneurs, and executives but also more assertive boards that make ethics and the public interest strategic priorities.

The board’s role

A board of directors’ responsibilities include hiring, firing, and holding the CEO’s feet to the fire, as well as approving and overseeing the company’s strategy and ensuring the integrity of its financials. Boards must also set a tone at the top grounded in ethics and responsible business practices.

Companies in traditional industries such as health care and manufacturing can turn to decades of laws, regulations, and litigation for guard rails and guidance. That’s largely unavailable in new fields such as artificial intelligence. Norms and standards are still emerging; laws, regulations, and legal precedent are scarce; and pressure groups are still finding their voice and translating concerns into actionable demands. Moreover, black-letter law will likely never be able to keep pace with technological progress, stop every nefarious actor determined to wreak havoc, or account for every ethical blind spot and trapdoor. That is why it is important for AI company boards to be aggressive stewards of corporate ethics, making it a top priority alongside other concerns such as capital allocation and succession planning.

Boards should hold CEOs accountable for making ethics an organization-wide priority, subject a company’s strategy and major business decisions to ethical stress tests, and treat conflicts between commercial pressures and ethical outcomes as opportunities to innovate in favor of the public interest. For example, directors should not take it on faith that their AI products do not exacerbate racism or sexism. Instead, board members should push management to prove that products cannot cause gender- and race-based harm. They should also facilitate robust discussions about the ways products could enable harm once in the hands of customers and the public, and ensure that plans are in place to adapt products and practices when confronted with evidence of harm.


Governance excellence on AI ethics

Boards can build their ethics and governance muscle in three ways.

First, directors should receive thorough and ongoing training in business-related ethics, covering issues ranging from bias and privacy to best practices in fairness, transparency, and accountability. They should also be well versed in the ethical implications of their company’s products and services, including conflicts between professed values and the underlying business model. In addition, directors should require management briefings on ethics-related litigation and on concerns raised by customers, policymakers, and advocates that could have a material impact on the company’s reputation, relationship with regulators, or growth.

Second, to bring a richer array of perspectives to the boardroom, directors should open their ranks to include ethics experts as well as AI experts who are not computer scientists. For example, an AI company serving the health care sector might include specialists in privacy or health equity. A company offering human resources-related AI might bring in a leader in diversity and inclusion with deep knowledge of both labor and civil rights law, as well as of broader concerns such as bias, building inclusive workplaces, and the politics of diversity and business. This is especially important for startup boards dominated by founders and investors, where enthusiasm for the product and the pursuit of a profitable exit can cloud judgment and dampen debate. Directors can raise ethical concerns forcefully in the boardroom and, if the company insists on taking the wrong path, resign, signaling to the market that all is not well.

Finally, AI companies with significant market power, or whose businesses have a substantial impact on human health or civil rights, should have a board ethics committee in the mold of a well-functioning audit committee. Companies should staff this committee with independent directors, including at least one with expertise in ethics and technology. It should be able to seek independent technical, legal, and political advice and to probe senior management without the CEO present.

What the market will bear

Technology companies can look to the growing environmental and social impact pressure on boards in other industries for a harbinger of things to come. Earlier this year, Larry Fink, chairman of investing behemoth BlackRock, caused a stir when his annual letter to CEOs highlighted the importance of good environmental and community stewardship when the firm makes investment decisions. Fink not only called for better executive management of these matters but also for stronger board leadership. “The board is essential to helping a company articulate and pursue its purpose, as well as respond to the questions that are increasingly important to its investors, its consumers, and the communities in which it operates,” he wrote. A 2018 report by the consulting firm EY’s Center for Board Matters found market demand for a more expansive approach to board governance that included “increasingly high-profile environmental and social topics, such as climate change, political spending and lobbying, diversity and inclusiveness, health care, immigration and more.”

Whether more ethically assertive corporate governance will prevent AI from sending the wrong people to jail, denying otherwise qualified people health care and social services, or making it harder for women, people of color, and others to climb the economic ladder depends on two factors. First, management must respond to board signals and enforce high ethical standards throughout the company or face replacement. Second, customers, investors, and regulators must reward companies that demonstrate ethical excellence and punish those that do not. Ultimately, AI will become as ethical as the market demands.

Trooper Sanders is a Rockefeller Foundation fellow (@TrooperSanders). The views expressed are the author’s and do not reflect the views of The Rockefeller Foundation.