Following the fallout from Project Maven, the controversial Pentagon project that prompted protests and resignations at Google this month, Google says it will develop an ethics policy to guide its involvement in military projects. That’s according to The New York Times, which reported today that the rules will explicitly ban the use of artificial intelligence in weaponry.
A Google spokesperson said that the company would take into account employee feedback in drafting “a set of principles” around defense and AI contracting, following CEO Sundar Pichai’s promise at a companywide meeting of “guidelines” that would “[stand] the test of time.” Details of the policy are expected to be announced in the coming weeks.
Project Maven is a hotly contested topic among top Google executives, internal email exchanges obtained by The New York Times show. In a September thread led by Scott Frohman, Google’s head of defense and intelligence sales, Fei-Fei Li, chief scientist for Google Cloud and head of Stanford’s AI lab, expressed concern about a potential backlash.
“Avoid at ALL COSTS any mention or implication of AI,” she wrote. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google … I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”
Li is an outspoken advocate for artificial intelligence in education. She cofounded AI4All, a nonprofit organization dedicated to increasing diversity and inclusion in artificial intelligence.
Google decided against publicizing its contributions to Project Maven, but debate broke out among employees on Google’s internal message boards.
A departing engineer renamed a conference room after Clara Immerwahr, a German chemist who committed suicide in 1915 to protest the use of science in warfare. Executives at London-based Google subsidiary DeepMind, including founder Mustafa Suleyman, entered into policy discussions involving Project Maven with Pichai and other stakeholders. And Jeff Dean, who oversees Google’s AI research, signed a letter opposing the use of machine learning for autonomous weapons.
Shortly after news of Google’s involvement broke, Diane Greene, chief executive of Google Cloud, fielded questions about Project Maven at one of Google’s weekly T.G.I.F. meetings, as did Google cofounder Sergey Brin. Brin has also discussed the project at length with Pichai and with Larry Page, CEO of Google’s holding company, Alphabet, according to The New York Times.
As protests at Google reached a fever pitch, Greene decided to host a roundtable discussion about Project Maven, the nature of Google’s involvement, and ethical uses of AI on April 11. Vint Cerf, a Google vice president and noted former Defense Department researcher, and Meredith Whittaker, a Google AI researcher, participated in a three-session discussion that was broadcast to Google employees around the world.
The discussion did little to allay the concerns of employees who felt Google had betrayed one of its core mantras, “Don’t be evil.”
“We can steer the conversation about cloud,” Aileen Black, a Google executive in Washington, wrote in the email exchange with Li and Frohman, “but this is an AI specific award. I think we need to get ahead of this before it gets framed for us.”