AI Weekly: Google’s ethics council barely lasted a week, but there’s a thin silver lining

Image Credit: Khari Johnson / VentureBeat

Google disbanded its external AI ethics board on Thursday, only 10 days after it was formed to advise the Alphabet company about potential issues with AI models.

Critics of the Advanced Technology External Advisory Committee (ATEAC) objected primarily to Heritage Foundation president Kay Cole James, whom Google employees say has made anti-immigrant, anti-gay, and anti-transgender statements. Another member of the eight-person committee resigned. The group never held a single formal meeting.

Even before opposition began to mount, Google's advisory council appeared on its face to be little more than posturing. The group was scheduled to meet only quarterly and did not appear to have any binding power. ATEAC seemed designed to send signals to consumers around the world and to lawmakers in Washington at a time when talk of regulating big tech has become populist politics. It's roughly the same reason Microsoft and Facebook are so eager to talk about regulation.

But Google isn’t alone in its posturing.

Politicians, including many 2020 presidential candidates, want to convince voters that they care about the adverse impacts of big tech’s monopolies, and big tech companies are flexing to prove they can regulate themselves and that they care deeply about ethical issues like bias in AI systems. The average person would be wise to be skeptical of statements made by both factions.

Researchers, technologists, and the rest of the world will debate how Google got things so wrong. This is, after all, a company that sees itself as an AI company rather than a tech company, employs an army of AI researchers, and created the popular machine learning framework TensorFlow.

In hindsight, it would have been far better had Google created a council made up of people knowledgeable about AI whom it could credibly argue represent a diverse range of viewpoints and experiences. After all, the most salient lesson drilled into the heads of AI practitioners in recent years is that a greater range of perspectives leads to better AI models.

Instead, Google appears reactionary and shortsighted. If there's a silver lining here, it's that the company initially got things wrong but listened to dissent and corrected course.

That’s in sharp contrast to Amazon, which also faced pushback this week on ethical grounds over use of its facial recognition software Rekognition. Prominent AI researchers essentially called bullshit on claims by top Amazon machine learning and global policy VPs who tried to discredit a recent audit that found Rekognition lacking in its ability to recognize people of color, accurately classify men and women, and account for people outside binary definitions of gender.

They also implored Amazon to stop selling Rekognition to law enforcement agencies. Amazon reportedly attempted to sell Rekognition to the Department of Homeland Security, and allowed it to be used in trials by police departments in Washington and Florida.

The Securities and Exchange Commission (SEC) this week rejected Amazon’s petition to block a shareholder vote that could curb Rekognition use until an audit and civil rights review take place. Shareholder protest over Rekognition dates back to last year.

It could be that Google chose to disband its council because it was wary of direct action by its employees, tens of thousands of whom walked out of its offices worldwide last year demanding change. Or Google may have feared another sustained campaign like the one sparked by its work with the U.S. military on Project Maven.

There’s also the possibility that Google learned some lessons from those experiences. After Maven was made public, internal debate led to the creation of Google’s AI guidelines, which included a commitment not to create autonomous weaponry.

If this external advisory committee business ends like Maven, maybe the next such body Google forms will be more robust, be able to withstand criticism, and actually lead to better AI systems.

Right now, Google looks more like a company willing to consider dissent and correct course, while Amazon looks like a company that wants to suppress dissent.

Disbanding an external advisory council after 10 days is a bad look. Fighting with shareholders, researchers, and the SEC is worse.

Both companies need to respond by being proactive about the moral and ethical implications of the AI systems they deploy, and genuine in their efforts to make them as democratic and accepting of all people as possible.

There’s more at stake than a bad user experience with Amazon shopping or the ads you see in Google. Getting things wrong with Rekognition, or on the sorts of issues ATEAC was meant to consider, could ruin or even end human lives.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

AI Staff Writer