
How Google treats Meredith Whittaker is important to potential AI whistleblowers

Image Credit: Khari Johnson / VentureBeat


In a co-written letter, Meredith Whittaker — one of two Google employees who claim they’ve faced retaliation for organizing protests against their employer’s treatment of sexual harassment — alleged that she’d been pressured by Google to “abandon her work” at the AI Now Institute, which she’d helped to found, and that she was informed her role would be “changed dramatically.” The situation raises the question of what protections there are for people who speak out concerning AI ethics.

Earlier this month, Whittaker coauthored a report highlighting the growing diversity crisis in the AI sector. “Given decades of concern and investment to redress this imbalance, the current state of the field is alarming,” Whittaker and her coauthors wrote for New York University’s AI Now Institute. “The AI industry needs to acknowledge the gravity of its diversity problem, and admit that existing methods have failed to contend with the uneven distribution of power, and the means by which AI can reinforce such inequality.”

In November, Whittaker and others spearheaded a mass walkout of some 20,000 Google employees to bring attention to what they characterized as a culture of complicity and dismissiveness. They pointed to Google’s policy of forced arbitration and a reported $90 million payout to Android founder and former Google executive Andy Rubin, who’s been accused of sexual misconduct. “[It’s] clear that we need real structural change, not adjustments to the status quo,” said Whittaker.

The problematic optics of Google’s AI ethics

Is this all coincidental? Perhaps. A Google spokesperson told the New York Times that the company “prohibit[s] retaliation in the workplace … and investigate[s] all allegations,” and VentureBeat has reached out separately for clarification.


Regardless, Whittaker’s treatment sets an alarming precedent for a company that has in recent months struggled with ethics reviews. Just weeks ago, Google disbanded an external advisory board — the Advanced Technology External Advisory Council — that was tasked with ensuring its many divisions adhered to seven guiding AI principles set out last summer by CEO Sundar Pichai.

The eight-member panel was roundly criticized for its inclusion of Heritage Foundation president Kay Coles James, who has made negative remarks about trans people and whose organization is notably skeptical of climate change. And pundits like Vox’s Kelsey Piper argued that the board, which would have convened only four times per year, lacked an avenue to evaluate — or even arrive at a clear understanding of — the AI work in which Google is involved.

Following the swift dissolution of Google’s external ethics board, the U.K.-based panel that offered counsel to DeepMind Health — the health care subsidiary of DeepMind, the AI firm Google acquired in 2014 — announced that it, too, would close, but for arguably more disconcerting reasons. Several of its members told The Wall Street Journal they hadn’t been afforded enough time or information to carry out their oversight role, and that they were concerned Google and DeepMind’s close relationship posed a privacy risk.

Unsurprisingly, Google contends that its internal ethics boards serve the role of watchdogs quite successfully, regularly assessing new “projects, products, and deals.” It also points out that, in the past, it has pledged not to commercialize certain technologies — chiefly general-purpose facial recognition — before lingering policy questions are addressed, and that it has both audited its own AI initiatives and thoroughly detailed open issues in AI governance, including explainability standards, fairness appraisals, safety considerations, and liability frameworks.

Setting aside for a moment recent allegations of retaliation, which Google denies, the company’s recent business decisions involving AI-driven products and research instill little confidence.

Reports emerged last summer that Google contributed TensorFlow, its open source AI framework, to Project Maven, a Pentagon contract that sought to implement object recognition in military drones. The company reportedly also planned to build a surveillance system that would have allowed Defense Department analysts and contractors to “click on” buildings, vehicles, people, large crowds, and landmarks and “see everything associated with [them].”

Other, smaller gaffes include the failure to offer both feminine and masculine translations for some languages in Google Translate, Google’s freely available language translation tool, and the deployment of a biased image classifier in Google Photos that mistakenly labeled a black couple as “gorillas.”

In early April, during an interview with Recode’s Kara Swisher, Whittaker discussed the potential impact of unaudited, unregulated, and unsupervised AI. “You have systems that are determining which school your child gets enrolled in,” she said. “You have automated essay scoring systems that are determining whether it’s written well enough. Whose version of written English is that? And what is it rewarding or not? What kind of creativity can get through that?”

Whittaker wasn’t articulating a thought experiment; there are countless examples of AI gone awry. Scientists claim that Amazon Web Services’ facial analysis API fails to reliably determine the gender of female and darker-skinned faces in specific scenarios. In February, researchers at the MIT Media Lab found that facial recognition software made by Microsoft, IBM, and Chinese company Megvii misidentified gender in up to 7% of lighter-skinned females, up to 12% of darker-skinned males, and up to 35% of darker-skinned females. And in a recent study commissioned by the Washington Post, popular smart speakers made by Google and Amazon were 30% less likely to understand non-American accents than those of native-born users.

Protesters, objectors, and whistleblowers

Google is by no means alone when it comes to AI ethics stumbles. A cohort of AI researchers recently called on Amazon Web Services to stop selling Rekognition, a facial recognition service that’s been criticized for its error-prone binary gender classification and its uneven treatment of people of color, to law enforcement agencies. It’s a challenge the entire industry is struggling with, and often that struggle manifests in active protests and organized, public objections. In other cases it involves whistleblowers like Whittaker.

In an editorial published Monday in the New York Times, former Google research scientist and Stanford mathematics professor Jack Poulson highlighted employee revolts that led to the reversal of questionable projects pursued by tech giants, including Google’s air gap technology for the Air Force and its abandoned bid for a $10 billion Pentagon cloud computing contract. “Complaints from a single rank-and-file engineer aren’t going to lead a company to act against its significant financial interests,” wrote Poulson. “But history shows that dissenters — aided by courts or the court of public opinion — can sometimes make a difference. Even if that difference is just alerting the public to what these companies are up to.”

Meanwhile, a study published by the European Parliamentary Research Service, the in-house research department and think tank of the European Parliament, asserts that whistleblowers “play an important role” in uncovering “questionable uses and outcomes” of algorithmic decision-making. It specifically cites a New York Times investigation into an Uber algorithm that enabled the ride-hailing company to evade regulators, information that was supplied by current and former Uber employees; and a ProPublica profile of Compas, automated recidivism risk assessment software that associated African American defendants with higher risk scores.

Guillaume Chaslot, a 36-year-old former Google employee who’s loudly criticized the tech industry’s treatment of ethics, is another example. He explained to The Guardian how YouTube, which is owned by Google, is algorithmically engineered to increase advertising revenues with attention-retaining formulas. (For its part, Google says that Chaslot was let go for “performance issues” in 2013, and that YouTube’s recommendation system now discourages the promotion of potentially inflammatory religious or supremacist content and takes into account things like “user satisfaction” and the number of “likes” a video has received.)

“Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers,” wrote Whittaker and colleagues in a separate AI Now Institute report last year. “The recent surge in activism has largely been driven by whistleblowers within technology companies, who have disclosed information about secretive projects to journalists. These disclosures have helped educate the public, which is traditionally excluded from such access, and helped external researchers and advocates provide more informed analysis.”

A recent whitepaper from Google concludes: “We support the collaborative and consultative process that many are pursuing, and encourage stakeholders everywhere to participate … [and we] hope to find opportunities for Google to continue to listen to, learn from, and contribute more actively to the wider discussion about AI’s impact on society.” That’s a call for respectful dialogue, but on its face, there seems to be little of that in the Whittaker case. If she was indeed pressured to step away from the AI Now Institute and demoted at Google as backlash for her ethics and advocacy work, that’s about silencing a voice, not conversing with it.

Whittaker isn’t a whistleblower in the strictest sense — she’s been transparent and forthright about her concerns with Google leadership, regulators, and members of the press. But even if her treatment doesn’t reach the threshold of retaliation, it sends a worrying message: Protests — particularly coordinated, attention-grabbing walkouts about controversial AI programs and workplace discrimination — aren’t welcome.