
AI Weekly: Facial recognition policy makers debate temporary moratorium vs. permanent ban



On Tuesday, in an 8-1 vote, the San Francisco Board of Supervisors voted to ban the use of facial recognition software by city departments, including police. Supporters of the ban cited racial bias found in audits of facial recognition software from companies like Amazon and Microsoft, as well as the dystopian surveillance already underway in China.

At the core of arguments about regulating facial recognition software is the question of whether a temporary moratorium should be put in place until police and governments adopt policies and standards, or whether the technology should be banned permanently.

Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a “uniquely dangerous and oppressive technology.”

On the other side of the San Francisco Bay Bridge, Oakland and Berkeley are considering bans based on the same language used in the San Francisco ordinance, while state governments in Massachusetts and Washington have explored moratoriums (opposed by Amazon and Microsoft) until such systems' ability to recognize all Americans can be ensured.




Georgetown University Center on Privacy and Technology senior associate Clare Garvie is slated to testify before the House Oversight Committee next Wednesday. Last Thursday, the center released new reports detailing the NYPD’s use of altered images and pictures of celebrities who look like suspects to make arrests, as well as real-time facial recognition systems being used in Detroit and Chicago and tested in other major U.S. cities.

After years of records requests and lawsuits to examine police use of facial recognition software in the United States, Garvie believes it’s time for a nationwide moratorium on the use of such technology by law enforcement.

Garvie and her coauthors of "The Perpetual Line-Up" report began monitoring facial recognition software in 2016 and initially concluded that facial recognition could be used to benefit people if regulations were put in place. But Garvie's perspective has since shifted.

“What we’re seeing today is that in the absence of regulation, [facial recognition] continues to be used, and now we have more information about just how risky it is, and just how advanced existing deployments are,” Garvie said. “In light of this information, we think that there needs to be a moratorium until communities have a chance to weigh in on how they want to be policed and until there are very, very strict rules in place that guide how this technology is used.”

Before such a moratorium is lifted, Garvie would want to see mandatory bias and accuracy testing for systems, aggressive court oversight, minimum photo quality standards, and public surveillance tech use reports, like the annual surveillance tech use audits already required in San Francisco.

Further, forensic sketches, altered images, and celebrity doppelgangers shouldn’t be used with facial recognition software, and public reports and transparency should be the norm. Obtaining details on facial recognition software use has been challenging. For example, Georgetown researchers first requested facial recognition records from the NYPD in 2016, and they were told no such records existed — even though the technology had been in use since 2011. After two years in court, the NYPD has turned over 3,700 pages of documents related to facial recognition software use.

Garvie believes that facial recognition software use by police in the U.S. is inevitable but that the practice of scanning driver’s license databases with facial recognition software should be banned. “We’ve never before had biometric databases composed of most Americans, and yet now we do — thanks to face recognition technology — and law enforcement has access to driver’s license databases in at least 32 states,” she said.

Real-time facial recognition use by police should also be banned, Garvie argues, because giving police the ability to scan faces of people at protests and track their location would be too risky. “The ability to get every face of people walking by a camera or every face of people in a protest and identify those people to locate where they are in real time — that deployment of the technology fundamentally provides law enforcement new capabilities whose risks outweigh the benefits, in my mind,” Garvie said.

Prosecutors and police should also be obligated to tell suspects and their counsel that facial recognition aided in an arrest. This recommendation was part of the 2016 report, but Garvie said she has not encountered any jurisdictions that have made this official policy or law.

“What we see is that information about face recognition searches is typically not turned over to the defense, not because of any rules around it, but in fact the opposite. In the absence of rules, defense attorneys are not being told that face recognition searches are being conducted on their clients,” she said. “The fact that people are being arrested and charged and never find[ing] out that the reason why they were arrested and charged was face recognition is deeply troubling. To me, that seems like a very straightforward violation of due process.”

Mutale Nkonde, a policy analyst and fellow at the Data & Society Research Institute, was part of a group that helped author the Algorithmic Accountability Act. Introduced in the U.S. Senate last month, the bill requires privacy, security, and bias risk assessments, and it puts the Federal Trade Commission in charge of regulation.

Like Garvie, Nkonde believes the San Francisco ban provides a model for other concerned parties, such as Brooklyn residents currently fighting landlords who want to replace keys with facial recognition software. She also favors a moratorium on the use of such technology.

“Even though a ban sounds really appealing, if we can get a moratorium and do some more testing and auditing [of] algorithms, go deeper into the work around the fact that they don’t recognize dark faces and gendered people, that at least creates a grounded legal argument for a ban and gives time to really talk to industry,” she said. “Why would they put the resources into something that doesn’t have a marketplace?”

The bill, which Nkonde said gathered momentum after she briefed members of the House Progressive Caucus on algorithmic bias last year, may not be signed into law anytime soon, but she still believes it’s important to bring attention to the issue ahead of a presidential election year and to educate members of Congress.

“It’s really important for people in the legislature to constantly have these ideas reinforced, because that’s the only way we’re going to be able to move the needle,” she said. “If you keep seeing a bill that’s hammering away at the same issue between [Congressional] offices, that’s an idea that’s going to be enacted into law.”

On the business side, Nkonde thinks regulations and fines are needed to ensure legally binding consequences for tech companies that fail to deliver racial and gender parity. Otherwise, she warns, concerned AI companies may engage in the kind of “ethics washing” sometimes applied to matters of diversity and inclusion, with talk of an urgent need for change but little genuine progress.

“It’s one thing saying a company’s ethical, but from my perspective, if there’s no legal definition that we can align this to, then there’s no way to keep companies accountable and it becomes like the president saying he didn’t collude. Well that’s cool that you didn’t collude, but there’s no legal definition of collusion, so that was never a thing in the first place,” she said.

An irredeemable technology

As Nkonde and Garvie advocate for a moratorium on facial recognition use by governments and police, attorney Brian Hofer wants to see more governments impose permanent bans.

Hofer helped author the facial recognition software ban in San Francisco, the fourth Bay Area municipality for which he has helped craft surveillance tech policy using the ACLU’s CCOPS model.

Hofer has been speaking with lawmakers in Berkeley and in Oakland, where he serves as chair of the city’s Privacy Advisory Committee. Previously known for his opposition to license plate readers, he favors the permanent ban of facial recognition software in his hometown of Oakland because he’s afraid of misuse and lawsuits.

“We’re [Oakland Police Department] in our 16th year of federal monitoring for racial profiling. We always get sued for police scandals, and I can’t imagine [the suits] with this powerful technology. Attached to their liability, it would bankrupt us, and I think that would happen in a lot of municipalities,” Hofer said.

More broadly, Hofer hopes Berkeley and Oakland produce momentum for facial recognition software bans, because he thinks there’s “still time to contain it.”

“I believe strongly that the technology will get more accurate, and that’s my greater concern, that it will be perfect surveillance,” he said. “It’ll be a level of intrusiveness that we never consented to the government having. It’s just too radical of an expansion of their power, and I don’t think walking around in my daily life that I should have to subject myself to mass surveillance.”

If bans do not become the norm, Hofer thinks legislation should allow independent audits of software and limit usage to specific use cases — but he believes that mission creep is inevitable and mass surveillance is always abused.

“Identifying a kidnapping suspect, a homicide suspect, a rapist, truly violent predators — there could be some success cases there, I’m sure of it. But once you get that door open, it’s going to spread. It’s going to spread all over,” he warned.

Facial recognition for better communities?

Not everyone wants a blanket ban or moratorium put in place. Information Technology and Innovation Foundation (ITIF) vice president and Center for Data Innovation director Daniel Castro is staunchly opposed to facial recognition software bans, calling them a step backward for privacy and saying they are more likely to turn San Francisco into Cuba.

“Cuba’s classically driving around in these 1950s cars and motorcycles and sidecars because they’ve been cut off from the rest of the world. A ban like this, instead of a kind of oversight or go-slow approach, locks the police into using the [old] technology and nothing else, and that I think is a concern, because I think people want to see police forces [be] effective,” Castro said.

ITIF is a Washington, D.C.-based think tank focused on issues of tech policy, life sciences, and clean energy. This week, ITIF’s Center for Data Innovation joined the Partnership on AI, a coalition of more than 80 organizations dedicated to the ethical use of AI that includes the likes of Microsoft, Facebook, Amazon, and Google. Employees of companies like Microsoft and Amazon sit on the board.

Castro thinks police departments need to do more performance accuracy audits of their own systems and put minimum performance standards in place. Like Garvie, he agrees that minimum photo quality standards are needed, but he believes overpolicing and use of facial recognition software should be considered as separate matters.

He also envisions facial recognition software accompanying police reform initiatives. “I think there are opportunities for police departments — that are actively trying to improve relations with marginalized communities to address systemic bias in their own procedures and in their own workforce — to use facial recognition to help address some of those problems. I think the tool is neutral in that way. It certainly could be used to exacerbate those problems, but I don’t think it is necessarily going to do that,” Castro said.

Veritone, an AI company selling facial recognition software to law enforcement in the United States and Europe, also thinks the technology could enable better community relations and that it will be used to exonerate innocent suspects rather than leading to false convictions or misidentification.

“The most biased systems on this planet are humans,” Veritone CEO Chad Steelberg told VentureBeat in a phone interview.

Like Hofer and Garvie, Steelberg believes that police using automated real-time facial recognition in public places, like the system currently used in Detroit, shouldn’t be allowed to monitor the daily lives of people who haven’t committed any crime. And he agrees that the tool could be used to infringe on civil rights and freedom of assembly and speech.

But he also thinks facial recognition can be used responsibly to help solve some of humanity’s toughest problems. “The benefit of AI is kind of counter to most of the things you read about. It’s a system that provides a true truth, free of bias and human backdrop and societal impact,” he said. “And I think that’s necessary for both law enforcement and many other broken parts of our society. Banning that technology seems like an absolute foolish approach from an outright standpoint, and I think that legislation which is far more thoughtful is necessary.”

As more cities and legislative bodies consider facial recognition software bans or put moratoriums in place, it’s clear the vote in San Francisco may be only the beginning. However communities and lawmakers choose to write the law, it’s imperative that these debates remain thoughtful and in line with American values, because despite the civil rights guarantees in the Constitution, nobody should be naive enough to believe that mass surveillance with facial recognition is not a potential reality in the United States.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel.

Thanks for reading,

Khari Johnson

AI Staff Writer