The New York Police Department used photos of actor Woody Harrelson to arrest a man accused of stealing beer from a CVS after officers concluded from a partial photo that the suspect resembled the actor. Facial recognition software was used to make the 2017 arrest, according to a report released today by the Georgetown University Center on Privacy and Technology.
Georgetown researchers are calling the incident representative of the risks associated with unregulated use of facial recognition software by police in the United States. They’re also calling for a local, state, and federal moratorium on facial recognition software use by police.
The report, titled “Garbage In, Garbage Out: Face Recognition on Flawed Data,” also found that police departments, including the NYPD, edited photos — including copying facial features from photos of other people — in order to get a match.
At least half a dozen police departments across the country use composite sketches to search facial recognition databases containing driver’s license photos. Departments cited include the Maricopa County Sheriff’s Office in Arizona and the Washington County Sheriff’s Department in Oregon.
This approach is endorsed by Amazon Web Services, whose Rekognition service was used in facial recognition tests conducted by the Washington County Sheriff’s Department last year.
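For a sense of what such a search looks like in practice, here is a minimal, illustrative sketch of a face search against a pre-indexed Rekognition photo collection, using the boto3 client’s search_faces_by_image call. The collection name, file name, and similarity threshold are hypothetical; the report does not describe how the Washington County system was actually configured.

```python
# Illustrative sketch only: searching a pre-indexed Rekognition face
# collection with a probe image. All names and parameters are hypothetical.
import boto3

client = boto3.client("rekognition", region_name="us-west-2")

# Read the probe image (e.g., a surveillance still or scanned sketch).
with open("probe.jpg", "rb") as f:
    probe_bytes = f.read()

# Search the collection for faces similar to the largest face in the probe.
response = client.search_faces_by_image(
    CollectionId="license-photos",   # hypothetical collection of indexed photos
    Image={"Bytes": probe_bytes},
    FaceMatchThreshold=80,           # minimum similarity score (0-100)
    MaxFaces=5,                      # return at most five candidate matches
)

# Each match carries a similarity score. A human analyst still has to review
# the candidates, since low-quality probes yield unreliable matches.
for match in response["FaceMatches"]:
    face = match["Face"]
    label = face.get("ExternalImageId", face["FaceId"])  # set at index time
    print(f"{label}: similarity {match['Similarity']:.1f}%")
```

Note that the API returns ranked candidates, not a definitive identification, which is part of why probe quality matters so much: a composite sketch or heavily edited photo degrades the similarity scores the whole process depends on.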
Analysis found the composite sketch method effective in only 1 of every 20 facial recognition searches, and NYPD analysts similarly determined that forensic sketches fail to return a match 95% of the time. Both techniques increase the risk that innocent people will be misidentified as suspects in crimes.
Facial recognition software has come under increasing scrutiny as local, state, and federal lawmakers explore how best to regulate use of the technology.
Earlier this week, San Francisco became the first city in the nation to ban facial recognition software use by police and city departments — due in part to fears of misuse and overpolicing of marginalized communities. On Monday, New York lawmakers proposed legislation to ban use of facial recognition software by landlords.
In April, a judge ordered the Georgetown University privacy center to return documents after the NYPD mistakenly turned over 20 pages of confidential information during the center’s more than two-year legal effort to examine the department’s use of facial recognition technology.
Also out today is a report called “America Under Watch: Face Surveillance in the United States,” which determined that police departments in Detroit and Chicago have acquired real-time facial recognition capabilities. The Detroit system uses a network of 500 cameras at traffic lights and public places throughout the city.
These reports build on the 2016 release of “Perpetual Lineup,” which concluded that law enforcement agencies in a majority of states were using facial recognition software to search databases of driver’s license or ID photos, and that photos of roughly half of U.S. adults already appeared in facial recognition databases. That report called the use of images of law-abiding citizens “unprecedented and highly problematic” and concluded that proliferation of the technology was unregulated and likely to negatively impact the lives of African-Americans.
That report draws its conclusions from more than 100 public records requests submitted to local and state police departments across the United States.
Report author Clare Garvie said that in the absence of regulation and standards, the technology is being abused in alarming ways, leading police departments to make “irresponsible mistakes.”
“We have learned that some cities in the United States have quietly developed massive networks of face recognition-enabled cameras — networks with the ability to track us wherever we go, without our knowledge or consent. While we do not yet know whether all the switches have been flipped to ‘on,’ the potential for abuse of these systems is alarming,” she said in a statement provided to VentureBeat.
Garvie is scheduled to testify before the House Oversight Committee on May 22 alongside Joy Buolamwini, whose research found facial recognition software far less accurate at recognizing people with dark skin, particularly women of color.