AI Weekly: Proposed ban on facial recognition in public housing points to regulation appetite

Facial recognition made headlines again this week after three Congressional lawmakers — Yvette Clarke (D-NY), Ayanna Pressley (D-MA), and Rashida Tlaib (D-MI) — introduced legislation that would bar the technology from public housing. As proposed, the No Biometric Barriers to Housing Act would prohibit federally funded apartment complexes from using facial analysis software, and it would require the Department of Housing and Urban Development (HUD) to submit a report detailing facial recognition’s impact on tenants.

CNET noted that it would be the first national bill to prevent landlords from imposing facial recognition on tenants. Private properties would be exempt — the draft names only HUD housing — but the bill is likely to spark debate about the technology’s limitations and privacy implications. For instance, the ban could affect programs like Detroit’s controversial Project Green Light, which pairs facial recognition software with cameras erected at businesses and public housing to alert local law enforcement to potential crimes in progress.

“We’ve heard from … experts, researchers who study facial recognition technology, and community members who have well-founded concerns about the implementation of this technology and its implications for racial justice,” Tlaib said. “We cannot allow residents of HUD-funded properties to be criminalized and marginalized with the use of biometric products like facial recognition technology. We must be centered on working to provide permanent, safe, and affordable housing to every resident — and unfortunately, this technology does not do that.”

Two months ago, over 130 rent-stabilized tenants in Brooklyn filed a legal opposition to their landlord’s application to install a facial recognition entry system in their buildings. In their complaint, they questioned the system’s accuracy and potential for bias, worrying it could lock the predominantly elderly, black and brown, and female tenants out of their own homes.


“We know next to nothing about this new system, and our landlord refuses to sufficiently answer our questions about how the system works, what happens to our biometric data, and how they plan to address accuracy and bias gaps,” said tenant Icemae Downes. “We don’t believe he’s doing this to beef up security in the building. We believe he’s doing this to attract new tenants who don’t look like us.”

They have reason to be concerned. A 2012 study showed that facial algorithms from vendor Cognitec performed 5% to 10% worse on African Americans than on Caucasians, and researchers in 2011 found that facial recognition models developed in China, Japan, and South Korea had difficulty distinguishing between Caucasian faces and those of East Asians. In a test last year, the American Civil Liberties Union fed Amazon’s Rekognition service 25,000 mugshots from a “public source” and had it compare them against official photos of members of Congress; Rekognition falsely matched 28 lawmakers to mugshots. And MIT Media Lab researcher and Algorithmic Justice League founder Joy Buolamwini discovered in audits of facial recognition systems — including those made by Amazon, IBM, Face++, and Microsoft — that they performed poorly on young people, women, and people with dark skin.
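
For a sense of how such a test works mechanically: Rekognition exposes a CompareFaces API that returns a similarity score for each face pair, and anything above a caller-chosen threshold counts as a match. The ACLU reportedly ran its test at the service’s default 80% similarity threshold; Amazon has said law enforcement should use 99%. Below is a minimal, illustrative sketch of that kind of pairwise comparison using the boto3 SDK; the file names are hypothetical, and the ACLU’s exact pipeline was not published.

```python
import boto3

# Hypothetical region and file names, for illustration only.
rekognition = boto3.client("rekognition", region_name="us-east-1")

def compare(source_path, target_path, threshold=80):
    """Ask Rekognition whether two photos show the same person.

    Returns the list of face matches scoring at or above `threshold`
    percent similarity (80 is the service default; Amazon recommends
    99 for law enforcement use).
    """
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,
        )
    return response["FaceMatches"]

# A lower threshold admits more false matches of the kind the ACLU found.
for match in compare("member_of_congress.jpg", "mugshot.jpg"):
    print(f"Similarity: {match['Similarity']:.1f}%")
```

The threshold is the crux: at 80%, a large gallery of mugshots will inevitably produce spurious high-scoring pairs, which is how 28 members of Congress ended up matched to criminal booking photos.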

Even Rick Smith, CEO of Axon, one of the largest suppliers of body cameras in the U.S., said last summer that facial recognition isn’t yet accurate enough for law enforcement applications.

“[They aren’t] where they need to be to be making operational decisions off the facial recognition,” he said. “This is one where we think you don’t want to be premature and end up either where you have technical failures with disastrous outcomes or … there’s some unintended use case where it ends up being unacceptable publicly in terms of long-term use of the technology.”

Perhaps unsurprisingly, lawmakers at the national, state, and local levels have pushed back against unfettered facial recognition well beyond narrowly tailored bans. Last week, Oakland became the third U.S. city, after San Francisco and the Boston suburb of Somerville, to bar local government departments, including its police force, from using the technology. Hearings before the House Oversight and Reform Committee in May saw bipartisan support for limits on law enforcement’s use of facial recognition. State legislatures in Massachusetts and Washington have considered moratoriums on face surveillance platforms, and the California State Legislature is currently weighing a facial recognition ban on police body cam footage, as is the Berkeley City Council.

If the current trend holds, more bans are likely on the way.

“Vulnerable communities are constantly being policed, profiled, and punished, and facial recognition technology will only make it worse,” Rep. Pressley said. “Program biases misidentify women and people of color, and yet the technology continues to go unregulated. [This bill] will ban the use of facial recognition and other biometric technologies in HUD-funded properties — protecting the civil rights and civil liberties of tenants throughout the country.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers
AI Staff Writer

P.S. Please enjoy this video of AI agents trained with Uber’s evolvability ES toolkit, which was released this week.
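
In case it helps to unpack that: “ES” is evolution strategies, a gradient-free way to train agents by repeatedly perturbing their parameters with noise, scoring each perturbed copy, and shifting the parameters toward the better-scoring copies. Uber’s evolvability ES work builds on this idea (optimizing for how behaviorally diverse an agent’s offspring are, rather than for raw reward), but the loop below is only a minimal, self-contained sketch of the vanilla base technique, not Uber’s code.

```python
import numpy as np

def evolution_strategies(fitness, theta, iterations=200,
                         pop_size=50, sigma=0.1, lr=0.01):
    """Vanilla ES: estimate the gradient of expected fitness under
    Gaussian parameter noise, then take an ascent step."""
    for _ in range(iterations):
        # Sample a population of random perturbation directions.
        noise = np.random.randn(pop_size, theta.size)
        # Score each perturbed copy of the parameters.
        rewards = np.array([fitness(theta + sigma * n) for n in noise])
        # Standardize rewards so the update scale stays stable.
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Move the parameters toward the better-scoring perturbations.
        theta = theta + lr / (pop_size * sigma) * noise.T @ rewards
    return theta

# Toy usage: climb toward the maximum of a simple concave function.
best = evolution_strategies(lambda w: -np.sum(w ** 2), np.ones(5))
print(best)  # should approach the all-zeros optimum
```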