Ruha Benjamin on deep learning: Computational depth without sociological depth is ‘superficial learning’

Dr. Ruha Benjamin, Princeton University associate professor of African American Studies and director of the Just Data Lab, said engineers creating AI models should consider more than data sets when deploying systems. She further asserted that “computational depth without historic or sociological depth is superficial learning.”

“An ahistoric and asocial approach to deep learning can capture and contain, can harm people. A historically and sociologically grounded approach can open up possibilities. It can create new settings. It can encode new values and build on critical intellectual traditions that have continually developed insights and strategies grounded in justice. My hope is we all find ways to build on that tradition,” she said.

In a talk that examined the tools needed to build just and humane AI systems, she warned that without such guiding principles, people in the machine learning community risk becoming like the IBM workers who participated in the Holocaust during World War II: technologists whose role in automated human destruction was hidden within bureaucratic technical operations.

Alongside deep learning pioneer Yoshua Bengio, Benjamin was a keynote speaker this week at the all-digital International Conference on Learning Representations (ICLR), an annual machine learning conference. ICLR was originally scheduled to take place in Addis Ababa, Ethiopia, this year to engage the African ML community, but due to the pandemic it became a digital conference, with keynote speakers, poster sessions, and even social events happening entirely online.


Harmful algorithmic bias has proven pervasive in AI. Recent examples include the ongoing racial disparities in facial recognition performance that federal tech standards maker NIST identified late last year; researchers have also found bias in top-performing pretrained language models, object detection, automatic voice AI, and home lending.

Benjamin also referenced instances of bias in health care, personal lending, and hiring but said that when AI makers recognize historical and sociological context, they can build more just and humane AI systems.

“If it is the case that inequity and injustice [are] woven into the very fabric of our societies, then that means each twist, coil, and code is a chance for us to weave new patterns, practices, and politics. The vastness of the problem will be its undoing once we accept that we are pattern makers,” she said.

Benjamin explored themes from her book Race After Technology, which urges readers to imagine technology as a tool for counteracting power imbalances and examines issues like algorithmic colonialism and anti-blackness embedded in AI systems, as well as the overall role of power in AI. Benjamin also returned to her assertion that imagination is a powerful resource for people who feel disempowered by the status quo and for AI makers whose systems will either empower or oppress.

“We should acknowledge that most people are forced to live inside someone else’s imagination, and one of the things we have to come to grips with is how the nightmares that many people are forced to endure are really the underside of elite fantasies about efficiency, profit, safety, and social control,” she said. “Racism, among other axes of domination, helps to produce this fragmented imagination, so we have misery for some and monopoly for others.”

Answering questions Tuesday in a live conversation with members of the machine learning community, Benjamin said her next book and work at the Just Data Lab will focus on matters of race and tech during the COVID-19 pandemic. Among recent examples at the intersection of these issues, Benjamin pointed to the Department of Justice’s use of the PATTERN risk assessment algorithm to reduce prison populations during the pandemic. An analysis found the algorithm more than four times as likely to label white inmates low risk as black inmates.

Benjamin’s keynote comes as companies’ attempts to address algorithmic bias have drawn accusations of ethics washing, similar to criticism leveled at the tech industry’s lack of progress on diversity over the better part of the last decade.

When asked about opportunities ahead, Benjamin said it’s important that organizations maintain ongoing conversations around diversity and pay more than lip service to these issues.

“One area that I think is really crucial to understand[ing] the importance of diversity is in the very problems that we set out to solve as tech practitioners,” she said. “I would encourage us not to think about it as cosmetic or downstream — where things have already been decided and then you want to bring in a few social scientists or you want to bring in a few people from marginalized communities. Thinking about it much earlier in the process is vital.”

Recent efforts to put ethical principles into practice within the machine learning ethics community include a framework from Google AI ethics leaders for internal auditing and an approach to ethics checklists from principal researchers at Microsoft. Earlier this month, researchers from 30 major AI organizations — including Google and OpenAI — suggested creating a third-party AI auditing marketplace and “bias bounties” to help put principles into practice.