The Partnership on AI today released its first-ever research report, which declares the algorithms now in use unfit to automate pretrial bail decisions, in which some people are labeled high risk and detained while others are deemed low risk and sent home to await trial.
The report calls out validity, data sampling bias, and bias in statistical predictions as problems with currently available risk assessment tools, and flags human-computer interface issues and unclear definitions of high risk and low risk as further important shortcomings.
The Partnership on AI, an organization created in 2016, brings together the biggest names in AI, such as Amazon, Google, Facebook, and Nvidia, with groups like Amnesty International, the ACLU, the EFF, and Human Rights Watch.
Education, news, and multinational organizations like the United Nations are also members. The group was created by Apple, Amazon, Google, and Facebook, and more than half of its 80 current member organizations are nonprofits.
PAI recommends policymakers either avoid using algorithms entirely for decision-making surrounding incarceration, or find ways to meet minimum data collection and transparency standards laid out in the report.
The report was motivated by the First Step Act, passed by Congress and signed into law last year by President Trump, as well as California's SB 10, legislation that replaces the state's cash bail system with algorithmic risk assessments and that will be on the ballot in 2020.
Criminal justice reform advocates saw both bills as part of a broader national push, but the Partnership on AI says such tools can have a significant adverse impact on millions of lives.
The release of the report is one of the first public and declarative actions by the Partnership since its founding. A focus on criminal justice reform may seem like a left turn for an organization created by the biggest AI companies in the world.
Issues like AI bias at tech giants and the sale of facial recognition software by companies like Microsoft and Amazon seem to have attracted more headlines in recent months. However, Partnership on AI researcher Alice Xiang says the report's focus on risk assessment algorithms used by judges in pretrial bail decisions was a conscious decision.
“There have already been a lot of concerns about algorithmic bias in various contexts, but criminal justice is really the one where these questions are the most crucial, since we’re actually talking about making decisions about individuals’ liberty, and that can have huge ramifications for the rest of their lives and the lives of others in their communities,” Xiang told VentureBeat in an interview. “Part of our reason for choosing criminal justice for this initial report is that we do think it is really the best example of why fairness is very important to consider in the context of really any use of AI to make important life decisions for individuals, and especially when the government is making those decisions, because then it’s something where issues of transparency too are important to facilitate public discourse.”
Xiang says risk assessment tools are the best example of why AI fairness matters and of the impact such systems can have on people's lives.
The report calls for addressing 10 minimum requirements before deploying such systems. These concerns fall into three main categories: human-computer interface concerns, technical requirements that deal with bias, and validity concerns at the level of the data being used to train these tools.
The report also suggests re-weighting methods as one way to mitigate historical bias in training data sets.
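For illustration, the sketch below shows one common re-weighting scheme, in the spirit of Kamiran and Calders' reweighing approach: each training record gets a weight so that the protected attribute and the outcome label look statistically independent under the weighted data. The report does not prescribe a specific method, and the toy data and column names here are assumptions made for the example.

```python
# Minimal sketch of a re-weighting scheme for mitigating historical bias.
# The data, column names, and choice of method are illustrative assumptions,
# not something specified in the PAI report.
import pandas as pd

# Toy training data: a protected attribute ("group") and a historical label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)    # P(group)
p_label = df["label"].value_counts(normalize=True)    # P(label)
p_joint = df.groupby(["group", "label"]).size() / n   # P(group, label)

# Weight each record by P(group) * P(label) / P(group, label) so that, under
# the weighted distribution, group membership and the label are independent.
df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["label"])
]

print(df)
# The resulting weights would typically be passed to a model through a
# sample_weight argument during training.
```

In this scheme, group-label combinations that are over-represented in the historical data are down-weighted and under-represented combinations are up-weighted, which is one way of keeping a model from simply reproducing past disparities.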
Most, if not all, of the broad recommendations in the report apply to the use of risk assessment tools in other contexts within the criminal justice system as well, she said.
The report does not mention any companies by name or assess use of algorithms by specific jurisdictions in the United States.
Xiang declined to name specific organizations that contributed directly to the research, or specific risk assessment algorithms, but said no currently available tools meet the Partnership's minimum requirements.
Algorithms used to inform judges' decisions have been shown to produce racially biased results, disproportionately labeling African-American inmates as at risk of recidivism.
The report attempts to balance the views of member organizations that believe the fair use of algorithms in criminal justice is impossible, because historical bias can never be fully removed from the data, with those that believe such tools can do a better job than biased judges.
The document is therefore not meant to fully reflect the views of any single partner organization but to represent a consensus among the 30 to 40 partner organizations that helped draft and edit the report.
The report also acknowledges that risk assessment tools are often proposed by well-intentioned people interested in reducing mass incarceration. The United States leads the world in the size of its incarcerated population.
“One thing we do try to acknowledge in the report is that the use of these tools wasn’t motivated by animus,” Xiang said. “The purpose was to try and reform very broken systems, but the concerns laid out in this report are that these tools were deployed without necessarily all of the work that needed to be done to make sure they don’t create more problems than they solve in the long run, especially given that there are many potential reforms that could be undertaken beyond just automating decisions.”
Beyond pretrial bail risk assessments, algorithms are also used in the criminal justice system to do things like predict recidivism rates. A 2016 ProPublica analysis found that the COMPAS recidivism algorithm was twice as likely to misclassify African-American defendants as presenting a high risk of violent recidivism as it was to misclassify white defendants.
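The kind of disparity ProPublica measured can be expressed as a difference in false positive rates: among defendants who did not go on to reoffend, how often each group was labeled high risk. The sketch below uses made-up data and assumed column names purely to illustrate that calculation; it does not reproduce the COMPAS analysis itself.

```python
# Illustrative sketch of a group-wise false positive rate check, in the spirit
# of ProPublica's COMPAS analysis. The data and column names are made up.
import pandas as pd

scores = pd.DataFrame({
    "race":       ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 1, 0, 1, 0, 0, 0],   # the tool's prediction
    "reoffended": [0, 1, 0, 0, 0, 0, 1, 0],   # the observed outcome
})

# False positive rate per group: among people who did not reoffend,
# the share who were nonetheless labeled high risk.
fpr = (
    scores[scores["reoffended"] == 0]
    .groupby("race")["high_risk"]
    .mean()
)
print(fpr)
```

A large gap between the groups' false positive rates is the sort of result that would fail the kind of bias checks the report recommends.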
Other applications of AI in law include predictions for class-action lawsuits and automation of the discovery process.