Berten Verbeeck & Cem Pektas
Recidivism risk algorithms
The prevalence of algorithmic assessment tools has been growing rapidly in recent years. Across industries, organizations have been developing use cases for predictive analytical models, ranging from determining your car insurance premium to setting your credit score.
Many of these practices are now well established. Yet, in 2016, major controversy was stirred when ProPublica published an investigation into the use of recidivism risk algorithms in America’s criminal justice system (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). In their article, ProPublica’s researchers lay out how they analyzed data from a widely used recidivism risk assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions).
The idea behind COMPAS is simple: have an artificial intelligence process enormous quantities of data on factors such as age, sex, employment, the current criminal charge, and the number of past convictions, and provide advice on whether a specific individual is likely to commit another offense in the future. Proponents of the system have argued that it helps judges and parole officers make better-informed, data-driven decisions. However, ProPublica’s investigation claims that COMPAS is biased against African Americans. In their article, the authors state: “Black defendants were often predicted to be at a higher risk of recidivism than they actually were. Our analysis found that black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent).”
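The metric at the heart of ProPublica’s claim is the false positive rate per group: among people who did not reoffend within two years, what share were nevertheless labeled higher risk? The sketch below illustrates that calculation on a handful of synthetic, purely hypothetical records – it does not use the actual COMPAS data or reproduce ProPublica’s numbers.

```python
# Synthetic, hypothetical records: (group, labeled_higher_risk, recidivated).
# These are illustrative only, not drawn from the COMPAS dataset.
records = [
    ("A", True,  False),
    ("A", True,  False),
    ("A", False, False),
    ("A", True,  True),
    ("B", True,  False),
    ("B", False, False),
    ("B", False, False),
    ("B", False, True),
]

def false_positive_rate(records, group):
    """Share of non-recidivists in `group` who were labeled higher risk."""
    non_recidivists = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_recidivists if r[1]]
    return len(flagged) / len(non_recidivists)

for group in ("A", "B"):
    print(group, round(false_positive_rate(records, group), 2))
```

In this toy data, group A’s non-recidivists are flagged at twice the rate of group B’s – the same kind of disparity ProPublica reported, independent of the tool’s overall accuracy.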
While some later researchers have criticized ProPublica’s methodology – and have argued that algorithm-based tools vastly outperform humans in predicting recidivism (see, for example, https://news.berkeley.edu/2020/02/14/algorithms-are-better-than-people-in-predicting-recidivism-study-says/) – there is no doubt that systems such as COMPAS risk perpetuating established biases. When considering the context of America’s criminal justice system, this is particularly worrisome for two reasons. First, the stakes are incredibly high, with individuals’ most fundamental possession – their freedom – in the balance. Second, America’s criminal justice system has a well-documented history of racial bias, and it is not unthinkable that African Americans – when evaluated based on an algorithm – may find themselves discriminated against once again due to overrepresentation in past arrest data.
This does not mean we have to do away with recidivism risk algorithms altogether. Tools such as COMPAS can be valuable in highly complex cases, where vast numbers of factors – more than the human mind can process – need to be taken into account. Yet, two fundamental conditions need to be put in place before authorizing their use. First, the algorithm should never replace judges. It can be leveraged as one input into a judge’s overall decision making – a complement to existing methods – but can never be allowed to make decisions independently of any human. Second, any judge who uses the algorithm must be able to adequately interpret and explain its workings. A judge must be able to determine when certain situational factors are not considered by the algorithm, and then be capable of overriding the algorithm’s recommendations. When a judge does decide to follow the algorithm’s advice, he or she must be able to spell out in clear language what drove the decision. As per the Fifth Amendment, any defendant has the right to “due process of law”. This includes the right to understand how one’s algorithmic score was calculated and the right to challenge the score’s accuracy. Blindly following an algorithm would mean depriving defendants of their right to due process.