Considering The Present And Future Of AI In Pretrial Risk Assessments

By: Alexandra Chouldechova and Kristian Lum

Courts | Data Analysis | Presumption of Innocence | May 3, 2016

Many of us think of Artificial Intelligence as technology more likely to appear in science fiction films than in our own communities.

But in practice, the term is used to describe any computational technology capable of producing reasoned or “intelligent” outputs. AI is already with us, spreading quickly through the world as it realizes its commercial potential.

From recommending music you might like or helping to pick a romantic partner, to tracing contacts during a pandemic, individuals and governments alike increasingly trust AI to predict, classify, and detect things in many spheres of our lives.

While there is broad public acceptance of the use of such tools in shaping what we buy, who we date and what shows we watch, the use of AI in criminal justice remains fraught and contentious.

In thinking about how AI might change things, it’s important to note that many existing risk assessment tools are already based on simple forms of AI. These tools have drawn scrutiny over concerns that they may produce biased outcomes and, in turn, lead to biased decision making.

Our new report, The Present And Future Of AI In Pretrial Risk Assessments, produced with the support of the John D. and Catherine T. MacArthur Foundation, outlines some important questions criminal justice decision-makers ought to consider when weighing the adoption of an AI-based pre-trial risk assessment tool.

For example, some basic questions that pertain to current-day risk assessment tools are:

  • What exactly are the models predicting?
  • Does the process for obtaining the inputs for risk assessment tools respect the rights and dignity of the accused?
  • Are there racial, ethnic, gendered, or other relevant disparities in the model’s predictions? (A minimal illustration of one such check appears after this list.)
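
To make that last question concrete, here is a minimal, purely illustrative sketch of the kind of check an agency’s analysts might run. It assumes hypothetical data in which each person has a group label, a “high risk” flag from the tool, and a record of whether they were actually rearrested before trial; nothing below reflects any real tool or dataset.

    # Illustrative sketch only: hypothetical data and column names,
    # not any vendor's actual tool or data.
    import pandas as pd

    # Each row: a person's group, whether the tool flagged them as high risk,
    # and whether they were in fact rearrested before trial (1 = yes, 0 = no).
    df = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "flagged":    [1,   0,   1,   1,   1,   0],
        "rearrested": [0,   0,   1,   0,   1,   0],
    })

    # Share of each group flagged as high risk.
    print(df.groupby("group")["flagged"].mean())

    # False positive rate: share flagged among those who were not rearrested.
    not_rearrested = df[df["rearrested"] == 0]
    print(not_rearrested.groupby("group")["flagged"].mean())

Large gaps between groups in either quantity would be one signal, though by no means the only one, that a tool’s predictions merit closer scrutiny before adoption.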

As more complex AI comes online for use in pre-trial risk assessment, it will be subject to many of the same critiques and concerns that have been raised about current versions of these programs. In most cases, we believe the issues with the current tools will be magnified by the new data sources and more complex model-building approaches that will be marketed as AI.

More questions than answers are likely to arise as we move forward. For example:

  • If surveillance-based inputs for AI come online, are they derived from unevenly distributed surveillance systems? Do those systems fail differently for different groups of people?
  • Do AI tools broaden the definition of unacceptable pre-trial behaviors, or widen the net of those eligible for pre-trial supervision or detention?
  • Does the data contain any information that was obtained via legally or ethically questionable methods?
  • Does collecting data to administer the assessment in the future require any morally objectionable or overly invasive procedures?
  • Is the model understandable? Are the gains in predictive accuracy sufficient to offset the loss in interpretability?

In grappling with these difficult questions, agencies will want to engage with their communities and with technical experts to ensure that the hype and promises marketed by developers of AI do not distract from the overall goals of vastly reducing pre-trial detention and eliminating racial disparities in the criminal justice system.

Alexandra Chouldechova is the Estella Loomis McCandless Assistant Professor of Statistics and Public Policy at Heinz College, Carnegie Mellon University.

Kristian Lum is an Assistant Research Professor in the Department of Computer and Information Science at the University of Pennsylvania.