Algorithmic risk assessments are touted as more objective and accurate than judges at predicting future violence. Across the political spectrum, these tools have become the darling of bail reform. But their appeal rests on the hope that risk assessments can be a valuable course corrector for judges’ faulty human intuition.
When it comes to predicting violence, risk assessments offer more magical thinking than helpful forecasting. We and other researchers have written a statement about the fundamental technical flaws in these tools.
Actuarial risk assessments are virtually useless for identifying who will commit violence if released pretrial. Consider the pre-eminent risk assessment tool on the market today, the Public Safety Assessment, or P.S.A., adopted in New Jersey, Kentucky and various counties across the country. In these jurisdictions, the P.S.A. assesses every person accused of a crime and flags them as either at risk for “new violent criminal activity” or not. A judge sees whether the person has been flagged for violence and, depending on the jurisdiction, may receive an automatic recommendation to release or detain.
Risk assessments’ simple labels obscure the deep uncertainty of their actual predictions. Largely because pretrial violence is so rare, it is virtually impossible for any statistical model to identify people who are more likely than not to commit a violent crime.
The data set used to build the P.S.A. bears this out: 92 percent of the people flagged for pretrial violence will not be arrested for a violent crime. The fact is, a vast majority of even the highest-risk individuals will not commit a violent crime while awaiting trial. If these tools were calibrated to be as accurate as possible, they would simply predict that every person is unlikely to commit a violent crime while on pretrial release.
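The arithmetic behind this base-rate problem is worth making explicit. The sketch below uses hypothetical numbers chosen only for illustration; the base rate, sensitivity and false-positive rate are assumptions, not the P.S.A.’s actual parameters. It shows how, when violence is rare, even a tool with seemingly respectable error rates will be wrong about most of the people it flags:

```python
# Illustrative base-rate arithmetic (hypothetical numbers, not the
# P.S.A.'s actual statistics). Assume pretrial violence is rare and
# the tool has plausible-sounding error rates.
base_rate = 0.05            # assumed share of defendants arrested for violence
sensitivity = 0.50          # assumed share of violent cases the tool flags
false_positive_rate = 0.25  # assumed share of nonviolent people flagged anyway

# Among 10,000 hypothetical defendants:
n = 10_000
violent = n * base_rate                                 # 500 people
flagged_correctly = violent * sensitivity               # 250 true positives
flagged_wrongly = (n - violent) * false_positive_rate   # 2,375 false positives

precision = flagged_correctly / (flagged_correctly + flagged_wrongly)
print(f"Share of flagged people who are actually violent: {precision:.0%}")
# -> roughly 10 percent: about 90 percent of the flagged group will not
#    commit violence, in line with the 92 percent figure reported above.
```

No tweak of the error rates escapes this; as long as the base rate stays low, the flagged group is dominated by false positives.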
Instead, the P.S.A. sacrifices accuracy for the sake of making questionable distinctions among people who all have a low, indeterminate or incalculable likelihood of violence. Algorithmic risk assessments label people as at risk for violence without providing judges any sense of the underlying likelihood or uncertainty of this prediction. As a result, these tools could easily lead judges to overestimate the risk of pretrial violence and detain far more people than is justified.
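A minimal sketch of this information loss, again with hypothetical probabilities and an assumed cutoff (the P.S.A.’s internal scores and thresholds are not reproduced here): a binary flag collapses very different risk estimates into the same label, and the judge never sees the difference.

```python
# Hypothetical estimated probabilities of pretrial violence for four
# defendants (illustrative values, not real P.S.A. outputs).
estimated_risk = {"A": 0.03, "B": 0.08, "C": 0.11, "D": 0.30}
FLAG_THRESHOLD = 0.10  # assumed cutoff for the "violent" flag

for person, p in estimated_risk.items():
    flag = "FLAGGED for violence" if p >= FLAG_THRESHOLD else "not flagged"
    print(f"{person}: estimated probability {p:.0%} -> {flag}")

# C (11%) and D (30%) receive the identical flag, even though both are
# far more likely than not to remain nonviolent; the label carries no
# probability and no measure of uncertainty.
```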
These limits may offer a broader lesson for the project of reducing mass incarceration. Applying “big data” forecasting to our existing criminal justice practices is not just inadequate; it also risks cementing the irrational fears and flawed logic of mass incarceration behind a veneer of scientific objectivity. Neither judges nor software can know in advance who will and who won’t commit violent crime. Risk assessments are a case study in why a real-world “Minority Report” doesn’t work.