Computer algorithms can outperform people at predicting which criminals will get arrested again, a new study finds.
Risk-assessment algorithms that forecast future crimes often help judges and parole boards decide who stays behind bars (SN: 9/6/17). But these systems have come under fire for exhibiting racial biases (SN: 3/8/17), and some research has given reason to doubt that algorithms are any better at predicting arrests than humans are. One 2018 study that pitted human volunteers against the risk-assessment tool COMPAS found that people predicted criminal reoffense about as well as the software (SN: 2/20/18).
The new set of experiments confirms that humans predict repeat offenders about as well as algorithms when the people are given immediate feedback on the accuracy of their predictions and are shown limited information about each criminal. But people fare worse than computers when they don't get feedback, or when they are shown more detailed criminal profiles.
In reality, judges and parole boards don't get instant feedback, and they usually have far more information to weigh in making their decisions. So the findings suggest that, under realistic prediction conditions, algorithms outmatch people at forecasting recidivism, researchers report online February 14 in Science Advances.