Crime Prediction Algorithms Aren't Very Good At Predicting Crimes
Some courts in the U.S., in states ranging from California to New Jersey, use crime-predicting algorithms to determine whether a defendant is likely to commit another crime in the future. While the software helps judges decide who gets bail, who goes to jail and who walks free, the technology appears to be unreliable and risks making the justice system less fair.
Dartmouth College researchers Julia Dressel and Hany Farid examined these so-called risk-assessment algorithms in a paper published in Science Advances. The study focused on one popular risk-assessment tool, called Compas, and found that the software’s recidivism predictions are no more accurate than the answers random people give to online surveys.
Farid, who teaches computer science at Dartmouth, and Dressel, who majored in computer science and gender studies at the same school, used Amazon Mechanical Turk in the study. They asked around 400 participants of the online marketplace, where people are paid small amounts for simple tasks, to decide whether a specific defendant was likely to reoffend.
Participants made their predictions after being shown seven pieces of information from each defendant’s profile, none of which included race. Dressel and Farid’s sample comprised data on 1,000 real defendants from Broward County, Florida, along with public records showing whether those defendants actually went on to commit another crime.
The participants were divided into groups, and each participant assessed 50 defendants based on a brief description such as the one shown below:
The defendant is a [SEX] aged [AGE]. They have been charged with: [CRIME CHARGE]. This crime is classified as a [CRIMINAL DEGREE]. They have been convicted of [NON-JUVENILE PRIOR COUNT] prior crimes. They have [JUVENILE-FELONY COUNT] juvenile felony charges and [JUVENILE-MISDEMEANOR COUNT] juvenile misdemeanor charges on their record.
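For concreteness, the seven fields slot into that template roughly as in the sketch below. This is a hypothetical illustration; the field names and sample values are not taken from the study’s materials, only the template wording is.

```python
# Hypothetical field names and sample values; only the template text comes from the study.
defendant = {
    "sex": "male", "age": 34, "crime_charge": "Grand Theft",
    "criminal_degree": "felony", "prior_count": 3,
    "juvenile_felony_count": 0, "juvenile_misdemeanor_count": 1,
}

description = (
    f"The defendant is a {defendant['sex']} aged {defendant['age']}. "
    f"They have been charged with: {defendant['crime_charge']}. "
    f"This crime is classified as a {defendant['criminal_degree']}. "
    f"They have been convicted of {defendant['prior_count']} prior crimes. "
    f"They have {defendant['juvenile_felony_count']} juvenile felony charges and "
    f"{defendant['juvenile_misdemeanor_count']} juvenile misdemeanor charges on their record."
)
print(description)
```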
Again, the study used only seven pieces of data, far fewer than the 137 data points Compas collects in its questionnaire. However, it’s worth noting that the company behind Compas, Equivant, said in a statement that its software uses only six pieces of data to make its recidivism predictions.
The results show that the crowd’s predictions were 67 percent accurate overall, while Compas’ predictions were 65 percent accurate. The researchers also pointed out that even without being told a defendant’s race, participants disproportionately predicted that black defendants would reoffend when they did not, much as Compas incorrectly categorizes black defendants as high risk. Both the participants and Compas likewise incorrectly classified white defendants who did reoffend as unlikely to do so.
Upon finding that participants’ predictions yielded a 37 percent false positive rate for black defendants versus 27 percent for white defendants, Dressel and Farid repeated the experiment with another 400 participants. Even when racial information was provided, the results were largely the same. This led the two to try validating their findings by building their own algorithm and training it on the Broward County data together with records of whether each defendant went on to reoffend.
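For readers unfamiliar with the metric, the false positive rate is the share of defendants who did not reoffend but were nevertheless predicted to. A minimal sketch of how such per-group rates can be computed, using hypothetical toy arrays rather than the study’s actual data:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of non-reoffenders (y_true == 0) who were predicted to reoffend (y_pred == 1)."""
    non_reoffenders = (y_true == 0)
    return (y_pred[non_reoffenders] == 1).mean()

# Hypothetical toy data: 1 = reoffense (actual or predicted), 0 = none.
y_true = np.array([0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
race   = np.array(["black", "white", "black", "white",
                   "black", "black", "white", "white"])

for group in ("black", "white"):
    mask = (race == group)
    print(group, false_positive_rate(y_true[mask], y_pred[mask]))
```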
Using their own algorithm, the team discovered that only two data points are needed to reach 65 percent accuracy: the defendant’s age and the number of prior convictions. “Basically, if you’re young and have a lot of convictions, you’re high risk, and if you’re old and have few priors, you’re low risk,” Farid said.
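As a rough illustration of how far two features can go, here is a minimal sketch of such a classifier. It assumes a hypothetical broward.csv with columns age, priors_count and two_year_recid, and uses a plain logistic regression; the researchers’ exact model and data pipeline may differ.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical file layout; the real Broward County records are published separately.
df = pd.read_csv("broward.csv")

X = df[["age", "priors_count"]]   # the two features: age and number of prior convictions
y = df["two_year_recid"]          # 1 if the defendant reoffended within two years

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))  # the paper reports roughly 65% with two features
```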
In the end, the study concluded that crime-predicting algorithms are no better at predicting crimes than random people on the internet. “There was essentially no difference between people responding to an online survey for a buck and this commercial software being used in the courts,” Farid explained. “If this software is only as accurate as untrained people responding to an online survey, I think the courts should consider that when trying to decide how much weight to put on them in making decisions.”
Nevertheless, Equivant argues that the research actually bolsters Compas’ ability to make good predictions. “Instead of being a criticism of the Compas assessment, [it] actually adds to a growing number of independent studies that have confirmed that Compas achieves good predictability and matches,” Equivant said in a statement, according to Wired.