Analysing the Use of Artificial Intelligence in Criminal Sentencing through the Loomis Decision

By Rishabh Warrier

Introduction

Algorithms now pervade modern life, from Netflix suggestions to music recommendations. One area where this pervasion is increasing sharply is criminal sentencing. The United States imprisons more people than any other country in the world. This overburdened prison population has created pressure to reduce incarceration without allowing crime to rise, and there have been calls for increased automation and the use of Artificial Intelligence to streamline the criminal justice system and make it as efficient as possible.

State v. Loomis

A controversial application of AI has been the use of risk assessment tools to check criminal recidivism. These tools collect data on various aspects of an offender’s behaviour and link it to the likelihood of reoffending. The use of AI in assessing recidivism proved controversial in State v. Loomis. The accused, Loomis, was charged with “attempting to flee a traffic officer and operating a motor vehicle without the owner’s consent”. While sentencing him, the trial court took the help of COMPAS, an AI risk assessment tool that predicts recidivism from factors such as the accused’s criminal history and level of education. COMPAS produces a score estimating the likelihood of recidivism. Relying in part on this assessment, the court sentenced Loomis to six years in prison plus probation. He appealed, arguing that the use of the tool violated his due process rights: owing to trade secret protections, the methodology used to determine his risk could not be revealed, depriving him of the ability to know the basis of his sentence. The trial court rejected his challenge, and a further appeal to the Wisconsin Supreme Court met a similar fate. The use of AI in sentencing gives rise to several problems that deserve consideration.
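To make concrete what a tool of this kind does in principle, the sketch below combines a few offender attributes into a numeric score and maps it to a risk band. COMPAS’s real features, weights and methodology are proprietary and not public, so every name and number here is invented purely for illustration.

```python
# Hypothetical sketch of a generic risk-scoring tool. COMPAS's actual
# features, weights and methodology are proprietary; everything below
# is invented for illustration only.

def risk_score(age: int, prior_offences: int, employed: bool) -> float:
    """Combine a few offender attributes into a score between 0 and 10."""
    score = 0.0
    score += min(prior_offences, 3) * 1.5   # criminal history weighted heavily
    score += 2.0 if age < 25 else 0.5       # youth treated as a risk factor
    score += 1.5 if not employed else 0.0   # socio-economic proxy
    return min(score, 10.0)

def risk_band(score: float) -> str:
    """Map the numeric score to the low/medium/high bands a judge would see."""
    if score < 3:
        return "low"
    if score < 6:
        return "medium"
    return "high"

# Example: a young, unemployed offender with three prior offences
print(risk_band(risk_score(age=22, prior_offences=3, employed=False)))  # -> "high"
```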

Need for AI sentencing?

There have also been calls to keep AI out of criminal sentencing, because it is not as fair as it seems. Machines arrive at their results statistically, analysing historical criminal data and extracting patterns of criminal behaviour. The problem is that these patterns may simply replicate the biases of law enforcement against minority communities or lower-income groups. This raises the question: will AI actually make sentencing fairer?

Increasingly, there have been calls to automate the criminal justice system with the help of AI. As recently as 2017, the American Law Institute approved a draft revision of the Model Penal Code that endorses the use of ‘actuarial risk assessment tools’ for assessing the risks posed by offenders. Proponents argue that human judges bring with them human biases: extraneous matters, such as when a judge last took a break from proceedings or what proportion of their decisions they want to come out a certain way, can affect their rulings.

The aftermath of the Loomis decision prompted researchers to look more closely at tools like COMPAS. A report published by ProPublica analysed the COMPAS scores of over 7,000 defendants and found that its predictions were possibly biased and unreliable. The report suggested that the false positive rate for black defendants was far higher than that for white defendants; a false positive occurs when a person is predicted to commit a crime in the future but in reality does not. Another question is whether tools like COMPAS are actually fairer and more accurate than judges. A study published in Science Advances shows that COMPAS, which draws on a 137-question questionnaire, is at best marginally more accurate at predicting recidivism than untrained humans who were given only basic details about an offender, such as age, sex and prior criminal history. The same study showed that this similarity in accuracy extends to various other algorithmic predictive tools as well.
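To make the notion of a false positive concrete, the sketch below shows how an analysis of this kind compares false positive rates across two groups of defendants. The records are entirely made up and only illustrate the calculation, not ProPublica’s data; a disparity of this shape, computed over thousands of real cases, is what the report described.

```python
# Minimal sketch of a false-positive-rate comparison: among people who did
# NOT reoffend, what share did the tool nevertheless flag as high risk?
# The records below are invented for illustration only.

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("group_a", True,  False),
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were nevertheless flagged as high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: FPR = {false_positive_rate(rows):.2f}")
# group_a: FPR = 0.67, group_b: FPR = 0.33 -- an unequal error-rate pattern
```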

Yet, to present the other side of the debate, Sharad Goel conducted his own study using a different testing method from the earlier reports. He noted that participants in those studies received constant feedback on how accurate their predictions were, whereas real judges rarely learn what ultimately happens to the offenders they sentence and are thus deprived of such feedback. Judges also typically face large amounts of information, making it difficult to separate the relevant from the irrelevant. Accordingly, participants in Goel’s study received no feedback, and their predictive accuracy fell sharply below that of COMPAS, which remained consistent. Even so, it is worth asking: if the AI is 80% accurate, are we willing to tolerate the other 20%?

Due process concerns and privatization

Apart from the question of whether AI is actually fair, it is also important to consider the constitutional problems it brings with it. A vital tool available to judges during sentencing is the Presentence Investigation Report (PSR), which helps them choose between different mandatory sentence limits based on the offender’s culpability. The PSR typically consists of ex parte interviews and other documents, often based on hearsay, and courts have held that the information it contains is inherently questionable and cannot decide contestable issues. The problem with AI-generated risk scores is that they present the court with a “presumptive factual determination”. Owing to the AI “black box”, offenders, often lacking the resources or time, may be unable to question the result reached by these algorithmic tools. Because such models, particularly neural networks, are opaque, even the developers themselves may not understand how a tool reached a particular conclusion. This black box therefore prevents offenders from challenging a risk assessment tool’s “scientific validity”, violating their Fourteenth Amendment due process rights.

Another problem with the use of AI in sentencing is increasing privatization. The recidivism predicted by these tools depends on how their private developers define recidivism. As has been argued, even the data set chosen for the tool (its jurisdiction, the factors treated as contributing to recidivism) will affect the results it produces. The sentencing of offenders, which turns on the likelihood of recidivism, thus becomes dependent on private actors. Yet private companies may invoke “trade secrets” to prevent judges, parole officers, offenders and others from knowing what goes on inside the AI system. This has been called a “legal black box”: even the law cannot examine the proprietary source code owing to trade secret protections. Private actors thereby play an important role in sentencing without being bound by traditional constitutional accountability. A possible solution is to modify the law so that such algorithmic processes must be disclosed when they perform a public function such as producing recidivism reports.

Coming back to the Loomis decision, the Wisconsin Supreme Court, while upholding the use of COMPAS, nevertheless held that risk assessment scores must be accompanied by a warning. The warning instructs that the tool is merely to guide the judge and is in no way a definitive predictor of risk. But this warning seems superfluous at best: the court’s five-point warning merely notes that risk assessment tools have been criticized, without conveying the force of those criticisms. Moreover, there is the problem of automation bias. Computer-generated data exerts an anchoring effect, and humans tend to defer to it, seldom taking a position that departs from it.

Conclusion

Artificial intelligence is increasingly being seen as a way to make all aspects of governance more equitable. The decision in Loomis shows this growing dependence on AI even in human-driven areas like criminal sentencing. As studies have shown, AI may not be as fair or as accurate as it seems. The opacity of these risk assessment tools also means that individuals’ due process rights may be violated. Moreover, the privatization that accompanies the use of AI raises the question of how private actors can be held accountable given the technological black box. Lastly, mere warning labels may not stop judges from surrendering their autonomy to risk assessment tools. While AI may one day be good enough to replace humans, that time has not yet come.

[The author is a first-year B.A. LL.B. student from NALSAR University of Law, Hyderabad with a keen interest in criminal law.]
