Artificial Intelligence in Criminal Justice: Sustained or Overruled?

In the midst of World War II, Alan Turing and a group of codebreakers at Bletchley Park developed an electromechanical machine, the Bombe, that broke the Nazis' Enigma cipher and helped the Allies win the war. Some 75 years later, computers are ubiquitous in modern society, and their role will only grow as our societal systems rely on technology more than ever. In the past few years, countries such as the United States and Canada have been evaluating the possibility of applying artificial intelligence to the criminal justice system, a move that could overhaul the entire system as we know it.

Artificial Intelligence (AI) is the ability of machines to perceive and respond to their environment independently of human input. Essentially, AI enables computers to process data and make decisions in human-like ways. Strong AI, encompassing Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), is the futuristic form we see in movies, one that matches or surpasses the intelligence of the human brain. Its counterpart, Weak AI, is trained to perform specific tasks rather than to emulate the full range of human abilities; this is the type of AI being researched as a decision-making aid for those working in the criminal justice system.

In the current system, AI has numerous applications that may help reduce crime: gunshot detection, DNA analysis, facial recognition, sentencing support, and developing fields like crime forecasting (predicting whether individuals are likely to reoffend), all aimed at reducing human error within the justice system. However, the technology is still in development, and certain applications are more controversial than others. In DNA analysis, AI can be used to analyze degraded or partial DNA samples and reconstruct a usable profile that can be matched against evidence already in the system. Meanwhile, in pretrial risk assessments, which predict whether an individual is likely to commit a future crime or skip bail, AI is promoted as a way to evaluate a person's risk based on factors such as socioeconomic background, neighbourhood crime rate, and employment history. The defendant is given a risk score that the judge weighs in deciding whether to grant bail. Ultimately, while AI can be applied across all of these areas, risk assessment is far more controversial because the data feeding the algorithms may encode racial bias, a problem known as data discrimination.
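To make the data discrimination concern concrete, consider a minimal sketch of how such a risk score might be computed. This is a hypothetical illustration, not any real tool's model: the factor names, weights, and thresholds below are assumptions, chosen to show how proxy features like housing status or neighbourhood crime rate can push a score upward regardless of an individual's own record.

```python
# Hypothetical pretrial risk score. All features, weights, and thresholds
# are illustrative assumptions, not any real assessment tool's model.
# Note that "lives_in_public_housing" and "neighbourhood_crime_rate" act
# as proxies for race and class: the data discrimination problem.

def risk_score(defendant: dict) -> int:
    """Return a 1-10 risk score from weighted, hand-picked factors."""
    score = 0.0
    score += 2.0 * defendant["prior_arrests"]          # criminal history
    score += 3.0 if defendant["unemployed"] else 0.0   # employment status
    score += 2.5 if defendant["lives_in_public_housing"] else 0.0  # proxy feature
    score += 4.0 * defendant["neighbourhood_crime_rate"]           # proxy feature
    # Clamp to the 1-10 band a judge would see at a bail hearing.
    return max(1, min(10, round(score)))

defendant = {
    "prior_arrests": 1,
    "unemployed": False,
    "lives_in_public_housing": True,
    "neighbourhood_crime_rate": 0.8,  # normalized 0-1
}
print(risk_score(defendant))  # prints 8: flagged high-risk despite one arrest
```

Even in this toy version, two of the four inputs say nothing about the defendant's own conduct, which is precisely the objection raised in the case below.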

In 2017, a lawyer named Rachel Cicurel, working in Washington, D.C., was representing a juvenile defendant who was denied bail based on an algorithm that deemed her client high-risk. After challenging the determination, Cicurel found that the algorithm's risk assessment rested on racially biased factors, such as whether her client lived in government housing and had previously expressed negative attitudes towards the police. Furthermore, the assessment technology had never been properly validated by a scientific group, yet it was being used to recommend sentences and deny teenage offenders bail. In the criminal justice system, AI is thus transferring decision-making authority away from prosecutors, judges, and other humans toward algorithms. If the underlying algorithm is faulty or biased, the consequences for the perceived fairness of our criminal justice system could be devastating.

AI is neither inherently racially biased nor unfair; its function is to make predictions that humans then interpret. Government policy dictates how we use those predictions and whether the underlying algorithms are safe for their intended application. It is the government's responsibility to regulate when and how AI is used in the criminal justice system, yet the D.C. case demonstrates that, in the United States at least, it has failed to do so. In Toronto, the Toronto Police Services Board (TPSB) has established five risk-based categories for AI applications, ranging from low-risk speech-to-text transcription software to extreme-risk facial recognition software that could enable mass surveillance. To maximize AI's effectiveness in the system, the government should place limits not only on where AI can be used, but also on the extent of its use at each risk level; a sketch of such a tiered scheme follows below. The TPSB has already established that higher-risk applications must undergo extensive evaluation and consultation. However, there is currently no federal law detailing the limits and legality of AI applications in our criminal justice system.
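The tiered approach the TPSB describes can be sketched in code. The sketch below is a hypothetical illustration: the five tiers follow the Board's risk-based categories as summarized above, but the example applications and the review requirement attached to each tier are assumptions for illustration, not the Board's actual policy text.

```python
from enum import IntEnum

# Hypothetical sketch of a tiered AI governance scheme. The five tiers
# mirror the TPSB's risk-based categories; the application-to-tier
# mapping and the oversight rules are illustrative assumptions only.

class RiskTier(IntEnum):
    MINIMAL = 1
    LOW = 2
    MODERATE = 3
    HIGH = 4
    EXTREME = 5

# Each proposed application is assigned a tier before deployment.
APPLICATIONS = {
    "speech_to_text_transcription": RiskTier.LOW,
    "gunshot_detection": RiskTier.MODERATE,
    "pretrial_risk_assessment": RiskTier.HIGH,
    "mass_facial_recognition": RiskTier.EXTREME,
}

def required_oversight(app: str) -> str:
    """Return the review an application must clear before deployment."""
    tier = APPLICATIONS[app]
    if tier >= RiskTier.EXTREME:
        return "prohibited absent explicit board approval"
    if tier >= RiskTier.HIGH:
        return "independent validation plus public consultation"
    if tier >= RiskTier.MODERATE:
        return "internal audit and bias testing"
    return "routine procurement review"

print(required_oversight("pretrial_risk_assessment"))
# -> independent validation plus public consultation
```

The point of such a scheme is that scrutiny scales with risk: a transcription tool clears a routine review, while a pretrial risk assessment of the kind at issue in the D.C. case would require independent validation before any court could rely on it.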

In the criminal justice system, AI should be regulated through policy at the federal rather than the municipal level to ensure that equal legal standards apply throughout the country. It is essential that algorithms be studied extensively before implementation to guard against data discrimination. Used properly, AI can provide investigative assistance and better maintain public safety. Over the next 50 years, the use of technology and AI will only grow, and it will have profound impacts on the future of our public systems.

Sean El-Khouri Abboud