How Machine Learning Algorithms Are Used in the Justice System

Artificial intelligence has established itself as one of the most consequential emerging technologies. As a result, industries across the board deploy machine learning algorithms alongside human labor to complete job-related tasks, and law enforcement is no exception.

Primarily, the justice system uses risk assessment tools to help judges determine sentence length and probation options. Law enforcement also utilizes these tools to predict and prevent criminal activity using hi-tech image detection and facial recognition systems. ML models can even be trained to predict possible recidivism in convicts.

These advancements can potentially change how the entire justice system functions. But decision-making in law enforcement carries immense responsibility, as it determines the fates of individuals. Are current AI tools competent enough to make unbiased and fair predictions? Do they actually help or hinder the process? Read on to find out.

AI in Law Enforcement

Examples of AI in law enforcement can be grouped into four main categories. These include:

  • Facial Recognition. These tools are used to ensure public safety through video and image analysis. High-resolution cameras can provide investigators with information about a person’s behavior. For example, Janus uses a robust model that connects to a video surveillance system, transmitting footage from AI-enabled cameras to a centralized server that flags abnormal or dangerous behavior.
  • DNA Analysis. Forensic DNA testing has played a significant role in the criminal justice system, helping scientists perform more profound and precise analyses of human DNA. A group of researchers from Syracuse University developed an ML-based software that uses data mining techniques to improve the DNA dissection process, improving the accuracy of the results of DNA matching.
  • Gunshot Detection. Cadre Research Labs has developed an ML model that analyzes gunshot audio files recorded on smart devices to assist law enforcement agents in investigations. The software can perform many tasks, including differentiating muzzle blasts from shock waves, determining shot-to-shot timings, estimating the number of firearms present, assigning shots to weapons, and estimating probabilities of class and caliber (a simplified timing sketch follows this list).
  • Crime Forecasting. Several law enforcement agencies use machine learning algorithms to predict the types of individuals most likely to commit crimes. For instance, AI-powered tools have become integral to the Chicago Police Department’s Violence Reduction Strategy. They gather information and construct social networks that help them identify potentially high-risk individuals.
    Several courts utilize these tools to assess the risks associated with defendants, such as the likelihood of committing another crime or of not showing up to court. The tools detect patterns in human behavior that allow them to inform decisions about bail, sentencing, and parole. Even though the judge makes the final decision, the model’s output can guide them toward a more objective and unbiased verdict.
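
The signal-processing side of gunshot detection can be illustrated with a short sketch. The code below estimates shot-to-shot timings by locating loud, well-separated amplitude peaks in a recording. It is a minimal example built on SciPy’s peak detection, not Cadre Research Labs’ proprietary method, and the file name and thresholds are assumptions.

```python
# A minimal sketch of shot-to-shot timing estimation from a gunshot
# recording. This is NOT Cadre Research Labs' method: it simply finds
# loud, well-separated amplitude peaks and reports the gaps between them.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

# Hypothetical recording from a smart device.
sample_rate, audio = wavfile.read("gunshots.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)            # mix stereo down to mono
audio = audio.astype(np.float64)
envelope = np.abs(audio) / np.abs(audio).max()  # normalized amplitude

# Treat peaks above 60% of max amplitude, at least 50 ms apart, as
# candidate muzzle blasts (both thresholds are assumptions).
min_gap = int(0.05 * sample_rate)
peaks, _ = find_peaks(envelope, height=0.6, distance=min_gap)

shot_times = peaks / sample_rate
print("Estimated shot times (s):", np.round(shot_times, 3))
print("Shot-to-shot gaps (s):", np.round(np.diff(shot_times), 3))
```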

Supporters of these AI models claim that criminal justice decision-making models can provide fairer outcomes and eradicate flaws in the traditional judicial system. For instance, these models consider the defendant’s age, sex, nationality, criminal history, type of crime, and probation status. The model is then trained to predict recidivism: whether the defendant is likely to relapse and commit another crime.
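
To make this concrete, here is a minimal sketch of how such a recidivism classifier might be trained. The CSV file, column names, and label are hypothetical; deployed tools like COMPAS are proprietary and considerably more complex.

```python
# A minimal sketch of a recidivism classifier trained on hypothetical
# tabular data. The CSV file, column names, and label are assumptions;
# this is not how COMPAS or any deployed tool actually works.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("convictions.csv")       # hypothetical historical records
features = ["age", "sex", "prior_convictions", "crime_type", "on_probation"]
X = pd.get_dummies(df[features])          # one-hot encode categorical columns
y = df["reoffended_within_2_years"]       # hypothetical binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model outputs probabilities, not verdicts: a judge still decides.
risk_scores = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, risk_scores), 3))
```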

What Are the Benefits of AI in Law Enforcement?

The most prominent benefits of deploying AI in the justice system include the following:

  • Expediting Human Tasks and Increasing Accuracy. Machine learning algorithms can accelerate processes performed by humans, such as shortlisting suspects. For instance, high-quality cameras can capture everything from drug trafficking to gunshots and report these crimes to the relevant authorities in real time. AI-powered tools are also deployed to detect online crimes, including financial fraud, phishing attacks, and dark web activity.
  • Predicting Crime Patterns. Using vast amounts of data on past cases, ML models can predict an individual’s behavior and tendency toward committing future crimes. A basic example is predicting the likelihood that a previously convicted felon will commit another crime. This information can then be used to set or deny bail and probation.
  • Protecting Critical Infrastructure. AI and machine learning algorithms are applied across critical infrastructure to reduce the risk of terrorist and hacker attacks on transportation, utility, or Internet systems. Edge computing, 5G, and the Internet of Things maximize efficiency and help prevent attacks that could damage the environment or threaten human lives.
  • Uncovering Criminal Networks. Finally, AI can help investigators analyze millions of bits of data to find patterns in images, language, and identities. This allows them to uncover networks of organized crime. AI tools can provide visual representations of organizational hierarchies and identify the nature of criminal networks in drug trafficking, human trafficking, counterfeiting, and weapons dealing.

What Are Risk Assessment Tools?

Tools that assist decision-makers in the justice system have been used since the 20th century. Also called risk assessment instruments (RAI), these are algorithms that predict which convicts pose the most threat to society and whether they’ll re-offend in the future. This data is later used in court hearings for sentencing, supervision, or release.

Below are some examples of widely used risk assessment tools in the justice system:

COMPAS

Correctional Offender Management Profiling for Alternative Solutions (COMPAS) is a risk assessment tool that uses artificial intelligence to predict violent recidivism, general recidivism, and pretrial release risk. It has also been used for general sentencing in courts, such as determining the length of the sentence.

Several jurisdictions use COMPAS, including New York, California, and Broward County, Florida. According to the software’s official guide, COMPAS takes the following factors into account to determine an offender’s likelihood of committing another crime (an illustrative scoring sketch follows the list):

  • Pretrial Release – based on current and pending charges, prior arrest and pretrial history, employment status, community ties, and substance abuse.
  • General Recidivism – based on criminal history and associates, drug involvement, and juvenile delinquency.
  • Violent Recidivism – based on the history of violence and non-compliance, vocational/educational problems, the person’s age at intake, and age at first arrest.
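
COMPAS’s real factors and weights are a trade secret, but conceptually, each scale combines weighted factors into a raw score that is then ranked against a reference population to produce a decile score from 1 to 10. The sketch below illustrates that normalization step with invented factors, weights, and a simulated population.

```python
# Illustration only: COMPAS's real factors and weights are a trade secret.
# This shows the general idea of turning a weighted factor sum into a
# 1-10 decile score relative to a reference population.
import numpy as np

# Invented 0-1 factor values for one offender and invented weights;
# neither reflects the actual instrument.
factors = {"prior_arrests": 0.7, "substance_abuse": 0.4,
           "employment_instability": 0.6, "community_ties": 0.2}
weights = {"prior_arrests": 2.0, "substance_abuse": 1.0,
           "employment_instability": 1.5, "community_ties": -0.5}

raw_score = sum(weights[k] * v for k, v in factors.items())

# Rank the raw score against a (simulated) reference population.
rng = np.random.default_rng(0)
reference = rng.normal(loc=1.5, scale=0.8, size=10_000)
percentile = (reference < raw_score).mean()
decile = int(np.clip(np.ceil(percentile * 10), 1, 10))
print(f"raw={raw_score:.2f}  percentile={percentile:.0%}  decile={decile}")
```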

Public Safety Assessment (PSA)

Public Safety Assessment is an RAI used to predict an individual’s future risk of misconduct. It helps courts decide whether to incarcerate the offender before trial. 

The algorithm uses several factors, including the individual’s age and history of misconduct, to produce three risk scores: the risk of committing a new crime, the risk of committing a new violent crime, and the risk of not showing up for a court hearing. The system doesn’t interview the offender personally; instead, it answers nine questions by looking at previous court records.

These risks are later translated into release-condition recommendations, with higher risks corresponding to stricter release conditions. However, the judge is still empowered to make the final decision regardless of the algorithm’s outcome. 
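
Unlike a trained black-box model, the PSA is a points-based instrument: each factor, pulled from court records, contributes a fixed number of points, and the totals are converted to scales. The sketch below mimics that structure. A few factor names echo the published failure-to-appear factors, but the point values and scale mapping are invented for illustration.

```python
# A sketch of a points-based pretrial assessment in the spirit of the
# PSA. The factor names echo the published failure-to-appear factors,
# but the point values and scale cutoffs here are invented.
def fta_points(record: dict) -> int:
    """Failure-to-appear points from court records (invented weights)."""
    pts = 0
    pts += 2 if record["pending_charge_at_offense"] else 0
    pts += 2 if record["prior_fta_past_2_years"] else 0
    pts += 1 if record["prior_fta_older_2_years"] else 0
    pts += 1 if record["prior_conviction"] else 0
    return pts

def to_scale(points: int, max_points: int) -> int:
    """Map raw points onto a 1-6 scale (cutoffs are assumptions)."""
    return min(6, 1 + round(5 * points / max_points))

record = {  # hypothetical defendant, built from records, no interview
    "pending_charge_at_offense": True,
    "prior_fta_past_2_years": False,
    "prior_fta_older_2_years": True,
    "prior_conviction": True,
}
print("FTA scale:", to_scale(fta_points(record), max_points=6))  # -> 4
```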

Level of Service/Case Management Inventory (LS/CMI)

The LS/CMI data processing system helps organize and store all information about an offender’s case and supervision. The assessment involves 43 questions about the individual’s criminal history and life circumstances. Justice system employees use this data to set levels of probation, parole supervision, and prison sentences.

Challenges of Criminal Justice Risk Assessment Tools

Current risk assessment tools have caused some controversy, with several experts claiming that they exacerbate the already prevalent bias in decision-making. The problem is that ML models are trained on historical data, such as criminal records and information about previous court cases. If this data contains biased decisions made by humans in the past, the algorithm will inherit that bias, be trained on faulty data, and produce biased results.

One of the most prominent examples of biased decision-making is the disproportionate conviction of African-Americans for marijuana possession. Research estimates that all racial groups consume marijuana at roughly equal rates, yet African-Americans have been charged at much higher rates than other demographics. Therefore, a model that relies on historical records will unfairly assign African-Americans to a high-risk group.

The limitations of machine learning algorithms include two types of biases:

  • Bias and data – biased data produces biased outcomes. While a machine learning model may correctly detect patterns correlated with crime, these patterns may not represent the actual causes of crime. Most often, they reflect existing injustice in law enforcement. As long as machine learning algorithms rely only on past records, demographics that have historically been discriminated against may continue to suffer injustices.
  • Bias and humans – AI can reinforce human biases. The same issue can also be viewed from the opposite direction: a risk assessment tool may produce a result that validates a human’s preexisting bias. And since judges can base their decisions on the algorithm, their rulings can be shaped by implicit biases, as the audit sketch below illustrates.
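
One concrete way to surface such bias is to audit a model’s error rates across demographic groups, for example by comparing how often non-reoffenders in each group were wrongly flagged as high-risk. A minimal sketch, assuming a hypothetical dataset of predictions and observed outcomes:

```python
# A minimal bias audit: compare false positive rates across demographic
# groups. Column names are assumptions: `pred` is 1 when the model
# flagged someone as high-risk, `reoffended` is the observed outcome.
import pandas as pd

df = pd.read_csv("risk_predictions.csv")   # hypothetical audit data

def false_positive_rate(group: pd.DataFrame) -> float:
    """Share of non-reoffenders the model wrongly flagged as high-risk."""
    negatives = group[group["reoffended"] == 0]
    return (negatives["pred"] == 1).mean()

for race, group in df.groupby("race"):
    print(f"{race}: FPR = {false_positive_rate(group):.2f}")
# Large gaps between groups mean the model penalizes some demographics
# that it should treat identically.
```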

The existence of bias leaves the true impact of AI-assisted tools somewhat ambiguous. Critics also present other arguments, specifically against COMPAS, calling it incompetent for the following reasons:

  1. Lack of individualization. In the 2016 Loomis v. Wisconsin case, the petitioner claimed that his sentence was based on the historical group tendencies of misconduct assessed by COMPAS. He further asserted that the court did not look at his personal details but rather grouped him with criminals exhibiting similar behavior and issued a similar sentence. However, the court rejected this argument, stating that the final decision did not rely entirely on the risk assessment tool.
  2. Lack of transparency. The developers of COMPAS refuse to disclose a detailed explanation of how the software works, claiming it is a trade secret. For example, in the same Wisconsin case, gender was included in the assessment, but without any detail on how it was weighted in the equation, leading the petitioner to believe it was discriminatory.

How to Improve Machine Learning Algorithms in the Justice System

The future of machine learning algorithms in the justice system remains ambiguous. But if everyone involved, from executive management to software developers, works to eliminate the current challenges, ML models can truly revolutionize the decision-making process in courts. Here are four recommendations that could help strengthen the role of AI in criminal justice:

  1. Human oversight of model implementation. First, it’s important to keep a human eye on every stage of the AI pipeline, from data preparation to deployment. No matter how advanced these tools become, humans should always make the final decision. When delivering a verdict, the judge should give a written explanation both when complying with and when contradicting the results of an RAI. This encourages judges to consciously justify their decisions and reduces the impact of arbitrary, software-driven outcomes.
  2. Transparent algorithms. In such a high-stakes context, a deep understanding of these tools is crucial for producing legitimate results. Policymakers using these tools should know exactly how they work. By disclosing detailed information about how risk determination works, developers can help judges and law enforcement apply the tools more accurately and effectively.
  3. No discrimination against certain demographics. Interested parties in the justice system should carefully examine and scrub the data to remove bias before using it to build ML models. This reduces the chance of some groups being treated more unjustly than others. Furthermore, model predictions should be tested to verify that groups with similar risk scores re-offend at similar rates (see the calibration sketch after this list). Only after multiple such tests can agencies confirm a model to be unbiased.
  4. Continuous evaluation and monitoring. A machine learning model can only function effectively with constant monitoring and assessment of its results. By evaluating the results of new machine learning algorithms, policymakers can identify whether they actually generate the desired impact.
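
As a companion to recommendation 3, the sketch below runs the calibration check described there: within each risk-score band, it compares observed re-offense rates across groups. The dataset and column names are assumptions.

```python
# A calibration check for recommendation 3: within each risk-score band,
# do different groups re-offend at similar rates? The dataset and column
# names are assumptions.
import pandas as pd

df = pd.read_csv("evaluation.csv")         # score in [0, 1], plus outcomes
df["score_band"] = pd.cut(df["score"], bins=[0, 0.25, 0.5, 0.75, 1.0])

# Observed re-offense rate per (score band, group) cell.
calibration = (
    df.groupby(["score_band", "race"], observed=True)["reoffended"]
      .mean()
      .unstack("race")
)
print(calibration)
# An unbiased, well-calibrated model shows similar rates across each row;
# systematic gaps within a band point to group-dependent scoring.
```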

Summing Up

Artificial intelligence in criminal justice is used to prevent crimes and ensure public safety. For instance, facial recognition systems and other advanced technologies are now part of many police departments’ investigative toolkits. Additionally, courts use risk assessment tools that help judges determine sentencing terms, bail, and probation.

However, the quality of these tools is a big topic of debate. Critics of these algorithms claim that they are trained to make predictions on biased data, resulting in discriminatory outcomes. But if designed properly, these tools can help people make informed decisions and restore equity in the justice system.