Using algorithms to determine sentences could reduce the length of prison terms

US prisons and jails currently hold about 2 million people, many of them detained while awaiting trial and others serving extremely long sentences. New research by Professor Christopher Slobogin, Milton R. Underwood Chair in Law at Vanderbilt Law School, indicates that a risk prediction algorithm can help dramatically reduce these numbers.

According to Slobogin, the US has a serious incarceration problem, and none of the current remedies have worked. To address it, he suggests using algorithms to help determine who would actually pose a danger to the community if released.

The United States currently incarcerates 0.6 percent of its population, a rate six times higher than in European countries.

“Research shows that measures like decriminalization and the elimination of mandatory minimum sentences have barely made a dent in the incarceration rate,” Slobogin said. “That said, the public won’t buy any reform unless you can assure them of their safety.”

An ideal algorithm would indicate the probability of a particular individual committing a serious crime over a period of time, in the absence of intervention.
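
To make that output concrete, here is a minimal, purely illustrative sketch, not Slobogin’s model or any deployed tool, of a risk score of this kind: a classifier trained on synthetic case features that, for a new case, returns a probability of a serious offense within a fixed follow-up window rather than a yes-or-no verdict. The features, data, and choice of logistic regression are assumptions made only for illustration.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical case features: age at arrest, prior convictions, prior failures to appear
ages = rng.uniform(18, 60, size=500)
priors = rng.poisson(1.5, size=500)
ftas = rng.poisson(0.5, size=500)
X_train = np.column_stack([ages, priors, ftas])

# Synthetic label: 1 if a serious offense was observed within the follow-up window
y_train = rng.integers(0, 2, size=500)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For a new case, the tool reports a probability over the window, not a verdict
new_case = np.array([[25.0, 2.0, 1.0]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability of a serious offense within the window: {risk:.2f}")
```

The point the sketch highlights is that the tool’s raw output is a probability over a defined time window; deciding what probability justifies detention or a longer sentence remains a policy judgment.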

Algorithmic Risk Assessment: Balancing Advantages and Criticisms in Criminal Justice

In his recently published book, Slobogin explained that by making criminal punishment decisions more transparent, algorithms could force a long-overdue re-examination of the purposes and objectives of the criminal justice system. He argues that risk assessment algorithms can:

  • help reduce pretrial detention and the length of prison sentences without increasing the risk to the public, a particularly important goal as COVID-19 spreads through correctional facilities;
  • mitigate excessively punitive bail amounts and sentences, which disproportionately affect low-income people;
  • allocate correctional resources more efficiently and consistently;
  • provide a springboard for evidence-based rehabilitation programs that aim to reduce recidivism, by diverting from prison the candidates most likely to succeed.

Even with its advantages, the use of algorithms to decide the fate of a human life is controversial. Critics say the algorithms are not effective in identifying who will offend and who will respond to rehabilitation efforts. Critics also argue that algorithms can be racially prejudiced, dehumanizing, and antithetical to criminal justice principles.

Slobogin said that while the criticisms have merit, the current alternatives to algorithms could be worse.

At least algorithms consistently structure the analysis

Unstructured decision-making by judges, probation officers, and mental health professionals is demonstrably biased and reflexive, he added, and is often based on stereotypes and generalizations that ignore the goals of the justice system. Algorithms can do better, he said, if only to a limited extent, provided they are designed to offset the influence of racialized policing and prosecutorial practices.
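
As a purely hypothetical illustration of what designing against such bias can involve in practice, the sketch below audits a risk tool’s errors by group, comparing false-positive rates, that is, people flagged as high risk who did not go on to reoffend. The scores, outcomes, group labels, and 0.5 cutoff are all synthetic assumptions; real validation would use actual outcome data and a broader set of fairness and calibration checks.

```python
# Illustrative audit sketch only: all values are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)

scores = rng.uniform(0.0, 1.0, size=1000)      # predicted risk probabilities from some tool
reoffended = rng.integers(0, 2, size=1000)     # observed outcomes (1 = serious offense)
group = rng.choice(["A", "B"], size=1000)      # demographic group (hypothetical)

threshold = 0.5                                # example decision cutoff
flagged = scores >= threshold                  # cases treated as "high risk"

for g in ("A", "B"):
    did_not_reoffend = (group == g) & (reoffended == 0)
    fpr = flagged[did_not_reoffend].mean()     # share wrongly flagged in group g
    print(f"Group {g}: false-positive rate at threshold {threshold} = {fpr:.2f}")
```

A persistent gap between the two rates would be one signal that the tool, or the data it was trained on, needs adjustment before being used in practice.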

If the algorithms are proactively validated and used during the pretrial process, most people arrested “can keep their jobs, keep their families intact and help their lawyers with their defense, including tracking down witnesses,” Slobogin said. “Using algorithms to inform the sentence, we can release people sooner, which could help them become productive rather than languishing in prison, where they lose all hope and learn how to be better criminals.”


This post was first published on Phys.org.

More information: Christopher Slobogin, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk, www.cambridge.org/us/academic/ … dence-risk?format=PB
