
Book

Better algorithms are key to reducing bias in criminal sentencing, argues a legal scholar

Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk

Christopher Slobogin
Cambridge University Press
2021
182 pp.

In Just Algorithms, Christopher Slobogin develops two careful and sustained comparative arguments: one in favor of automating decisions about the nature and duration of criminal sentences in the United States, and one for the principles that should govern the machinery of that automation. The book’s primary purpose is to counter recent arguments against automated jurisprudence, especially those concerning potential biases inherent in computational risk assessment instruments (RAIs) (1–5).

In the book’s first chapter, “Rationale,” Slobogin argues that attorneys, mental health professionals, judges, and parole board members frequently impose and enforce harmful legal decisions and that, while computational simulations are imperfect, they are increasingly valid, fair, and just predictors of an individual’s future criminal tendencies. In support of this assertion, he summarizes his 2015 paper (6), in which he claimed that core elements of US culture, such as individualism, an adversarial justice system, the “laissez-faire, winner-or-loser ethos” of the country’s political economy, the punitive character of “fundamentalist Christians,” and historic racism are so entrenched that they will stymie all efforts designed to reduce incarceration and end racial bias in the US criminal justice system.

The book’s central chapters, “Fit,” “Validity,” and “Fairness,” specify and support the legal principles that must govern the design, implementation, auditing, and revision of algorithms used in automated jurisprudence.

In “Fit,” Slobogin describes the criteria an RAI must meet to conform to the law’s specification of risk. These include the ability to estimate, to a “more-likely-than-not standard,” the probability that an individual will commit a specific set of actions within a given time period, relative to his or her assigned group and in the absence of any legal constraints such as house arrest or incarceration. RAIs that could meet the specificity criterion would be those trained to predict only “conviction for the most serious violent offenses.”

In his chapter on fairness, Slobogin concedes that allegations that RAIs violate the Equal Protection Clause of the 14th Amendment have “more than a grain of truth.” He argues, however, that RAIs that are fit and valid will not discriminate on the basis of race or other constitutionally protected categories, even if their use “has a disparate racial or sex-based impact.”

To ensure that RAIs are applied fairly under the 14th Amendment, Slobogin argues that they must be used to estimate risk only by simulating future violent crimes, not future missed court dates, other types of misdemeanors, or nonviolent felonies. He also advocates the use of “race-conscious calibration…which involves creating different algorithms for different ethnicities or races.”
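
Slobogin offers no implementation details, but a minimal sketch of what such group-specific calibration might look like in code is given below, under the assumption that “different algorithms for different ethnicities or races” means separately fitted risk models per group. The data layout, the choice of logistic regression, and all function names are illustrative assumptions, not details taken from the book.

```python
# Minimal sketch (illustrative, not from the book): "race-conscious calibration"
# interpreted as fitting a separate risk model for each group so that predicted
# probabilities are calibrated within each group rather than pooled.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_group_models(X, y, group):
    """Fit one logistic-regression risk model per group label."""
    models = {}
    for g in np.unique(group):
        mask = group == g
        models[g] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    return models

def predict_risk(models, X, group):
    """Score each individual with the model fit on his or her own group."""
    risk = np.empty(len(X))
    for g, model in models.items():
        mask = group == g
        if mask.any():
            risk[mask] = model.predict_proba(X[mask])[:, 1]
    return risk
```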

Slobogin’s primary argument in favor of RAI-based sentencing appears in his penultimate chapter, “Structure.” Here, he contrasts the advantages of preventive justice, wherein an individual is incarcerated on the assumption that they would otherwise represent a future violent threat to society, with the many disadvantages of “just desert” theory, in which incarceration is intended only to punish an individual for past crimes. The crux of his argument is that “both desert and risk are crucial considerations in fashioning sentences in individual cases, and arguably are the principal considerations in that context.” RAIs, he maintains, will increase fairness in plea bargaining and charging decisions and “should trigger more oversight of a post-conviction process that has long been ignored.”

Ultimately, however, the book does little to allay concerns about the inherent potential of algorithms to perpetuate racial discrimination in criminal sentencing, primarily because Slobogin tends to minimize the potential effects of systemic racism on the probability that one will be implicated in a violent crime. Biased data will always result in processes that are resistant to attempts to achieve algorithmic fairness.

Nonetheless, Just Algorithms is one of the first in-depth, systematic legal arguments in favor of automating justice that considers legal and scientific aspects of criminal punishment via the simulation of recidivism. As such, the book is necessary reading for anyone seriously interested in criminal justice reform and the ethical, legal, and social implications of applying data science technologies in judicial contexts.

References and Notes
1. T. L. Fass, K. Heilbrun, D. DeMatteo, R. Fretz, Crim. Justice Behav. 35, 1095 (2008).
2. J. Kleinberg, H. Lakkaraju, J. Leskovec, J. Ludwig, S. Mullainathan, “Human Decisions and Machine Predictions” (NBER Working Paper W23180, Social Science Research Network, 2017).
3. J. Dressel, H. Farid, Sci. Adv. 4, eaao5580 (2018).
4. J. Skeem, C. Lowenkamp, Behav. Sci. Law 38, 259 (2020).
5. S. Xue, M. Yurochkin, Y. Sun, Proc. Mach. Learn. Res. 108, 4552 (2020).
6. C. Slobogin, Howard Law J. 58, 317 (2015).

About the author

The reviewer is a faculty member of the Psychology, Neuroscience, and Data Science programs at Scripps College, Claremont, CA 91711, USA.