AI is already shaping the justice system. How can we make sure these technologies are used to make that system more, not less, human?
As artificial intelligence gains momentum across sectors, its use in the justice system has been met with both enthusiasm and suspicion.
This isn’t the first time that courts and justice systems have placed their hopes on a new technology to increase fairness and efficiency, improve decision-making, and deliver justice with scientific precision. But technology isn’t neutral—and it can’t replace human decision-making.
AI is already in use across the justice system, from police precincts to courts, in ways that seem harmless on the surface. According to experts on the “Demystifying AI” webinar series co-presented by John Jay College and the AI and Justice Consortium, AI tools are being used to automate routine administrative tasks like report writing. But even these seemingly low-risk applications need to be carefully evaluated to ensure AI is used safely and responsibly.
The lessons of risk assessments
The use of risk assessments in the justice system offers a cautionary tale about treating technology as a silver bullet that can speed up the work and sidestep the thorny problems of human values and biases.
Risk assessment tools were devised to help judges decide whether someone charged with a crime can be safely released into the community. Using mathematical algorithms, they aim to measure the likelihood that someone will face another arrest or fail to appear in court based on a variety of factors, from prior criminal history to demographic information like age and gender.
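To make that concrete, here is a minimal sketch, with invented factors and weights, of how such a tool might turn a defendant's record into a score. Real tools are more complex and often proprietary, but the basic shape, a weighted sum of recorded factors mapped to a probability, is similar.

```python
import math

# Hypothetical, simplified risk score: a weighted combination of recorded
# factors passed through a logistic function. The factors and weights below
# are invented for illustration, not taken from any real tool.
WEIGHTS = {
    "prior_arrests": 0.45,
    "failed_to_appear_before": 0.80,
    "age_under_25": 0.30,
}
INTERCEPT = -2.0

def risk_score(defendant: dict) -> float:
    """Return an estimated probability of re-arrest or failure to appear."""
    z = INTERCEPT + sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function maps the sum to [0, 1]

# Two defendants identical except for the number of recorded prior arrests.
print(risk_score({"prior_arrests": 0, "age_under_25": 1}))  # ~0.15
print(risk_score({"prior_arrests": 4, "age_under_25": 1}))  # ~0.52
```

Notice that the score can only see what is in the record: behavior that never generated an arrest is invisible to the model.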
Many proponents saw risk assessments as promising tools for reducing bias, replacing faulty human judgment with data-driven algorithms that could make decisions in a truly impartial way. But without human care and oversight, these technologies can actually reproduce racial bias instead of addressing it. As Michelle Alexander warned nearly a decade ago, risk assessment tools can be “significantly influenced by pervasive bias in the criminal justice system.”
One ProPublica analysis found that a widely used risk assessment tool in Broward County, Florida, consistently labeled Black defendants as higher risk even when they didn’t go on to face another arrest. This, in turn, directly exposed them to higher bail amounts and more time in jail. And our own study of risk assessments, based on data from more than 175,000 defendants in New York City, highlighted how these tools can indirectly reinforce racial disparities even if they’re free from bias on the surface.
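Auditing for that kind of disparity is simple in principle. The sketch below, using hypothetical field names and toy records, computes the metric at the heart of the ProPublica analysis: among people who were not re-arrested, the share the tool flagged as high risk, broken out by group.

```python
def false_positive_rate(records: list[dict], group: str) -> float:
    """Among people in `group` who were NOT re-arrested, the share
    the tool flagged as high risk (the disparity ProPublica measured)."""
    negatives = [r for r in records
                 if r["group"] == group and not r["rearrested"]]
    if not negatives:
        return float("nan")
    return sum(r["high_risk"] for r in negatives) / len(negatives)

# Toy records, invented purely for illustration.
records = [
    {"group": "A", "high_risk": False, "rearrested": False},
    {"group": "A", "high_risk": True,  "rearrested": True},
    {"group": "B", "high_risk": True,  "rearrested": False},
    {"group": "B", "high_risk": True,  "rearrested": True},
    {"group": "B", "high_risk": False, "rearrested": False},
]
for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
# A tool can look "accurate" overall while its errors fall unevenly across groups.
```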
Embracing human values
Every technology reflects the human decisions, values, and inputs that go into it. If systemic biases are baked into the data that risk assessments and AI tools are based on, these technologies can mimic or even magnify those biases. For example, in a system that disproportionately criminalizes people of color, technologies that automatically flag people as higher risk based on an existing criminal record can compound these issues instead of correcting them.
In short, biased inputs lead to biased outputs. Just as we’ve seen AI tools regurgitate discriminatory language and ideas, there is a very real danger that AI models adopted by justice systems could entrench the systemic disparities embedded in the data they’re trained on.
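A toy simulation makes the mechanism visible. In the sketch below, every number is invented: two groups offend at the same underlying rate, but one is policed twice as heavily, so its offenses are twice as likely to end up in the arrest record that a risk tool would later consume.

```python
import random

random.seed(0)

# All rates invented for illustration. Both groups offend at the SAME rate;
# group B's offenses are twice as likely to result in a recorded arrest.
OFFENSE_RATE = 0.10
ARREST_GIVEN_OFFENSE = {"A": 0.25, "B": 0.50}

def recorded_priors(group: str, years: int = 5) -> int:
    """Count the arrests that make it into one person's record over `years`."""
    arrests = 0
    for _ in range(years):
        offended = random.random() < OFFENSE_RATE
        if offended and random.random() < ARREST_GIVEN_OFFENSE[group]:
            arrests += 1
    return arrests

for group in ("A", "B"):
    priors = [recorded_priors(group) for _ in range(10_000)]
    print(group, round(sum(priors) / len(priors), 3))
# Expected: group B shows roughly twice the average prior-arrest count
# (about 0.25 vs. 0.125), so any tool that weights prior arrests will rate
# group B as higher risk despite identical underlying behavior.
```

The disparity in the outputs comes entirely from the disparity in what was recorded, not from anything the two groups actually did differently.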
Yet if human decisions and values prevent technology from operating in a neutral way, they’re also the key to using these tools smartly and responsibly. Despite the concerns highlighted in our 2019 study, we also found that risk assessments could help reduce incarceration and racial disparities when used in a more targeted way. Rather than abandoning them altogether, we argued that practitioners should first decide what they want to use risk assessments for—and intentionally design policies and practices to meet those goals.
The same can be said for the use of artificial intelligence today. Instead of reinforcing racial disparities in the system, AI tools could help sift through data to uncover and address them. For that to happen, people must decide exactly how, when, and why to use AI in the justice system. Instead of using these tools to skirt the problem of human values, we can plan carefully and institute safeguards to ground them in the values we want our systems to reflect—fairness, justice, and transparency.
Like it or not, AI is already shaping the justice system, and it’s people who will decide whether these tools advance humanity and equity or set them back. We can repeat past mistakes, rushing to adopt new technologies without carefully considering how they might affect people’s lives or what exactly we want to accomplish. Or we can move forward with care and attention, using AI in ways that make the justice system more, not less, human.