The Do’s and Don’ts of AI in the Criminal Justice System

Jan 13, 2026

As artificial intelligence plays a growing role across our institutions, how can we make sure courts and justice systems use it responsibly?


Artificial intelligence has quickly become part of the fabric of everyday life. From our search engines to social media and advertisements, AI is virtually unavoidable—and will likely continue to evolve.

The artificial intelligence industry is estimated to be worth $750 billion. On top of millions of individual users, it’s also been widely adopted by businesses and institutions—including the criminal justice system.

Enthusiasm for these new technologies is running high, but so are concerns about what they mean for people’s lives. As AI continues to spread, how can we make sure courts and legal systems use it responsibly?

That question was the focus of our policy brief, “A Line in the Sand: Artificial Intelligence and Human Liberty.” The report makes the case for drawing a firm line when it comes to using AI in decisions that could impact people’s liberty or cause serious harm.

The high stakes of AI in the justice system

The decisions made in courts every day have life-changing consequences. Time spent in jail while awaiting trial can cost someone their job, housing, and the chance to access much-needed services.

Even a brief encounter with the justice system can follow someone for years. Apart from the trauma of the legal process itself, a criminal record can also limit people’s access to future opportunities.

Decisions in this realm cannot be taken lightly. While some look to AI as a way to increase fairness by minimizing human error and saving time, it can also do the opposite—leaving people’s lives in the hands of technologies that can be unpredictable.

In an earlier report, we called for centering human values in decisions about artificial intelligence and the legal system. Instead of rushing to implement new technologies, criminal justice leaders must first be clear about what purposes AI should serve.

Our more recent brief shares insights from a working group we hosted with leaders in both criminal justice and tech. The discussion covered a range of questions, but one core point stood out: Given how hard it is to manage AI, it’s crucial to avoid using it in decisions that could drastically change the course of a person’s life.

“The criminal legal system deprives people of liberty. It shouldn’t be using AI to do this,” said Sara Friedman of the Council of State Governments Justice Center. “There is a line when you are responsible for people’s lives; these are things you shouldn’t do.”

Avenues for innovation

If the dangers of using AI in these crucial decisions outweigh any potential gains, where can it be used safely?

A study we supported with IBM in Jefferson County, Alabama, offers one promising example. Researchers used AI-supported data analysis to uncover disparities in how fines and fees are used in the local legal system. They found that fines and fees disproportionately impact low-income and Black communities in Jefferson County. Middle-aged Black men, in particular, faced significantly higher penalties than their white counterparts for many charges.

Courts, justice systems, and researchers can similarly use AI to understand the impacts of existing policies and programs. That can help us strengthen those policies to better meet the needs of the people they aim to reach.

Our policy brief also points to the potential for AI to reduce administrative burdens and free up time for more direct, person-to-person support. And it might be able to help case managers, social workers, and other support staff more effectively connect people to community-based services that can put them on a better path.

Still, even relatively low-risk decisions need human oversight and strong guidelines. Those guidelines have to be rooted in shared values and goals to ensure AI is used safely, responsibly, and with dignity for everyone in the justice system.

“The data may not be able to solve the problem here,” said one tech executive in the working group. “We’re talking about human nature.” Whatever the future holds for AI in the justice system, it should be guided by the needs and values of the human beings on both sides of the bench.