
AI and Justice: The Neutrality Myth

Mar 3, 2026


You have to be working for the benefit of people. It cannot be simply about accumulating wealth or accumulating power.

Part of the appeal of artificial intelligence is that it seems to stand above the messy world of human decision-making.

The criminal justice system has no shortage of that kind of decision-making. And AI tools are being put forward as solutions in everything from police departments and prisons to probation offices and courtrooms.

But how do we separate AI’s real promise from the hype? And how do we ensure the technology helps, rather than sets back, the cause of fairness and justice?

Making AI work in the service of justice is precisely the mandate given to Roy Austin, Jr., the inaugural director of the Howard Law Artificial Intelligence Initiative. Austin is a former Deputy Assistant Attorney General in the Civil Rights Division of the Department of Justice under President Obama and, until last year, was Vice President of Civil Rights at the tech giant Meta. He is also a senior advisor to the AI and Justice Consortium, of which the Center for Justice Innovation is a founding partner.

In this episode of New Thinking, Austin argues that the story of AI is not so much about technology as it is about people. “It’s human beings who decide what data goes in. It’s human beings who decide the algorithms and how they’re going to work,” he explains. “And it’s human beings who are impacted the most by this.”

For more information on our work with AI in the justice system, see the link below:

AI & Justice

For a transcript of the episode, see below.