AI in Criminal Justice: Blog Post #1

Andrew Cho
3 min read · Nov 14, 2020

With the 2020 elections just behind us, the need for sound criminal justice policy has never been more important. The death of George Floyd in May 2020 sparked mass protests across the nation, with protestors speaking out against police brutality and the over-policing of minority groups, especially the African American community. That unequal treatment shows up in many different ways, including algorithmic risk assessment scores.

COMPAS: A Case Study

COMPAS Risk Score Assessment

ProPublica, a non-profit newsroom, conducted a study of risk assessment software called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). The software predicts whether a defendant will recidivate, that is, re-offend after a conviction. It uses a defendant’s criminal history, type of crime, age, gender, race, and other personal information to produce a risk score.
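COMPAS itself is proprietary, so its internals are not public. Purely as an illustration of how a risk score of this general kind could be produced, here is a minimal sketch of a classifier trained on hypothetical features like those listed above; the dataset, column names, and choice of model are all assumptions for the example, not COMPAS’s actual method.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical training data: one row per past defendant, with a label
# indicating whether they re-offended within some follow-up window.
df = pd.read_csv("defendants.csv")
features = pd.get_dummies(df[["priors_count", "age", "gender", "charge_degree"]])
label = df["recidivated"]

X_train, X_test, y_train, y_test = train_test_split(features, label, random_state=0)

# A simple logistic regression stands in for the real (proprietary) model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The predicted probability of re-offending can then be binned into a
# risk score (e.g. low / medium / high), much as COMPAS reports deciles.
risk_probability = model.predict_proba(X_test)[:, 1]
print(risk_probability[:5])
```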

The study’s results followed a clear pattern: Black defendants tended to receive higher risk scores, white defendants tended to receive lower ones, and Black defendants were nearly twice as likely as white defendants to be labeled likely to re-offend. The racial disparities in the results point to bias in the software.

You may be wondering: “Aren’t these results just accurate rather than biased?” The answer is no. ProPublica ran a further test, controlling for criminal history, recidivism, age, and gender in order to isolate the effect of race. The disparity remained: Black defendants were 77% more likely to receive a higher risk score and 45% more likely to be labeled likely to recidivate.
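As a rough illustration of the kind of analysis ProPublica describes (their actual methodology and data differ), here is a minimal sketch of a logistic regression that estimates the association between race and a high risk score while controlling for criminal history, recidivism, age, and gender. The `compas.csv` file and its column names are hypothetical placeholders, not ProPublica’s real dataset schema.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per defendant.
# Assumed columns: high_risk (1 if the COMPAS score was "high"), race,
# age, gender, priors_count, recidivated (1 if they re-offended).
df = pd.read_csv("compas.csv")

# Does race predict a high risk score even after controlling for
# criminal history, actual recidivism, age, and gender?
model = smf.logit(
    "high_risk ~ C(race) + age + C(gender) + priors_count + recidivated",
    data=df,
).fit()

print(model.summary())

# Exponentiating the coefficients gives odds ratios: a value above 1 for a
# racial group means higher odds of a high risk score, all else held equal.
print(np.exp(model.params))
```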

COMPAS Predictions

In addition, COMPAS was found to label 44.9% of African American defendants as higher risk even though they did not re-offend, while labeling 47.7% of white defendants as lower risk even though they did re-offend. Put differently, the algorithm’s false positive rate for Black defendants and its false negative rate for white defendants were both strikingly high. There is no question that bias and racial disparity are built into this algorithm.
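Those two figures are, in effect, the false positive rate (labeled higher risk but did not re-offend) and false negative rate (labeled lower risk yet did re-offend) broken out by race. Here is a minimal sketch of how one might compute them from a table of predictions; the file and column names are assumptions for the example, not the actual ProPublica data.

```python
import pandas as pd

# Hypothetical columns: race, high_risk (1 = labeled higher risk),
# recidivated (1 = actually re-offended within the follow-up period).
df = pd.read_csv("compas_predictions.csv")

def error_rates(group: pd.DataFrame) -> pd.Series:
    """False positive and false negative rates for one group of defendants."""
    did_not_reoffend = group[group["recidivated"] == 0]
    did_reoffend = group[group["recidivated"] == 1]
    return pd.Series({
        # Labeled higher risk, but did not re-offend.
        "false_positive_rate": did_not_reoffend["high_risk"].mean(),
        # Labeled lower risk, yet did re-offend.
        "false_negative_rate": 1 - did_reoffend["high_risk"].mean(),
    })

# Comparing these rates across racial groups is one common way to
# quantify the disparity ProPublica reported.
print(df.groupby("race").apply(error_rates))
```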

However, whether the bias comes from the training data or from the algorithm itself is another question, one we can explore by building our own AI model (which I will cover in later posts). So what can we do to create a “fairer” model, and how do we even define “fairness”? Is a more accurate model automatically fair? The remainder of this blog series will take a deeper look at “fairness” and at how we can build a better, less biased model.
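As a preview of how “fairness” can be made precise, here is a minimal sketch of two common group-fairness metrics: the demographic parity gap (difference in how often groups are labeled high risk) and an equal opportunity gap (difference in true positive rates among people who actually re-offended). The function names and the toy numbers are my own illustration, not a standard library API.

```python
import numpy as np

def demographic_parity_gap(pred, group):
    """Largest difference between groups in the rate of being labeled high risk."""
    pred, group = np.asarray(pred), np.asarray(group)
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(pred, actual, group):
    """Largest difference between groups in true positive rate (among actual re-offenders)."""
    pred, actual, group = map(np.asarray, (pred, actual, group))
    tprs = [pred[(group == g) & (actual == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Toy example with made-up 0/1 predictions, outcomes, and two groups.
pred   = [1, 0, 1, 1, 0, 0, 1, 0]
actual = [1, 0, 0, 1, 1, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(pred, group))          # 0.5
print(equal_opportunity_gap(pred, actual, group))   # 0.5
```

A gap of zero on either metric would mean the two groups are treated identically by that criterion; the two criteria can conflict with each other and with overall accuracy, which is exactly the tension later posts will explore.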

Andrew Cho is a Student Ambassador in the Inspirit AI Student Ambassadors Program. Inspirit AI is a pre-collegiate enrichment program that exposes curious high school students globally to AI through live online classes. Learn more at https://www.inspiritai.com/.
