
I attended Self.conference’s Saturday sessions and joined a talk titled “Machines Learning Human Biases: How Does It Happen? Can We Unteach Them?” by Devney Hamilton. This session was particularly intriguing to me because about two months ago I started teaching myself machine learning. Machine learning is a form of artificial intelligence (AI) in which a computer learns to perform tasks without being explicitly programmed to do so. It works by training an algorithm on a lot of data.

Once trained, the algorithm can take in new data and predict an outcome based on the patterns in the data it has already seen. For example, say you want to predict how much a house will sell for. The algorithm would be fed data on recently sold houses: basic features such as square footage, age of the home, and ZIP code, along with what each house sold for. When the algorithm is then given the data for a new house, it can draw on those past sales to predict what the new house will sell for. I’m interested in continuing my learning in this field, as well as in the future of AI and how it will affect the world we live in.
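To make that concrete, here is a minimal sketch of the house-price example using scikit-learn’s LinearRegression. Every number in it is invented for illustration, and I’ve left ZIP code out to avoid the extra step of encoding a categorical feature.

```python
# Toy illustration: train a regression model on past home sales, then
# predict the sale price of a new listing. All numbers are made up.
from sklearn.linear_model import LinearRegression

# Each row is one past sale: [square footage, age of home in years].
X_train = [
    [1500, 30],
    [2100, 10],
    [1200, 45],
    [2800, 5],
]
y_train = [210_000, 340_000, 150_000, 450_000]  # what each house sold for

model = LinearRegression()
model.fit(X_train, y_train)  # "train the algorithm on a lot of data"

# Feed the model the features of a house it has never seen.
new_house = [[1800, 20]]
print(model.predict(new_house))  # an estimate based on the past sales
```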

Devney’s talk was about how machine-learning algorithms learn human biases, the potential consequences, and what we can do to prevent this from occurring. Since machine-learning algorithms are trained on data, an algorithm tends to work only as well as the data you train it on. One of the most interesting parts of the talk was about the use of machine learning in the criminal justice system to determine sentencing, parole eligibility, and bond amounts for defendants based on a risk assessment generated by machine-learning algorithms. She specifically pointed to a program called COMPAS, which assigns a risk-assessment score meant to predict how likely someone is to commit another crime.

Human bias has an impact on the output of machine-learning algorithms in the criminal justice system. The algorithm is trained on data such as arrest and conviction history, education, social relationships, employment, ZIP code, gender, age, family and friends’ criminal history, and family and friends’ substance use. Because of racial biases in arrest and conviction rates, Devney points out in her talk that “if black people in general are more likely to be rearrested than white people, then a black defendant is more likely to be given a higher risk score than a white defendant.”

This type of risk assessment is dangerous not only because of the racial biases it encodes, but because it removes a defendant’s individual circumstances from the risk-assessment process. Biased human behavior produces biased raw data, so even an unbiased mathematical model trained on that data will produce results that reflect the bias. When white people are favored in criminal justice decisions, the resulting data will show white people with fewer arrests, fewer charges, fewer convictions, and lighter sentences. These are all data points that could be used by the machine-learning algorithm.
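To see the mechanism, here is a small synthetic sketch (entirely made-up data, using NumPy and scikit-learn). Two groups behave identically, but one group’s behavior gets recorded more often; an ordinary logistic regression trained on those records then assigns that group a higher risk score.

```python
# Synthetic demonstration: an unbiased model trained on biased records
# reproduces the bias. The groups, rates, and data are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # group 0 or group 1
reoffend = rng.random(n) < 0.30      # identical true rate for both groups

# Biased data collection: group 1's reoffending is recorded 90% of the
# time, group 0's only 50% of the time.
recorded = reoffend & (rng.random(n) < np.where(group == 1, 0.9, 0.5))

model = LogisticRegression()
model.fit(group.reshape(-1, 1), recorded)

# The model scores group 1 as far riskier despite identical behavior.
print(model.predict_proba([[0], [1]])[:, 1])  # roughly 0.15 vs. 0.27
```

The model does nothing mathematically wrong here; it faithfully learns the pattern in the records it was given, and that is exactly the problem.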

The question that came up continually was, “Do we want the future to resemble the past?” In my opinion, as it relates to the criminal justice system, the answer is no.

The criminal justice system isn’t all bad, but the United States’ imprisonment rate is among the highest in the world, and many people serve long sentences for nonviolent crimes. It’s also very concerning that something like COMPAS is being used to determine the outcomes of people’s lives. Tools like this deserve far greater examination before they are used in situations with such high stakes.

Some of the opportunities for machine learning in the criminal justice system that Devney spoke about included keeping low-risk people out of the system and making decisions more consistently. Training data can also be adjusted to reduce or eliminate bias in machine-learning algorithms, which requires a deeper examination of the data the algorithm is being trained on; one simple form of that examination is sketched below.
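What that deeper examination looks like in practice is a hard, open problem, but one minimal first step is auditing the training labels for disparities between groups before any model is trained. The sketch below uses pandas with hypothetical field names and made-up records; a large gap in the label’s base rate across groups is a red flag that the labels may reflect biased enforcement rather than actual behavior.

```python
# Audit a training set's labels for group disparities before training.
# Field names and records are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "group":      ["a", "a", "a", "b", "b", "b"],
    "rearrested": [0,   1,   0,   1,   1,   0],  # the label to be learned
})

# Base rate of the label per group; a large gap warrants investigating
# the data (reweighting, relabeling, or collecting better data) before
# an algorithm is trained on it.
rates = records.groupby("group")["rearrested"].mean()
print(rates)
print("disparity:", rates.max() - rates.min())
```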

I’m excited and curious about the potential of machine learning and artificial intelligence and the impact they will have on our future. There are many legitimate concerns about this technology, and many unanswered questions, but it’s good to start thinking about those concerns now, along with possible solutions.

I’m still in the early stages of teaching myself machine learning, with a long journey ahead. I recently finished Andrew Ng’s machine-learning course on Coursera. My next steps are to learn linear algebra (the math most commonly used in machine learning) through Khan Academy and to work my way through the book Introduction to Machine Learning with Python: A Guide for Data Scientists. Below are other resources if you would like to learn more about machine learning.

Oxford University AI and Job Automation Study

Machine Learning and Criminal Justice System Articles