by Melinda Yung, Opinions Editor
graphic by Julie Wang
Although algorithmic bias may sound like a complicated term, it is actually of great importance to our daily lives, especially in our increasingly technology-dependent world. It describes the hidden prejudice that enters a computer system as it collects data and statistics, known as datasets, about its users, often leading to unfair or even discriminatory outcomes.
From large corporations like McDonald’s and Google to local police forces, algorithmic bias taints the machine learning algorithms that have become a constant and pivotal part of efficient decision-making.
Machine learning algorithms aid companies by producing insights from datasets about the company’s products and consumers. The machine learns to apply various algorithms to analyze this data and make predictions from it. Among the most popular techniques are linear and logistic regression, statistical models that predict a numeric outcome, or the probability of an outcome, from the patterns in a dataset.
Decision trees are branching diagrams that split a dataset into groups through a series of yes-or-no questions, while naive Bayes is a classifier that uses probabilities to predict which category a data point belongs to. The data analyzed through these methods allows companies to catch patterns in consumer usage while also recognizing potential problems in their products.
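To make this concrete, here is a minimal sketch in Python using the scikit-learn library. The customer features, numbers and labels are invented purely for illustration; they are not drawn from any real company’s data.

```python
# A toy sketch (with made-up data) of how a classifier learns patterns
# from a dataset. The features, numbers and labels are invented.
from sklearn.linear_model import LogisticRegression

# Each row is one hypothetical customer: [visits per month, average spend]
X = [
    [2, 10.0],
    [8, 45.0],
    [1, 5.0],
    [12, 80.0],
    [3, 15.0],
    [9, 60.0],
]
# 1 = kept their subscription, 0 = canceled (again, invented labels)
y = [0, 1, 0, 1, 0, 1]

# Logistic regression estimates the probability of the outcome
# (keeping the subscription) from the input features.
model = LogisticRegression()
model.fit(X, y)

# Predict the probability for a new customer who visits 5 times a month
# and spends $30 on average. The model only knows what its training data
# showed it, so any bias in that data carries straight into the prediction.
print(model.predict_proba([[5, 30.0]]))
```

The point of the sketch is the last comment: the model has no way to question its own training data.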
However, this capability can prove problematic, since the computer often fails to recognize when the datasets feeding its algorithms are biased. Working from just one sample of data, many machines never consider alternative explanations for the patterns surrounding a consumer or product, and they can draw conclusions about minority groups without weighing the nuances a human would.
These often unjust assumptions exclude people from low-income or minority backgrounds from the benefits of machine learning algorithms, such as a better experience with a company. The biases of those who program these algorithms without considering confounding variables often translate into the software itself, leading to algorithmic bias.
The use of machine learning algorithms extends far beyond private companies. In the past decade, police forces have increasingly used machine learning algorithms to predict crime through a practice known as predictive policing.
Machine learning algorithms allow police forces to collect and analyze an area’s crime data, such as the number of burglaries or other incidents reported there, alongside information about the residents who live there. From these statistics the machines learn to recognize patterns, which can harden into biased trends.
Several companies engineer predictive policing software, including PredPol, a company founded in 2012 that shares its name with its software. More than 60 police departments across the country, including the Boston Police Department (BPD), regularly use PredPol software to collect data for multiple projects and studies on property crime.
PredPol uses machine learning algorithms to create crime datasets, but because the algorithms focus on locations that have recorded high numbers of crimes in the past, they can wrongly predict that more crime will occur in those same areas. Officers are then sent back to those neighborhoods, record more incidents there, and feed the new numbers back into the software, reinforcing the original prediction.
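To picture that loop, consider a rough, hypothetical simulation written in Python. It is not PredPol’s actual model, and every number in it is invented: two neighborhoods start with the same true crime rate, but the one with more crime already on the books gets more patrols, and therefore more recorded incidents, year after year.

```python
# A hypothetical simulation of the feedback loop described above.
# This is NOT PredPol's actual algorithm; it is an invented sketch showing
# how patrolling where crime was previously *recorded* can inflate the
# recorded numbers for the same neighborhoods.
import random

random.seed(0)

# Assume two neighborhoods with the SAME true rate of crime.
true_crime_rate = {"Neighborhood A": 0.10, "Neighborhood B": 0.10}

# Historical records start out uneven (A was patrolled more heavily).
recorded = {"Neighborhood A": 12, "Neighborhood B": 4}

for year in range(5):
    # "Prediction": send extra patrols wherever the records show more crime.
    patrols = {n: 10 + recorded[n] for n in recorded}
    for n, num_patrols in patrols.items():
        for _ in range(num_patrols):
            # Crime only enters the records if a patrol happens to witness it,
            # so more patrols mean more recorded incidents.
            if random.random() < true_crime_rate[n]:
                recorded[n] += 1

# Neighborhood A ends up with far more recorded crime, even though the
# underlying crime rate in both neighborhoods was identical.
print(recorded)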
According to the Los Angeles Times, in 2019 the Los Angeles Police Department (LAPD) recorded data about the cars and drivers stopped in areas rated as higher-crime. The LAPD collected this data through searches at traffic stops, and it found that 24% of Black drivers who were stopped were searched, 16% of Latino drivers were searched, and only 5% of white drivers were searched.
This data collection illustrates how a machine’s dataset can be biased from the start because of the statistics fed into it. Ultimately, the data that predictive policing software like PredPol uses in its algorithms can unintentionally target minority neighborhoods.
Because machine learning algorithms are not programmed to analyze the context behind each data point, they can generate a profusion of assumptions about minority groups. And since police forces hold a great amount of power in our society, those assumptions create serious problems.
Algorithmic bias compounds in predictive policing as the dataset grows more skewed against minority groups. Because these biases live inside digital technology, they are difficult to fix once they become automatic in a machine’s system. There is no quick fix for algorithmic bias, but there are effective ways to limit how often it occurs.
Collecting more diverse datasets, rather than relying on a single source, is a key way to ensure that machines learn to identify a variety of trends. In an innovative society like ours, we should also strive to engineer more advanced technology, such as sensors placed on parking meters, storefronts and pavement, that captures data directly without singling out minority groups.
We must also work to create data analysis tools that strengthen the sources of our datasets, further limiting the risk of algorithmic bias. And we must remain open to new ideas when implementing advanced technology to make sure our society stays welcoming, just and truthful.