AI
August 28, 2023

I wish ‘Who is your bias?’ were a simple question meant only for K-pop fandoms and not for a machine that has taken over three-fourths of the world.

Artificial Intelligence has evoked mixed feelings in the general public since its advent: either it is on the highway to advances mankind has never experienced before, or it is at the center of the most controversial issues of our time. Unfortunately, the topic under scrutiny at this moment belongs to the latter.

Although this is not a new complaint against AI, it keeps resurfacing because it has never been addressed as thoroughly as it should have been. AI, the phenomenon of the century, is biased in more ways than one; be it racism or sexism, it seems to be doing it all. That algorithms encode and spread discrimination is nothing new, but it is keeping people up at night because it is becoming dangerous.

An unbiased outlook on a biased technology 

First and foremost, the bottom line of this entire discussion is that these machines learn from humans. That is, AI systems learn from training data processed by algorithms designed by humans. If the data used to train an AI contains a bias, the AI itself will produce biased outcomes, and whatever prejudices sit in the data are reflected in its decisions and actions. This is why many claim that technologies such as AI and machine learning algorithms underpin structural racism, with the result that certain people and communities are treated as second-class and branded as ‘lower status’ in society. The danger of such an outcome lies in the fact that AI now runs in so many domains, giving these systems the power to exacerbate systemic inequalities in industries such as healthcare, employment, and criminal justice. A good place to start is how these disparities have reached the point where they are a matter of life and death, not merely fun and games.

Criminal sentencing, and the assessment of whether a person poses a risk of reoffending, is often said to be subject to discriminatory AI. One algorithm flagged by experts and researchers as discriminatory is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which is used in over 46 states in the USA.

This is because, throughout the country, judges, probation officers, and parole officers are increasingly turning to algorithms to estimate the probability of a criminal defendant becoming a recidivist, that is, someone likely to reoffend. COMPAS is one such risk assessment algorithm, and it apparently makes life much easier for the higher-ups in the criminal justice system. The catch is that it is said to be inherently racist.

To clarify what that means, Safiya Noble draws on Julia Angwin’s research on COMPAS in her interview with NPR’s Ailsa Chang. She explains how “Black people who were charged with crimes were more than four times likely to be sentenced to very severe punishment, as opposed to white offenders who were committing violent crimes and were much more likely to be released on bail”.

This is because predictive AI systems use factors such as arrest histories in a specific neighborhood (in effect, a ZIP code) to make their predictions. If you live in an area with a history of excessive policing and over-arresting, which disproportionately affects Black and Latino communities, you receive a higher predicted risk of reoffending. The anger at such a discriminatory prediction arises because the outcome is no longer about your behavior; it instead reflects the historical impact of structural racism in policing across the United States.
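The mechanism is easy to see in miniature. The Python sketch below is a toy simulation, not the actual COMPAS model (which is proprietary), and every number in it is invented for illustration. It trains an ordinary classifier on recorded arrests from two hypothetical neighborhoods whose residents behave identically but are policed at different intensities, and the resulting ‘risk’ score ends up tracking where someone lives rather than what they do.

```python
# Toy simulation of the ZIP-code proxy problem (all values are hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two hypothetical neighborhoods: 1 = heavily policed, 0 = lightly policed.
heavily_policed = rng.integers(0, 2, n)

# True underlying behavior is identical across neighborhoods.
reoffends = rng.random(n) < 0.20

# The training label is *recorded arrests*, not behavior: the same conduct is
# far more likely to end in an arrest where policing is heavier.
p_arrest_if_reoffends = np.where(heavily_policed == 1, 0.9, 0.3)
arrested = reoffends & (rng.random(n) < p_arrest_if_reoffends)

# The only feature the model sees: the arrest rate of your neighborhood.
neighborhood_arrest_rate = np.where(
    heavily_policed == 1,
    arrested[heavily_policed == 1].mean(),
    arrested[heavily_policed == 0].mean(),
)

model = LogisticRegression().fit(neighborhood_arrest_rate.reshape(-1, 1), arrested)

# Two people with identical behavior get very different "risk" scores
# purely because of where they live.
for label, rate in [("lightly policed", arrested[heavily_policed == 0].mean()),
                    ("heavily policed", arrested[heavily_policed == 1].mean())]:
    risk = model.predict_proba([[rate]])[0, 1]
    print(f"{label}: predicted risk {risk:.2f}")
```

In this toy setup the heavily policed neighborhood ends up with roughly three times the predicted risk, even though reoffending behavior was generated identically for both; the score has learned policing intensity, not conduct.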


A system with similar functionality is the STMP (Suspect Target Management Plan), a New South Wales (Australia) Police Force initiative designed to reduce crime among individuals considered high-risk through proactive policing. Despite these efforts at upholding law and order, the data shows that the STMP disproportionately targets young people, particularly Aboriginal and Torres Strait Islander people.

The unfair treatment does not stop there; it carries over into the health sector as well.

For instance, ‘Optum’, an algorithm introduced in the U.S. to identify high-cost beneficiaries, tends to under-refer people of color to necessary support programs in comparison to white patients.

This occurs because the algorithm focuses on predicting spending rather than actual healthcare need. Statistics suggest that people of color tend to seek, and therefore spend on, healthcare less than their white counterparts with the same level of illness, so a system that selects people based on how much they spend on care simply fails to record them.
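To make the proxy problem concrete, here is a small hypothetical simulation (it is not Optum’s actual model; the distributions and the reduced access factor are assumptions made purely for illustration). Both groups are given identical health needs, but one spends less on care for the same need, so a program that refers the biggest spenders ends up referring far fewer of its genuinely high-need patients.

```python
# Toy illustration of the cost-as-proxy problem (all values are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

group = rng.integers(0, 2, n)                      # 1 = group with less access to care
need = rng.gamma(shape=2.0, scale=1.0, size=n)     # true health need, identical by design

# Observed spending is need scaled by an access factor that is lower for group 1.
access = np.where(group == 1, 0.6, 1.0)
spending = need * access * rng.lognormal(0.0, 0.3, n)

# The "algorithm": refer the top 5% of spenders to the support program.
referred = spending >= np.quantile(spending, 0.95)

# Among patients with equally high true need, who actually gets referred?
high_need = need >= np.quantile(need, 0.95)
for g, name in [(0, "full-access group"), (1, "reduced-access group")]:
    mask = high_need & (group == g)
    print(f"{name}: {referred[mask].mean():.0%} of high-need patients referred")
```

Nothing about the simulated patients’ health differs between the two groups; the gap in referrals comes entirely from using spending as a stand-in for need.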

The problem is then worsened by other factors, because it is not merely that the predictions contain errors but that those errors fall disproportionately on certain groups. Researchers capture this with fairness criteria such as ‘predictive parity’: a score can look equally accurate for every group on one measure while its mistakes still land far more heavily on one of them, and a lack of representative data only adds to the inaccurate decision-making.
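Here is a short sketch of that distinction, again with entirely hypothetical numbers: the score below satisfies predictive parity (of the people it flags, the same share in each group actually reoffend), yet people who would not reoffend are wrongly flagged at more than twice the rate in one group than in the other.

```python
# Hypothetical example: equal precision per group, very unequal false positive rates.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
group = rng.integers(0, 2, n)

# Assume half of group 1 is genuinely high-risk versus a quarter of group 0;
# the score itself is calibrated (a 0.7 score really means a 70% chance).
p_high_risk = np.where(group == 1, 0.50, 0.25)
risk_score = np.where(rng.random(n) < p_high_risk, 0.7, 0.3)
y_true = (rng.random(n) < risk_score).astype(int)    # who actually reoffends
y_pred = (risk_score > 0.5).astype(int)              # flag everyone scored high-risk

for g in (0, 1):
    m = group == g
    flagged = y_pred[m] == 1
    reoffended = y_true[m] == 1
    ppv = np.mean(reoffended[flagged])      # of those flagged, how many reoffended
    fpr = np.mean(flagged[~reoffended])     # of those who did not, how many were flagged
    print(f"group {g}: precision {ppv:.2f}, false positive rate {fpr:.2f}")
```

This is roughly the shape of the dispute over COMPAS itself: the vendor pointed to comparable predictive accuracy across races, while Angwin’s analysis pointed to far higher false positive rates for Black defendants.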

The issue at hand seems to have seeped into every domain. Facial recognition purportedly reinforces systemic racism in many public places, such as city streets and airports. Research on facial recognition algorithms reveals that these technologies are not neutral but instead feed into historical inequalities; for example, the systems misfire most often on women, children, and people of color. Dr. Monika Zalnieriute and Professor Tatiana Cutts, in their research paper titled “How AI and New Technologies Reinforce Systemic Racism”, explain: “The discrimination can be introduced into the facial recognition technology software in three technical ways: first, through the machine learning process through the training data set and system design; second, through technical bias incidental to the simplification necessary to translate reality into code; and third, through emergent bias which arises from users’ interaction with specific populations”.

While these racist outcomes may sound like something out of a dystopian novel or film, they are the reality we live in. What is worse is that the problem has infected every sphere of life, including social media, beauty AIs, and even something as simple as autosuggestions in Google Search. To undo this malignancy, the entire world of AI and algorithms needs a reset if we are ever to come out of it. Some may ask, “If algorithms and AI are racist and sexist because they are exposed to biased datasets, why don’t we simply use more representative data?” But the problem is more threatening than that, because systemic racism is not merely rooted in technology; it is visibly present in our society itself. How do we change that?

(Sandunlekha Ekanayake)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.