Is AI here to prevent modern slavery or be a shareholder of it?
November 22, 2023

The fervour with which AI is spreading warrants the question: is this the next Industrial Revolution? With the way things are going, it may well be something even bigger. Artificial intelligence has ushered in a new digital era that is proving more disruptive than the factory system ever was. Yet whether the changes heralded by AI are framed as largely positive or largely negative, it is disheartening to hear that AI’s success is riding on the waves of child and exploited labour, a practice that stands in stark contrast to the technology’s promise of helping to prevent modern slavery.

AI to prevent slavery

Bringing to light AI’s attempts to combat slavery is important to put everyone at ease, especially given the ominous connotations the technology is so often associated with. While some doom and gloom is inevitable, examples like the ones below are convincing enough to suggest that AI might not be the harbinger of a tech apocalypse after all. Some call it being delusional, others call it being optimistic. Either way, AI can be used as a tool to tackle the issue of slavery.

Efforts to combat forced labour often face challenges in identifying and accessing the locations where it occurs, particularly in remote or unstable areas. However, human rights activists are exploring the use of artificial intelligence as a new tool to track notorious sites of slavery. The potential of AI in this context is to significantly broaden and expedite the crucial work of identifying and addressing instances of forced labour around the world. For instance, at an annual event in the UK, titled ‘The Data Challenge’, a project named Heyrick emerged as the winner. 

Project Heyrick is tackling the complex challenge of identifying the approximately 135,000 unidentified victims of modern slavery in the UK. These victims are hard to identify and even harder to trace using existing data. The project combines government and open-source datasets, applying advanced analytics to flag locations and organisations at elevated risk of modern slavery.

The insights generated can then be used by enforcement agencies, such as the police, Home Office, or National Crime Agency, to identify potential modern slavery crimes more effectively. The idea may have been handpicked from a think tank, but with ample funding and the right guidance a project like this can become a reality, one with the capacity to save lives.

AI thrives on child labour and slavery 

According to an article in Wired, underage workers are being used to label data for and train AI systems. This work is often paid at a subpar hourly wage, which amounts to clear exploitation, whether the worker is a child or an adult. While Wired was only able to interview a handful of children from countries such as Kenya and Pakistan, their conversations revealed that the practice has become a large-scale trend.

Training machine learning models involves humans labelling raw data, with tasks ranging from simple ones like identifying images to complex ones like content moderation. The easiest and most cost-effective way of getting this data-labelling work done is by outsourcing it to gig workers through online crowdsourcing platforms. While these platforms typically require workers to be over 18, some individuals, including minors, have managed to bypass these checks by using false details and corresponding payment methods.
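To make the mechanics concrete, here is a minimal, hypothetical sketch (in Python) of what a labelling micro-task pipeline can look like: an unlabelled item is served to a worker, the worker’s answer is recorded, and the accumulated (content, label) pairs become a training set. The task names, labels, and helper function are illustrative assumptions, not the workings of any platform named in this article.

```python
# A minimal, hypothetical sketch of a crowdsourced data-labelling pipeline.
# All identifiers and labels below are invented placeholders.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LabelTask:
    item_id: str                  # identifier for the image or text snippet served to a worker
    content: str                  # the raw, unlabelled data
    label: Optional[str] = None   # filled in by the gig worker

# Unlabelled items queued up as micro-tasks.
queue = [
    LabelTask("img_001", "photo of a street scene"),
    LabelTask("txt_042", "user comment awaiting moderation"),
]

def submit_label(task: LabelTask, worker_answer: str) -> LabelTask:
    """Record a worker's answer; platforms typically pay per completed task."""
    task.label = worker_answer
    return task

labelled = [
    submit_label(queue[0], "street"),
    submit_label(queue[1], "harmless"),
]

# The accumulated (content, label) pairs become the supervised training set.
training_set = [(t.content, t.label) for t in labelled]
print(training_set)
```

Each completed task is tiny on its own; it is the sheer volume of such micro-tasks, repeated across millions of workers, that feeds modern AI training sets.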

The global rush into AI has driven significant growth in the data labelling and collection industry, which is projected to exceed $17.1 billion by 2030. Crowdsourcing platforms like Toloka, Appen, Clickworker, Teemwork.AI, and OneForma connect millions of remote gig workers, predominantly in the Global South, to tech companies worldwide. These workers perform micro-tasks, labelling and evaluating data for AI training sets.

The tasks can range from categorising images to content moderation, and workers are often paid per task, with remuneration varying. Occasionally, workers are required to upload audio, images, and videos that are used to train AI. While such a task is not harmful in itself, workers are at times asked to moderate content, meaning that children are expected to distinguish between harmless material and material that perpetuates hate speech or contains inappropriate imagery. The nature of this work has raised concerns about digital servitude and its ethical implications, especially when it involves personal or sensitive content.

Another disturbing story is how OpenAI, in order to keep ChatGPT harmless and non-toxic, relied on Kenyan labourers earning less than $2 per hour to get the job done. As reported by Time, while ChatGPT had the ability to produce semantically and syntactically accurate sentences, it was also susceptible to spouting offensive remarks (sexist, racist, and violent ones). This was because the model had been fed hundreds of billions of words gathered from the internet, a vast repertoire of human language. This extensive training dataset gave GPT-3 its advanced linguistic capabilities but also posed challenges, since it included toxic and biased content from the internet. In an effort to address these issues, OpenAI implemented an additional AI-powered safety mechanism.

To label the data, OpenAI began sending numerous text snippets to an outsourcing firm in Kenya in November 2021. The content included graphic descriptions of sensitive topics such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. The outsourcing partner, Sama, is based in San Francisco and employs workers in Kenya, Uganda, and India. It identifies itself as an “ethical AI” company and claims to have helped lift over 50,000 people out of poverty.
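The role of those labelled snippets is easier to see with a small, hypothetical example: a basic supervised text classifier (here scikit-learn’s TfidfVectorizer and LogisticRegression, chosen purely for illustration) is fit on human-labelled examples and then used to screen new outputs. The snippets, labels, and pipeline below are illustrative assumptions, not a description of OpenAI’s actual safety system.

```python
# A minimal, hypothetical sketch of why human-labelled snippets matter for a
# safety filter: a simple classifier learns from (snippet, label) pairs and
# then screens new text. The examples are invented placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example is a text snippet plus the label a human reviewer assigned.
snippets = [
    "have a wonderful day",
    "this describes graphic violence",   # placeholder standing in for harmful text
    "here is a recipe for soup",
    "this contains hateful slurs",       # placeholder standing in for harmful text
]
labels = ["safe", "unsafe", "safe", "unsafe"]

# Fit a basic text classifier on the human-labelled examples.
safety_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
safety_filter.fit(snippets, labels)

# The trained filter can then flag new outputs before they reach users.
print(safety_filter.predict(["a cheerful greeting"]))
```

The sketch also makes the human cost visible: for a filter like this to recognise harmful text, someone first has to read and label that text, which is precisely the work that was outsourced.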

Despite Sama’s self-proclaimed ethics, the data labellers hired on behalf of OpenAI were given a take-home wage of only between $1.32 and $2 per hour. Meagre as it was, the amount also varied depending on seniority and performance. The working hours were gruelling, to say the least, and the content the labellers had to sort through was traumatic.

Evidently, as much as we live in a digital era, we also live in an era of digital slavery. The irony is that AI has become both the problem and the solution to modern slavery. On the one hand, AI can be used to expose labour exploitation to the general public; on the other, it can add to the growing number of sweatshops. Sadly, the latter is taking the lead and endangering the lives of many people.

(Sandunlekha Ekanayake)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.