https://www.theguardian.com/technology/article/2024/jul/06/mercy-anita-african-workers-ai-artificial-intelligence-exploitation-feeding-machine

Mercy craned forward, took a deep breath and loaded another task on her 
computer. One after another, disturbing images and videos appeared on her 
screen. As a Meta content moderator working at an outsourced office in Nairobi, 
Mercy was expected to action one “ticket” every 55 seconds during her 10-hour 
shift. This particular video was of a fatal car crash. Someone had filmed the 
scene and uploaded it to Facebook, where it had been flagged by a user. Mercy’s 
job was to determine whether it had breached any of the company’s guidelines 
that prohibit particularly violent or graphic content. She looked closer at the 
video as the person filming zoomed in on the crash. She began to recognise one 
of the faces on the screen just before it snapped into focus: the victim was 
her grandfather.

Mercy pushed her chair back and ran towards the exit, past rows of colleagues 
who looked on in concern. She was crying. Outside, she started calling 
relatives. There was disbelief – nobody else had heard the news yet. Her 
supervisor came out to comfort her, but also to remind her that she would need 
to return to her desk if she wanted to make her targets for the day. She could 
have a day off tomorrow in light of the incident – but given that she was 
already at work, he pointed out, she might as well finish her shift.

New tickets appeared on the screen: her grandfather again, the same crash over 
and over. Not only the same video shared by others, but new videos from 
different angles. Pictures of the car; pictures of the dead; descriptions of 
the scene. She began to recognise everything now. Her neighbourhood, around 
sunset, only a couple of hours ago – a familiar street she had walked along 
many times. Four people had died. Her shift seemed endless.

We spoke with dozens of workers just like Mercy at three data annotation and 
content moderation centres run by one company across Kenya and Uganda. Content 
moderators are the workers who trawl, manually, through social media posts to 
remove toxic content and flag violations of the company’s policies. Data 
annotators label data with relevant tags to make it legible for use by computer 
algorithms. Behind the scenes, these two types of “data work” make our digital 
lives possible. Mercy’s story was a particularly upsetting case, but by no 
means extraordinary. The demands of the job are intense.

“Physically you are tired, mentally you are tired, you are like a walking 
zombie,” said one data worker who had migrated from Nigeria for the job. Shifts 
are long and workers are expected to meet stringent performance targets based 
on their speed and accuracy. Mercy’s job also requires close attention – 
content moderators can’t just zone out, because they have to correctly tag 
videos according to strict criteria. Videos need to be examined to find the 
highest violation as defined by Meta’s policies. Violence and incitement, for 
instance, count as a higher violation than simple bullying and harassment – so it
isn’t enough to identify a single violation and then stop. You have to watch 
the whole thing, in case it gets worse.

“The most disturbing thing was not just the violence,” another moderator told 
us, “it was the sexually explicit and disturbing content.” Moderators witness 
suicides, torture and rape “almost every day”, commented the same moderator; 
“you normalise things that are just not normal.” Workers in these moderation 
centres are continually bombarded with graphic images and videos, and given no 
time to process what they are witnessing. They’re expected to action between 
500 and 1,000 tickets a day. Many reported never feeling the same again: the 
job had left an indelible mark on their lives. The consequences can be
devastating. “Most of us are damaged psychologically, some have attempted 
suicide … some of our spouses have left us and we can’t get them back,” 
commented one moderator who had been let go by the company.

“The company policies were even more strenuous than the job itself,” remarked 
another. Workers at one of the content moderation centres we visited were left 
crying and shaking after witnessing beheading videos, and were told by 
management that at some point during the week they could have a 30-minute break 
to see a “wellness counsellor” – a colleague who had no formal training as a 
psychologist. Workers who ran away from their desks in response to what they’d 
seen were told they had committed a violation of the company’s policy because 
they hadn’t remembered to enter the right code on their computer indicating 
they were either “idle” or on a “bathroom break” – meaning their productivity 
scores could be marked down accordingly. The stories were endless: “I collapsed 
in the office”; “I went into a severe depression”; “I had to go to hospital”; 
“they had no concern for our wellbeing”. Workers told us that management was 
understood to monitor hospital records to verify whether an employee had taken 
a legitimate sick day – but never to wish them better, or out of genuine 
concern for their health.


Job security at this particular company is minimal – the majority of workers we 
interviewed were on rolling one- or three-month contracts, which could 
disappear as soon as the client’s work was complete. They worked in rows of up 
to a hundred on production floors in a darkened building, part of a giant 
business park on the outskirts of Nairobi. Their employer, a prominent business process outsourcing (BPO) firm with headquarters in San Francisco, was a contractor for Meta, running delivery centres in east Africa where insecure and low-income work could be distributed to local employees. Many of the workers, like Mercy herself, had once lived in the nearby
Kibera slum – the largest urban slum in Africa – and were hired under the 
premise that the company was helping disadvantaged workers into formal 
employment. The reality is that many of these workers are too terrified to 
question management for fear of losing their jobs. Workers reported that those 
who complain are told to shut up and reminded that they could easily be 
replaced.

While many of the moderators we spoke to were Kenyan, some had migrated from 
other African countries to work for the BPO and assist Meta in moderating other 
African languages. A number of these workers spoke about being identifiable on 
the street as foreigners, which added to their sense of being vulnerable to 
harassment and abuse from the Kenyan police. Police harassment wasn’t the only 
danger they faced. One woman we interviewed described how members of a 
“liberation front” in a neighbouring African country found names and pictures 
of Meta moderators and posted them online with menacing threats, because they 
disagreed with moderation decisions that had been made. These workers were 
terrified, of course, and went to the BPO with the pictures. The company 
informed them they would see about enhancing security at the production 
facilities; aside from that, they said, there was nothing else they could do – 
the workers should just “stay safe”.

Most of us can hope never to experience the inhumane working conditions endured 
by Mercy and her colleagues. But data work of this kind is performed by 
millions of workers in different circumstances and locations around the world. 
At this particular centre, some of the working conditions changed after our 
research was conducted. However, large companies such as Meta tend to have multiple outsourced providers of moderation services, which compete for the most profitable contracts from the company. This data work is essential for the
functioning of the everyday products and services we use – from social media 
apps to chatbots and new automated technologies. It’s a precondition for their 
very existence – were it not for content moderators constantly scanning posts 
in the background, social networks would be immediately flooded with violent 
and explicit material. Without data annotators creating datasets that can teach 
AI the difference between a traffic light and a street sign, autonomous 
vehicles would not be allowed on our roads. And without workers training 
machine learning algorithms, we would not have AI tools such as ChatGPT.

One such worker we spoke to, Anita, works for a BPO in Gulu, the largest city in northern Uganda. She has been working on a project for an autonomous
vehicle company. Her job is to review hour after hour of footage of drivers at 
the wheel. She’s looking for any visual evidence of a lapse in concentration, 
or something resembling a “sleep state”. This assists the manufacturer in 
constructing an “in-cabin behaviour monitoring system” based on the driver’s 
facial expressions and eye movements. Sitting at a computer and concentrating 
on this footage for hours at a time is draining. Sometimes, Anita feels the 
boredom as a physical force, pushing her down in her chair and closing her 
eyelids. But she has to stay alert, just like the drivers on her screen. In 
return for 45 hours of intense, stressful work a week – possibly with unpaid 
overtime on top – annotators can expect to earn in the region of 800,000 
Ugandan shillings a month, a little over US$200, or approximately $1.16 per hour.
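
That hourly figure is easy to sanity-check. A minimal sketch in Python, assuming a mid-2024 exchange rate of roughly 3,700 Ugandan shillings to the US dollar and four paid weeks a month (both of which are our assumptions, not figures from the book):

    # Back-of-envelope check of the quoted pay, under stated assumptions.
    ugx_per_month = 800_000      # monthly wage quoted above
    ugx_per_usd = 3_700          # assumed mid-2024 exchange rate
    hours_per_week = 45          # working hours quoted above
    weeks_per_month = 4          # assumption: four paid weeks per month

    usd_per_month = ugx_per_month / ugx_per_usd                        # ~216
    usd_per_hour = usd_per_month / (hours_per_week * weeks_per_month)  # ~1.20

    print(f"~${usd_per_month:.0f}/month, ~${usd_per_hour:.2f}/hour")

The exact per-hour figure moves with the exchange rate and with how many weeks are actually paid, which is why it hovers around the $1.16 cited here.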

On the production floor, hundreds of data annotators sit in silence, lined up 
at rows of desks. The setup will be instantly familiar to anyone who’s worked 
at a call centre – the system of management is much the same. The light is 
dimmed in an attempt to reduce the eye strain that results from nine hours of 
intense concentration. The workers’ screens flicker with a constant stream of 
images and videos requiring annotation. Like Anita, workers are trained to 
identify elements of the image in response to client specifications: they may, 
for example, draw polygons around different objects, from traffic lights to 
stop signs and human faces.
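
To make the output of this work concrete, here is a minimal sketch of what a single polygon annotation might look like, loosely modelled on the widely used COCO dataset format – the field names and values are illustrative, not the BPO's or any client's actual schema:

    import json

    # One hypothetical annotation record: a polygon traced around a stop sign.
    # Structure loosely follows the COCO format; all values are illustrative.
    annotation = {
        "image_id": 4211,
        "category": "stop_sign",          # the label requested by the client
        "segmentation": [                 # polygon vertices as x, y pixel pairs
            [512.0, 190.0, 560.0, 190.0, 584.0, 238.0,
             560.0, 286.0, 512.0, 286.0, 488.0, 238.0],
        ],
        "annotator_seconds": 41,          # time on task, logged by the platform
    }

    print(json.dumps(annotation, indent=2))

A production dataset for a driving model needs millions of records like this, each one drawn, checked and timed by a human annotator.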


Every aspect of Anita and her fellow annotators’ working lives is digitally 
monitored and recorded. From the moment they use the biometric scanners to 
enter the secure facilities, to the extensive network of CCTV cameras, workers 
are closely surveilled. Every second of their shift must be accounted for 
according to the efficiency-monitoring software on their computer. Some workers 
we spoke to even believe managers cultivate a network of informers among the 
staff to make sure that attempts to form a trade union don’t sneak under the 
radar.

Working constantly, for hours on end, is physically and psychologically 
draining. It offers little opportunity for self-direction; the tasks are 
reduced to their simplest form to maximise the efficiency and productivity of 
the workers. Annotators are disciplined into performing the same routine 
actions over and over again at top speed. As a result, they experience a 
curious combination of complete boredom and suffocating anxiety at the same 
time. This is the reality at the coalface of the AI revolution: people working 
under oppressive surveillance at furious intensity just to keep their jobs and 
support their families.

When we think about the world of AI development, our minds might naturally turn
to engineers working in sleek, air-conditioned offices in Silicon Valley. What 
most people don’t realise is that roughly 80% of the time spent on training AI 
consists of annotating datasets. Frontier technologies such as autonomous 
vehicles, machines for nanosurgery and drones are all being developed in places 
like Gulu. As tech commentator Phil Jones puts it: “In reality, the magic of 
machine learning is the grind of data labelling.” This is where the really 
time-consuming and laborious work takes place. There is a booming global 
marketplace for data annotation, which was estimated to be worth $2.22bn in 
2022 and is expected to grow at around 30% each year until it reaches over 
$17bn in 2030. As AI tools are taken up in retail, healthcare and manufacturing 
– to name just a few sectors that are being transformed – the demand for 
well-curated data will increase by the day.
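
Those two figures are at least internally consistent: compounding the 2022 estimate at 30% a year for the eight years to 2030 lands just above the projection (a back-of-envelope check, not a forecast of ours):

    # Sanity check: $2.22bn compounding at ~30% a year, 2022 -> 2030.
    market_2022_bn = 2.22
    annual_growth = 0.30
    years = 2030 - 2022        # eight years of compounding

    market_2030_bn = market_2022_bn * (1 + annual_growth) ** years
    print(f"~${market_2030_bn:.1f}bn by 2030")   # ~18.1bn, i.e. "over $17bn"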



The majority of workers in the global south work in the informal jobs sector. Photograph: Yannick Tylle/Getty Images

Today’s tech companies can use their wealth and power to exploit a deep 
division in how the digital labour of AI work is distributed across the globe. 
The majority of workers in countries in the global south work in the informal 
sector. Unemployment rates are staggeringly high, and well-paid jobs with
employment protections remain elusive for many. Vulnerable workers in these 
contexts are not only likely to work for lower wages; they will also be less 
ready to demand better working conditions, because they know how easily they 
can be replaced. The process of outsourcing work to the global south is popular 
with businesses not because it provides much-needed economic opportunities for 
the less well off, but because it provides a clear route to a more tightly 
disciplined workforce, higher efficiency and lower costs.

By using AI products we are directly inserting ourselves into the lives of 
workers dispersed across the globe. We are connected whether we like it or not. 
Just as drinking a cup of coffee implicates the coffee drinker in a global 
production network from bean to cup, we should all understand how using a 
search engine, a chatbot – or even something as simple as a smart robot vacuum 
– sets in motion global flows of data and capital that connect workers, 
organisations and consumers in every corner of the planet.

Many tech companies therefore do what they can to hide the reality of how their 
products are actually made. They present a vision of shining, sleek, autonomous 
machines – computers searching through large quantities of data, teaching 
themselves as they go – rather than the reality of the poorly paid and 
gruelling human labour that both trains them and is managed by them.

Back in Gulu, Anita has just arrived home from work. She sits outside with her 
children in plastic chairs under her mango tree. She’s tired. Her eyes start to 
close as the sun falls below the horizon. The children go to bed, and she will 
not be long after them. She needs to rest before her 5am start tomorrow, when 
she will be annotating again.

Nobody ever leaves the BPO willingly – there’s nothing else to do. She sees her 
ex-colleagues when she’s on her way to work, hawking vegetables on the market 
or trying to sell popcorn by the side of the road. If there were other 
opportunities, people would seize them. She just has to keep her head down, hit 
her targets, and make sure that whatever happens, she doesn’t get laid off. 
Maybe another project will come in; maybe she could change to a new workflow. 
That would be a relief, something a bit different. Maybe labelling streets, 
drawing outlines around signs and trying to work out what it would be like to 
live at the other end of the lens, in a country with big illuminated petrol 
signs and green grass lawns.

This is an edited extract from Feeding the Machine: The Hidden Human Labour Powering AI, by James Muldoon, Mark Graham and Callum Cant (Canongate, £20).
