Are Marginalised Women in India Paying the Price for Safer AI?

By Soumyashree Mohanty, Research and Documentation Unit, CYDA

We use digital platforms every day. We scroll, watch, and react without thinking about how the system works. Often, we see a warning before sensitive content. It tells us that the content may include violence, abuse, or disturbing visuals. We are given a choice: we can decide whether to continue or not. This feature helps protect us.

But not everyone has this choice. For some people, watching such content is part of their daily work. They are known as data annotators or content moderators. Their job is to review large volumes of images, videos, and text. They label this content so that AI systems can learn what is harmful and what should be blocked. Their work helps make online platforms safer for millions of users like us.

They are often called “ghost workers” because their contribution remains invisible. While users benefit from cleaner and safer platforms, the people behind this work are rarely seen or acknowledged.

India has become one of the largest centres for data annotation work. According to NASSCOM, around 70,000 people were engaged in this sector in 2021. A large share of this workforce comes from rural and semi-rural areas. Many belong to marginalised communities with limited access to stable employment. Within this workforce, a significant number are women. Reports by The Guardian show that many of these women are from rural, Dalit, and Adivasi backgrounds. This raises an important question: why are marginalised women forming such a large part of this workforce?

Why Women?

One reason is the lack of employment opportunities in rural areas. Many women have limited options for income. Social restrictions and household responsibilities also limit their mobility. Data annotation work offers a way to earn from home. It requires only a basic internet connection and minimal infrastructure. For many women, this seems like a practical option.

From the perspective of companies, this workforce is cost-effective. They can hire workers at lower wages. There is no need for office spaces or long-term contracts. In many cases, labour protections are weak or unclear. This creates an imbalance where companies benefit more while workers carry the burden.

How does the system work?

At first, the job may appear simple. Many workers begin with basic tasks such as sorting data, tagging images, or filtering spam messages. However, the nature of work can change without clear warning. Over time, workers may be assigned to review more disturbing content. This includes violence, abuse, and explicit material.

For example, Monsumi Murmu, a content moderator from Jharkhand, told The Guardian that she reviews more than 800 images and videos each day. Many of these contain harmful or disturbing visuals. Constant exposure to such content has a deep psychological impact. Workers often report feeling numb, anxious, or emotionally exhausted.

There is also a lack of transparency in job roles. Raina Singh, a data annotator from Uttar Pradesh, shared that her initial tasks involved simple text-based screening. Later, without proper notice, she was assigned to review content related to child sexual abuse. She was not mentally prepared for this shift. The sudden exposure was traumatic for her.

Gaps in the system

Workers are not allowed to speak about their tasks. They are bound by strict Non-Disclosure Agreements. They cannot share their experiences with anyone, even when the work affects their mental health. Fear of losing their job or facing legal action keeps them silent.

Despite the nature of this work, mental health support is often missing. Many companies do not recognise the emotional burden carried by these workers. There are limited systems in place to support them. As a result, people from already vulnerable backgrounds face further risk.

This situation highlights a difficult reality. AI systems depend on human judgment. Machines cannot fully understand harm without human input. However, the responsibility of training these systems falls on those with the least protection.

What can be done?

Clear and transparent job roles
Companies must clearly explain the nature of the work before hiring. Workers should know what type of content they will handle. Any change in role must be discussed in advance. Training should be provided, and consent should be taken before assigning sensitive tasks.

Mental health support
Regular counselling services should be made available. Workers need safe spaces to discuss their experiences. Scheduled breaks and rotation of tasks can help reduce continuous exposure to harmful content.

Recognition of invisible labour
The contribution of data annotators and content moderators should be acknowledged. Their work is essential for building safe digital spaces. Recognising their role can also push companies to take more responsibility.

Fair wages and labour protection
Workers should be paid fairly based on the nature of their work. Labour laws must be applied to ensure safety and dignity. Contracts should include clear terms related to working conditions and rights.

Informed consent and choice
Workers must have the right to refuse certain types of content without fear of losing their jobs. Consent should not be assumed. It must be actively taken and respected.

Stronger policies and accountability
Governments and organisations should create clear guidelines for ethical AI practices. Companies must be held accountable for the well-being of their workers.

AI may be the future, but it is built on human effort. Protecting the people behind it is not optional. It is a necessary step towards building ethical and humane technology.
