
Human layer of AI: Protecting human rights in AI data enrichment work

Natalie Runyon  Director / Sustainability content / Thomson Reuters Institute

· 6 minute read

Rapid AI growth has fueled demand for data enrichment work, but the workers who perform it face significant human rights risks. To mitigate harm, companies need to implement robust, value-chain-wide human rights due diligence and responsible contracting that protect people while ensuring data quality.

Key highlights:

      • Human rights risks are elevated for data enrichment workers — Data enrichment workers can face low and unstable pay, overtime pressure driven by buyer timelines, harmful content exposure with weak safeguards, limited grievance access, and uneven legal protections that hinder workers’ collective voice.

      • Human rights due diligence is essential for companies — Companies as buyers of these services must map subcontracting tiers, assess risk by employment model, document worker protections down to Tier-2 and Tier-3 suppliers, and audit and monitor their own rates, timelines, and payment terms to avoid reinforcing harm to workers.

      • Responsible contracting and remedy are necessities — Contracts should embed shared responsibility and include fair rates, predictable volumes, realistic deadlines, funded health and safety and mental-health supports, effective grievance channels, and remediation.


Demand for data enrichment work has surged dramatically with the rapid development and expansion of AI technology. This work encompasses collecting, curating, annotating, and labeling data, as well as providing model training and evaluation — all of which are critical activities that improve how data functions in technological systems.

However, the workers performing these tasks currently operate under different employment models, according to Keri Lloyd from Article One Advisors, a corporate human rights advisory firm. Some workers are in-house employees at major AI developers, others work for business process outsourcing (BPO) companies, and many are independent contractors on gig platforms on which they bid for tasks and get paid per piece.

Human rights issues in data enrichment work

Data enrichment workers sit at the sharp end of the AI economy, yet many struggle to earn a stable, decent income. In particular, pay for gig workers often falls short of a living wage because tasks are sporadic, payments can be delayed, and compensation is frequently piece‑rate. Because work flows through multiple intermediaries, which include BPOs, subcontractors, and platforms, fees and margins get skimmed at each layer and shrink take‑home pay — another area of exploitation for today’s digital labor workforce.

Another human rights issue is the erosion of workers’ right to rest, leisure, and family life; in some places, working hours even breach guidance from the International Labour Organization (ILO) or local labor laws. Buyer purchasing practices with aggressive deadlines are a significant upstream driver of this overtime pressure.


For many, the work itself carries health risks. Labeling and moderation can require repeated exposure to violent or graphic content, with well‑documented mental‑health impacts. Yet safeguards are uneven. Indeed, workers may lack protected breaks, task rotation, mental‑health support, adequate insurance, or the option to switch assignments. Even when content is not graphic, strain shows up as ergonomic problems, stress, and disrupted sleep.

When harm occurs, remedy can be hard to access. Platform-based work setups often provide no clear, trusted point of contact, and reports of retaliation deter complaints. Effective operational grievance mechanisms are often missing, leaving workers without credible paths to redress.

Finally, national labor protections vary widely, and platform workers in particular often fall through regulatory gaps. Because work is individualized and online, forming unions or works councils is harder. This weakens workers’ collective voice just where and when it is most needed to identify risks, negotiate improvements, and secure remedies.

Due diligence for companies buying data enrichment services is essential

When companies procure data enrichment services, they must recognize that the responsibility to respect human rights extends throughout the entire value chain, not just to themselves and their direct suppliers. Building trusted partnerships with suppliers helps companies identify issues before they cause harm and creates mutual accountability for the humans behind the algorithms.

Article One Advisors’ Lloyd explains that the mandatory baseline starts with human rights due diligence, which spans areas such as:

      • Risk identification and assessment — The first step for companies is to identify and assess risks by understanding their suppliers’ model. This means knowing which groups of workers are full-time employees, contracted workers, or platform-based gig workers. Each model carries different risk profiles.
      • Subcontractor ecosystem mapping — Tracing the subcontracting chain to see how many layers exist between the supplier and the workers is essential. Fees and pressures compound at each tier of the value chain, says Lloyd.
      • Documentation of worker protections in Tier 2 and Tier 3 suppliers — Assessing and promoting worker protections at every layer of the value chain — making sure wage structures are clearly defined and equitable, health and safety measures are adequate, protections against exposure to harmful content exist, and grievance mechanisms are effective — is a baseline element of human rights due diligence.
      • Examination of company’s own practices — Finally, it is necessary for companies to ensure that their own procurement standards and contracts are not reinforcing human rights harms. This includes companies confirming that their contract terms, timelines, and payment schedules are not inadvertently forcing suppliers to cut corners.

Responsible contracting and remedy mechanisms

Companies as buyers of data enrichment services also must establish shared responsibility for worker outcomes among themselves, BPOs, platforms, and model developers. Comprehensive, clear human-rights standards, living-income benchmarks, and shared responsibility are essential elements of good purchasing practices. More specifically, these require fair rates for work, predictable volume expectations, and realistic timelines so that suppliers are not pushed into excessive hours. In addition, budgets should include cost-sharing for audits, key risk management measures (such as mental health support), and occupational health and safety controls.

Smart remediation turns harmful situations into improved conditions by providing back-pay for underpayment, medical and psychosocial care after exposure to harmful content, contract adjustments to remove perverse incentives, and time-bound corrective action plans co-designed with worker input. As a last resort when buyer and supplier need to part ways, a responsible exit is planned with notice, transition support, and no sudden contract termination that strands workers.

Similarly, grievance mechanisms for platform workers — who are often dispersed across geographies, classified as independent contractors, and lack line managers or union channels — need to be contractually documented. Effective grievance redressal needs to include confidential mechanisms and remediation processes, in-platform dispute tools, independent investigators for complaints, multilingual facilitation, and joint buyer-supplier escalation paths to bridge gaps in labor-law protection and deliver credible remedies at scale, Lloyd notes.

Promoting quality through worker well-being

Protecting data enrichment workers is not only an ethical imperative but also essential for AI quality itself. When workers face excessive hours, inadequate pay, or exposure to harmful content without proper support, the resulting stress and burnout directly degrade data quality. Companies must recognize that responsibility for worker well-being and quality data outcomes extends throughout the entire value chain and does not rest with BPO providers alone.

