By embedding human-rights due diligence across the entire AI lifecycle, engineering and development teams can better anticipate harms, build safeguards by design, and earn durable user trust without stalling innovation.
Key takeaways:
- Build due diligence into the process — Make human-rights due diligence routine from the decision to build or buy through deployment by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.
- Identify risks early — Use practical methods to surface risks early by engaging end users and running responsible foresight workshops and bad-headlines exercises.
- Use due diligence to build trust — Treat due diligence as an asset, not a compliance box to tick: use it to de-risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.
AI is reshaping how we work, govern, and care for one another. Individuals are turning to cutting-edge large language models (LLMs) for emotional support as they grieve and cope during difficult times. "Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself," says Chloe Poynton, co-founder and principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.
These unexpected uses of AI are reframing risk: in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI, alongside who stands to benefit from it, must be built into the design process.
To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle, from the decision to build or buy through deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.
Understanding human rights risks in the AI lifecycle
Human rights risks can surface at every phase of the AI lifecycle. They first emerged in the content moderation work used to train frontier LLMs and are now showing up elsewhere. Data enrichment workers who refine training data and the data center staff who keep these systems running are among the most exposed to labor risks. Often located in lower-income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.
During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, which can undermine rights to health and political participation. Likewise, design choices can often translate into discriminatory outcomes.
The use of AI-enabled tools can also compound these harms. Powerful models can be misused for fraud or human trafficking, and deeper integration with sensitive data can heighten privacy and security risks.
A surprising pattern in the field exacerbates the risk: people increasingly use AI for therapy-like support and disclose emotional crises and self-harm. This intimacy widens product and policy obligations, including age-aware safeguards and clear limits on overriding protections.
Why human rights due diligence is urgent
That's why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the AI lifecycle, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Above all, they need to answer the question, "What happens if this technology gets into the hands of a bad actor?"
From there, the process demands an analysis of severity, which weighs scale, scope, and remediability, along with the likelihood of each use. The final step evaluates current controls across the supply chain, model design, deployment, and use phases to identify gaps.
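To make this sequence concrete, here is a minimal Python sketch of how a team might record each assessed use in a simple risk register: severity is derived from scale, scope, and remediability, weighted by likelihood, and uses without documented controls are flagged as gaps. The class, field names, and 1-to-5 scales are illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass, field

# Minimal, illustrative risk register for AI human-rights due diligence.
# Scores and field names are assumptions for this sketch, not a standard.

@dataclass
class RightsRisk:
    use_case: str            # intended or unintended use being assessed
    right_at_stake: str      # e.g. privacy, non-discrimination, labor rights
    scale: int               # 1-5: how grave the harm would be
    scope: int               # 1-5: how many people could be affected
    remediability: int       # 1-5: how hard the harm would be to put right
    likelihood: int          # 1-5: chance the harm materializes
    controls: list[str] = field(default_factory=list)

    @property
    def severity(self) -> float:
        # Severity combines scale, scope, and remediability (simple average here).
        return (self.scale + self.scope + self.remediability) / 3

    @property
    def priority(self) -> float:
        # Severity weighted by likelihood drives the mitigation order.
        return self.severity * self.likelihood

def control_gaps(risks: list[RightsRisk], min_controls: int = 1) -> list[RightsRisk]:
    """Return risks that lack documented controls, highest priority first."""
    flagged = [r for r in risks if len(r.controls) < min_controls]
    return sorted(flagged, key=lambda r: r.priority, reverse=True)

# Example: one unintended "bad actor" use and one sensitive intended use.
register = [
    RightsRisk("voice cloning for fraud", "security of the person",
               scale=5, scope=3, remediability=4, likelihood=3),
    RightsRisk("therapy-like support sought by minors", "right to health",
               scale=4, scope=4, remediability=3, likelihood=4,
               controls=["age-aware safeguards", "crisis escalation pathway"]),
]

for risk in control_gaps(register):
    print(f"Gap: {risk.use_case} (priority {risk.priority:.1f})")
```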
The biggest barrier to layering a human rights lens onto AI is the need for speed to market. The race to ship minimum viable products, combined with competitive pressure, can eclipse robust governance, yet early due diligence may prevent costly pullbacks and bad headlines. Article One's Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early "ensures that when it does launch, it has the trust of its users," she adds.
How to embed safeguards without slowing teams
The most efficient path to translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for engineers and product teams. This requires "engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products," Poynton explains. More specifically, this includes:
Identifying unexpected harms — One of the most critical yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen. First, engage with end users to help identify potential harms by asking, "What are some issues that we may not be considering from the perspectives of accessibility, trust, safety, and privacy?" Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair these with a bad-headlines exercise to anticipate front-page failures. Then build the resulting protections in before launch.
Implementing concrete controls — Embedding safety by design should cover both content and contact, a lesson from gaming, where grooming risks require more than just filters. Build age-aware and self-harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse-response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels. (A sketch of how such controls can be made reviewable follows this list.)
Treating due diligence as value-creating, not box-checking — Crucially, frame due diligence as an asset rather than a liability. “Make your product better and ensure that when it does launch, it has the trust of its users,” Poynton adds.
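To show how such controls can become reviewable artifacts rather than aspirations, here is a rough sketch that expresses them as a declarative policy checked by a simple pre-launch gate. The schema, field names, checks, and placeholder contact address are assumptions made for this sketch, not an established standard.

```python
# Illustrative safety-by-design policy, expressed as plain data so product,
# trust & safety, and engineering teams can review the same artifact.
# Field names and values are assumptions for this sketch, not a standard schema.

SAFETY_POLICY = {
    "content_and_contact": {
        "self_harm_protocol": "route to crisis resources, never dismiss",
        "grooming_detection": True,          # contact risk, not just content filters
    },
    "age_aware": {
        "parental_controls": True,
        "override_of_protections": "not permitted for minors",
    },
    "sales_and_access": {
        "customer_vetting": "required for high-risk sectors",
        "usage_restrictions": ["no surveillance of protected groups"],
        "abuse_response_contact": "abuse@example.com",   # placeholder address
    },
    "supply_chain": {
        "supplier_standards": ["fair wages", "safe conditions",
                               "freedom of association", "grievance channels"],
    },
}

def launch_gate(policy: dict) -> list[str]:
    """Return blocking issues if core safeguards are missing before launch."""
    issues = []
    if not policy["age_aware"]["parental_controls"]:
        issues.append("parental controls not enabled")
    if not policy["content_and_contact"]["grooming_detection"]:
        issues.append("contact-risk detection missing")
    if not policy["supply_chain"]["supplier_standards"]:
        issues.append("no supplier standards defined")
    return issues

print(launch_gate(SAFETY_POLICY) or "core safeguards in place")
```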
Additional considerations
Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.
Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.
Finally, remember that AI’s environmental footprint is a human rights issue. “There is a human right to a clean and healthy environment,” Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.
You can find out more about the ethical issues facing AI use and adoption here.