To bridge the gap between lofty responsible AI principles and real product decisions, companies should operationalize human rights by systematizing due diligence across the AI lifecycle.
Key highlights:
- Principles need a repeatable process: responsible AI commitments become real only when companies systematize human rights due diligence to guide decisions from concept through deployment.
- Policy and engineering teams should co-own safeguards: ongoing collaboration between policy and technical teams can help translate ideals like fairness into concrete requirements, risk-based approaches, and other critical decisions.
- Engage, anticipate, document, and improve continuously: involving impacted communities, running regular foresight exercises (such as scenario workshops), and building strong documentation and feedback loops make human rights accountability durable instead of a one-time check-the-box exercise.
More and more companies are adopting responsible AI principles that promise fairness, transparency, and respect for human rights, but these commitments are difficult to put into practice when it comes to writing code and making product decisions.
Faris Natour, a human rights and responsible AI advisor at Article One Advisors, works with companies to help turn human rights commitments into concrete steps that are followed across the AI product lifecycle. He says that the key to bridging the gap between principles and practice is embedding human rights due diligence into the framework that guides product development from concept to deployment.
Operationalizing human rights
Human rights due diligence is a structured process that begins with immersion in how the product is being built and what its potential use cases are, whether it is an early concept, a prototype, or an existing product. This is followed by an exercise to map the stakeholders who could be impacted by the product, along with the salient human rights risks associated with its use.
From there, the internal teams collectively create a human rights impact assessment, which examines any unintended consequences and potential misuse. They then test existing safeguards in design and development, as well as in how and to whom the product is sold. “Typically, a new product will have many positive use cases,” explains Natour. “The purpose of a human rights impact assessment is to find the ways in which the product can be used or misused to cause harm.” In Natour’s experience, the outcome is rarely a simple go or no-go decision. Instead, the range of decisions often includes options such as “go with safeguards” or “go, but be prepared to pull back.”

The use of human rights due diligence in the AI product lifecycle is relatively new (less than a decade old), and, as Natour explains, there are five essential actions that can work together as a system:
1. Encourage collaboration between policy and engineering teams
Inside most companies, responsible AI is split between policy teams, which may own the principles, and engineering teams, which own the systems that bring those principles to life. Working with companies, Natour brings these two functions together through a series of workshops, creating structured, ongoing collaboration between human rights and responsible AI experts and the technical teams so they can co-develop responsible AI requirements.
In the early stages of the teams’ joint work, the challenges of turning principles into practice emerge quickly. For example, the sheer number of applications and use cases for an AI product can make it difficult to zero in on the uses that present high risks of adverse impacts. Not all products or use cases need to be treated equally, says Natour, and companies should identify those that could potentially cause the most harm. Indeed, these most-harmful uses may involve a “consequential decision” in fields such as law, employment, or criminal justice, he says, adding that those products should be selected for deeper due diligence.
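The article describes this triage only at the level of process, not code. As a loose illustration, the kind of risk-tiering rule a product team might encode could look like the sketch below; the domain names, tier labels, and UseCase fields are all hypothetical, not prescribed by Natour.

```python
# Hypothetical sketch of risk-based triage for AI use cases; the domains,
# tier names, and fields are illustrative, not taken from the article.
from dataclasses import dataclass

# Fields where a product may inform a "consequential decision"
CONSEQUENTIAL_DOMAINS = {"legal", "employment", "criminal_justice"}

@dataclass
class UseCase:
    name: str
    domain: str              # e.g. "employment", "marketing"
    affects_individuals: bool

def triage(use_case: UseCase) -> str:
    """Flag use cases that warrant deeper human rights due diligence."""
    if use_case.domain in CONSEQUENTIAL_DOMAINS and use_case.affects_individuals:
        return "deep_due_diligence"   # full human rights impact assessment
    return "standard_review"          # lighter-weight screening

print(triage(UseCase("resume screening", "employment", True)))   # deep_due_diligence
print(triage(UseCase("ad copy drafting", "marketing", False)))   # standard_review
```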
2. Consider the principles at each stage of the development process
Broad principles and values, such as fairness and human rights, should be considered at each stage of the lifecycle. For the principle of fairness, for example, teams may assess which communities will use the product and who will be impacted by those use cases. Then, teams should consider whether these communities are represented on the design and development teams working on the product; if not, they need to develop a plan for ensuring their input.
3. Engage with impacted communities and rightsholders
Natour advocates for companies to actively engage with impacted communities and stakeholders, including those who are potential users or who may be affected by the product’s use. This could be the company’s own employees, for example, especially if the company is developing productivity tools for internal use in its own workplace. Special consideration should be given to vulnerable and marginalized groups whose human rights might be at greatest risk.
External experts, such as Natour and his colleagues, hold focus groups with such stakeholders as part of the human rights due diligence process. The feedback from these focus groups can then be used to shape model design and product development, as well as risk mitigation and remediation measures. “In the end, knowing how users and others are impacted by your products usually helps you make a better product,” he states.
4. Establish responsible foresight mechanisms
To prevent responsible AI from becoming a one-time check-the-box exercise, Natour says he uses responsible foresight workshops and other mechanisms as a “way to create space for developers to pause, identify, and consider potential risks, and collaborate on risk mitigations.”
The workshops use personas and hypothetical scenarios to help teams identify and prioritize risks, then design concrete mitigations, with follow-on sessions to review progress. Another approach is developing simple, structured question sets that push product teams to pause and think about harm. For example, Natour explains how one of his clients includes the question “What would a super villain do with this product?” to help product teams identify and safeguard against potential misuse.
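As a rough illustration only, a structured question set like the one Natour describes might be kept as a simple checklist that blocks launch until every question has a documented answer. Apart from the quoted “super villain” prompt, the questions, function, and example feature below are hypothetical.

```python
# Illustrative sketch of a structured pre-launch question set; apart from the
# "super villain" prompt quoted in the article, the questions are hypothetical.
FORESIGHT_QUESTIONS = [
    "What would a super villain do with this product?",
    "Which users or communities could be harmed if the system is wrong or misused?",
    "What safeguards exist today, and how could they be bypassed?",
    "What signal would tell us to pull the feature back after launch?",
]

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return the questions that still lack a documented answer."""
    return [q for q in FORESIGHT_QUESTIONS if not answers.get(q, "").strip()]

review = {FORESIGHT_QUESTIONS[0]: "Impersonation at scale; mitigated by rate limits."}
for question in unanswered(review):
    print("Open item:", question)
```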
5. Create documentation and feedback loops for accountability
As expectations around assurance rise from regulators, customers, and civil society, strong documentation and meaningful, accessible transparency are essential, says Natour. Clear, succinct, and accessible user-facing information about what a model does and does not do, about data privacy, and about other key aspects can help users understand “what happens with their data, as well as the capabilities and the limitations of the tool they are using,” he adds.
Further, transparency should enable two-way communication, and companies should set up feedback loops that drive continuous improvement in how they mitigate potential human rights risks.
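None of this is specified at the code level in the article. As a hedged sketch, user-facing transparency documentation and a feedback loop could be kept in machine-readable form along the following lines, with every field name and example value invented for illustration.

```python
# Minimal sketch of machine-readable transparency documentation plus a feedback
# log; field names and example values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class TransparencyNotice:
    model_name: str
    intended_uses: list[str]
    known_limitations: list[str]
    data_practices: str           # what happens with users' data
    feedback_channel: str         # where impacted users can raise concerns

@dataclass
class FeedbackLog:
    entries: list[dict] = field(default_factory=list)

    def record(self, source: str, issue: str) -> None:
        """Capture feedback so it can feed mitigation and remediation."""
        self.entries.append({"source": source, "issue": issue})

notice = TransparencyNotice(
    model_name="assistive-drafting-v1",
    intended_uses=["internal email drafting"],
    known_limitations=["may produce inaccurate summaries"],
    data_practices="prompts retained for 30 days for abuse monitoring",
    feedback_channel="responsible-ai@example.com",
)

log = FeedbackLog()
log.record("employee focus group", "tone suggestions read as biased for non-native speakers")
print(len(log.entries), "feedback item(s) recorded for", notice.model_name)
```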
The hardwired future
Effectively embedding human rights into the AI product lifecycle starts with a shared governance model between a company’s policy and engineering teams. Together, they can hardwire human rights into the way AI systems are imagined, built, and brought to market.
You can find more about human rights considerations around AI in our Human Layer of AI series.