
AI & human rights: The importance of explainability by design for digital agency

Natalie Runyon  Director / ESG content / Thomson Reuters Institute

· 5 minute read

AI technologies are reshaping decision-making across critical areas of life, raising structural challenges to privacy, autonomy, and access to basic human rights

AI systems increasingly shape access to rights, services, and opportunities, which makes the ability to understand, evaluate, and respond to AI-driven decisions a structural requirement for exercising human rights. This condition, called digital agency, ensures that individuals retain autonomy and accountability in environments governed by automated systems.

Shoshana Rosenberg, a recognized AI governance and data protection expert and Co-Founder of Women in AI Governance, calls for the formal recognition of digital agency as a fundamental human right. Securing digital agency requires embedding explainability into AI systems at the design level, making system outputs understandable, accessible, and actionable. Without digital agency, individuals are exposed to systems that decide without visibility, affect without consent, and deny the possibility of meaningful redress.



Today, many AI systems operate without meaningful explanation, creating an explainability gap that prevents individuals from recognizing or responding to the impact AI-driven decisions may have on their lives. This unchecked deployment of opaque AI can systematically displace individual agency, creating environments in which decisions are made without visibility or contest, Rosenberg warns.

Current legal frameworks, including the European Union’s AI Act, attempt to mitigate systemic risks through classification and documentation requirements. However, they do not secure operational explainability for individuals affected by AI-driven decisions. Rosenberg argues that recognizing digital agency as a human right is essential to correcting this failure. She advocates embedding explainability into AI systems as a condition for preserving autonomy within increasingly automated governance structures.

Preserving digital agency through explainability

AI governance frameworks often conflate transparency with explainability, although the two concepts serve different functions. Transparency provides limited information about a system's existence or purpose, while explainability ensures that individuals can understand how decisions are made, what influences them, and how they can respond. Most legal frameworks mandate transparency but do not compel explainability, leaving individuals without the means to navigate or challenge AI-driven outcomes.

Embedding explainability by design requires systems to support functional understanding from the outset. Rosenberg defines this threshold as minimum viable explainability: ensuring that AI systems make influencing factors and decision outcomes intelligible enough for individuals to assess, understand, and, if necessary, act upon meaningfully. Systems designed without explainability embed opacity as a structural feature, cutting individuals off from seeing how decisions affect them, questioning outcomes, and seeking correction when needed.

Mandating minimum viable explainability ensures that individuals retain agency within AI-mediated environments. Digital agency must serve as the foundation of regulatory frameworks because, without such agency, legal protections remain abstract and unenforceable, Rosenberg explains.

Learning from the history of privacy

The human right to privacy was recognized internationally in 1948, but it did not meaningfully shape digital regulation before systemic harms emerged. Rosenberg says AI systems now operate in a similarly underregulated space and argues for anchoring AI regulation to digital agency, warning that without this foundation, systemic harms will again outpace regulatory response.

In the area of privacy, for example, the United Nations’ Special Rapporteur role helped consolidate regulatory momentum already underway. A Special Rapporteur for AI & Human Rights would be tasked with accelerating global recognition and protections that have yet to fully emerge. Establishing this role requires a UN Human Rights Council resolution that has not been formally proposed, reflecting the delayed global response to technologies already impacting individual rights.

Privacy protections emerged reactively; digital agency protections must be built proactively. Recognizing digital agency as a human right is a crucial step toward establishing those protections before dependencies erode autonomy beyond repair.

Enshrining digital agency

As AI evolves, protecting human agency becomes imperative. However, recognition must come first: enshrining digital agency as a human right will create the foundation for systemic accountability.

To get there, we need to pursue a three-part strategy that includes:

      1. Recognizing the right to digital agency — Concerned individuals and organizations need to advocate for the establishment of a UN Special Rapporteur for AI Governance and the formal recognition of digital agency as a protected human right. Advocates should also mobilize support from human rights organizations, policymakers, and legal experts to initiate and advance a UN Human Rights Council resolution affirming digital agency as fundamental to autonomy and dignity.
      2. Establishing minimum viable explainability standards — Next, supporters should define standards for AI systems that set clear guidelines for what individuals need to preserve agency. International collaboration is essential to develop these standards and integrate them into certification and compliance processes.
      3. Mandating explainability by design — Requiring that new AI systems embed explainability from the outset, ensuring usability and intelligibility, is a critical step. Regulatory frameworks must ensure that explainability becomes a baseline condition for AI deployment, with voluntary leadership strengthening early adoption.

Today, AI is reshaping the systems that govern individuals, determine rights, and affect autonomy. Protecting digital agency ensures that individuals can understand, navigate, and challenge the decisions that shape their lives. Securing digital agency now is essential to ensuring that technology strengthens human dignity rather than eroding it.
