
The digital crown: Reclaiming human dignity in the age of AI

Illustration of a woman sitting cross-legged with various items associated with each arm: a dove of peace, a megaphone, scales of balance, a camera and a plant. Image by Sabeen Yameen for APC, used with permission.

By Rebecca Ryakitimbo

This article is part of the series “Don’t ask AI, ask a peer,” a collaboration among Global Voices, the Association for Progressive Communications (APC), and GenderIT. The series aims to re-emphasise the importance of knowledge sharing among people, as has been done for decades. You can follow the series on APC.org, GenderIT.org, and globalvoices.org. It is also part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” You can support this coverage by donating here.

AI, like any other technology, is built for people and meant to be used by people. From a human rights perspective, the closest idea that shapes my view of AI comes from a concept my civics teacher taught me: democracy is government of the people, by the people, and for the people. In the same way, AI guided by human rights should embody these principles and be “of the people, by the people, and for the people.” Such an approach ensures that technology supports human dignity, freedom and well-being, rather than undermining them. It places human beings at its core and reflects the realities of human life.

Human-centred AI should embed humanity as the foundation upon which systems are designed, built, used and governed. Regardless of who someone is, where they live, what they believe, or how they choose to live their life, AI should treat everyone with fairness and dignity. With this in mind, there are several important ways we can create and foster a human rights-based approach to AI.

Human rights in practice

As someone anchored in human rights law and practice, I am fascinated by how the idea of “human rights” developed over time. In ancient history, early human rights laws were created to prevent those in power from mistreating others. One of the earliest examples is the Cyrus Cylinder (539 BC), which records King Cyrus the Great freeing slaves and declaring that people had the freedom to choose their own religion. During the Middle Ages, another milestone emerged with the Magna Carta in England (1215), when rebels forced King John to accept that no one, not even the king, was above the law. This introduced the important concept of due process. In the 1700s, philosophers like John Locke argued that people possess natural rights simply because they are human, not because rulers grant them. These ideas influenced the United States’ Declaration of Independence (1776), which declared that all men are created equal, and France’s Declaration of the Rights of Man and of the Citizen (1789), which recognised individuals as citizens rather than as subjects of a king.

After the horrors of World War II, the 1948 Universal Declaration of Human Rights (UDHR) was adopted. It later inspired two binding treaties, the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR), collectively known as the International Bill of Human Rights, which continues to shape laws and societies today.

Human rights have been, and continue to be, the base on which other laws, including national constitutions, are shaped, and they should strongly influence how AI is built and governed. We can and should approach the regulation of AI by taking into account the fundamental rights we have been advocating for centuries. Here, I offer an overview of this approach, considering five non-negotiable rights:

The right to life and liberty, human-centred

For AI to adhere to human rights, it must protect the right to life and liberty, which in this sense must never ignore the human-in-the-loop principle. This means placing human agency at the centre, but also ensuring human oversight. It also means condemning the use of AI in militarisation, given its power to perpetuate inhumane acts, including genocide. AI needs to be built, used and governed in a manner that follows a holistic safety approach, one that protects life and people’s freedom.

The right to equality, algorithmic fairness and anti-bias

Everyone has the right to be treated the same as everyone else, regardless of race, gender or religion. While AI should treat everyone as equal, there are power imbalances in its design that need to be addressed. These stretch from the data that trains AI models to access to AI-enabled infrastructure, such as compute power and frameworks. AI builders, regulators and other stakeholders need to be aware of systemic and structural biases in their datasets that could exclude whole demographics of society, and seek to counter them through approaches that are inclusive and justice-oriented. One way to do this is by performing bias audits on training data, developing feedback loops, and documenting their processes to embody the principles of ethical and responsible AI that is explainable, accountable, transparent and fair.
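To make the idea of a bias audit concrete, here is a minimal sketch of one common check: comparing a system’s rate of positive outcomes across demographic groups. The dataset, group labels and the 0.8 threshold (a widely cited rule of thumb for “disparate impact”) are illustrative assumptions, not part of the article.

```python
# Hypothetical bias-audit sketch: compare positive-outcome rates per group.
from collections import defaultdict

def selection_rates(records):
    """Share of positive outcomes for each group in (group, outcome) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of the lowest group rate to the highest; 1.0 means parity.
    A common rule of thumb flags ratios below 0.8 for review."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Toy data: (group, outcome) pairs, e.g. loan approvals (1) or denials (0).
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

ratio = disparate_impact(data)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 = 0.33, flagged
```

A real audit would go far beyond a single ratio, examining how the data was collected and who is missing from it, but even this simple measurement makes an imbalance visible and documentable.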

The right to speak freely

This right is often at risk in the age of AI-driven social media, search engines and generative AI. Users have a right to know why certain information is being promoted to them and why other information is being hidden, but this is often not the case. With major languages mainstreamed and others not given a platform, more and more speech is being limited, even blocked, by robots and algorithms. AI is also increasingly being used to curb free speech, with bots and online trolls on the rise. AI needs to be created with the belief that people have a right to free speech.

The right to essentials, equitable access and resource allocation

AI can optimise how we distribute food, manage power grids, and provide remote health care. Human-centred AI ensures these benefits don’t just go to the wealthiest nations or individuals. The main question to ask ourselves at this point is whether AI is being used to make essentials more affordable and accessible, or whether it is creating a digital divide where only those with access to high-speed technology get the best services. To ensure equity, inclusive design must be at the centre of how AI is built, used and governed.

The right to privacy, data sovereignty and consent

AI is built on data, but there should be guardrails on how much data, and what data, it can use. It needs to prioritise data-minimisation techniques like differential privacy, where noise is added to data so individuals can’t be identified, or federated learning, where AI is trained on your device without your data ever leaving it. We must push for users to have the right to be forgotten by AI. We must remember that privacy is a gateway right; without it, we cannot have freedom of expression or assembly. Without privacy being part and parcel of AI, we risk mass surveillance and even identity theft.
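The differential privacy idea mentioned above can be sketched in a few lines: instead of releasing an exact statistic, a system releases the statistic plus carefully calibrated random noise (here, Laplace noise), so no single person’s presence in the data can be inferred from the answer. The query, the example ages, and the epsilon values are illustrative assumptions only.

```python
# Minimal sketch of a differentially private count (Laplace mechanism).
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Count items matching `predicate`, plus Laplace noise with scale
    1/epsilon (the sensitivity of a count is 1). Smaller epsilon means
    more noise and stronger privacy."""
    true_count = sum(1 for v in values if predicate(v))
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [23, 35, 41, 29, 52, 60, 18]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"noisy count of people 40+: {noisy:.1f}")  # true count is 3; output varies per run
```

The point of the sketch is the trade-off it makes visible: the published answer stays useful in aggregate while any individual can plausibly deny being counted at all.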

If an AI system violates any of these five rights, there must be a legal way to seek redress. By grounding AI in the rights to life, equality, speech, essentials and privacy, we ensure it serves as a mirror of our highest values, rather than a magnifier of our oldest biases. It is we, humans, who give AI purpose and agency. The struggle for human rights has always been about shifting power from the few to the many, and today that struggle plays out on digital terrain, with tools such as AI.

Rebecca Ryakitimbo is a feminist technologist and researcher working at the intersection of AI, language data, gender justice and digital equity. She has led community-driven initiatives like the Community-Based Wildlife Network, held fellowships with Google, Mozilla, the Stimson Center and the Internet Society, and supports feminist tech spaces such as the African Women School of AI and the Gendering AI conference, which she curates. As part of the Local Networks initiative (LocNet), led by APC and Rhizomatica, she supports community-centred connectivity initiatives by facilitating communities of practice and researching community-centred connectivity and local services for equitable, locally led digital ecosystems.

Sabeen Yameen is an architect and artist from Karachi, Pakistan. Sabeen is inspired by the enchantment and whimsy hidden in the everyday, ordinary moments. She practices independently in Lahore, and also works as a designer and animator for Miss O and Friends, a safe social platform for young girls.
