Image by Sabeen Yameen for APC, used with permission.
By Hija Kamran
This article is part of the series “Don’t ask AI, ask a peer,” a collaboration among Global Voices, the Association for Progressive Communication, and GenderIT. The series aims to re-emphasize the importance of knowledge sharing among people, as has been done for decades. You can follow the series on APC.org, GenderIT.org, and globalvoices.org. It is also part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” You can support this coverage by donating here.
My work constantly pushes me to interrogate new technologies, to ask how they are designed, who they are built for, how they are governed, who benefits from them, and who is left to deal with the consequences on their own. That scepticism has made me a late adopter of new technology more times than I can count.
Last year, I seriously started questioning whether an ethical and feminist AI is even possible. Could there be a version of this technology that allows us, as feminists and human rights advocates, to engage with it without compromising our politics or our commitment to defending intersectional human rights?
That hesitation, and that lack of urgency to jump onto every new tech trend, comes from years of watching how tech companies operate. Again and again, they have made clear that their primary commitment is to their business models, not to the people using their platforms. “Senator, we run ads,” as Mark Zuckerberg once said. But beyond the statements that go viral, there are also those made in closed meetings. For example, I recently asked a tech company representative at a rights conference how their social media platform differentiates between harmful and harmless engagement when serving ads. They told me, “I encourage people to read our terms of service.” The assumption that people have not read these documents, especially in a room where everyone has spent years interrogating the business models of tech companies, told me how little appetite there is for meaningful engagement and transparency, let alone accountability.
So when I question whether AI can ever be ethical or feminist, even in some alternate timeline, it doesn’t come from cynicism for the sake of it, but from years of watching patterns repeat. The promises to do better, the carefully crafted language of responsibility, the performance of listening, none of it has meaningfully shifted how these systems are built or who they serve. If anything, it has made clearer that when it comes to protecting our rights, we are largely on our own. No government or tech company is coming to defend them for us or with us.
So when we talk about a human rights approach to AI, it means confronting and dismantling the systems that are intentionally built to serve power, not people, and refusing to accept that this is the only way these technologies can exist.
Challenging the tech status quo
This proposal starts from a place we must not ignore: tech is not neutral. It actively reinforces the same patriarchal and colonial power structures that many of us have been living under for generations — a powerful entity (in this case, a tech corporation) dictates what people’s social interactions look like, and the public is expected to comply without question. These systems are designed by specific actors, in specific locations, with very particular worldviews. So it matters who is building them, who is controlling them, where they are being built, who gets to define what counts as a “gold standard,” and who is expected to fall in line for these systems to continue working uninterrupted. All of this shapes how power is exercised over both narratives and lives.
One of the first things we need to be honest about is that tech has never been neutral. That idea has always been more of a marketing strategy that many of us have fallen for, because every system carries the values, priorities and biases of the people and institutions that build it. AI is no different: it is built on infrastructure and trained on data that reflects the world as it is — unequal, biased and often violent — and then it reproduces those patterns at a larger scale, with a layer of authority that makes them harder to question.
Take training data, for instance. These systems are fed enormous amounts of information scraped from the internet, from public records to everyday interactions we don’t even realise we’re part of. That data is shaped by histories of exclusion, racism, sexism and economic inequality. When AI learns from it, it encodes those patterns, sometimes amplifies them, and then presents them back to us as if they were neutral outputs. Content that is labelled as “intelligent” is often just a more efficient way of repeating what already exists.
Then there are corporate biases, which are far less subtle. The companies building these systems are not public institutions; they are profit-driven entities with shareholders and growth targets. That shapes everything, from which problems are deemed worth solving to how quickly systems are deployed, often without their consequences being fully understood. Whose land will be acquired for vast data centres that leave lasting ecological damage but increase shareholder value? Whose knowledge will be stolen and appropriated in the corporate race to develop the most advanced AI model? Whose life will be labelled “collateral damage” in the drive for precision? Who will be pushed into silence and invisibility by the noise around the innovation these systems promise?
All of this leads to inequality becoming harder to trace because it is hidden behind layers of algorithms, code, automation, complexity and carefully crafted marketing strategy. The digital world extends the problems of the physical one, often in ways that are more difficult to see and challenge.
And perhaps most troubling is the way AI contributes to the growing sense of dehumanisation. In militarised contexts, especially, people are reduced to data points, targets to be identified, tracked and eliminated within seconds. Decisions that carry life-and-death consequences are increasingly mediated through systems that do not understand context, history or humanity. The reduction of humans to inputs and outputs, signals to be processed, targets to be achieved, makes the harms easier to justify in reports and briefings. The deaths and destruction become a success rate, and people are reduced to just numbers on file, which is a deliberate political choice about how we value human life.
It can’t be us
All of this also feeds into a growing narrative that AI will eventually replace humans. But if we take a step back and look at the reality, this idea starts to fall apart. What AI can do is recognise patterns, process large amounts of data, generate responses that feel convincing, and replicate a tone that feels human. What AI cannot do is be human. It cannot understand context in the way we live it, feel care, hold relationships, or carry the weight of lived experience.
The replication of patterns to depict human-like connection is not the same as human connection. It doesn’t come from understanding or empathy; instead, it comes from predictions that are just statistical guesses based on what data has been fed into the system. And when we start treating that as equivalent to human interaction, we end up devaluing what it actually means to relate to one another. Care work, community building, resistance, empathy, joy, showing up for each other — these are not functions you can automate.
Even the idea of “conscious” AI, which keeps resurfacing in tech conversations, is often just a rebranding of more complex prediction systems. At the end of the day, these models are still generating outputs based on probabilities shaped by the data. They don’t know what they’re saying, nor do they understand the consequences, because they don’t carry responsibility towards another being. That distinction matters, especially when these systems are being positioned as decision-makers in areas that significantly affect people’s lives.
And perhaps that’s where this connects back to the bigger issue. When we start to believe that humans can be replaced, it becomes easier to accept systems that already treat people as data points, whether it’s in research, hiring, policing, health care, welfare or warfare. The same logic that reduces people to inputs also makes them disposable. So pushing back against this narrative is about holding on to the idea that human lives, empathy, experiences, realities and relationships cannot be reduced to something a machine can simulate and optimise.
If we are serious about even imagining a human rights approach to AI, we first have to let go of this idea that technology can replace us, or can be prioritised over us, or can define what it means to be human. It can’t, and it shouldn’t. Because everything this technology draws from — land, environment, resources, time, energy, entire ecosystems — comes from a world that exists for the living beings — humans, flora, fauna — that have inhabited it since long before any of these systems were even conceptualised. Reframing that relationship means recognising that AI should never come at the cost of life or dignity, or the conditions that make life possible in the first place.
Need for accountability
However, the responsibility of protecting the world we live in cannot be pushed onto the very people who have the least power to shape these technologies. Right now, decisions are being made in rooms that most of us will never have access to, and still, we are the ones expected to live with the consequences. A human rights approach means challenging that imbalance and insisting that accountability sits where power does.
And maybe that also means we should be sceptical from the very beginning, not after harm has already been done, but at the point where these technologies are introduced and commercialised. It is in our interest to ask uncomfortable questions early on, like where does this come from, who built it, how does it work, and who actually benefits from it? Because without that, we end up accepting these systems as if they were inevitable, instead of seeing them for what they are: choices. And in doing so, we lose sight of the fundamental idea that the future should be built around living beings, and not optimised for machines.
Hija Kamran (she/her) is the lead editor of GenderIT.org and advocacy strategist within APC’s Women’s Rights Programme. Through her policy and campaign work, she specialises in the intersection of technology, gender and human rights.