Image by Ellena Ekarahendy for APC. Used with permission.
By Débora Prado
This article is part of the series “Don’t ask AI, ask a peer,” a collaboration among Global Voices, the Association for Progressive Communications, and GenderIT. The series aims to re-emphasise the importance of knowledge sharing among people, as has been done for decades. You can follow the series on APC.org, GenderIT.org, and globalvoices.org. It is also part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” You can support this coverage by donating here.
I am Brazilian, and here we have an iconic song about our largest city, São Paulo, in which the composer describes witnessing “the power of money, which builds and destroys beautiful things.” The idea of “the power of money” could easily be replaced by “human action.” Human activity is rife with contradictions. The very same human being who creates art, who imagines futures, who develops technologies, is the one who uses them for their own destruction.
But if we take a closer look, we will realise that humans are not all the same. Recently, the “QuitGPT” boycott gained traction in the face of OpenAI’s close collaboration with the United States administration, including a partnership for surveillance and war. The human signing the deal with the Pentagon was billionaire Sam Altman, a gay white man from the United States, and the CEO and co-founder of this big corporation. On the other hand, recently, in Brazil, a local civil society network launched TybyrIA, the first open source AI model in Portuguese to detect anti-LGBTQ+ hate speech. The people behind this network, named “Código Não Binário” (Non-Binary Code), are a Brazilian transvestite/transfeminine hacker, Veronyka Gimenes, and a bisexual cisgender woman, Amanda Claro.
These two examples — a multi-billion-dollar corporation fuelling global violence under the guise of the “future of tech,” and another attempting to counter violence with limited resources in a particular geographical area — serve as evidence. They reveal that some “humans” remain far more visible than others, and that our values, dreams and decisions can be vastly different. In fact, “human” has become a label created within white, Western, male, cisheteronormative domination, based on this idea of a universal “human” that defines the “others” — those who would be deemed less human or less deserving of basic rights. The few human beings who created this artificial divide between themselves and nature are the same ones who destroy communities, rivers and forests to build data centres, and who use AI to dehumanise “targets” in war, deploying it as a tool for surveillance and attacks in acts of genocide. In the hands of colonisers, authoritarian leaders and billionaires, technology becomes a means of concentrating and monetising power that draws on a false narrative/performance of progress, urgency and imperative necessity. But they don’t represent all humans.
At this moment, you are reading an article that is part of a series called “Don’t ask AI, ask a peer.” Here we — APC, GenderIT.org and Global Voices — are affirming the value of human diversity and creativity, and going a bit further by trying to demonstrate it in practice. In this series, we developed two basic questions, phrased as questions people might ask AI, but in this case, answered by authors and illustrators from diverse countries, positionalities, and backgrounds. Throughout April, we published the articles that formed part of the series, and what we learned on this journey full of contradictions was revealing. Here’s a brief overview from my perspective.
In our editorial, we “re-emphasise the importance of knowledge sharing among people, as has been done for decades: communal exchange of information and expertise based on years of lived experiences, informed by local realities, and motivated by the need to feel connected with one another.” In this series, we also turn to the human. But here we tried to have some diversity. We turn to activists, journalists, writers and researchers. To dissenting bodies. To artists. To perspectives from the Global Majority.
We also had many absences, starting with a confirmed author from Lebanon who was not able to submit their article amid Israel’s colonial attacks in the south of the country. As you read these lines, millions of people are suffering violence and rights violations in the Southwest Asia and North Africa (SWANA) region.
We received and published a total of nine articles in this first round of the series, thus leaving out countless countries, lived realities, and perspectives. Even so, what we achieved was thought-provoking. The authors, illustrators and editors involved in the series brought different perspectives when prompted by the same question. This alone confirms our premise that human creativity, when we consider people in all their diversity, can offer provocative ideas, different perspectives, and points of agreement and disagreement — precisely what generative AI homogenises. This is something that AI, created by large corporations — which are managed by and enrich normative bodies, those bodies that see themselves as the epitome of humanity and position themselves as the benchmark for marginalising many others — is incapable of providing.
Learning with peers
Daria Dergacheva answered our first question, “Why is it vital to value human creativity and connection in the age of AI?” by saying that, no matter what tech companies try to sell us, there is no creativity or connection other than that of humans. “Generative AI does not write, does not design and does not paint: it generates statistically closest patterns; these are probabilistic automation systems, which make them fundamentally different from human cognition or creativity,” she said in her article.
But Kira Xonorika reminded us that “for centuries, countless forms of non‑human intelligence, from animal and ecological systems to ancestral and spiritual intelligences, have been dismissed or subordinated within a hierarchy built around a narrow notion of ‘the human.’” In their article, there is a powerful reminder: “That notion has historically been structured by white supremacy, patriarchy, ableism and cisheteronormativity. It assumes that rational, productive, self‑contained subjectivity is the universal human standard. To centre ‘the human’ in AI often ends up centring that particular template.”
How can this be avoided? While we reaffirm the need for human connection and creativity, how can we rethink our own conception of humanity so that it does not perpetuate this legacy? Was this series’ second question – “What could be done to create a human rights approach to AI?” – enough? Should we be asking ourselves which technologies can sustain life and knowledge on the planet, in all their variety and diversity? What narratives and performances would be shaped by such technologies?
In any case, for me, the articles that addressed the first question in the series cast doubt on some very comfortable certainties and raised new questions in my mind, in a spiral that a pre-programmed answer designed to please me and hold my attention would never have triggered. Fortunately, instead of palatable answers, I found food for thought and no answers at all.
After that, I started reading the articles prompted by the question about a human rights approach to AI. Hija Kamran explained that replicating patterns to depict human-like connection is not the same as human connection, wisely pointing out: “Care work, community building, resistance, empathy, joy, showing up for each other – these are not functions you can automate.” While questioning whether AI could ever be ethical or feminist, she explained that her hesitation comes “from years of watching patterns repeat. The promises to do better, the carefully crafted language of responsibility, the performance of listening, none of it has meaningfully shifted how these systems are built or who they serve.”
Rebecca Ryakitimbo emphasised in her article that it is we, humans, who give AI purpose and urgency. “The struggle for human rights has always been about shifting power from the few to the many, and today the field is digital with tools such as AI,” she pointed out, adding: “We can and should approach the regulation of AI by taking into account the fundamental rights we have been advocating for centuries.”
Then, we delved a bit deeper into lived realities. Rezwan pointed out how surveillance and facial recognition restrict civic space, hamper human rights, and harm marginalised communities in India. Oiwan Lam, when examining the Chinese lesson on the human rights approach to AI, stressed the need to rebalance the power among the corporate-state, the machine, and the people through a decision-making framework that places people back at the centre. Mariana Tamari highlighted the case of a traditional community in Brazil where families who have lived on their lands for almost a century are seeing their homes threatened by new eviction tactics employed by land grabbers, while unrestricted digitalisation reinforces a model in which human presence no longer prevails. “Empirical knowledge of the land, honed over centuries and passed down from generation to generation by traditional communities, is summarily dismissed as obsolete,” she highlighted in her article, published on Earth Day. Those who live in alliance with the land are pushed out, while in the “parallel universe created by the corporate sector, the imminent climate collapse simply does not exist. The promised future is always one of control and abundance, guaranteed by digital technology and the precision that only AI can provide.”
Grounded in lived realities and backed by evidence, these articles have provided a very practical demonstration of the ongoing violations, while pointing the way towards curbing them. A call for accountability runs through them all. Their content serves as a reminder that, while we seek to free our capacity to think and imagine futures from the constraints imposed by the dominant narrative in AI, we must address the very real problems and profound injustices already taking place. It’s not a case of either/or; both things need to happen at the same time.
In a similar vein, we both affirmed and questioned the value of human action throughout the series. While we have drawn on the idea of a human-versus-machine dichotomy, we have once again exposed its fallacy.
Opening up new questions
We often seek to dismantle false narratives by asserting the opposite, but perhaps a very valuable lesson from this series is precisely to highlight the value of not finding quick answers, of not resorting to comfortable solutions and familiar places, but rather, to recognise the unique value of generating more questions.
Some say AI is good and will help people and the planet. We can say it is not, and we can back up this statement with plenty of evidence. But we can also ask, which AI are we talking about? Who creates the AI tools, based on what criteria, and for what purposes? Are they designed to reinforce — while helping to create a narrative of justification for — war, land grabbing, surveillance, human rights violations, and economic exploitation?
Some say AI is inevitable. We can say it isn’t. But we can also question which lived realities we have in mind when making this claim. Certainly not that of the traditional communities under attack in Brazil, to share one example from this series. And if this is not the case for everyone, who is pushing this narrative and with what purpose? Where will AI be and how can it be regulated?
In an age of certainties shaped by opaque algorithms and narratives that reinforce biases, uncertainty may well hold power. Raising questions certainly helps expose false assumptions. It may also help to dismantle a web of deception designed to divert attention from urgent issues, such as the lack of transparency, participation in decision-making, and accountability mechanisms, even when we are faced with such serious violations.
But what about you, reading this now? If you or your community were to consider questions like the ones explored in this series, what would the answers be? Did you experience any discomfort along the way when reading these articles? Did they raise any new questions? Have you found inspiration in disagreeing? Have you spoken to a peer recently?
Débora Prado (she/her) is the lead editor of APC.org and a journalist with a background in strategic communications, feminism and human rights.
Ellena Ekarahendy is an Indonesian creative strategist weaving art and design as catalysts for community building. Through creative direction, visual design, and illustrations, she pursues radical imaginations of liveable futures that centre gender justice, emancipatory politics, liberatory knowledge sharing, and interspecies care. Pondering meanings between the machines, she fosters conversations around feminist technology with PurpleCode Collective. Off the screen, she treasures solitary conversations with her houseplants, her books, the moon, or the waves and the trees. This art was created as she wondered: maybe her ancestors have been trying to speak to her for guidance through the stars or the wind and the mountains, but she’s too distracted by the AI-boosted interfaces. What if the immediate AI we actually need to click with in this present moment is Ancestral Intelligence?