Image by Kira Xonorika for The Association for Progressive Communications (APC), used with permission.
By Kira Xonorika
This article is part of the series “Don’t ask AI, ask a peer,” a collaboration among Global Voices, the Association for Progressive Communications, and GenderIT. The series aims to re-emphasise the importance of knowledge sharing among people, as has been done for decades. You can follow the series on APC.org, GenderIT.org, and globalvoices.org.
I have a complicated relationship with the call to “centre the human” in artificial intelligence. On the surface, it seems like a humanist corrective to the growing mechanisation of life. However, the issue has a complex genealogy. For centuries, countless forms of non‑human intelligence — from animal and ecological systems to ancestral and spiritual intelligences — have been dismissed or subordinated within a hierarchy built around a narrow notion of “the human.” That notion has historically been structured by white supremacy, patriarchy, ableism and cisheteronormativity. It assumes that rational, productive, self‑contained subjectivity is the universal human standard. To centre “the human” in AI often ends up centring that particular template.
My discomfort also comes from how the distinction between the natural and the artificial shapes moral boundaries around who or what gets to count as real, embodied, or alive. This distinction has long informed political and cultural forms of othering — not only towards machines or non‑humans, but also toward people whose bodies, genders or abilities are socially coded as “unnatural.” The hierarchy between natural and artificial is therefore not neutral; it is a moral framework that polices the edges of humanity itself.
When we re‑centre the human in AI, we risk reinstating this hierarchy rather than undoing it. We risk reinscribing the same epistemological borders that determine whose intelligence, creativity or agency matters. What if, instead of recentring the human, we expanded our sense of relating — recognising that intelligence, consciousness and care take multiple, entangled forms across species, systems and technological networks? This does not mean romanticising machines or erasing human experience, but rather acknowledging that we have never been separate from the artificial or the planetary in the first place.
At the same time, I understand why the language of “centring the human” has gained traction. The global ecosystem of big tech has indeed overwhelmed collective life, automating cognitive, creative and affective processes that once depended on human labour and cooperation. Since at least 2022, the acceleration of planetary computation has intensified the fear of replacement — not only of jobs, but of meaning, perception and creativity itself. These concerns are real. Automation has become a political and existential force, but to respond by reaffirming the centrality of “the human” may only reproduce the exclusions that made this crisis possible.
For me, working with AI means engaging with these tensions: reclaiming technology as part of Indigenous, gender-expansive and diasporic sovereignty, while resisting the temptation to fix the human as the ultimate measure of all intelligence.
At the book launch for “All Watched Over by Machines of Loving Grace” at Now Instant in Los Angeles, I was asked about my approach to AI and sovereignty. I explained that I am interested in reclaiming the technology and the data that have, for centuries, been extracted from people, especially in relation to culture-specific knowledge, memory, and forms of expression.
An art critic asked whether this approach might still risk redeeming the context in which these technologies are produced. At the time, I did not feel I had a fully formed answer. The curator stepped in and spoke about how the exhibition accompanying the book brought together artists engaging queer and decolonial approaches, which helped widen the frame of the conversation.
Looking back, I think the difficulty of my response had to do with the limits of dystopian thinking. Too often, conversations about AI remain trapped between denunciation and celebration, as if the only options are total refusal or uncritical embrace. But the use of these technologies today is far more complex than that.
Contemporary AI images belong to a much longer story about how pictures shape our sense of what is real. For a long time, photographs and broadcast media were treated as anchors of reality: they stabilised events, fixed them in time, and offered something like a shared reference point. Today, that stabilising function has inverted. The ease with which images can be generated, edited and circulated means they no longer ground reality; they unsettle it.
AI-generated pictures, videos and voices intensify this crisis of the real. They are not merely a new style of representation but a new regime of plausibility: they do not just depict the world; they “compete” with it. The result is a strange double movement: on one hand, images are everywhere, saturating attention; on the other, trust in images as evidence erodes. We live inside a surplus of visuality and a deficit of verification.
This also means that the politics of images can no longer be separated from the infrastructures that produce them. “Data” and “content” are not abstract, frictionless entities; they are the outcome of labour, extraction and energy use. Training contemporary AI systems involves harvesting enormous amounts of cultural material — artworks, voices, gestures, landscapes — often without consent, and concentrating them into statistical models that can be queried on demand. Those models live on servers that draw electricity, water and minerals from specific territories, even as the interfaces they present feel weightless and placeless.
This tension becomes especially visible in the context of social media. The platforms that once promised connection have collapsed intimacy and visibility into a cycle of constant production. Now, in the age of generative AI, we are witnessing a flood of images and videos that seem to lack rigour, intention or even authorship — creations optimised not for meaning, but for engagement. The line between art and content feels increasingly thin. What circulates most widely is not necessarily what asks questions or proposes new forms, but what can be replicated, shared and consumed at scale.
In this environment, creativity itself is often reduced to output — a quantifiable presence in the feed. Yet I believe artists will continue to create regardless. Artistic practice has always adapted to technological shifts, and moments of overproduction can also provoke new forms of discernment. The proliferation of AI-generated imagery may invite more critical, embodied ways of seeing: to ask what constitutes attention, what carries intention, and how to distinguish works of imagination from the churn of content. It might even challenge us to redefine artistic rigour in ways that include collaboration with AI systems rather than rejection of them.
For me, this is where possibility lives. The task is not to lament the loss of a human-centred art world, but to ask what art can become when creation is dispersed among multiple intelligences — human, machine, ecological, ancestral. Maybe what we need now is less a defence of “the human” and more an expansion of what we mean by creativity, intimacy and relation in times of planetary computation.
Against this backdrop, the question of ethics in AI art and creative practice is not only about what images show, but how they come into being and are consumed. For me, the most promising paths forward lie in two linked gestures. First, in the situated creation of datasets: instead of scraping the world indiscriminately, building smaller, intentional archives that are grounded in relationships, consent and accountability.
Second, in imagining alternate forms of data consumption that do not depend on endless scale and environmental erosion. This might mean privileging models that are smaller and slower, tuned to specific communities rather than planetary markets. It might mean interfaces that invite repetition and depth instead of infinite scroll, or works that foreground their own limits rather than pretending to be exhaustive views of the world.
In that sense, the crisis of the real is also an invitation. If images no longer reliably stabilise reality, they can instead become tools for negotiating it more consciously. Artists and designers working with AI are in a position to prototype what that negotiation might look like: to treat datasets as gatherings rather than mines, models as guests rather than gods, and images as occasions for relation rather than proof. The challenge is to invent practices where the statistical logics of machine learning are bent toward care, not extraction — where the ways we make and use images are as attentive to the health of worlds as they are to the seduction of surfaces.
Working with AI, I try to approach these systems not as neutral tools or adversaries, but as unstable collaborators — reflections of our histories, architectures and biases. This means engaging them critically while refusing the demand to reject them outright. For those of us whose cultures, data and knowledge have been extracted for centuries, reclaiming technology becomes a form of sovereignty, not surrender. It is a way to insist that the story of intelligence — and of art — is far from complete.
Following Ruha Benjamin’s thinking, we must always make space for imagination alongside the work of dismantling systems. If we don’t dream and create the worlds we want to inhabit, we are truly doomed. Creating those worlds might be messy, but that’s the point of the task.
Kira Xonorika is an interdisciplinary artist, researcher and writer. Their work focuses on the complexities of trauma and colonial power, pathologisation, trans and queer temporalities, the production of knowledge from the Global South, internet aesthetics and resilient organising.