Image by Ibrahim Kizza for the Association for Progressive Communications (APC), used with permission.
This article is part of the series “Don’t ask AI, ask a peer,” a collaboration among Global Voices, the Association for Progressive Communications, and GenderIT. The series aims to re-emphasise the importance of knowledge sharing among people, as has been done for decades. You can follow the series on APC.org, GenderIT.org, and globalvoices.org.
Since the current AI hype began after OpenAI made ChatGPT available to billions of users worldwide in November 2022 (at the time, without any existing regulation, ethical frameworks, or guardrails), we have heard numerous predictions of how AI would change everything for humans: human labor would be replaced; human creativity would no longer be needed; human connection would be so much better with chatbots; governments would apply strict AI algorithms to eliminate human bias in social services; breakthrough science would arrive within just a few years; and many more.
Over three years later, what has the arrival of generative AI actually changed for us? It has brought unnecessary and harmful disruption to our education system, it has given coders some more tools to write code, and it has been used, almost without human supervision, in war.
It is 2026, and AI companies still do not have profitable business models and cannot give businesses meaningful proposals on how to use their products. Yet the AI people — CEOs, financial directors, research directors, and even ethics directors — keep selling us their magical, anthropomorphising vision of their models. Please note also that most of these companies are connected to the “previous generation” of tech oligarchs: Google is developing Gemini; Microsoft invested heavily in OpenAI, while Google and Amazon invested in Anthropic; Mark Zuckerberg’s Meta has its very own Llama; Elon Musk not only bought and crushed Twitter but also has the famously porno-oriented Grok; and Jeff Bezos is investing in not one but seven AI companies, including Perplexity AI and Dutch-based AI startup Toloka.
AI narratives from tech companies are purposefully deceptive
Researchers and journalists have already done some work to show the way narratives around AI are constructed, and how this shapes not only our anxiety and misperceptions of AI, but also governments’ panic around “losing in the AI race.”
When OpenAI introduced ChatGPT, it was described as being “trained” on a vast “corpus” of data, using a “neural network” capable of generating “natural language.” This terminology, while technically grounded, also framed the system in human-like terms, suggesting something more than merely “artificial” intelligence.
At the same time, the system’s errors were labeled “hallucinations,” a term that evokes imagination or magical thinking, and also belongs to the human domain. But these are not hallucinations; they are real errors made by models built on statistical probability. And they make a lot of them: some researchers estimate that models are wrong in 25–30 percent of cases.
Nonetheless, the combined effect of this terminology, the surrounding hype, and OpenAI CEO Sam Altman’s own widely publicized concerns about advanced AI has shaped public perception in a different direction. Together, they contribute to an understanding of generative AI as something dynamic, expansive, and difficult to control, at times even framed as a potentially existential threat to humanity.
Another example of humanizing chatbots comes from Anthropic
Recently, Anthropic, an AI company founded by former OpenAI researchers, released a document entitled Claude’s Constitution. In it, as legal scholar Luiza Jarovsky observes, Anthropic relies heavily on anthropomorphic framing, advancing what can be read as a pretentious, controversial, and legally questionable account of the nature and social role of AI systems.
For instance, the document states: “We encourage Claude to approach its own existence with curiosity and openness, rather than trying to map it onto the lens of humans or prior conceptions of AI.”
This language frames the model as a quasi-conscious entity, one capable of reflecting on and “approaching its own existence.”
From a governance perspective, claims Jarovsky, Claude’s Constitution represents a concerning development. It risks subordinating human values, legal norms, and rights by attributing undue philosophical and moral status to AI systems.
One of the authors of this document, Anthropic’s in-house philosopher Dr. Amanda Askell, has said that she was “building Claude’s personality.”

Finally, LLMs themselves are designed to produce text that is first-person, informal, and conversational, while synthetic voices are engineered to sound human. In addition, says Caleb Sponheim, a former computational neuroscientist, these systems produce responses filled with unnecessary pleasantries, sycophantic agreement, and anthropomorphizing language that prioritizes engagement over utility.
AI is not your friend
Emily Bender, a professor of linguistics at the University of Washington, and Nanna Inie, an Assistant Professor at the IT-University of Copenhagen, state: “AI is not your friend. Nor is it an intelligent tutor, an empathetic ear, or a helpful assistant. It can not ‘make up’ facts, and it does not make ‘mistakes.’ It does not actually answer your questions.”
It has no “creativity”; it does not “think” or “connect.” It can only repeat what it has already been “trained” on, and that material was produced by humans. Generative AI does not write, design, or paint: it generates the statistically closest patterns. These are probabilistic automation systems, which makes them fundamentally different from human cognition or creativity. Yes, they could probably be useful tools in some occupations.
But in order to understand that, we have to change the language around AI models and the technology itself. Journalists and media need to stop repeating tech companies’ marketing pitches, and policymakers must stop prioritizing imagined urgency over safety and human rights.
So the answer to the question “Why is it vital to value human creativity and connection in the age of AI?” is that there is no other creativity or connection than that of humans, no matter what tech companies are trying to sell us.
Daria Dergacheva is a researcher in media and communication whose focus is platform and AI governance, digital authoritarianism, and propaganda/disinformation studies. She has a background in journalism and is currently an editor for Central and Eastern Europe at Global Voices, as well as a freelance author and researcher on technology and the regions of the Global Majority world.
Ibrahim Kizza is a visual artist, designer, and illustrator whose work explores human connection, identity, and culture. His illustrations are defined by bold compositions, expressive colour, and a strong narrative focus, often centring Black life and lived experience. Working across editorial and digital spaces, he creates art and illustrations that balance simplicity with emotional depth, using contrast and symbolism to communicate complex ideas with clarity. For this project, Ibrahim develops a visual response to the tension between artificial and human connection, reinforcing the value of lived experience and collective creativity in an increasingly automated world. Beyond illustration, his interests span design, sport, and film, which continue to inform his visual language and storytelling.