The challenge of civic trust as AI plays a bigger role in public management

Screenshot from Hong Kong’s Smart Government Innovation LAB. Fair use.

This post is part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” This series will offer insight into how AI is being used in global majority countries, how its use and implementation are affecting individual communities, what this AI experiment might mean for future generations, and more. You can support this coverage by donating here.

I teach at and live on a public university campus in Hong Kong. At the beginning of every academic year, there is a steady stream of scam warnings: posters on campus and splattered across MTR stations, emails, student notices, reminders from the institution, and newspaper headlines. Be careful with suspicious links, fraudulent calls, fake payment requests, manipulated identities, messages that sound urgent and plausible enough to make you double-check. Especially after the headline-making incident in which an employee was duped into transferring millions by a deepfake video call impersonating his company's chief financial officer, there has been increased and repeated messaging that AI is here to stay and that we are not prepared for it.

I find it ironic that one of the demographic groups most targeted by these warnings about potential AI-based scams is young students, who are often described as digital natives, instinctively literate in the languages of platforms, devices, and online life. Something has changed in our digital practices: caution has become the latest buzzword, because we can no longer trust whether what we see is what we get, and I do not think it is merely about the proliferation and persistence of fake news and misinformation. It is about the conditions of trust: not only our capacity to trust the digital information we encounter, but our capacity to trust our own judgment in everyday AI interactions.

An anti-scam poster targeting students in Hong Kong. Screenshot from the Anti-deception Coordination Centre. Fair use.

Earlier debates about digital misinformation often focused on circulation: a rumour spreading too quickly, a manipulated image going viral, a false claim taking hold before it could be corrected. Those were serious problems, but they still assumed that falsehood was something inserted into an otherwise readable environment. With generative AI, we have entered a space where plausibility, not falsehood, has become cheap, scalable, ambient, and normal. It floods the digital with things that are not obviously fake, but also not reliably true, producing a kind of cognitive fatigue and surrender.

Human collectives can survive disagreement, but we have not yet figured out how to live with continuous distortion. What is being eroded is not only trust in the information we encounter, but trust in our own capacity to deal with and negotiate these generative AI plausibilities. Hong Kong's recent emphasis on anti-fraud measures shows that deepfake scams and other forms of digitally mediated deception have become serious enough that the response now includes technical systems, such as Scameter+, designed to identify or interrupt them. Financial authorities have openly described this as a new terrain in which AI must be used to fight AI. It means that trust is no longer restored by human recognition, but increasingly depends on technical verification layered on top of technical production.

This is the civic paradox at the heart of the current moment. The more synthetic and artificial plausibility overwhelms everyday judgment, the more verification becomes a specialised service. Institutions of reliability, or persons of trust known to us through proximity and relationality, are being replaced by a stack of systems that claim to tell us what is authentic. When a voice needs forensic checking, a message needs a platform signal, and a public interaction has to pass through a chatbot, a filter, a detection layer, and only then return as something we are meant to accept as reliable, civic trust begins to migrate away from shared social processes and into managed technical infrastructures.

While much of this is dealt with at the level of fraud detection and prevention, the consequences are far broader. Consider what happens when AI becomes embedded in the ordinary interfaces of public life: enquiry systems, service kiosks, governance application processes, translation tools, payment gateways, automated assistance, and a wider rollout of AI across the workflows of everyday management of the city. The presence of AI is justified through a reorganisation around speed, legibility, and efficiency, things that Hong Kong characteristically folds into its image as a dense, high-speed, hyper-connected city of neon-hued optimisation.

However, in this new scenario, where AI systems hold the power to verify and manage our interactions with each other and the city, civic encounters become increasingly processual and procedural. The citizen becomes a user. The public question becomes a query. And in the current policy direction for building a smart city, AI becomes central to an accelerated and expanded public-service delivery system. But efficiency, on its own, does not create civic trust. A public institution does not become more civic merely because it becomes more automated.

As is becoming apparent in the public hearings investigating the disaster of the fire in a residential estate in Tai Po, civic trust requires people to understand how decisions are made, where errors can be challenged, and how responsibility is distributed when something goes wrong. The challenge of civic trust in the face of AI is going to be temporal as much as technical. Calls for AI literacy, fluency, and caution will only be effective if we also change our public vocabularies for describing what these systems do. For instance, if we start calling them systems of decision-making rather than systems of automation, or think of input as capture, it changes how we relate to these technologies. Because by the time a harm is visible, the system already seems to have moved on. In work around technology-facilitated gender and sexual violence, we know that by the time a synthetic image is debunked, the social effect has already travelled. By the time safeguards are written, the interface has already been normalised. We are always catching up. And this catching up produces a collective mistrust in our own ability to recognise what is happening while it is happening.

Verification is often offered as the necessary solution to these questions, but it is insufficient. If civic life is reduced to an endless cycle of detection, correction, and response, then public trust becomes permanently defensive. We stop asking what kinds of relations technology should make possible, and instead settle for damage control.

What would it mean to think differently? At the Digital Narratives Studio, which I direct, we begin with a modest but important shift: moving from asking how AI can improve civic services to asking what kinds of civic relations those services should sustain. Efficiency and safety are necessary, but so are hesitation, explanation, and the possibility of remaining answerable to one another, even when our encounters are increasingly shaped by synthetic systems.

We call this relational infrastructuring. If trust is being displaced from shared social processes into technical systems of synthetic plausibility and verification, then the response cannot simply be better detection or louder warnings. I also do not hold a nostalgic wish to return to a time before technological mediations, as if that were even real. We have to stay with the technologies that now organise civic life, but refuse to accept the narratives of decision-making and trust that they offer. For us, it means creating embodied, shared processes through which people can name what they are encountering, compare experiences, share interpretations, test judgment together, and build a collective vocabulary for the new conditions that AI has normalised.

In our work, this takes the form of small experiments in language, relation, and possibility. Through informal and semi-structured gatherings, always over food, we create spaces where people can describe how AI-driven systems enter their everyday lives, and where new civic vocabularies can emerge from lived experience rather than technical prescription alone. I do not want to prescribe a language as if there were one settled vocabulary, but I do ask how you would understand the futures of AI, the histories they consolidate, and the experiences they unsettle if you had to incorporate terms like the unsettled, entanglement, dirty mirror, toxic intimacy, grief, and companionship into how you describe AI in your everyday life. We extend this into practices of relational language modelling, a practice that mimics not machine correlations but human relationship-building as a way of shaping and interpreting the data set, where human intention, social context, mutual accountability, and shared anxiety can be named and become central to how we imagine civic engagement with AI. And we pair this with a commitment to possibility: insisting that AI is still unsettled, still open to being shaped otherwise, and that the language and vocabulary it comes with are not something we must simply accept.

This bottom-up approach is not scalable or easy, but it is grounded in the idea that we live with and through technologies rather than merely standing outside to critique them. We work to rebuild the relational conditions within which technologies are encountered, interpreted, and made accountable, so that we are not merely looking to verify what is real, but to create publics capable of making meaning together.
