
Deadly deepfakes: A survival guide for the age of algorithmic war

Artificial intelligence is reshaping modern warfare.

During the recent U.S.-Israel war on Iran, AI was used to identify and strike targets, accelerate the pace of attacks, and generate weaponry recommendations. Far from the conflict zones, AI also enabled the creation and spread of disinformation and deepfakes that confused public opinion and made it hard to understand realities on the ground.

Social media feeds are flooded with AI-generated content masquerading as real footage, including false clips of the Burj Khalifa on fire and fabricated scenes of missiles striking the streets of Tel Aviv.

Rachel Adams, CEO and founder of the Global Center on AI Governance, believes such misinformation risks influencing life-or-death decisions — from where civilians seek shelter to how aid is distributed.

Photo: Rachel Adams

Adams is an expert on AI governance and the political economy of emerging technologies. She was one of the lead drafters of the African Union Commission’s Continental AI Strategy, a framework designed to harness AI for socioeconomic transformation and technological sovereignty. She is also the author of The New Empire of AI: The Future of Global Inequality. The Global Center on AI Governance is an Africa-based research collective dedicated to promoting responsible and equitable AI practices globally. 

In a conversation with Rest of World, Adams outlined how AI is amplifying long-standing global inequities in information and power, and how this trend could have a lasting impact on the world even after the wars are over. She also offered tips on how individuals can identify AI-generated content and protect themselves.

The conversation has been edited for length and clarity. A recording of the live Rest of World event is available here. 

Could you talk about the use of AI in recent wars and its real-world implications?

One of the first things that we’re noticing is the sheer volume of false or misleading wartime images or information being shared. These are AI-generated clips presented as footage or satellite images from conflict regions.

The content is also not always a deepfake; often it is old footage from other conflicts, relabeled by AI and presented as new.

The other thing to really stress is the kind of emotional engineering that goes on with such content. It tries to evoke fear, triumph, or helplessness. Very often, that feels like the whole point: to intensify, polarize, and manipulate people’s emotions about the conflicts and to sway international opinion.

What we have to remember is that this affects people inside the conflict zones. They need to make decisions about where to move, and where aid and shelter might be available. For those caught in the conflict, it’s a question of life and death: If disinformation is circulating about where they can get shelter or aid, the stakes become much more serious.

Your work is focused on inequity. Could you explain how inequities play out during conflicts with rampant AI-generated misinformation?

In the Global South, many countries have historically been vulnerable to information manipulation. They have weaker media ecosystems, or they are newer democracies and have experienced external narrative control.

This has been an issue at the heart of decolonization and who gets to write history. Experiences are often written over by official accounts, which are usually Western. Now, AI enables that writing over in ways that deny or obscure authorship, and therefore accountability. It’s not clear who is doing the writing over, but it’s still happening.

For me, a lot of these questions are wrapped up with critical questions about power. Who gets to define truth in times of war? Whose voices are being trusted? Whose realities are being dismissed? 

When truth becomes contestable and uncertain, it’s those with the greatest institutional power — whether it’s platforms, particular states, particular militaries — that are better positioned to assert their version of events, which can sideline citizens and civilians, particularly sideline local journalists, and smaller, less powerful states. 

Several AI tools are directly or indirectly aiding and advancing harm. Where does the accountability for these rest — with governments, companies, or users?

On the one hand, we have the latest AI tools, which drastically lower the barriers to entry for creating and distributing fake, propagandistic information. On the other hand is the infrastructure through which this disinformation spreads. This makes the question of accountability more straightforward in some ways: We have to look at the frontier model developers, and we have to look at the platforms through which this content is shared.

Platforms have a responsibility not to profit from the virality of content. And model developers, companies like OpenAI and Anthropic, have a responsibility to build in traceability, so it is possible to tell where content was made and how.

Many of the problems around social media use in conflict contexts have arisen because social media companies have not put enough content moderation resources outside of the U.S., in places with richer cultural contexts and more languages and dialects. AI takedown technologies are less accurate in these contexts, and platforms have simply not put enough human resources behind ensuring the credibility of content in these places.

Do you think there is a need to gatekeep Western AI models to reduce their use for harmful content generation?

I discussed this idea a little in my book: There is a small set of actors, largely white men from Silicon Valley, who manage and make the decisions about the world’s most powerful technologies. Allowing these companies to be the gatekeepers of information and truth around the world is a very problematic idea.

Within the African context, where a lot of my work takes place, we want open-source technologies. We need to be able to cut costs and build these technologies ourselves, to make them more equitable and accessible. I think the benefits of open-sourcing outweigh the potential benefits of having these capabilities locked within a small handful of companies.

What are some of the ways in which individuals can identify AI-generated content on their own?

I don’t rely on AI to check whether something is AI-generated. 

You have to verify through context, provenance, and authentication. Some of the layperson clues would be inconsistent text or insignia, implausible lighting and reflections, and odd movements at the edges of the frame.
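One way to act on the provenance point is to inspect an image’s metadata before trusting it. The sketch below is a minimal illustration, not a definitive test: It assumes the Python Pillow library and a hypothetical local file named suspect.jpg. Genuine camera photos usually carry device and timestamp EXIF tags, while many AI generators (and most messaging apps) strip or never write them, so an empty result is a prompt to dig further rather than proof of fakery.

    # Rough provenance heuristic: read EXIF metadata with Pillow.
    # Assumes: pip install Pillow; "suspect.jpg" is a hypothetical file name.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def exif_summary(path: str) -> dict:
        """Return human-readable EXIF tags, or an empty dict if none exist."""
        with Image.open(path) as img:
            exif = img.getexif()
            return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    tags = exif_summary("suspect.jpg")
    if not tags:
        print("No EXIF metadata found; treat provenance as unverified.")
    else:
        # Camera make/model and capture time are the most telling fields.
        for key in ("Make", "Model", "DateTime", "Software"):
            if key in tags:
                print(f"{key}: {tags[key]}")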

Look at whether the footage includes a precise claim that can be independently verified, such as a place, a date, a building, or a particular landmark.

Treat screenshots very suspiciously, especially those of “official statements,” because they’re very easy to manipulate. They’re not as traceable, so anything that has been forwarded many times is something to be particularly wary of.

Also watch for emotionally charged content with theatrical audio and sound effects in the background. You have to ask yourself, “Is this trying to manipulate me?”

Always look for where the information is coming from. Be wary of footage that appears first on anonymous, highly partisan, and low-history accounts. Watch out for new websites that don’t have a clear company or author behind the content or seem to be promoting a one-sided view of the issue.

Check if reputable fact-checkers or journalists have already debunked it.
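Much of the misleading material Adams describes is old footage relabeled as new, and one technique fact-checkers use to catch it is perceptual hashing, which matches images even after re-encoding, resizing, or mild cropping. A minimal sketch, assuming the Pillow and imagehash Python packages and two hypothetical frame files:

    # Compare a viral frame against a known archive frame by perceptual hash.
    # Assumes: pip install Pillow imagehash; both file names are hypothetical.
    from PIL import Image
    import imagehash

    suspect = imagehash.phash(Image.open("viral_frame.jpg"))
    archive = imagehash.phash(Image.open("2020_conflict_frame.jpg"))

    # Subtracting two hashes gives a Hamming distance; 0 means near-identical.
    distance = suspect - archive
    print(f"Hamming distance: {distance}")
    if distance <= 8:  # common rule-of-thumb threshold for 64-bit pHash
        print("Likely the same underlying image: possible relabeled footage.")

A match is only a lead, not a verdict; as with all of these checks, the point is to slow down before sharing.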
