
Decolonizing AI at the U.S. border

Footsteps walking a path made of code. Image courtesy of UntoldMag

This article by Tsion Gurmu, Hinako Sugiyama and Sobechukwu Uwajeh was first published by UntoldMag on November 25, 2025. An edited version is republished on Global Voices as part of a content-sharing agreement. This post is also part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” You can support this coverage by donating here. 

From grocery shopping to streaming services, schools to workplaces, warzones to governance — artificial intelligence (AI) is springing up everywhere.

But as AI becomes more embedded in governance and security, its role in border enforcement and immigration control has grown rapidly. These technologies often reproduce and intensify racial discrimination, particularly through algorithmic bias. Nowhere is this more evident than in the U.S. government’s deployment of the so-called “smart border.”

What happens when AI is deployed to decide who can move, who is detained, and who is excluded at the border?

A human rights framework

Following a 2023 meeting with the United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, the Black Alliance for Just Immigration (BAJI) and the Immigrant Rights Clinic and International Justice Clinic at the UC Irvine (UCI) School of Law submitted a report detailing how AI disproportionately harms Black migrants and migrants of color, along with recommendations for reform.

There are already legal frameworks governing how states should use AI under international human rights law. Chief among them is the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), which the United States ratified in 1994.

ICERD requires states to: prevent racial discrimination in all forms (Art. 2(1)(a)); amend policies and laws that perpetuate racial discrimination (Art. 2(1)(c)); guarantee equal treatment before the law (Art. 5); ensure remedies for victims (Art. 6); and hold private actors accountable (Art. 2(1)(d)).

By these standards, the U.S. is legally obligated to ensure that AI does not reinforce racial inequities.

Surveillance before the border

In reality, however, BAJI and the UCI clinics detail how U.S. AI border enforcement policy violates many of these rules at every stage of the immigration process.

Even before migrants reach any land border, AI systems track their movements. Customs and Border Protection (CBP) deploys autonomous surveillance towers and drones to identify “objects of interest,” replacing human patrols.

The rapidly expanding use of surveillance towers and Small Unmanned Aircraft Systems (sUAS) at the U.S.-Mexico border raises grave concerns about racial equity. To begin with, those under surveillance include large numbers of people fleeing violence, persecution, and even torture, who are entitled to seek protection in the U.S. under domestic and international law. However, because of their more limited access to formal immigration procedures, migrants of color are forced to risk their lives to cross the border.

Second, the use of Anduril Towers, sUAS, and other forms of AI-powered surveillance systems at that border perpetuates discrimination by marking those migrants as lawbreakers and threats to national security rather than people seeking safety and security.

This disproportionate surveillance of migrants of color translates into a disproportionately high death rate for those same groups, as they are pushed into more dangerous terrain.

CBP claims new AI-powered systems are more responsible and humane than physical border walls. According to CBP, the smart border can help deter irregular crossings and increase migrant safety by having the capability to detect, capture, and safely deport migrants who find themselves lost in the desert or mountains.

Yet, the data has shown the opposite is true — increased implementation of “smart border” technology has led to historically high rates of migrant deaths.

Algorithmic risk scoring

Formal entry routes are also shaped by algorithmic bias. The CBP One app, implemented by the Biden administration to streamline immigration processes, was once required for all entry applications and demanded a selfie to verify applicants. Yet the system frequently failed to recognize darker skin tones, misidentifying Black faces at a rate 10 to 100 times more often than white faces, according to legal scholar Priya Morley in “AI at the Border: Racialized Impacts and Implications.”

The app was also inaccessible to many communities: it lacked translations into key languages spoken by Black migrant populations, adding another barrier. Although CBP One is no longer available, debates about its reinstatement continue under the current administration.

Even if migrants pass the first stages, they face the Automated Targeting System (ATS), which compiles domestic and international databases to predict who might overstay a visa.

Though risk assessments are commonplace in immigration systems, the ATS perpetuates existing bias. For example, when Nigeria was added to a list of countries facing heightened travel restrictions in 2020, Nigerians became disproportionately flagged as high risk by the ATS.

Officials claim these systems are preventive, not punitive. Yet their very design perpetuates structural racial discrimination, contradicting U.S. commitments under ICERD.

ICE enforcement inside the U.S.

Once inside the U.S., migrants encounter further AI-driven discrimination from Immigration and Customs Enforcement (ICE) during detention and interior enforcement.

ICE uses predictive algorithms such as the “Hurricane Score” to determine who merits heightened surveillance. Because the algorithm is supplied by a private company, BI Incorporated, which has strong ties to the prison industry, the government has never had to disclose which factors influence the score.

ICE also uses the Repository for Analytics in a Virtualized Environment (RAVEn) platform to analyze trends and patterns across a series of data sources to further assess the risks migrants may pose in the U.S. RAVEn draws from biased local law enforcement data and international databases from offices across 56 countries. Migrants cannot opt out or even consent to data collection.

The lack of transparency and the absence of avenues for redress in these systems have raised grave concerns among rights watchdogs about compliance with ICERD articles and anti-discrimination regulations.

Decolonizing AI

Finally, within immigration relief systems, AI is being used by U.S. Citizenship and Immigration Services (USCIS) to sort evidence and detect fraud in applications. One such system, Asylum Text Analytics (ATA), identifies potential fraud by analyzing the text of asylum applications.

The ATA can prejudice applicants who do not speak English, especially those who speak less widely spoken languages and rely on the same small pool of translation providers. Because their applications may contain phrases or narratives similar to those in other applications, ATA can weed out people with legitimate claims.

Rather than simplifying its application process, USCIS also uses an AI-powered Evidence Classifier to “review” millions of pages of evidence ranging from birth certificates to medical records and photos for USCIS adjudicators. These AI reviews can negatively impact migrants who may have atypical documentation, oftentimes exacerbating racial discrimination.

BAJI and UCI argue that addressing these harms requires a decolonial approach to AI. They invoke Cosmo uBuntu, an African philosophical framework rooted in collectivism and shared humanity rather than individualism. This involves the voluntary embracing of uBuntu (personhood) as “a foundational value system in our participation in planetary conviviality, without forcing universality.”

In contrast to the Western-centric, individualistic views on humanity, African cosmology embraces the humanity of all humans.

To align with ICERD and truly decolonize AI, African and diaspora communities must be actively involved in conceptualizing, inventing, innovating, and operating AI systems.

Policy recommendations

Individuals who may be negatively affected by the use of AI must be promptly notified of such decisions and, when appropriate, given the option to opt out of AI systems.

U.S. federal laws governing DHS’s use of AI must prohibit and prevent any AI use that would produce racially discriminatory results or exacerbate structural racial discrimination. They should mandate effective discrimination-prevention measures, independent oversight of implementation, robust public disclosures, stakeholder consultation with diverse populations, and access to effective remedies for those negatively impacted by DHS’s use of AI.

City policies must include an explicit pledge not to share information with DHS if it is expected to be used for AI development or deployment by DHS or its vendors.

Embedded in each of these calls is a clear message: Until AI systems are free of discrimination and until diverse perspectives are meaningfully included in their development and use, they must not be allowed to be used on any border.
