
Brazil: A warning on how AI and deepfakes can become an 'excessive risk' to women and girls


Image created on Canva by Global Voices.

This post is part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” This series will offer insight into how AI is being used in global majority countries, how its use and implementation are affecting individual communities, what this AI experiment might mean for future generations, and more. You can support this coverage by donating here.

In November 2023, a group of parents from a school in Rio de Janeiro reported to the police that teenagers were creating and sharing nude images of other classmates generated with Artificial Intelligence (AI). Less than a year later, in September 2024, in the state of Bahia, another group of teens was also suspected of using AI to create pornographic images of classmates, while in Mato Grosso state, students were expelled after sharing AI-generated images featuring a teacher and other students in pornography communities on social media.

These are some of the recent cases reported in Brazilian media and mentioned in a technical note published by the independent research center Internetlab in early April 2026. The document aims to discuss "ways of combating online violence against girls and women in Brazil" and to recommend regulatory approaches suited to the country's context.

According to the Brazilian Public Security Forum, in 2025 alone, Brazil registered 1,568 femicides, a 4.7 percent increase compared to the previous year and the highest number recorded since the law naming this type of crime was signed 10 years earlier. An increase in gender violence cases reported in the news led to protests at the end of 2025. And the growing amount of misogynistic content online is "feeding the violence," Deutsche Welle reports.

For Clarice Tavares, Internetlab's research director and one of the technical note's authors, who spoke to Global Voices, gender violence cases both online and offline are part of the same misogynistic structure, coexisting and dependent on each other. "We are going through a complex moment of growing violence against women, and we need public policies to look at these contexts, in their specificities," she says.

On AI and gender issues, Tavares notes that the datasets used to train these systems can be biased and, therefore, influence their results. Internetlab chose to focus on AI-related violence in the technical note, she explains, because of its broader impact and the possibility of addressing it through proper regulation.

Among other points, Internetlab's document points to risks associated with "non-consensual sexual content deepfakes" and argues that this possibility should be classified as an "excessive risk," noting that "these technologies affect women and girls disproportionately":

Uma pesquisa conduzida pela Security Hero demonstrou que deepfakes sexualmente explícitas representam 98% de todos os vídeos de deepfake online, e que 99% das pessoas alvo desses conteúdos eram mulheres. A pesquisa também indicou um aumento de 464% do número de deepfakes sexuais entre 2022 e 2023.

Research conducted by Security Hero showed that sexually explicit deepfakes represent 98 percent of all deepfake videos online, and that 99 percent of the people targeted with this type of content were women. The study also indicated a 464 percent increase in sexual deepfakes between 2022 and 2023.

Although Tavares states that the problems are not exclusive to one platform, she points to Grok, X’s embedded AI, as the tip of the iceberg, exposing a broader problem already spread online.

I think this case calls for attention, since the AI that made it possible to create this sort of content was already part of its own social media, which helped to amplify the reach of such content. It was very much on the surface. I think it’s a systemic issue, but Grok’s case made it impossible not to discuss it or worry about the current state of things. It was evident there had been several flaws in the tools and the platforms that allowed these things to happen on such a large scale.

What does the law say in Brazil?

In the past decade, Brazil has made advances in legislation regarding online rights. Last year, 11 years after the Marco Civil da Internet — a law that serves as Brazil's civil rights framework for the internet — was signed, the Federal Supreme Court (STF) voted to consider one of its articles partially unconstitutional. Article 19 ruled on platforms' civil responsibility for content published by third parties (users). The majority of the justices understood that, under the current scenario, the norm wasn't enough to protect fundamental rights and democracy.

"There is still a long way to go for us to understand how this ruling will be operationalized. A new regime regarding platforms' responsibility established rules for how they shall function until a more robust regulatory legislation is approved. Platforms now have more obligations with content moderation, so the ruling set a new paradigm for us to think of platforms' accountability and obligations," Tavares says.

In the context of gender violence online, she also stresses the importance of having a legal definition and concept of what is understood as misogyny in order to put the Supreme Court's decision into practice. "We want to adapt this decision, and the already existing public policies and legislation regarding gender violence, to this new digital violence scenario," she explains.

Internetlab's analysis mentions a rise in proposed bills aiming to criminalize misogynistic behavior and the so-called red pill movement and manosphere, but notes that, although these are an important step in facing the problem, choosing criminalization as the main and only solution could limit the efficacy of strategies to prevent violence and provide reparations to victims.

Their recommendations regarding AI include, for instance, creating curriculum guidelines on digital literacy to develop critical analysis of how algorithms, platforms, artificial intelligence tools, and other elements of the digital ecosystem work.

As for the platforms, the note recommends adopting safety measures from the early conception of their projects (safety by design), "to prevent the creation and dissemination of this type of content" that mainly targets women and children as victims. Non-consensual sexual content deepfakes should be considered an "excessive risk," with their use and application prohibited, and rules regarding the accountability and obligations of digital platforms and AI agents should be established.

An estimate by the Movember Foundation, a men's health organization, points out that two-thirds of young men regularly engage with masculinity influencers online. An article published by UN Women says "experts are finding that the popularity of extreme language in the manosphere not only normalizes violence against women and girls, but has growing links to radicalization and extremist ideologies."

As the election season approaches in Brazil, with cases of AI-generated videos featuring violence against women voters being reported, Tavares believes there are also new layers to consider in gendered political violence. "With video production or even chatbots such as Gemini, ChatGPT, or Claude, what we're seeing is that [AI] will likely be a tool used as a means to access information, and there could be gender biases, reproducing gender violence and gender political violence. We're still trying to figure out the impact it can and will have from now on."
