
Far-Right Party in Italy Faces Backlash Over AI-Generated Hate Speech

April 19, 2025

Introduction

In a troubling development in Italy, opposition parties have raised significant concerns over the use of AI-generated imagery by Matteo Salvini’s far-right League party. A recent complaint submitted to the Italian communications regulatory authority (Agcom) claims that these images propagate hate speech, specifically targeting immigrants and people of color. This controversy highlights not only the potential for AI technology to be exploited in political discourse but also the pressing need for regulation in the use of artificially created content.

The Nature of the Complaint

The complaint, filed by the center-left Democratic Party (PD) along with the Greens and Left Alliance, accuses the League party of disseminating images that contain "almost all categories of hate speech." These AI-generated images, which have circulated on popular social media platforms such as Facebook, Instagram, and X, depict men of color committing violent acts, often while armed, against vulnerable individuals. Such imagery raises serious ethical questions about the manipulation of technology for political ends.

Impact of AI Imagery in Political Campaigns

Senator Antonio Nicita of the PD expressed his outrage, stating that the images are not just violent but intentionally deceptive. Because the victims' faces are obscured, the images appear designed to mislead viewers into believing that these portrayals are factual. This is not merely a case of creative content creation; it is a targeted strategy that seeks to incite fear and foster societal division. As AI technology becomes more advanced, the ease with which such racist propaganda can be produced, using hyper-realistic images to stoke xenophobia, poses a significant threat to social cohesion.

The League’s Defense

In response to the backlash, representatives of the League acknowledged that some of the images were "generated digitally" but contended that each post stemmed from real incidents reported by the Italian media. The party maintains that it is simply reporting the facts, characterizing the images as necessary for highlighting crimes involving foreigners. They argue that bringing the harsh realities of such crime to light is critical for public awareness.

Examining the Underlying Issues

Francesco Emilio Borrelli, an MP for the Greens and Left Alliance, places the blame squarely on the League's messaging strategy, suggesting a deliberate effort to stoke fear among citizens. The AI-generated visual narratives are being weaponized to promote a political agenda that thrives on division and discrimination. Notably, the complaint references specific instances in which fabricated imagery was paired with misleading text, further distorting public perceptions of crime and ethnicity in Italy.

Legislative and Ethical Considerations

As these AI-generated campaigns gain traction, questions arise about the responsibility social media platforms bear for moderating such content. Under the EU's Digital Services Act, Agcom has the power to have offensive posts removed and to sanction platforms that fail to manage user-generated content appropriately. Previous fines against platforms such as Meta for breaching advertising regulations signal that regulatory bodies are beginning to take these matters seriously. Yet the current complaint highlights a gap in enforcement concerning AI-generated media.

Potential Regulatory Actions

Any action Agcom takes against the League's content could set a precedent for how AI-generated material is handled in political contexts. As the most recent complaints demonstrate, there is a pressing need for clearer regulations governing the creation and use of AI-generated content. If such images continue to circulate without oversight, we risk normalizing narratives that lead to discriminatory practices and societal harm.

The Future of AI in Politics

This incident underscores a growing trend of artificial intelligence tools being employed for propaganda, a phenomenon that reached new heights during recent European and American elections. As AI tools improve in quality, they become more appealing for political messaging, particularly among far-right factions seeking to manipulate public perception. It is imperative that we confront these developments head-on, advocating for increased transparency and accountability in AI-generated content.

Conclusion

The allegations against Italy's far-right League party highlight the complex interplay between AI technology and political communication. As AI-generated media grows more sophisticated, regulatory bodies must step up to ensure that these tools are not hijacked to perpetuate fear and hatred. If AI is to enrich rather than degrade public discourse, a strict ethical framework for its use in political contexts is urgently needed. For those concerned about the implications of AI-generated content, advocating for transparency and authenticity is a crucial first step.