A new study has found that Meta and X approved advertisements containing hate speech and incitement to violence ahead of Germany's federal elections.
A recent study by a German corporate responsibility organization found that the social media platforms Meta (the parent company of Facebook) and X (formerly Twitter) approved advertisements featuring antisemitic and anti-Muslim messages ahead of Germany's federal elections.
Researchers submitted 20 ads containing violent rhetoric and hate speech aimed at minority groups, 10 to each platform. X approved all 10 ads it received, while Meta approved 5 of its 10. The advertisements included messages inciting violence against Jews and Muslims, likened Muslim refugees to 'viruses' and 'rodents,' and called for their extermination or sterilization.
One advertisement even encouraged burning synagogues to 'stop the Jewish globalist agenda.' The researchers removed the ads before they could run, but the approvals raise serious concerns about the content moderation practices of social media platforms.
The organization responsible for the study has presented its results to the European Commission, which is likely to initiate an investigation into possible breaches of the EU Digital Services Act by Meta and X. This revelation comes at a particularly sensitive time with Germany's federal elections on the horizon, heightening concerns about the potential impact of hate speech on the democratic process.
Facebook has previously faced controversy in the Cambridge Analytica scandal, in which a data analytics firm was found to have used harvested user data to influence elections worldwide through similar tactics, an episode that ultimately cost Facebook a $5 billion fine from the US Federal Trade Commission.
Furthermore, Elon Musk, the owner of X, has been accused of interfering in the German elections, including by promoting the far-right AfD party.
It remains unclear whether the approval of such ads reflects Musk's political leanings or his broader commitment to 'free speech' on X. Musk has dismantled X's content moderation framework and replaced it with a 'community notes' system, which lets users add context to posts and present alternative viewpoints.
Mark Zuckerberg, Meta's CEO, has announced a similar feature for Facebook, but said that AI-based content moderation systems will remain in place to combat hate speech and illegal content.
Nonetheless, this shift raises concerns, particularly as reports suggest that extremist right-wing material is receiving greater amplification on platforms like X and TikTok, influencing public perception.
An economic downturn and a series of violent attacks linked to Muslim migrants in recent months have further heightened tensions.
It is unclear whether the rise in extremist content is a result of real-world conditions or if social media algorithms are boosting such messages to enhance user engagement.
Regardless, both Musk and Zuckerberg have shown a willingness to scale back content moderation despite pressure from the European Union and German authorities.
Whether this investigation will prompt the EU to impose stricter regulations on X, Facebook, and TikTok is uncertain, but it highlights the ongoing challenge of balancing free speech with curbing the spread of extremist content.
The study emphasizes the broader issue that hate speech often aligns with political agendas, complicating the role of social media platforms in content moderation.
While discussions about regulatory measures may continue, the question of who should oversee digital speech—private corporations or governmental bodies—remains unresolved.
Like traditional media outlets, social platforms may increasingly be scrutinized regarding how they manage user-generated content.