The rise of artificial intelligence (AI) music generators has led to a troubling increase in the creation of homophobic, racist, and propagandistic songs, according to new research. These tools, which can generate songs complete with vocals or set user-supplied lyrics to music, have been exploited by malicious actors to produce and disseminate offensive content, as reported by TechCrunch.
ActiveFence, a service dedicated to managing trust and safety on online platforms, has identified a significant surge in discussions within hate speech-related communities since March. These communities are actively sharing methods to misuse AI song generators to craft hateful songs targeting various minority groups. The AI-generated music found in these forums is aimed at inciting hatred towards ethnic, gender, racial, and religious groups, as well as glorifying acts of martyrdom, self-harm, and terrorism.
While the phenomenon of hateful songs is not new, the accessibility and ease of use of AI music generators have amplified the problem. With tools such as Udio and Suno, even individuals without musical expertise can create and distribute offensive content on a large scale. This mirrors the way AI-generated images, voice, video, and text have facilitated the spread of misinformation and hate speech.
“These trends are intensifying as more users learn how to generate and share these songs,” an ActiveFence spokesperson told TechCrunch. “Threat actors are quickly identifying specific vulnerabilities to abuse these platforms and generate malicious content.”
Generative AI music tools like Udio and Suno enable users to add custom lyrics to the generated songs. Although these platforms have implemented safeguards to filter out common slurs and pejoratives, users have devised workarounds to bypass these filters. According to ActiveFence, users in white supremacist forums have shared phonetic spellings and altered spacings to evade detection, such as writing “jooz” instead of “Jews,” “say tan” instead of “Satan,” or “mire ape” in place of “my rape.”
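The weakness these workarounds exploit is essentially one of string matching: a filter that only checks lyrics against a fixed list of exact spellings can be sidestepped by any phonetic respelling or altered spacing. The sketch below illustrates that failure mode with a hypothetical blocklist filter; it is not Udio’s or Suno’s actual moderation code, and the blocked term and matching logic are assumptions made purely for demonstration.

```python
# Illustrative sketch of a naive blocklist filter, of the kind the report
# suggests is easy to evade. The blocklist and matching logic are assumptions
# for demonstration only, not any platform's real moderation system.

BLOCKLIST = {"satan"}  # hypothetical blocked term

def passes_filter(lyrics: str) -> bool:
    """Return True if the lyrics contain no blocklisted term (exact spelling only)."""
    normalized = lyrics.lower()
    return not any(term in normalized for term in BLOCKLIST)

print(passes_filter("hail satan"))    # False: the exact spelling is caught
print(passes_filter("hail say tan"))  # True: altered spacing slips through
print(passes_filter("hail saytan"))   # True: phonetic respelling slips through
```

Because the filter never hears how the generated vocals will actually pronounce the text, any spelling that sounds right when sung defeats it.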
TechCrunch conducted tests on these workarounds using Udio and Suno. The results showed that Suno allowed all the tested offensive terms through, while Udio blocked some but not all of the homophones.
The report highlights the need for more robust safeguards and proactive measures by platforms hosting AI music generators. As the capabilities of AI song generators continue to advance, it is crucial to address these vulnerabilities to prevent the proliferation of harmful and hateful content.