TikTok’s Nazi problem: How extremists are thriving on the platform
A new report unveils the tactics and reach of extremist content that thrives on TikTok
Their tactics are sophisticated and current, their activities are well-coordinated, and their content draws on generative artificial intelligence (GenAI) and trending music. The messages they spread are full of hatred and violence. TikTok’s response not only fails to stop them but sometimes even empowers them and promotes them to new audiences. Their videos, which include Holocaust denial, glorification of Hitler and Nazism, and support for terrorists who commit mass slaughter, receive millions of views.
A report published this week reveals how a network of hundreds of pro-Nazi TikTok accounts, working in coordination both on and off the platform, reaches millions of viewers, outwits TikTok's weak blocking efforts, and even receives a boost in circulation thanks to the platform's powerful algorithm.
The report, from the Institute for Strategic Dialogue (ISD), an independent non-profit research organization specializing in the analysis of extremist discourse, shows that these pro-Nazi accounts operate with a level of sophistication that makes them difficult to detect. They are adept at quickly amplifying their content and at exploiting TikTok's enforcement procedures, such as warnings issued before a ban, to activate backup accounts and maintain continuous activity.
For their research, ISD began by examining a single TikTok account that openly identified as pro-Nazi. From there, using a method called "snowball sampling," they identified hundreds more accounts connected by shared ideologies, imagery, and slogans, and compiled a list of 200 accounts actively promoting Nazi or far-right ideologies. To gauge the scope and influence of these networks, the researchers also created a dummy TikTok account to test how the platform's recommendation algorithm responds to interactions with such content, and investigated the role of Telegram in coordinating activities across platforms.
Trends in pro-Nazi content creation
Two significant trends emerged in the report. The first is the widespread use of GenAI to create content, particularly images that are caricatured and demeaning to non-white ethnic groups, often depicting them as threats to white communities. For example, the report highlights how AI-generated translations of Hitler's speeches are gaining traction on TikTok, where excerpts are used in videos that glorify figures like Joseph Goebbels, promote Nazi documentaries, or incite violence against Jews. These videos have garnered millions of views.
The second trend is the strategic use of TikTok's extensive library of sounds and songs as a form of coded communication, which helps extremist users evade content monitoring systems. Songs become reference points within the group, and members create posts using these audio clips, sometimes without overtly extreme content, to contribute to the trend. In their research, ISD found that certain songs associated with far-right extremism led to thousands of videos, many of which glorified terrorist acts or promoted neo-Nazi ideology.
Tactics used by pro-Nazi networks
The report also identifies four primary tactics used by these pro-Nazi accounts to spread their messages. The first is the use of coded language—seemingly innocent words and phrases that carry hidden meanings understood by members of the group. For example, images of ovens and food are used to convey Holocaust denial messages, implying there were not enough "ovens" to "cook" six million Jews.
Another tactic involves leveraging seemingly innocent images, such as European architecture, to lure viewers into consuming extreme content. These videos often start with benign themes that quickly shift to promoting Nazi propaganda and expressing hatred toward minorities.
Coordination among pro-Nazi users is another key tactic. These groups often operate through Telegram channels, where they rally support for their videos by encouraging views and likes. For instance, one channel with 12,000 members was dedicated to promoting a neo-Nazi documentary, and its members were encouraged to create response videos to help the film go viral. This coordinated effort produced numerous videos promoting the documentary, some accumulating over 100,000 views.
Within TikTok itself, these networks also coordinate to avoid account bans. When an account is about to be blocked, users often direct their followers to backup accounts, allowing them to quickly regain their audience and continue spreading their content. They use tactics like creating nearly identical accounts with similar usernames and profile pictures, which helps them maintain a continuous presence on the platform despite TikTok's efforts to shut them down.
The final tactic identified in the report involves recruiting TikTok users for offline activities. Some extremist accounts openly recruit followers for far-right organizations or encourage them to distribute anti-Semitic materials. These accounts share detailed instructions for illegal activities, such as creating weapons and explosives, and direct followers to encrypted messaging apps like Element and Tox for further coordination. Some accounts have amassed thousands of followers and represent a serious threat that extends beyond the digital space.
TikTok's response and algorithmic promotion
TikTok's response to this growing problem has been criticized as insufficient. The ISD report found that while TikTok eventually removes some offending accounts, this often happens only after the content has reached significant viewership. Even when accounts are blocked, users quickly create new ones and regain their follower base, allowing them to continue spreading their messages with minimal disruption.
More concerning is that TikTok's algorithm appears to promote this extremist content. The ISD researchers created a dummy account that interacted minimally with pro-Nazi content and found that the "For You" feed quickly became filled with similar videos. This included AI-generated translations of Nazi speeches, propaganda videos, and even recruitment materials for white supremacist groups. The speed and extent to which the algorithm pushed this content suggest that TikTok’s recommendation system may inadvertently be aiding the spread of extremist ideologies.
TikTok has stated that it removes over 98% of hateful content before it is reported, but the continued prevalence and spread of pro-Nazi material on the platform indicate a significant gap in the effectiveness of these measures. As these extremist networks become more sophisticated in their tactics, TikTok faces increasing pressure to develop more robust and proactive strategies to combat the spread of hate on its platform.
TikTok responded to Calcalist: "There is no place on TikTok for hateful behaviors or for ideological organizations that encourage hatred. We remove over 98% of this type of content before it is even reported to us in the app. We work with experts to detect such trends as they develop and to strengthen our defenses against hateful ideologies and hate groups."