According to Cameron Hickey, project director of the Algorithmic Transparency Institute, TikTok is inherently harder to moderate than many other social media platforms. The brevity of the videos, and the fact that many combine audio, visual, and text elements, make human discernment even more necessary when deciding whether something violates platform rules. Even advanced artificial intelligence tools, like using speech-to-text to quickly identify problematic words, are more difficult “when the audio you’re dealing with includes music,” Hickey says. “The default mode for people creating content on TikTok is to embed music as well.”
That becomes even more difficult in languages other than English.
“What we generally know is that platforms perform best at tackling problematic content where they are located or in the languages that the people who created them speak,” says Hickey. “And there are more people making bad things than there are people at these companies trying to get rid of the bad things.”
Many pieces of misinformation that Madung found were “synthetic content”: videos made to look like they came from an old news broadcast, or that used screenshots appearing to come from legitimate news channels.
“Since 2017, we have noticed a then-nascent trend of appropriating mainstream media brand identities,” said Madung. “We’re seeing rampant use of this tactic on the platform, and it seems to work exceptionally well.”
Madung also spoke with former TikTok content moderator Gadear Ayed to get a better understanding of the company’s moderation efforts in general. While Ayed did not moderate TikToks from Kenya, she told Madung that she was often asked to moderate content in languages or contexts she was unfamiliar with, and would not have had the context to tell whether a piece of media had been manipulated.
“It’s common for moderators to be asked to moderate videos that were in languages and contexts that were different from what they understood,” Ayed told Madung. “For example, I once had to moderate videos that were in Hebrew, despite not knowing the language or the context. All I could rely on was the visual image of what I could see, but I couldn’t moderate anything written.”
A TikTok spokesperson told WIRED that the company prohibits election misinformation and the promotion of violence, and is “committed to protecting the integrity of [its] platform and have a dedicated team working to protect TikTok during the Kenyan elections.” The spokesperson also said the company is working with fact-checking organizations, including Agence France-Presse in Kenya, and plans to roll out features to connect its “community with authoritative information about the Kenyan elections in our app”.
But even if TikTok removes the offensive content, Hickey says that may not be enough. “One person can remix, duet, and re-share someone else’s content,” Hickey says. That means that even if the original video is removed, other versions can live on undetected. TikTok videos can also be downloaded and shared on other platforms such as Facebook and Twitter, which is how Madung first came into contact with them.
Several of the videos flagged in the Mozilla Foundation report have since been removed, but TikTok has not responded to questions about whether it removed other videos or whether the videos themselves were part of a coordinated effort.
But Madung suspects so. “Some of the most egregious hashtags were things I would find when researching coordinated campaigns on Twitter, and then I would think, what if I searched for this on TikTok?”