#SEGGSED: SEX, SAFETY, AND CENSORSHIP ON TIKTOK

Algorithmic White-Supremacist Beauty Ideals

Unlike many SNS that are popular in the United States, TikTok is owned by the Chinese company ByteDance. The platform is the legacy of Musical.ly, which launched in 2014 and was acquired by ByteDance, which folded it into TikTok in 2018. From the get-go, the company had strong ties to US business and quickly became popular in the United States. While it has been slingshotted into global popularity, TikTok has received many criticisms from users, businesses, and governments who perceive it as a threat to privacy and safety. Similar privacy and safety concerns have been raised about data-hungry SNS like Facebook, but unlike its US-based rivals, TikTok claims to be willing to offer unprecedented transparency about its data collection and algorithms.

In 2020, TikTok’s then chief executive Kevin Mayer, an American former Disney executive, said the company would allow experts to examine the code behind its algorithms. Though such transparency is, as BBC News notes, “hugely significant in an industry where data and code is closely guarded,” the move did not begin to address the many concerns people have had about the app’s data collection and algorithmic moderation techniques (Tidy and Smith Galer, 2020). BBC News reports, “TikTok is one of the first platforms many young people will come to share social activism content,” citing May 2020, when #BlackLivesMatter was trending. “But even as the hashtag drew in billions of views, there were criticisms that content from black creators was being suppressed and that hashtags related to the protests were being hidden” (Tidy and Smith Galer, 2020). This is one example of an ongoing issue centered on TikTok’s potential for perpetuating algorithmic violence, a form of automated digital violence.


In 2020, The Intercept published two internal TikTok moderation documents, recreated with minor redactions, that illustrate TikTok’s biases against fat, disabled, and poor people and people of color. TikTok spokesperson Josh Gartner explained that the leaked rules “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them” (Biddle et al., 2020). Even if these rules were already retired, they serve as a roadmap to understanding the biases that continue to shape who experiences overmoderation, or the suppression of content, and how. The “Ugly Content Policy” instructed TikTok moderators to suppress uploads by users who did not match the profile of the users the platform wanted to attract. The rules include filtering for content that contains physical markers of difference from normative beauty standards, such as:


“Abnormal body shape, chubby, have obvious beer belly, obese, or too thin (not limited to: dwarf, acromegaly)” and, “Ugly facial looks (not limited to: disformatted face, fangs, lack of front teeth, senior people with too many wrinkles, obvious facial scars) or facial deformities (not limited to: eye disorders, crooked mouth disease and other disabilities).”

There was additional guidance for filtering content that might indicate the creator is experiencing poverty, such as:

The shooting environment is shabby and dilapidated, such as, not limited to: slums, rural fields (rural beautiful natural scenery could be exempted), dilapidated housing, construction sites, etc. (For internal housing background which has no obvious slummy character, only those cases as specified should be labeled: crack on the wall, old and disreputable decorations, extremely dirty and messy) (Biddle et al., 2020).

The document offers no actual guidance on anti-bullying (in fact, it could be argued that it perpetuates bullying sentiments), nor does it seem concerned with users’ safety. TikTok’s moderation seeks to create an air of desirability and popularity around the app by curating a feed of beautiful, talented, and wealthy influencers who embody many teens’ fantasies. In the process of creating TikTok’s desired digital landscape, those othered by the app’s clumsy application of the community guidelines experience algorithmic oppression. As Safiya Noble asserts in her 2018 book, Algorithms of Oppression: How Search Engines Reinforce Racism, “Algorithmic oppression is not just a glitch in the system but, rather, is fundamental to the operating system of the web,” where existing power imbalances and structural violence are reproduced.

TikTok offers its own account of how its algorithm operates for users on the app. On the page “How TikTok recommends videos #ForYou,” TikTok describes how its algorithm both creates and is created by users’ experiences on the app. They describe the For You page’s individualized curation as “part of the magic of TikTok,” further mystifying the elusive technology. TikTok’s algorithmic curation is rendered a god-like force whose power cannot be understood by the average user; it acts authoritatively to produce the reality of TikTok’s learning ecologies.

On this page, TikTok also offers another answer to the question of what safety means according to the platform. In the section “Safeguarding the viewing experience,” TikTok explains, “Our recommendation system is also designed with safety as a consideration. Reviewed content found to depict things like graphic medical procedures or legal consumption of regulated goods, for example – which may be shocking if surfaced as a recommended video to a general audience that hasn’t opted in to see such content – may not be eligible for recommendation.” This suggests that viewers being shocked by content is equivalent to their safety being compromised. Whether or not something is shocking is entirely subjective, much like what is considered sexually explicit. If TikTok is indeed moderating content based on its potential shock value, how is it measuring or determining what a majority of its community would find shocking? TikTok users are living within the collective imagination of the app’s creators and what they find to be shocking, pornographic, inappropriate, sexually explicit, or unsafe. As the document itself notes, users opt in to streams of content through algorithmic curation generated by user preferences indicated through interactions with the app, video information (captions, sounds, hashtags), and device and account settings (language preference, country setting, and device type). With that in mind, it is likely that users who come across content about sex, sexuality, and relationships on TikTok are seeing it because they either intentionally seek it out or their preferences suggest it is within their interests.

In a play to reduce users’ anxiety about the app’s technological potential to breach privacy and facilitate harm, TikTok announced the creation of its “Transparency Center” in March 2020, and it continues to be developed. The TikTok for Business article “What's Next: Building for Brand Safety,” published in February 2022, introduces “TikTok’s Four Pillars of Brand Safety,” the first of which is “Keeping Our Community Safe.” The app states that its business approach starts with keeping the TikTok community safe. They argue, “When we do so, we not only create a safe place for our users to authentically express themselves but, in turn, a positive environment to build brands and reach our community in a meaningful way” (TikTok For Business, 2022). With the mention of “building brands,” TikTok executives reveal that their concern for user safety is at least partly rooted in capitalism, so much so that it is the first pillar of their plan for brand safety. They boast about strides made in 2021 toward embodying their commitment to safety “by introducing an array of products and initiatives that reflect our ongoing dedication to the safety of the TikTok community, like age-appropriate privacy and safety settings, tools to promote kindness, combat bullying and curb the spread of misinformation, as well as campaigns to promote awareness around bullying” (TikTok For Business, 2022). The post doubles down on the platform’s argument that overmoderation on the app happens by mistake because the algorithm is designed to protect users from experiencing bullying, thereby censoring marginalized people and bodies under the guise of protecting their emotional well-being. To be optimistic, perhaps TikTok will genuinely continue to be responsive to users’ critiques and shift how SNS practice transparency and accountability in the future. The current state of the platform, though, suggests that the company is saying the right things without modeling the behaviors necessary to make a change.

Online surveillance, shadowbanning, algorithmic violence, and the silencing and erasure of non-normative, non-hegemonic groups are all examples of digital violence. Sex workers disproportionately experience digital violence as the targets of legislation like SESTA-FOSTA. Sex workers have been integral to the development and widespread adoption of telecommunication technologies. Despite being early adopters and significant contributors to the economy of the digital world, sex workers have been targets of virtual sweeps attempting to “clean up” the internet, implying that the selling and facilitation of sexual fantasy and desire is dirty and dangerous. On TikTok, safety is used as an alibi to promote policies that are actually rooted in capitalist gain rather than in safety.
 
