

This paper is part of Trust in the system, a special issue of Internet Policy Review guest-edited by Péter Mezei and Andreea Verteş-Olteanu.

Content moderation is an integral part of the political economy of large social media platforms (Gillespie, 2018). While social media companies position themselves as platforms offering unlimited potential for free expression (Gillespie, 2010), these same sites have always engaged in some form of content moderation (Marantz, 2019).

In recent years, in response to increasing pressure from the public, lawmakers and advertisers, many large social media companies have given up much of their free speech rhetoric and have become more active in regulating abusive, misogynistic, racist and homophobic language on their platforms. This has occurred in particular through banning and restricting users and channels (Marantz, 2019). In 2018, for example, a number of large social media companies banned the high-profile conspiracy theorist Alex Jones and his platform InfoWars (Hern, 2018), while in 2019 the web infrastructure company Cloudflare deplatformed the controversial site 8chan (Prince, 2019). In 2020 a number of platforms even began regulating material from President Donald Trump, with Twitter placing fact-checks and warnings on some of his tweets and the platform Twitch temporarily suspending his account (Copland and Davis, 2020).

As one of the largest digital platforms in the world, Reddit has not been immune from this pressure. Built upon a reputation as a bastion of free speech (Ohanian, 2013), Reddit has historically resisted censoring its users, despite the prominence of racist, misogynistic, homophobic and explicitly violent material on the platform (for examples, see Massanari, 2015, 2017; Salter, 2018; Farrell, Fernandez, Novotny, and Harith, 2019). In 2011, for example, the then general manager of Reddit, Erik Martin, addressed growing controversies over hateful content, stating: "We're a free speech site with very few exceptions (mostly personal info) and having to stomach occasional troll reddit (sic) like /r/picsofdeadkids or morally questionable reddits like /r/jailbait are part of the price of free speech on a site like this."

However, under increasing pressure from its users, advertisers, lawmakers and the general public, Reddit has slowly begun to shift this approach (Copland and Davis, 2020). Reddit has frequently found itself in an uncomfortable position, straddling a fine line between maintaining a culture of free speech and not allowing violent and hateful material to flourish (Copland, 2018). Reddit's argument that it was a free speech site that would not intervene has slowly come undone. Since 2012 Reddit has gradually changed its content policies, banning: any suggestive or sexual content featuring minors (u/reddit, 2012); the sharing of nude photos without the subject's consent (u/kn0thing, 2015); attacks on and harassment of individuals (u/5days et al., 2015); the incitement of violence (u/landoflobsters, 2017); and attacks on and harassment of broad social groups (u/landoflobsters, 2019).

Content moderation by large digital platforms, in particular of hateful material, comprises an array of different techniques (Gillespie, 2018). As noted already, many platforms have become more active in deleting hateful content and banning or suspending the profiles of far-right speakers. Many platforms, including Reddit, contain communities which self-moderate, with volunteer moderators regulating content based on community-defined rules. Platforms also rely on the labour of users themselves in order to moderate content (Matias, 2019); users frequently play the role of content monitors, reporting content to the platform for moderation to occur (Gillespie, 2018). In addition, many platforms have begun to engage in fact-checking processes in order to stop the spread of disinformation. Facebook, for example, has created an independent fact-checking unit (Facebook, 2020), while Twitter made headlines by fact-checking a tweet from President Donald Trump (BBC News, 2020).

Reddit has faced duelling pressures from parts of its user base which find Reddit's anything-goes approach core to the platform's appeal, and other users who increasingly wish to constrain much of the more extreme material on the site (Massanari, 2015). In addition to the content policies identified above, Reddit has also implemented a unique approach to dealing with hateful content on the platform: the quarantine function.
