
Illustration by Mahgul Farooqui

Twitter Rules

People want freedom on social media, but they also want protection. Is it possible to have both?

Nov 11, 2017

From the best cat memes of 2017 to violent public executions, there is nothing you cannot find on social media. The internet can be an amazing tool for communication and sharing ideas, but it can also be used to spread hate speech, abusive behavior and violent content to millions of users. User-generated content on websites such as Facebook and Twitter is at the forefront of this struggle over online content. People want freedom, but they also want protection. Is it possible to have both?
While Twitter and Facebook are quick to take credit for the positive impacts of their sites, there is a clear lack of accountability when violent or abusive content spreads through their feeds. In light of recent criticism, the social media world has made some drastic changes in an attempt to combat toxic online content. Twitter recently updated its Twitter Rules to fight hate speech and online abuse. However, the practical enforceability of the rules, the application of human judgment to content and the potential infringement on free speech all remain questionable.
The Twitter Rules are a collection of regulations designed to protect users from hateful content. To this end, Twitter's content managers initially focused on large, generalized steps such as banning Nazi propaganda groups or large-scale pornographic distributors. CEO Jack Dorsey claims that the new rules focus more on the specific content that gets published, introducing a red-flag system through which users can report abusive behavior. There has been an increased focus on content related to sexual harassment and self-harm. Twitter even claims to be able to identify users posting content related to self-harm, disable their accounts and provide them with resources for mental health assistance.
But it is not an easy task to monitor the millions of tweets published every day. If Twitter pledges to take on such a task, it must be able to follow through. The recent updates to the rules include promises to create a standalone help center that should improve the appeal process for content that violates the rules and reduce the review time. As of now these are still just promises, but users can be skeptically optimistic based on the attention to detail Twitter has shown in the past.
Proponents of the new rules would say that Twitter is tackling the problem of abusive content at the root. The new rules and mechanisms are meant to prevent such content from being published, or at least to act quickly before large-scale harm occurs.
The implementation of these new rules should be able to combat hate speech and dangerous propaganda to some extent, but at what cost? Twitter's strategy for removing abusive content currently relies on a team of human beings, not algorithms. A group of employees reviews claims of abuse and hate speech to determine whether the tweets violate Twitter's Terms of Service or the Twitter Rules. The specificity of these new rules increases the level of human judgment and editorial oversight Twitter employs.
There is no denying that hate speech and violent threats are dangerous and can cause real harm, but the fundamental tradeoff is allowing people with their own sets of biases and motivations to curate content for users. You are “allowed” to view content rather than being free to control it on your own terms. How much leverage are users willing to give Twitter over what they see, read and ultimately believe? The phrase “Twitter Rules” instills a feeling of being monitored, of having to submit to some omnipotent blue bird who controls what we can and cannot say. However, it might also be the much-needed protection some users want.
Other platforms such as Instagram and Facebook have been struggling with these issues as well. Recently, Instagram stirred up controversy over its censorship of a menstruation-themed photograph series published by Rupi Kaur. The big question is where to draw the line between protection and censorship. Twitter was also criticized for not deleting an inflammatory tweet made by the President of the United States. The Twitter Public Policy account defended its passivity by citing the tweet’s “newsworthiness.”
More Twitter controversies have cropped up recently, prompting some users to boycott the site entirely. Following the Harvey Weinstein sexual assault allegations, actress Rose McGowan’s Twitter account was suspended for 12 hours. While it's not clear which tweet led to the suspension, during that week she had posted a series of tweets calling out the people who knew of Weinstein's actions.
The issue of Twitter and its updated rulebook touches on the debate of free speech versus hate speech, but it’s also more than that. The McGowan and Trump episodes outline the grey areas that remain. Introducing human judgment will help stop the spread of hate speech, but it might also censor social activism for the sake of a “safe” website. Twitter needs to reconsider what it wants to achieve as a company. Will it still be a bastion of free speech, or is this just good business?
Where does the road ahead lead? The spread of hate speech and abusive behavior must be stopped, and Twitter appears to be taking the right measures. What remains to be seen, however, is how the company handles its power over free speech on its platform. Is the pursuit of complete online freedom worth more than the suffering that victims of online abuse have to endure? It's a question worth asking, and the world has yet to figure out the answer.
Taj Chapman is a Campus Columnist. Email him at feedback@thegazelle.org.