Gab.com is a social network that promotes itself as a safe haven for free speech and an alternative to Twitter and Facebook. It was also used by the gunman responsible for last month's Pittsburgh synagogue shooting.
In response to the shooting and widespread criticism of the platform, Gab CEO and founder Andrew Torba said, "The answer to bad speech, or hate speech, however you want to define that, is more speech. And it always will be."
Due to its lack of content monitoring and restrictions on hate speech, the site has become a popular platform for antisemitism, racism, Nazism, and sexism. After the shooting at the synagogue, Gab was banned by PayPal and the site's hosting provider, Joyent, pulled its service. On October 28th the site went down, and last week it came back online under a new hosting service.
While the site has loose rules around hate speech, it does ban threats of violence that clearly infringe on the safety of another user.
Where is the line between protecting free speech and protecting society from the spread of harmful ideas like those of white nationalism?
Some might argue that the rhetoric spread by white nationalists qualifies as infringing on the safety of other users and should be banned from the site. Others might argue that restricting any form of speech is a slippery slope to private platforms making further infringements on free speech.
Platforms like Facebook and Twitter have rules banning hate speech, and yet they still struggle to monitor the content of their users, including posts containing divisive and extreme views and the spread of fake news stories.
While Gab has only 800,000 users, it has sparked an important debate about the role of social platforms in stopping the spread of hate. The answer to this issue isn't clear cut, but social media companies need to consider the real-world impact their policies surrounding hate speech have on our society.
This is a really interesting topic. I like how you brought up that sites like Facebook and Twitter are still struggling to ban hate speech and fake news from their platforms; it really goes to show that even extensive monitoring policies can let dangerous forms of speech slip through the cracks. Cases like these force us to consider where social media platforms need to draw the line, if such a thing is possible: at what point do policies that restrict hate speech begin to infringe on the expression of opinions that are merely controversial? Great post!