Social media companies are in a bit of a pickle. The future remains bright for platforms like Twitter, Facebook, and YouTube - stock prices are high, ad revenue is solid, and users can't look away for even a moment. However, some major concerns are bubbling to the surface. Beginning with Russia's use of Facebook to influence the 2016 elections, the topic of what does and doesn't belong on social media has begun to pick up steam. Recent stories about Facebook and YouTube banning aggressive conservative personality Alex Jones, and Twitter's refusal to do the same, raise the central question in the debate: should social media platforms be censoring their own users' content?
'No' is the pretty obvious answer here. Social media is meant to be an open exchange between users across the globe, ideally bringing a diverse global network into conversation through technology so familiar that we forget it's still incredibly new. A "marketplace of ideas" is a phrase that hurts to type but gets at the basic goal of these platforms. The reality is that most of the time, social media instead turns us into content-craving zombies with short attention spans... but that's a complaint for a different day!
If social media companies begin censoring their own users based on whatever criteria they choose, then we have a serious problem. First and foremost, it's ideologically against what these companies stand for. Even if we can ignore that, it's still difficult to create fair and enforceable rules about what can and can't be said. The reality is that corporations would be silencing voices on major platforms according to whatever set of rules spills out of a few backroom meetings. There are plenty of valid reasons - inciting violence, hate speech, spreading misinformation - but each individual case will be fraught with hellish idiosyncrasies that will make users feel like they deserve their day in court, not a suspended account.
Let's pause for a second. Somewhere, a 40-year-old man just tore off his Hanes t-shirt and yelled, "It's a free country, bro!" Is it?! An often-cited reason why online voices shouldn't be silenced is the one, the only, the hugely misunderstood First Amendment: the right to free speech. Could it be that, regardless of how our debate turns out, it's actually illegal for Twitter to shut down your racist account? Of course not! The First Amendment protects citizens from Congress making laws that infringe on, among other things, your right to free speech.
Here are some things that the First Amendment does not cover: yelling 'FIRE' in a crowded building, berating retail workers, getting punched in the face for screaming obscenities in public, and any form of corporate censorship whatsoever. Companies have the right to delete all of your posts and replace them with dogs in blackface, user agreement permitting.
It seems, then, that these companies can do as they please and were even built on the belief that everyone should be heard. Here's my problem: not everyone should be heard. Alex Jones, who has denied the existence of the Sandy Hook massacre, shouldn't be given a platform. Accounts deliberately created to spread false news and build anger among political bases shouldn't exist. Companies have universally taken a stand against bots - automated, fake accounts that only exist to influence real users. They are antithetical to the purpose of social media and prey on users. That's exactly why some very real accounts need to disappear, too. For social media users, being heard is a privilege and caustic behavior is more than enough reason for the mic to be turned off.
Companies need to be careful about how they navigate this road. Many users exist in a middle ground. Let's call it the "unfortunate asshole" category. These are accounts that spread vaguely racist messages, attack other users, or constantly distort genuine information. Those people, unfortunately, shouldn't be kicked out. Those aren't voices we need to hear, but they're not voices that we have any right to silence. Unfortunately, they're just assholes.
The right thing to do, without question, is to censor users who incite violence, spread dangerous hate, or run deliberate misinformation campaigns. Social media companies are shying away from their moral obligation to turn off the angry voices that are lowering the level of interaction for everyone else.
The truth is... it won't make everyone happy. Vitriolic voices are an issue in today's world for a reason - they have plenty of listeners. Even those who don't share the same hateful views may find themselves refreshing their feed again and again. Many large accounts survive on "hate clicks" from dissenting users. That attention means clicks, which means advertising, which means money in the pockets of the companies.
Shutting down insidious voices is a tricky task, one that is filled with the potential for unwarranted censorship and that ultimately means less money for the companies making that very choice. If it sounds like a raw deal, that's because it is. But it is also, without question, the right thing to do.