Communication Currents

[Image: keyboard featuring a red “Report Abuse” button]

Tolerating Extreme Speech on Social Media

October 1, 2016
Freedom of Expression

In the world of digital media, giants such as Facebook, Twitter, and YouTube routinely remove extreme or harmful speech from their platforms after individual users have “flagged” speech posted by others. Because these companies are private entities, users have no First Amendment rights against them, and this power over public discourse in the hands of platform representatives and users raises concerns about online freedom of expression. In his recent essay, Brett G. Johnson of the University of Missouri asks how First Amendment principles can be applied to assess this system of private governance of extreme speech.

Amid ongoing press coverage of the removal of certain videos from Facebook and Twitter, Johnson noticed that reporters and their sources struggled to articulate how social media platforms balance promoting freedom of speech with a commitment to protecting their users from harmful content. Johnson began to explore existing theories of freedom of expression so that scholars, journalists, and the general public could better conceptualize how powerful private entities control speech, and could make sound judgments about whether such control is categorically good or bad, or whether it depends on the type of speech.

Dusting Off Tolerance Theory

In his article, Johnson argues that Lee Bollinger’s tolerance theory is particularly useful for understanding private governance of speech as it occurs on social media. Individuals, Johnson says, tend to want to silence speech they dislike or find offensive. Platforms such as Facebook, Twitter, and YouTube activate this tendency by allowing users to flag speech they don’t like, and by affording them the ability to engage in mob-like behavior and shout down unpopular speech. Bollinger, however, sees extreme speech as playing an essential role in American democracy, and Johnson maintains that extreme speech can create a greater critical awareness among people toward all of humanity: the good, the bad, and the ugly. “The most extreme types of speech will allow human beings the ability to survive alongside the most extreme people in our messy and complex global society,” he writes.

Tolerance, in the context of Bollinger’s theory, does not mean what it means in everyday usage, where it refers to promoting respect toward marginalized groups in society, who are often the targets of extreme speech. Nor does it mean a blind acceptance of all speech, no matter how extreme or offensive. Rather, tolerance involves allowing extreme speech into public discourse out of a desire to improve one’s mental faculties through active and critical engagement with that speech.

Tolerance theory posits that there is an inseparable link between social and legal constraints upon speech. According to Bollinger’s theory, as legal tolerance for extreme speech increases, social tolerance for extreme speech will also increase. “The catalyzing agent that makes this process work is knowledge, spread throughout society, of the benefits of extreme speech,” writes Johnson. The government can act as a guide for its citizens on how to uphold the First Amendment and how to protect the values of extreme speech.

Social Media and the Private Governance of Speech 

Today, people rely on social media sites to create, share, and consume content of various types, including text, photos, and videos. But control over this user-generated content does not rest solely with the individuals who create it; it also rests with the platform that hosts it. For example, YouTube describes itself as “a forum for people to connect, inform, and inspire others across the globe and acts as a distribution platform for original content creators and advertisers large and small.” Because individuals are increasingly dependent on these digital intermediaries for creating, consuming, and sharing content, individual freedom of expression hinges on online infrastructures and their policies.

The extent to which users depend on digital intermediaries is most evident when individuals seek to express extreme viewpoints. Intermediaries are concerned with keeping users on the platform while balancing their own economic, professional, and ideological aspirations.

However, even as social media use has become commonplace, determining what counts as extreme or offensive speech remains difficult, and any definition is ultimately subjective.

Digital intermediaries have little legal obligation to act on the harmful speech their users publish. In fact, they are required to remove content only when it infringes copyright or violates criminal law (e.g., photos of child abuse). Although social networking sites enjoy this freedom, they find themselves in a dilemma. “They must find the optimal balance between protecting some users from the harmful speech of others, and protecting other users’ ability to freely speak extreme messages,” Johnson writes. This dilemma has produced years of vague policies, inconsistently enforced, at Facebook, Twitter, and YouTube.

Digital Intermediaries as Models for Tolerance

Why should digital intermediaries care about tolerance theory? Johnson asks. “Promoting speech and minimizing harm are the two foremost goals of digital intermediaries (aside from making money, of course),” he explains. Intermediaries fear that extreme speech will turn people off, leaving them with both fewer speakers and fewer social media consumers. Yet, Johnson maintains, tolerance toward extreme speech can be a prudent policy for social networking sites. Drawing a bright line between extreme speech and abuse can help digital intermediaries protect their users while still affording people the ability to speak freely and openly.

Johnson argues that digital intermediaries have a responsibility to be role models for tolerance. This requires a firm commitment to protecting freedom of expression. It also requires that intermediaries stop relying on vague policy statements to justify their decisions to remove speech from their platforms. One step digital intermediaries could take to make their standards clear, Johnson notes, is to draw distinctions based on the type of harm the speech could cause.

Ultimately, intermediaries that serve as role models will also help the individuals using their services to practice tolerance. Practicing tolerance requires individuals, who may be naturally inclined to censor extreme speech by flagging it, to refrain from doing so and allow that speech to compete in the marketplace of ideas. Tolerance, in addition, is an active and educational process, one that involves allowing extreme speech into public discourse to improve the public’s ability to critically engage with it.

About the author

Brett G. Johnson

University of Missouri

Assistant Professor