The world evolves around us. Technology is driving some of that change, but so too are likely generational, cultural, and individual factors. Recently, Fortune reported that the CEO of Twitter will appear before a Congressional committee in September to discuss "Twitter’s evolving, and sometimes vague, policies regarding what content is and is not allowed." Google CEO Sundar Pichai has declined to appear according to Bloomberg, leading to "bipartisan criticism."
Free speech is an intriguing subject. In the current conversation, it is couched in terms of Internet platforms and algorithms, and it appears that some part of the national curiosity surrounding speech and the Internet may be based on our lack of universal agreement as to definitions and terms (what is "hate," or even "inappropriate," speech?). America seems unable to reach universal agreement as to what speech is protected and what constraints should be imposed. There seems no shortage of those who would limit speech, but there is periodic criticism of particular individuals' motivations, or their focus on some specific topic (each seems eager to protect his or her own expressive rights, but sometimes as eager to stifle others').
This is an issue for social media certainly; it affects platforms like Facebook, Twitter, Instagram and more. However, the Internet is full of platforms with similar opportunities for commenting on and rating various professionals or businesses. There are platforms like Avvo, Yelp, Amazon, Better Business Bureau, and more. Hubspot provides a pretty good list. These all provide a platform to expound, to complain, but also to endorse. Others see those postings and may comment either in agreement with or opposition to those thoughts. The effect is to generate debate, which draws eyes, and thus sells advertising.
Most of us realize that those platforms are protected from defamation liability by federal law. In 2015, I touched on the potential for defamation in The Internet, Evidence, and Defamation, describing some of the history of defamation decisions regarding the Internet, and the implementation of the Communications Decency Act of 1996. In 2013, The Atlantic contended that this law "gave us websites like Reddit, Craigslist, Digg, and perhaps all of social media." While The Atlantic acknowledges this specific impact of the 1996 Act, it expounds further in laudatory terms. The simplicity of this Act, it contends, sent the message to entrepreneurs to "go innovate."
The Digital Media Law Project provides an overview of this Act, and describes how this law deviates from "common law," our process of law constructed case upon case in which we strive to hold true in each case to the cases previously decided (we call each decision "precedent" and our striving to remain consistent with it "stare decisis"). While America was founded upon a common law model, over time we have changed, or "abrogated," common law repeatedly by legislation. The Project notes that:
Under standard common-law principles, a person who publishes a defamatory statement by another bears the same liability for the statement as if he or she had initially created it.
Thus, in publishing a statement, a "book publisher or a newspaper" could generally be held liable for the statement under the common law tort of defamation.
As the Project explains, this is based on the fact that ultimately a book or newspaper publisher has control over the content that is published. Such a publisher has the ability to read content, change content, even reject content. Because the publisher has such control, it is seemingly appropriate for the publisher to share liability for publication of false information.
This is not true for "distributors." As the Project explains, this logic does not support holding a bookstore liable for the content of a book. It notes "that it would be impossible for distributors to read every publication before they sell or distribute it." More practically, even if a bookstore owner did read the publication, would society expect that the owner would invest the resources necessary to fact-check the representations or statements in each book, newspaper (it used to be common for publishers to print news stories and pictures on paper, and the public would purchase them for home delivery or from a vendor or machine), or magazine? The distinction between "publisher" and "distributor" struck a compromise, based on facts and behavior.
In 1996, Congress stepped into the debate of Internet communications. The defamation subject at that time was not new. Back in those days, the online environments or platforms were sometimes referred to as "bulletin boards." These precursors of the current social media platforms, these "bulletin boards," allowed people to locate opportunities and conversations that were associated with their interests.
For example, there might have been a bulletin board dedicated to modems, software, knitting, or something more mundane like workers' compensation law. People interested in a particular topic would search for and visit such a topic-specific bulletin board. There, they could read about the topic, share views, ask questions, and generally interact. Somewhat like a huge but specific "group chat" today, various visitors would post their thoughts, positive and negative, on these virtual bulletin boards.
As the Project points out, Internet platform/software providers had been sued in the early 1990s for defamation based upon what Internet users had posted on their bulletin boards. The platform liability was analyzed under the same common law applied to others. Those providers had defended themselves arguing they were "like a distributor" as they "did not review the contents" of postings (as a "publisher" might).
The Project describes how courts reached different decisions about whether such platforms were more like publishers (newspapers) or like distributors (book stores). The extent to which platforms exercised "editorial control" whether by examining each post or through "content guidelines and software screening program(s)," was of some influence on the determination of the existence and extent of "control." What the Decency Act brought was more legally absolute. It merely said:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
And thus the platform/software/environment purveyors were essentially given immunity. The law made them "distributors" as a matter of federal law, and precluded the states from passing any law to the contrary. Congress can do that, because the Supremacy Clause of the U.S. Constitution allows it to have exclusive authority in some instances. Admittedly, there are instances in which federal supremacy is nonetheless ignored.
The Project contends that this Act thus eliminated a "perverse upshot" that existed prior to the Act. It contends that this "perverse" outcome resulted when a provider attempted to keep things civil through the guidelines or software solutions. By attempting to keep things civil or to avoid the expression of hate or lies, the advocacy of violence or worse, the platform was seen as departing from the "distributor" role and assuming more of a "publisher" role. Thus, before the Decency Act, a platform was perhaps disinclined to exercise any control.
The evolution of liability may be seen by some as a useful lens through which to appreciate the current state of the Internet in America. Certainly, there are "social media failures," instances in which companies have attempted to use (leverage) those platforms and failed to their detriment. There are also instances in which merely listing a business on a platform has led to regret.
One interesting example is a frustrated small business owner complaining about Yelp on Yelp. One critical point of that post is the perception that Yelp does "not filter negative remarks or comments, they do not verify them, they do not even know anything about whom or why the negative remark has been left." In effect, the user complaint is that Yelp seems to act as a distributor rather than a publisher as regards negative reviews. However, the author of that post contends that Yelp contrarily filters positive remarks or reviews. Thus, that reviewer perceives that one platform selectively chooses which role to play depending upon the nature of the content.
Note, however, that this Yelp user elected to list that business on Yelp. Perhaps not as purposeful as some of the more newsworthy social media failures, but intentional nonetheless. There are also platforms which include businesses that have not elected to be included. Avvo is a lawyer rating platform. The National Jurist recently explained why all lawyers should claim their Avvo profile, noting "Nothing you can do, aside from abandoning your license, can remove your Avvo profile." If you are an attorney, you will participate in this platform. There is no election, no choice.
On Avvo, similar to other online review platforms, "attorneys cannot prevent a disgruntled client from leaving a negative review." There is the opportunity to respond to such a review, but Avvo nonetheless engages the profession involuntarily. The ABA Journal has reported on various lawsuits filed against Avvo, mostly unsuccessful. That may be seen by some as a significant distinction from Facebook, Twitter, and even Yelp.
There are perceptions that platforms have evolved to now routinely exert editorial control over content. Consider the Yelp user who contends that negative comments are unfiltered while positive comments are not. There are critics who contend that social media platforms like Twitter make value judgments about appropriate content, in effect undertaking a role more akin to a publisher than a distributor.
In effect, critics argue that the immunity afforded by the Communications Decency Act of 1996 affords online publishers free rein to editorialize and control content with immunity from consequences. Whether a platform is a publisher or a distributor is not influenced by those editorial decisions regarding content or even access. The Act removes any consequence for editorial decisions. Thus, after the Decency Act, a platform is now perhaps overly inclined to exercise control, and enjoys near absolute immunity from the repercussions of those decisions.
There are those who contend that this Act in its simplicity is the only appropriate course. They argue that the phenomenon that is social media cannot exist without this artificial, arbitrary, and near absolute immunity for platforms. Others question that near absolute immunity. They contend that it artificially shields not only the neutral platform from the excesses of its users, but also shields the activist platform, now unfettered and free to editorialize with no potential of liability or responsibility.
There will likely come debate over the Communications Decency Act of 1996: whether the broad protections are logical and appropriate, or whether they are too broad in protecting misfeasance, extending to absolutely protect malfeasance as well. There may be those who see either the protection or its absence as a "perverse upshot." It may be that neither absolute immunity nor unfettered liability is appropriate, but instead some more moderate compromise. Perhaps there will be discussion of some middle-of-the-road path between the absolute extremes of "all" or "nothing."