
Communications Decency Act: Censored on the Internet?

By Christopher Brubaker, Associate Editor

Most Americans use social media.[1] Google, Facebook, and other platforms are primary destinations for internet users and are integrated with countless devices to bring information to people.[2] As these companies grow their user bases, they become more active in moderating the content their users post.[3] This moderation occurs internally and is often imperfect.[4] Companies must encourage platform engagement without over-regulating user speech. Still, even if social platforms do not regulate their users' content, their liability for that content is limited by a federal statute.[5] Major tech companies face allegations of bias in how they moderate content,[6] and they have answered those allegations by asking the public to trust them.[7] Given the size and resources of major internet companies, lawmakers should retool existing statutes to protect speech on the internet by drawing a sharper line between content moderation and publication.

The Communications Decency Act

Since 1964, media publishers have enjoyed free speech protections that set a high bar for plaintiffs bringing defamation suits.[8] In 1996, Congress recognized that the internet would operate much differently than traditional media and would therefore need different rules to govern content uploaded to it.[9] In response to this changing technology, Congress enacted the Communications Decency Act ("CDA"), Section 230 of which prevents social media platforms and other websites from being held responsible for speech posted on the platform or site by their users.[10] In essence, the CDA provides that a social media platform (or other "interactive computer service") will not be treated as the publisher or speaker of information posted to its website by third-party users. Thus, when defamation claims are brought against social media platforms, the platforms often move to dismiss the action under the CDA.[11] The statute protects large internet companies and small bloggers alike by treating them not as publishers but as platforms.[12]

Other jurisdictions, including the European Union, Canada, and Japan, do not have similar regulations and legal protections.[13] The CDA makes the United States a safe haven for any website that wishes to host controversial or political speech.[14] Because U.S.-based platforms do not have to worry about the legal ramifications of others' speech, the most popular online platforms are hosted in the United States.[15] This legal framework has allowed companies such as YouTube, Vimeo, Amazon, Yelp, craigslist, Facebook, and Twitter to host reviews, videos, advertisements, and social networking for hundreds of millions of users.[16]

The CDA has allowed some of the largest companies in the world to grow without fear of liability for user content. Absent government intervention, these companies have instead been pushed by public opinion to create their own standards for what they allow on their platforms.[17] The CDA protects platforms that voluntarily make good faith efforts to restrict access to or availability of material that the provider "considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."[18] The issue is that the language of the statute grants platforms a wide berth in deciding what content to remove or allow. That latitude is a minor concern when exercised by a small blog or fledgling social media company. It is a much greater problem when exercised by a company that, like Google in 2013, accounts for roughly 40% of internet traffic.[19] With such market share and resources behind them, the largest tech companies can set their own rules and moderate content in a manner that looks very much like publishing.

Content moderation often comes into the spotlight in the months leading up to elections.[20] Allegations of election interference and search engine manipulation surface from a variety of sources.[21] These concerns, among others, have led some to call for changes to the CDA.[22] Given the persistence of allegations of severe bias, tech companies should either exercise greater transparency in their moderation or face government-enacted deterrents that impose greater liability. Calls to amend the statute will likely continue as tech companies grow in size and influence. In the current social climate, tech companies are increasingly likely to see statutory changes that subject their content moderation to much greater scrutiny.


Footnotes

[1] Elisa Shearer, Social media outpaces print newspapers in the U.S. as a news source, Pew Research Center, Dec. 10, 2018, https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/.

[2] Most popular social networks worldwide as of January 2020, ranked by number of active users, Statista, https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/ (last visited Jan. 2, 2020).

[3] Kalev Leetaru, Is Social Media Content Moderation An Impossible Task?, Forbes, Sept. 8, 2018, https://www.forbes.com/sites/kalevleetaru/2018/09/08/is-social-media-content-moderation-an-impossible-task/#12be853f15fa.

[4] Raising the bar on content moderation for social and gaming platforms, VentureBeat, Mar. 27, 2019, https://venturebeat.com/2019/03/27/raising-the-bar-on-content-moderation-for-social-and-gaming-platforms-vb-live/.

[5] 47 U.S.C. § 230.

[6] Dipayan Ghosh and Ben Scott, Why Silicon Valley tech giants can’t shake accusations of anticonservative political bias, CNBC, Oct. 17, 2018, https://www.cnbc.com/2018/10/17/why-silicon-valley-cant-shake-accusations-of-anticonservative-bias.html.

[7] Id.

[8] See generally New York Times Co. v. Sullivan, 376 U.S. 254 (1964) (holding that, in lieu of strict liability, public official plaintiffs must show that the defendant published the statement with actual malice).

[9] Supra note 5.

[10] Supra note 6.

[11] Id.

[12] Section 230 of the Communications Decency Act, Electronic Frontier Foundation, https://www.eff.org/issues/cda230 (last visited Jan. 14, 2020).

[13] Id.

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] Supra note 5.

[19] Tim Worstall, Fascinating Number: Google is Now 40% of the Internet, Forbes, Aug. 17, 2013, https://www.forbes.com/sites/timworstall/2013/08/17/fascinating-number-google-is-now-40-of-the-internet/#12b5a45127c7.

[20] Vivian Michaels, Did social media influence the U.S. election?, Engadget, Jan. 2, 2017, https://www.engadget.com/2017/01/02/did-social-media-influence-the-u-s-election/.

[21] Emily Birnbaum, Tulsi Gabbard sues Google over censorship claims, The Hill, Jul. 25, 2019, https://thehill.com/policy/technology/454746-tulsi-gabbard-sues-google-over-censorship-claims (reporting that a candidate for president sued Google alleging campaign censorship); see also Robert Epstein, Why Google Poses a Serious Threat to Democracy and How to End That Threat, Testimony Before the S. Comm. on the Judiciary, Jun. 16, 2019, https://www.judiciary.senate.gov/imo/media/doc/Epstein%20Testimony.pdf (Senate testimony of a research psychologist alleging content suppression and bias in the 2016 election).

[22] Alina Selyukh, Section 230: A Key Legal Shield For Facebook, Google Is About To Change, NPR, Mar. 21, 2018, https://www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change.

