On Publishers, Carriers, and Bookstores
A Monolog for Our Political Season about the Political and Legislative Background of Section 230
The next American election is just days away, so you’d think that most American politicians would be focusing on campaigning either for their own election (or re-election) or for their colleagues and allies who are running now to attain or retain elective office. But not this week. Weirdly enough, the United States Senate took time off from campaigning—even though the official election date is next Tuesday, and millions of American voters are voting in advance of the election or have already voted—to host a hearing whose nominal purpose was to discuss whether a formerly obscure but now hotly disputed statute known as “Section 230,” which plays a central role in limiting legal liability for internet services, needs to be updated.
(The actual reason the Senate scheduled this hearing is to complain that a dubious story in the New York Post—a Rupert Murdoch tabloid—wasn’t treated with enough respect and uncritical acceptance by the social-media companies.)
The key sentence in Section 230 is a short one, but like many other briefly phrased statutory provisions, its impact has been vastly larger than its word count. Subsection (c)(1) of Section 230 contains the 26 words that have given rise not only to Facebook and Twitter but also to my former employer, the Wikimedia Foundation, which operates Wikipedia: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law has also played a profound role for many other U.S.-based internet sites, including the websites of traditional newspapers that host “reader comments” sections or other forums in which non-employees may comment directly on a journal’s traditionally edited and reported content.
Publisher or common carrier
American lawyers like me, who specialize in internet law, recognize Section 230 as a foundational statute that has given rise to most of what we love about today’s internet but also to much of what we find maddening about it. Most of us internet-law specialists are willing to discuss whether this statute, now close to a quarter-century old, needs to be revisited and possibly revised. But most of the proponents for Section 230 reform—whether they’re incumbent politicians or otherwise well-regarded legal scholars and commentators (or even the occasional Supreme Court justice)—have given little attention to why Section 230 was crafted or what problems it was trying to solve. The failure to do so is responsible for most of the misperceptions that afflict today’s debate about this law.
One of the roots of these misperceptions is the notion, frequently stated but almost never interrogated by American legislators, that internet services like Facebook and Twitter (or Google or Wikipedia) must be understood to fall on one side or the other of a binary taxonomy, which might be termed the “traditional publisher”/“common carrier” distinction. If you’re a traditional publisher, the thinking goes, you have the right to edit your publication as you see fit, including tailoring it to match your editorial biases, but you also have legal responsibility if your publication publishes (for example) false and defamatory content, or certain kinds of privacy-invading content about individuals. (Broadcasting also fit this first model. Like the traditional press, broadcasting enjoys substantial First Amendment protections, but broadcasters are constrained by a government regulatory framework administered by the Federal Communications Commission. When it came to issues like defamation, broadcasters could be held responsible for what other people said on their services, too.)
In contrast, if you’re a common carrier (a good example here is traditional telephone service), you’re generally free from liability for the content your users and subscribers send or receive through your service, but that freedom rests on the premise that, as a provider of common carriage, you don’t impose editorial judgment on that content. Both the publisher and the common-carrier models have made the transition to the internet: almost every newspaper in the developed world has a website that carries the same (edited and proofread) content as its print edition, and email services like Google’s Gmail typically operate much as common carriers do.
Failing their obligations, either way
To a considerable degree, the public debate in the U.S. Congress (as well as the public debate in other legislative forums and tribunals around the world) has centered on some critics’ viewing the big internet companies as having failed as publishers, while other critics insist that the companies are failing in their obligations as common carriers. In the United States, a common theme among Republican critics of the companies has been that Section 230 is supposed to operate as a kind of common-carrier system imposed upon the internet, requiring that Twitter and Facebook and other companies be neutral as a condition of being free from liability. Democrats, broadly speaking, tend to base their criticisms on the claim that the companies—as publishers—need to do more to filter and/or remove terrible content.
It’s unlikely, however, that anyone unfortunate enough to have viewed Wednesday’s Senate hearing (conducted, in these days, by Zoom teleconferencing) came away with an improved understanding of the formerly obscure, now hotly disputed law. The hearing was remarkable for several reasons, not all of which may be apparent to observers in the rest of the world, who may be wondering why the senators decided to take time off from the final days of a national election campaign to summon tech-company CEOs yet again to be criticized for their presumed negligence or misdeeds.
Incumbent politicians in both of the two major political parties in the United States have set their sights on Section 230 as a source of the actions (or inaction) that the critics have complained about. But the two parties starkly disagree as to which side of the publisher/common-carrier divide the Big Tech companies (not just social media but also search providers and even retailers like Apple and Amazon) should fall.
A false dichotomy
What the critics commonly fail to realize is that the choice between the “traditional press” and “common carrier” models is a false dichotomy. As it happens, United States free-expression jurisprudence has for decades recognized a third model, which might best be characterized as the bookstore/newsstand model. That model was first recognized in Smith v. California, a 1959 Supreme Court case centering on the obscenity prosecution of a bookstore owner. Only five years later, Smith v. California became a keystone of the Court’s decision in New York Times Co. v. Sullivan (1964), which concluded that defamation law, like obscenity law, needed to be constrained by the U.S. Constitution’s First Amendment. Applied to computer networks by a federal district court in Cubby v. CompuServe (1991), this model recognizes that bookstores and newsstands (and, by the way, libraries too) are themselves important institutions for First Amendment purposes. Under this model, we don’t insist that bookstore owners, newsstand operators, or library workers take legal responsibility for everything they carry, but we also don’t insist that they carry everything. They’re neither publishers nor common carriers.
The CompuServe decision arrived at the beginning of my career as a lawyer and an internet-law specialist, and the relatively few lawyers back then who were concerned about internet-service liability (we now use the catch-all term “intermediary liability”) met this development with a sigh of relief. This federal-court decision wasn’t “binding” precedent, but it was persuasively grounded in traditional First Amendment jurisprudence. The judge in the CompuServe case at least demonstrated that it was possible for courts to discern what was different about computer-based interactive media in which ordinary subscribers and users could speak directly to one another without an editorial intermediary.
But in 1995, when a state court judge misinterpreted the facts and the law in a case centering on the then-popular online service Prodigy, this model of First Amendment protection seemed to be slipping away from the online services, including AOL and CompuServe as well as Prodigy. The few members of Congress who were alert to these emerging issues saw the Prodigy case—which misread the CompuServe case as one that required a binary publisher/common-carrier taxonomy rather than as one that refuted that paradigm—as an event that required legislative intervention. The result was the drafting and passage of what eventually became Section 230 of the Communications Decency Act (a.k.a. the “CDA” component of the 1996 omnibus Telecommunications Act). The CDA as a whole was passed to limit the availability of sexual content on the internet, but Section 230 was designed to empower service providers to act on their own to remove inappropriate sexual content—or any other kind of “objectionable” content—without incurring legal liability as “publishers.”
In those years I was a lawyer for the Electronic Frontier Foundation, and EFF joined other civil-liberties groups (and many other stakeholders including individuals, NGOs, and then-dominant tech companies) to challenge the constitutionality of the CDA. We won that case decisively (nine justices voted to strike down the law), but we had been careful to challenge only the provisions that would have allowed the government to regulate or prosecute lawful content if it appeared on the internet. That is, we were careful not to challenge Section 230, whose purpose was to enable service providers to customize their content offerings by curating user-generated content to fit content policies and terms of service.
Barracks and swamps
In short, we left Section 230 in place because we knew from experience even then that most internet service providers—and most of their subscribers—would be unhappy if a new service like Facebook or Instagram were obligated to choose only between (a) the traditional top-down editorial model for publications like newspapers or (b) the common-carrier model that basically dodged any content curation choices in order to avoid assuming liability for what they didn’t remove or decided to keep. The sheer volume of user-generated content—and Americans share with individuals everywhere the profound desire to have their voices heard—guaranteed that even the most highly capitalized and editorially committed social network wouldn’t be able to review everything users submitted prior to publication—just as the most highly committed bookstore owners and librarians can’t be expected to have reviewed everything on their shelves.
Which is to say, Section 230 is based on the principle that allows bookstores, newsstands, and libraries to exist—these entities have the right to decide what to include and what to refuse to make available. But they don’t have the obligation to read and preview everything they distribute or else to refuse to make any editorial decisions at all. The top-down-publisher model, imposed on social media, would be a bit like requiring everyone to live in military housing—you have to obey all the rules and you don’t have any presumptive right to speak at all, much less protest. The common-carrier model, applied to social media, would be more like requiring everyone to live in a sewer—you have perfect freedom to drop anything you want into the stream, but you wouldn’t like having no control at all over what flows past you.
Put another way, you’d rather spend your time in a library or bookstore than in a barracks or a swamp.
So why is Section 230 even controversial?
As I’ve said, on one level the controversy derives from the fact that the critics, whichever side they’re coming from, typically haven’t investigated the problems that Section 230 was crafted to solve. On another level, the developed world (as well as much of the developing world) is enduring a moral panic driven by concern that the tech companies have intentionally or unwittingly or irresponsibly furthered social unrest. (As I argued in the book I published last year, this concern may be overblown, not least because it lets the traditional media ecosystem off the hook for political outcomes like Brexit and the election of President Trump in the USA or Duterte in the Philippines or Bolsonaro in Brazil.)
But at a deeper level, this most recent iteration of the Senate debate about Section 230 is driven by Republican politicians who once were happy with how easily a service like Twitter empowered President Trump (and other politicians) to stir up support. These same Republicans now are disturbed to find that their president and like-minded users are finding more and more of their social-media commentary flagged for problems like doubtful factuality or apparent encouragement of voter suppression and violence. Section 230 is what empowers Twitter and Facebook to make these choices—the choice not to be neutral at all, but instead to curate their content and policies to create spaces where users will want to engage with one another. So in that limited sense both the Republicans and the Democrats are right to have identified Section 230 as something worth discussing.
But what the president and his supporters don’t realize is that repealing Section 230 would compel the services to choose between two models, neither of which would make the Republicans happy. The “publisher” model would entail blocking or filtering or flagging or “adding context to” more of President Trump’s tweets, not fewer of them. The “common carrier” model would entail making the services engage in essentially no choices at all with regard to removing content, except content that is expressly illegal (such as child pornography or copyright infringement or terroristic threats).
Careful what you’re asking for
In sum, then, probably the worst thing for Republicans or Democrats would be to get exactly what they’re asking for in their push to revise Section 230. The irony, however, is that in summoning the CEOs of Google, Facebook, and Twitter to a hearing about Section 230, they’re targeting the successful incumbent companies that would be best able to adapt to whatever new limits Congress might impose on their intermediary-liability protections. Google and Facebook and Amazon and Apple will almost certainly be able to invest in coping with new liability risks—and one of the coping mechanisms will be to silence a lot of ordinary user participation in things like status updates and product reviews. (Twitter is perhaps less well-positioned to invest in new content controls—it’s smaller than the other companies on Congress’s Most Wanted Big Tech Offenders list.)
But as uncomfortable as the CEOs are in being quizzed by hostile lawmakers, they know to a certainty that they have enough money in the bank to respond to whatever new restrictions are imposed on them. The flip side of this is that whatever new liability burdens Congress imposes on Big Tech’s incumbents will likely have a disproportionate impact on the Small Tech upstarts that hope to displace the incumbents someday. In effect, it will tend to lock in the market-dominant players for decades to come.
This explains why Mark Zuckerberg has made a point of saying he’s willing to embrace new laws, including Section 230 “reform” (without specifying what that reform would look like). Zuckerberg would love Congress to come up with new bright-line rules that define what Facebook and other social-media platforms should and shouldn’t be doing. Whatever that consensus is, Facebook knows it can afford to pay to meet the standard. If the standard is difficult for current competitors or new startups to meet, that’s not Zuck’s problem. He just wants the reflexive and frequently contradictory complaints about Facebook to go away on the one hand, and to preserve market dominance on the other.
But what Zuckerberg wants (a consensus about the rules, so he can simply meet the consensus standards and then relax) can be elusive for multinational companies like Facebook. Harmonizing content standards between the U.S. and the EU is hard; adding, say, Thailand and India is harder. In addition, companies have until now been able to compete with one another by setting different standards for content. If they’re all meeting the same lowest common denominator, then the race goes to the first mover with the most capital. (Probably Facebook, in other words.)
Facebook dominance forever?
The risk is that “clarifying” Sec. 230 may hurt competition and lock in Facebook’s market dominance (along with the dominance of other currently successful internet companies that rely on 230 but aren’t focused on social-media services). Sec. 230 was crafted to *enable* competition. One idea behind Sec. 230 was to empower different services to make different sets of content-curation decisions without incurring legal liability for doing so, so that, e.g., one internet service could be more like the Disney Channel while another could be more adult-focused.
But if you remove or hobble Sec. 230’s protections, so that all services that avail themselves of Sec. 230 have to make more or less similar content decisions, then all services will be more alike than different, and the biggest ones will be locked in as market leaders. That’s fine if you’re holding a lot of Facebook stock (or, for that matter, Alphabet or Amazon stock). So it’s no wonder that Zuck’s response if Congress says “jump” will be to ask “how high?” (so long as Congress makes everyone else do the same jump).
But if you want to see competition among the tech companies that host content—which certainly will result in consumer benefits in all sorts of ways and not just low prices—you don’t want to alter Sec. 230, which not only gave us Facebook and Twitter, but also Amazon’s and Yelp’s product reviews and Reddit’s cornucopia of special-interest forums. (And, of course, TikTok, which just recently catapulted Stevie Nicks back to the top of the Billboard charts.) What this means, of course, is that even if Facebook and Twitter were disproportionately targeting conservative content (they’re not), you’d want to support their prerogative to do that, not least because it creates the opportunity for President Trump (if and when he ever leaves office) to start his own social-media company that does the converse.
If President Trump does happen to leave office someday soon, I look forward to the former President’s development—perhaps in concert with his enthusiastic supporters—of new competitor platforms, TrumpBook (and its subsidiary service LindseyGram) and Twitler, which will likely give the former world leader a better perspective on what Section 230 already can do for him and his eager audience.