19 February 2024

A Hobgoblin Comes for Internet Regulation

Recent laws in the US, along with the Digital Services Act (DSA), seek to provide “due process” for individual content moderation decisions. Due process, understandably enough, often contains a component of treating like cases alike. It seems to follow, then, that if two relevantly similar users are treated differently, there is a problem of inconsistency, and that problem might be addressed by requiring more “due process” in the form of appeals, clear rules, and explanations of those rules to offenders. At least, the thinking goes, an appellate body can create coherent precedents and treat those who appeal consistently. And clearer rules are easier to apply; inconsistent applications should also be easier to detect than inconsistencies in the application of unclear rules.

But it is said that consistency is the hobgoblin of small minds. In internet regulation, it is a damaging goal if taken as a mandate to make individual decisions uniformly consistent with each other. Evelyn Douek has written about the need to focus on the overall system, not just the individual decisions that catch our attention, and Kate Klonick has explained that this has always been part of serious thinking about content moderation. The DSA, more promisingly, suggests a focus on overall processes and does not treat errors as evidence of lawbreaking. By contrast, the Florida and Texas laws—currently enjoined pending Supreme Court review—threaten platforms with large fines for each and every error.

Among many other things, Texas’s HB20 prohibits large platforms from making editorial choices based on the “viewpoint” of the expression or user. Tex. Civ. Prac. & Rem. Code §§ 143A.001(1), 143A.002(a). It can be enforced either by the state or by individuals, and allows courts to impose “daily penalties sufficient to secure immediate compliance.” § 143A.007. Similarly, Florida’s S.B. 7072 requires a “social media platform” to “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.” § 501.2041(2)(b). The law does not define the phrase “consistent manner.” On top of exposing violators to civil and administrative actions by the state attorney general, § 501.2041(5), the law creates a private cause of action that allows individual users to sue to enforce the “consistency” mandate and authorizes awards of up to $100,000 in statutory damages for each claim, as well as actual damages, equitable relief, punitive damages, and in some cases attorneys’ fees. § 501.2041(6).

The problems of figuring out which content moderation cases are “relevantly similar” are well-known. Is breastfeeding “nudity”? What if it’s posted with sexualizing prose? Should reporting on child abuse have extra leeway to describe what was done to a real victim? Should anti-Black speech be treated the same as anti-white speech? Is the term “Coke bottle” hate speech, given the uses that Brittan Heller has explored? Is calling Bret Stephens a “bedbug” the same as calling a group of people “bedbugs”?

Because of the fractal complexity of human communication, and its continuous evolution, no rule can both specify in advance what content is disallowed and also treat truly “like” cases—in terms of the harm they cause—alike. One goal must yield to the other.

But the problems of consistency are greater than that. Suppose we choose to prioritize having rules that can respond to new forms of already identified abuses, and even to entirely new abuses if they appear. (The Texas and Florida laws suggest that the legislators, convinced that internet services discriminate against conservatives, would prefer rigidity instead, accepting new forms of abuse in order to prevent “censorship.”)

Even so, given the scale and variety of online communication on the largest services, it is impossible to expect more than the roughest consistency.

Appeals are likely to make the problems worse rather than better. Willingness to appeal content moderation decisions is not randomly distributed (the Oversight Board writes about the geographic origin of appeals, but there is good evidence that other demographic factors also strongly affect willingness to make rights claims). Even if successful appeals lead to policy changes, that doesn’t mean that previous removals will be revisited, or that the policy changes will be broad enough to treat analogous cases the same.

Other systems that operate at much smaller scales, but still with large numbers, have never been required to be consistent in this way. Consider teachers at state-run schools: They grade millions of students and even more student submissions. No one has ever suggested it is possible to constrain teachers so that an essay would receive exactly the same comments, and the same grades, from any teacher across a nation. Instead, rational school systems focus on processes for accrediting and evaluating teachers to make sure they are generally up to snuff. But two teachers can both be fine teachers even if they have very different views of what constitutes a good paper, and students’ rights are not violated by this difference, as much as they may groan about it. Systems of federalism or localism mask some of this tolerance for inconsistency, as do doctrines of deference to decisionmakers on the ground. But it is not accidental that the most important critiques of these systems focus on their disparate impact by race, gender, disability, and other socially salient axes. Inconsistency and error alone are frustrating, but inevitable in human endeavors.

To take another example of a system that has to make hundreds of thousands of judgments on very different fact patterns every year, the US trademark registration system is unitary, and it still gives itself cover for inconsistency by combining broad general principles and illustrative examples with a black-letter rule that each case is treated on its own merits. No applicant or opposer can succeed by showing that a similar trademark application was treated differently. Each application has its own unique context and evidentiary record. Since each application is reviewed by one of hundreds of trademark examiners, and there are hundreds of thousands of applications reviewed every year, there can be no other practice. Thus, when the Supreme Court invalidated bars on registering “disparaging,” “scandalous,” or “immoral” trademarks, it relied on the viewpoint-discriminatory nature of these bars. Some amici highlighted the existence of inconsistencies—some applications including the term “MILF” were approved while others were rejected, and so on—and the Court alluded to this issue, but invalidating these bars because they could not be consistently applied would also endanger every other registration bar. It is equally impossible to be fully consistent about whether a term is descriptive as applied to the relevant goods or services, whether it is likely to cause confusion with another mark, and so on. Instead, we rely on the Trademark Manual of Examining Procedure, which sets out general rules and many examples, along with trained judgment—and we will never be totally satisfied with the results. As with trademarks, no map of content moderation can be as big as the territory. Of course there are and should be guideposts, but the fact that people disagree about applying those guideposts in particular situations doesn’t mean that we’ve discovered an offense in need of remediation.

As Tarleton Gillespie has insightfully written,

Given the scale and the entire range of human communication, there is no such thing as a fully specified content policy: No guideline can be stable, clean, or incontrovertible; no way of saying it can preempt competing interpretations, by users and by the platform. Categorical terms like “sexually explicit” or “vulgar or obscene” do not close down contestation, they proliferate it: what counts as explicit? Vulgar to whom? All the caveats and clarifications in the world cannot make assessment any clearer; in truth, they merely multiply the blurry lines that must be anticipated now and adjudicated later. This is an exhausting and unwinnable game to play for those who moderate these platforms, as every rule immediately appears restrictive to some and lax to others, or appears either too finicky to follow or too blunt to do justice to the range of human aims to which questionable content is put. (see Gillespie 2018 at 72–73)

Gillespie further explains that scale matters: “What to do with a questionable photo or a bad actor changes when you’re facing not one violation but hundreds exactly like it, and thousands much like it, but slightly different in a thousand ways. This is not just a difference of size, it is fundamentally a different problem.” Id. at 77. Social media posts have individualized contexts and records. As James Grimmelmann has noted, a post that decries eating Tide Pods and one that encourages eating Tide Pods can be indistinguishable to an outsider. As he says: “The difficulty of distinguishing between a practice, a parody of the practice, and a commentary on the practice is bad news for any legal doctrines that try to distinguish among them, and for any moderation guidelines or ethical principles that try to draw similar distinctions.” Of course, there are obvious rule violations, and situations where most people would have no trouble coming to a decision. But there are also constant pressures at the margins, and moderation itself contributes to those pressures as people try to get as close to the line as they can without being banned, because borderline content gets more engagement. The nearly harassing, the nearly inciting, the nearly nude all draw attention and encourage people to react. It is in this important area that there is no hope of true consistency, only of good training, diversity of moderators, and sampling for review.

We would better serve the human goals of due process by searching for patterns of disparate impact and looking for their causes. We should also, of course, aim to correct obvious errors. (These are often linked, as when automatic screening prohibits name-strings that correspond both to English slurs and to real people’s names, usually of non-English origin.) But the conversation should be about error rates and biases, not about examples that by their very nature must be unrepresentative. The DSA’s due diligence obligations are a step in that direction, but even analysis of systemic risks and mitigation must be accompanied by an awareness that individual failures will be inevitable even in the best of all possible worlds. And the DSA’s due process obligations for individual users point, like the Texas and Florida laws, in the other direction. A hobgoblin is haunting content moderation; we should face it directly.
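To make the point about error rates concrete, here is a minimal, hypothetical sketch in Python. It is not any platform’s actual system; the banned-substring list, the names, and the groups are all invented. It illustrates the two linked problems described above: a naive substring screen that blocks legitimate names (the familiar “Scunthorpe problem”), and an audit that compares false-positive rates across groups rather than arguing over individual examples.

BANNED_SUBSTRINGS = {"slur"}  # placeholder standing in for real slur strings


def naive_screen(username: str) -> bool:
    """Block any username containing a banned substring."""
    return any(bad in username.lower() for bad in BANNED_SUBSTRINGS)


def false_positive_rate(legitimate_usernames: list[str]) -> float:
    """Share of legitimate usernames the screen wrongly blocks."""
    if not legitimate_usernames:
        return 0.0
    return sum(naive_screen(u) for u in legitimate_usernames) / len(legitimate_usernames)


# Toy audit log: every name below is legitimate; "Aslurah" is an invented name
# that happens to contain the banned substring, as real names sometimes do.
legitimate_names_by_group = {
    "group_a": ["Alice", "Bob", "Carol"],
    "group_b": ["Aslurah", "Mika", "Noor"],
}

for group, names in legitimate_names_by_group.items():
    print(group, f"false positive rate: {false_positive_rate(names):.2f}")

Any single blocked name looks like an isolated mistake; the group-level comparison (here, 0.00 versus 0.33) is what reveals the kind of systematic skew that regulators and researchers should be looking for.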

 


SUGGESTED CITATION  Tushnet, Rebecca: A Hobgoblin Comes for Internet Regulation, VerfBlog, 2024/2/19, https://verfassungsblog.de/a-hobgoblin-comes-for-internet-regulation/, DOI: 10.59704/36e00341661f1124.
