18 May 2021

The UK’s Online Safety Bill: Safe, Harmful, Unworkable?

On 12 May 2021, the UK Government published the long-awaited draft Online Safety Bill, following the Internet Safety Strategy Green Paper and the Online Harms White Paper.

As expected, the Government’s intention to show “global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world” was met with support from the child protection community, but with suspicion and warnings from digital rights and civil society organisations. So, is the Bill world-leading, as the Government puts it? Or does it introduce “state-backed censorship and monitoring on a scale never seen before in a liberal democracy”, risk “collateral censorship, the creation of free speech martyrs, the inspiration it would provide to authoritarian regimes”, and amount to “trying to legislate the impossible – a safe Internet without strong encryption”?

Essentially, the Bill establishes a new regime for regulated internet services with the following key aims: (i) to address illegal and harmful content online (terrorist content, racist abuse and fraud in particular) by imposing a duty of care concerning this content; (ii) to protect children from child sexual exploitation and abuse (CSEA) content; (iii) to protect users’ rights to freedom of expression and privacy; and (iv) to promote media literacy. The Bill designates the Office of Communications (OFCOM) to oversee and enforce the new regime and requires OFCOM to prepare codes of practice to implement the duty of care. To put the stated aims in context, a simple search of key terms in the Bill reveals the following number of mentions: privacy – 32, freedom of expression – 36, human rights – 5, safety – 294, and harm – 195. As expected, the content of the Bill largely mirrors this emphasis on safety and harm over human rights.
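For readers who wish to reproduce that rough count, a minimal sketch follows. The file name and the whole-word matching strategy are illustrative assumptions only; a different approach (for instance, counting “harmful” under “harm”) would yield different totals.

```python
# Minimal sketch of the keyword count described above.
# Assumption: the draft Bill has been saved locally as plain text,
# e.g. "online_safety_bill.txt" (hypothetical file name).
import re

TERMS = ["privacy", "freedom of expression", "human rights", "safety", "harm"]

with open("online_safety_bill.txt", encoding="utf-8") as f:
    text = f.read().lower()

for term in TERMS:
    # Whole-word/phrase matching; this will not count e.g. "harmful"
    # under "harm", so totals may differ from other counting methods.
    occurrences = re.findall(r"\b" + re.escape(term) + r"\b", text)
    print(f"{term}: {len(occurrences)}")
```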

This is a long and dense piece of legislation, so the focus of this piece will be on its scope, some key regulatory requirements, enforcement powers and prima facie concerns related to digital rights. If adopted in the current form, the Bill may result in (even more) red tape, human rights concerns and, potentially, private censorship.

Scope of the Bill

Two types of services fall within the scope of the Bill. First, “user-to-user services” – quite an awkward term for an internet service that enables user-generated content, e.g. Facebook or Zoom. Second, “search services”, i.e. search engine services such as Google. The service must have links to the UK; that is, it must be capable of being used in the UK, or there must be ‘reasonable grounds to believe that there is a material risk of significant harm to individuals’ from the content or the search results.

Moreover, the Bill gives the Secretary of State quite a significant power to amend Schedule 1 on exempt services. These currently include email, MMS, SMS and ‘one-to-one live aural communications’. The Secretary of State for Digital, Culture, Media and Sport may either exempt new services or remove existing exemptions, based on an assessment of the risk of harm to individuals. The power to remove exemptions, in particular, could be used to police private messaging and communication channels. Most likely, the rationale is to address illegal content, such as terrorist and CSEA content, that is disseminated through these channels. While that aim is desirable, such interference would endanger the privacy of personal communications and chill free speech, especially since the Bill does not mention encryption in the context of these services.

Duty of ‘care’

The draft Online Safety Bill retains the underlying principle of a ‘duty of care’, introduced in the Online Harms White Paper in 2019. This duty, derived from health and safety law, is imposed on certain service providers, who must moderate user-generated content in a way that prevents users from being exposed to illegal and harmful content online. It has been criticised by many, including this author, for its inadequacy, its vagueness and the rule of law concerns it raises, among other things.

The Bill divides service providers into four key categories when it comes to their duty of care obligations: (i) all providers of regulated user-to-user services; (ii) providers of user-to-user services that are likely to be accessed by children; (iii) providers of ‘Category 1’ services (providers with additional duties to protect certain types of speech); and (iv) search engine providers. All categories share broadly similar duties: a risk assessment of illegal content; duties concerning illegal content (primarily terrorist content, CSEA content and other illegal content); duties relating to the rights to freedom of expression and privacy; duties on reporting and redress; and record-keeping and review duties.

The novel ‘Category 1’ raises many important questions and concerns. The Secretary of State will specify the conditions and obligations for ‘Category 1’ services, based on the number of users and the service’s functionalities. This power raises questions of legitimacy and oversight, as the UK’s Secretary of State for Digital, Culture, Media and Sport is a senior member of the government. The Secretary of State will need to consult OFCOM, the online safety regulator, but OFCOM is likewise not always entirely independent of the government. The Government did hint in its press release that Category 1 will include large platforms and social media. These stipulations are vague, in contrast to the EU’s proposal for a Digital Services Act and its reasonably clear definition of very large online platforms (an average of 45 million or more monthly active users in the EU).

In addition to the duties common to all service providers, Category 1 service providers will have an additional duty of care to protect content that is “of democratic importance”, broadly defined as content intended to contribute to democratic political debate in the UK or a part of it. The definition is very broad and overlaps with ‘journalistic content’, analysed below. Notwithstanding the numerous and continuing problems with content moderation, this duty (essentially a duty not to remove this particular type of speech) will simply add a new dimension to the already problematic area of private policing of free speech. There are also concerns about whether political speech should be distinguished from other important forms of free speech, and how this would be done in practice.

The Bill also acknowledges the importance of “journalistic content” shared on Category 1 services, again defined very broadly. The definition seems to cover user content “generated for the purpose of journalism”, as long as there is a link to the UK. The Government’s press release noted that “Citizen journalists’ content will have the same protections as professional journalists’ content”. In practice, it will be difficult to ascertain whether a given user post should be deemed journalistic in nature and whether a take-down can be challenged on that basis. There is also a degree of confusion as to which content will be “of democratic importance” as opposed to “journalistic”, and vice versa.

The Government included ‘Category 1’ in the Bill “last minute”, in response to the human rights concerns raised around the Online Harms White Paper. The aim was to “safeguard freedom of expression”, but I doubt whether this can be achieved in practice as proposed, or whether the provision will prove impossible for the regulator and service providers to interpret and implement, leaving users’ free speech in limbo.

Enforcement

OFCOM, the current independent regulator of electronic communications and broadcasting, will act as the online safety regulator. Elsewhere, I have already expressed concerns about OFCOM’s regulatory capacity and its suitability to regulate online content, given OFCOM’s history and the industries it was designed to regulate: telecoms and broadcasting.

OFCOM will have various enforcement powers, including fines of up to £18 million or 10% of a provider’s annual global turnover, whichever is higher. Enforcement powers such as enforcement notices, technology warning notices, business disruption measures and senior managers’ criminal liability give OFCOM quite a lot of teeth. For example, OFCOM will serve a technology warning notice if it believes that a provider is failing to remove illegal terrorism or CSEA content and that this content is prevalent and persistent on the service. If OFCOM is satisfied that the measure is proportionate, the service provider will be required to use ‘accredited technology’ to identify terrorism or CSEA content present on the service and to “swiftly take down that content”. In effect, service providers may be obliged to install filters. Again, this is problematic, as it may interfere with encryption and affect users’ privacy and free speech.

The Bill creates a very powerful online regulator with far-reaching enforcement mechanisms, which could have significant and lasting effects on businesses and digital rights. There is a real danger that OFCOM may not be able to undertake this role effectively, given all the other areas within its regulatory remit and its chronic lack of human and technical capacity.

Concerns – intermediary liability, harm, free speech and privacy

As announced in the White Paper, the Bill retains the distinction between illegal content and legal but harmful content. This is problematic for two key reasons.

First, harm is defined very vaguely: “The provider […] has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a child (adult)”. In particular, it is unclear what indirect harm includes, which may push platforms to moderate user content to the lowest common denominator.

Second, the service provider will need to determine whether content is harmful (directly or indirectly) to children or adults (the “service has reasonable grounds to believe that the nature of the content is such that there is a material risk of harm”). The standard used for this assessment is the risk of harm to an adult or child of “ordinary sensibilities” – a vague legal standard that does not correspond to the well-established ‘reasonable person’ standard applied by English courts in criminal and tort law cases.

Unfortunately, the Bill barely mentions human rights or digital rights and does not engage with them meaningfully. It vaguely mandates “duties about rights to freedom of expression and privacy” (section 12), but, in my view, this section seems quite disjointed from the rest of the proposal and reads almost like an add-on. Protection against privacy invasions is limited to those that are “unwarranted”. The wording “a duty to have regard to the importance of” free speech and privacy almost reads like “please think of free speech and privacy sometimes”.

The intermediary liability regime established in the E-commerce Directive, and mostly retained in the Digital Services Act proposal, is absent from the Bill. Instead, the Government refers to section 5 of the European Union (Withdrawal) Act 2018, stating that “there is no longer a legal obligation on the UK to legislate in line with the provisions of the [E-commerce Directive] following the end of the transition period on 31 December 2020”. This is concerning, as it leaves open many questions around filtering, the monitoring of user content and censorship. The most concerning is the likely reversal in the UK of the prohibition on general monitoring of users, established in Article 15 of the E-commerce Directive and aimed primarily at protecting users’ privacy. The measures outlined above directly encourage such monitoring.

Finally, the red tape and bureaucratic burden on service providers and OFCOM will be immeasurable. Service providers will be required to make various judgement calls, including what constitutes political and journalistic speech, and which content is harmful to their users and to what extent. This will require time and resources that service providers will either not have or not be willing to commit, as the examples of content moderation and data protection show. There, the solution has often been invasive of individuals’ fundamental freedoms, e.g. censoring excessive amounts of legal but ‘harmful’ speech. It is therefore doubtful whether this regime could ever be implemented in practice, even if we agreed that its substance is acceptable. In my view, in its current form, it certainly is not.


SUGGESTED CITATION  Harbinja, Edina: The UK’s Online Safety Bill: Safe, Harmful, Unworkable?, VerfBlog, 2021/5/18, https://verfassungsblog.de/uk-osb/, DOI: 10.17176/20210518-170138-0.
