A Primer on the UK Online Safety Act
Key aspects of the new law and its road to implementation
With Royal Assent received on 26 October 2023, the Online Safety Act (OSA) has now become law, marking a significant milestone in platform regulation in the United Kingdom.
The OSA introduces fresh obligations for technology firms to address illegal online content and activity, including child sexual exploitation, fraud, and terrorism, adding the UK to the growing list of jurisdictions that have recently adopted online safety and platform accountability rules. However, the OSA is notably short on specifics. Crucial determinations, such as the actions that social media giants and search engines will be required to take, hinge on codes of practice and guidance to be established by the designated regulator, Ofcom, as well as on pending secondary legislation. Because so much is left to be settled later, the details of implementation matter even more here than in comparable European regulations.
In this post, we dissect key aspects of the OSA’s structure and draw comparisons with similar legislation, including the EU Digital Services Act (DSA). The UK law’s openness, coupled with certain ambiguities, presents challenges for Ofcom’s implementation capacity and raises uncertainties about the law’s future adaptability. One such concern involves the potential application of the OSA to AI-powered chatbots, which raises doubts about the law’s ability to achieve its objectives. Central to our examination is the OSA’s focus on distinct content categories and the need to determine what is legal or illegal. The delineation of these categories sparked heated debate during the legislative process, and incorporating these distinctions into the law poses a significant implementation challenge, particularly regarding who holds the authority to interpret what constitutes legal or illegal content.
The OSA’s narrower scope compared to the DSA
Whether a platform falls within the OSA’s scope depends on the nature of the service it offers. Regulated services under the law include i) user-to-user platforms where users can upload and share content (e.g., messages, images, videos, comments) that becomes accessible to others, and ii) search engines.
Compared to the DSA, the scope of the OSA is narrower. The DSA applies to additional types of platforms beyond those covered by the OSA, including app stores, and deals with broader issues such as dark patterns and intellectual property infringement.
Notably, the content-based substantive obligations outlined in the OSA relate to illegal content and content that can be harmful to children. In contrast, the systemic obligations in the DSA cover a broader spectrum of content, including both legal and illegal material.
Differentiated duties of care: tailored obligations according to the risk and reach of the platform
The backbone of the OSA revolves around the concept of a “statutory duty of care” – a regulatory model proposed by the Carnegie UK Trust drawing on health and safety regulation. The underlying premise is that the way platforms are structured and managed results from corporate decisions. These corporate decisions, in turn, shape what users see and how they engage with content. Consequently, companies are well placed to, and have a moral responsibility to, consider the potential harms to users associated with their services when making these decisions, and they should take measures to prevent or mitigate reasonably foreseeable harms.
The OSA embraces this model and establishes duties requiring all in-scope platforms to proactively manage the risk of online harms related to their services. However, not all platforms (or “providers of regulated services” as defined by the Act) are subject to the same obligations. The duties of care and their accompanying responsibilities should be proportional to the level of risk posed by platforms, considering two key dimensions.
The first dimension revolves around the service provider. The OSA’s obligations consider the service provider’s size and reach, echoing the approach in the DSA. While all platforms must prioritize user protection and embed safety considerations into their decision-making, more demanding obligations are reserved for the largest platforms due to their wider reach and higher-risk functionalities, as well as the greater resources they have to manage these risks.
Service providers are categorized into three groups: Category 1, consisting of high-risk, high-impact user-to-user platforms, which face the most stringent obligations; Category 2a, encompassing the highest-reach search services; and Category 2b, covering services with potentially risky functionalities or other risk factors. The OSA also singles out providers of pornographic material, placing on them a standalone duty to ensure that children cannot normally access their services. Unlike the European legislation, which specifies criteria for defining the Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) that are subject to more demanding obligations, the OSA refrains from setting explicit thresholds or criteria for each category. Instead, it provides that the UK government, in consultation with Ofcom, will set these criteria through secondary legislation, accounting for factors like size and functionality.
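To make this tiering concrete, below is a minimal sketch, in Python, of how the categories might be modelled while the thresholds are still pending. The user-count figure and the functionality check are placeholders invented for illustration; the actual conditions will only exist once the secondary legislation is made.

```python
from dataclasses import dataclass
from enum import Enum


class ServiceCategory(Enum):
    """Tiers defined by the OSA; the qualifying conditions are left to secondary legislation."""
    CATEGORY_1 = "high-risk, high-impact user-to-user service"
    CATEGORY_2A = "highest-reach search service"
    CATEGORY_2B = "other service with potentially risky functionalities"
    UNCATEGORISED = "in scope, but subject only to the baseline duties"


@dataclass
class RegulatedService:
    name: str
    is_user_to_user: bool
    is_search: bool
    monthly_uk_users: int          # illustrative size proxy
    has_content_recommender: bool  # illustrative functionality proxy


def assign_category(service: RegulatedService) -> ServiceCategory:
    """Hypothetical categorisation logic; the real thresholds will be set
    by the Secretary of State in consultation with Ofcom."""
    user_threshold = 10_000_000  # placeholder figure, not taken from the Act

    if (service.is_user_to_user and service.monthly_uk_users >= user_threshold
            and service.has_content_recommender):
        return ServiceCategory.CATEGORY_1
    if service.is_search and service.monthly_uk_users >= user_threshold:
        return ServiceCategory.CATEGORY_2A
    if service.is_user_to_user and service.has_content_recommender:
        return ServiceCategory.CATEGORY_2B
    return ServiceCategory.UNCATEGORISED
```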
The second dimension is content-based, resulting in a “differentiated duty of care” that introduces complexity into the legislation and poses significant implementation challenges. The OSA organizes obligations around two primary types of content, namely illegal content and content harmful to children, with the obligations varying according to the nature of the content. Every platform is required to assess the risks associated with illegal content and take proportionate measures to mitigate them. Services likely to be accessed by children must also assess the specific risks of harm to children and take proportionate steps to mitigate those risks. Additionally, while all platforms must establish effective processes and systems for users to report illegal content and implement procedures for complaints and redress, user-to-user platforms are also required to take down illegal content swiftly once they become aware of it.
This means that the OSA’s obligations are contingent not only on the type and size of the platform but also on assessments of whether content is illegal and whether it is likely to be accessed by children. The lack of clarity regarding how platforms should “age-gate” content that is harmful to children has ignited considerable debate about potential risks to user privacy and freedom of speech. Nevertheless, the most substantial uncertainty affecting all in-scope services is likely to revolve around determining the legality of content.
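Putting the two dimensions together, the differentiated duty of care can be read as a mapping from a service’s characteristics to a set of obligations. The sketch below is our own interpretive summary of the duties described above, not a restatement of the Act’s text, and the category label is a plain string for simplicity.

```python
def applicable_duties(category: str,
                      is_user_to_user: bool,
                      likely_accessed_by_children: bool) -> list[str]:
    """Interpretive summary of the OSA's differentiated duty of care,
    based on the description above rather than the Act's wording."""
    duties = [
        "assess risks of illegal content",
        "take proportionate measures to mitigate illegal-content risks",
        "provide systems for users to report illegal content",
        "operate complaints and redress procedures",
    ]
    if is_user_to_user:
        duties.append("swiftly take down illegal content once aware of it")
    if likely_accessed_by_children:
        duties += [
            "assess the specific risks of harm to children",
            "take proportionate steps to mitigate those risks",
        ]
    if category == "Category 1":
        duties.append("additional Category 1 duties (discussed further below)")
    return duties


# Example: a large user-to-user platform likely to be accessed by children
print(applicable_duties("Category 1", is_user_to_user=True,
                        likely_accessed_by_children=True))
```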
Content matters: identifying illegal content and priority illegal content in the OSA
The OSA defines illegal content as “content that amounts to a relevant offence”. This definition covers situations where the use of words, images, speech, or sounds amounts to an offense, where possessing, viewing, or accessing such content is an offense, or where publishing the content itself is an offense. Some new speech-related offenses are introduced by the OSA, such as the false communication offense, which makes it illegal for a person to send a message conveying information they know to be false with the intention of causing non-trivial psychological or physical harm to a likely audience. However, illegal content also encompasses content amounting to an offense that existed before the OSA or that is created after its passage. Despite the extensive list of offenses, it remains controversial whether the law will tackle political harms that fall outside the scope of the false communication offense – including extremism, abuse, and misogyny – leading to uncertainty about the OSA’s effectiveness in safeguarding political discourse ahead of the UK’s upcoming general election.
While the responsibility for assessing what qualifies as illegal content falls on the platforms themselves, the standards provided by the OSA to guide this assessment are somewhat vague. The legal standard stipulates that providers should have “reasonable grounds to infer” that content is illegal and act accordingly. This, in turn, requires platforms to infer whether all the elements necessary for the commission of the offense, including mental elements (mens rea), are present, and whether any defenses might be successfully relied upon. It is unclear how AI-powered automated content moderation systems will be capable of carrying out this assessment.
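To illustrate what the “reasonable grounds to infer” standard asks of a provider, here is a hypothetical record of the judgement that would have to be formed for each piece of content. The field names are our own labels, not terms from the Act. Two of the three inputs – the mental elements and the availability of defenses – are contextual legal judgements rather than properties of the content itself, which is precisely where automated classifiers struggle.

```python
from dataclasses import dataclass


@dataclass
class IllegalityAssessment:
    """Hypothetical record of the judgement the OSA's standard requires
    before a provider treats content as illegal (labels are our own)."""
    suspected_offense: str
    conduct_elements_present: bool     # do the words, images, or acts match the offense?
    mental_elements_inferred: bool     # reasonable grounds to infer intent/knowledge (mens rea)?
    plausible_defense_available: bool  # could a defense plausibly be relied upon?

    def reasonable_grounds_to_infer_illegality(self) -> bool:
        # Content is treated as illegal only if all offense elements can
        # reasonably be inferred and no defense is likely to succeed.
        return (self.conduct_elements_present
                and self.mental_elements_inferred
                and not self.plausible_defense_available)
```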
Confusingly, the OSA introduces the concept of “priority illegal content”, which carries additional specific duties for platforms. For instance, all user-to-user platforms must take “proportionate measures” to prevent users from encountering priority illegal content and to minimize its availability on their platforms. The OSA defines “priority illegal content” as content related to terrorism, child sexual exploitation and abuse, or other priority offenses listed in a schedule to the Act. That list can be modified by the Secretary of State, a rule that has faced criticism for granting the government too much power over speech, as illustrated by the inclusion of the controversial priority offense of “assisting illegal immigration”.
Navigating the intricate web of content definitions and corresponding obligations within the OSA will be a significant challenge for platforms. Ofcom’s forthcoming documents, including a code of practice on illegal content duties and guidance on assessing the risks of such content, will offer more specifics. While initial drafts have already been published and are currently under consultation, the codes of practice on harm mitigation still need to be laid before Parliament, meaning it is unclear when they will be fully operational. This marks just the initial phase of Ofcom’s extensive implementation roadmap. With the regulator’s guidance and codes of practice playing a pivotal role, details remain relatively scarce, and substantial groundwork is still needed to establish the OSA’s operational compliance framework.
OSA’s limited impact on content moderation practices
Considering the OSA’s focus, a fundamental concern arises: will the British law significantly impact content moderation practices beyond illegal content?
Earlier drafts of the OSA included a third category of content, “legal but harmful content” for adults, which covered content that was legal but deemed potentially detrimental, such as misogynistic abuse, explicit self-injury promotion, and depictions of eating disorders. Under this earlier version of the bill, platforms were not required to remove such content but had to conduct and publish risk assessments related to it. However, the category attracted considerable criticism due to the ambiguity of defining harmful yet legal content. Concerns also surfaced regarding government intervention in regulating legal speech and the potential chilling effect on free speech resulting from mandatory risk assessments for this category. As a result, the UK government removed the “legal but harmful content” category entirely, narrowing the OSA’s scope and its authority over content moderation.
In lieu of the removed duties, the OSA mandates Category 1 platforms to offer “user empowerment” tools. However, these tools are rather basic, essentially mirroring what most major social media platforms already provide: tools that enable users to limit their exposure to certain types of content, and tools that alert users to the presence of content of a particular type (e.g., filtering screens). Additionally, the content covered by these obligations largely overlaps with what is already prohibited on major platforms, such as content promoting self-harm, abusive content targeting protected characteristics, and content inciting hatred just below the threshold of hate speech. Questions also remain about whether these tools are set to appropriate defaults and whether they will effectively prevent indirect harms stemming from online content, including the impact on individuals who are not the ones managing the platform’s settings.
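As a rough illustration of how modest these duties are, the sketch below models the two kinds of tools as per-user settings; the field names and content labels are invented for this example. Note that with the empty defaults used here, nothing changes for a user who never opens their settings, which is one reason the question of defaults matters.

```python
from dataclasses import dataclass, field


@dataclass
class UserEmpowermentSettings:
    """Hypothetical per-user settings reflecting the two kinds of tools
    described above; field names and labels are our own."""
    limit_exposure_to: set[str] = field(default_factory=set)    # content types to suppress
    warn_before_showing: set[str] = field(default_factory=set)  # content types behind an alert screen

    def handle(self, content_label: str) -> str:
        """Decide how a labelled piece of content is presented to this user."""
        if content_label in self.limit_exposure_to:
            return "suppress"
        if content_label in self.warn_before_showing:
            return "show behind a filtering screen"
        return "show"


# Example: a user who opts in to warnings for self-harm-related content
settings = UserEmpowermentSettings(warn_before_showing={"self-harm promotion"})
print(settings.handle("self-harm promotion"))  # -> "show behind a filtering screen"
```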
Crucially, the OSA’s approach, focusing on specific types of content, limits its authority to compel platforms to address systemic risks within their content moderation systems. The European law, by contrast, requires VLOPs and VLOSEs to conduct a more comprehensive evaluation of their content moderation systems, encompassing not only risks of illegal content dissemination but also threats to fundamental rights, political processes, public health, gender-based violence, and individuals’ physical and mental well-being, among others. The OSA, meanwhile, mandates platforms to evaluate only risks related to illegal content and content harmful to children, while Category 1 platforms are additionally required to assess the effects of their policies on users’ privacy and freedom of expression. The OSA also lacks the crisis response mechanism that exists in the DSA, thereby restricting Ofcom’s capacity to directly oversee content moderation systems during critical times.
Next steps: challenges in the implementation and future-proofing the law
The Online Safety Act becoming law after four years of debate does not bring discussions around platform regulation in the UK to a close. The Act’s comprehensive framework, requiring substantial input from Ofcom, along with contentious clauses such as the one mandating platforms to use “accredited technology” to spot child sexual abuse material (CSAM) – which has raised concerns about compromising end-to-end encryption – guarantees that debate will persist. However, the OSA represents an important step forward in introducing public oversight into the corporate decisions shaping users’ online experiences, offering valuable insights for other jurisdictions exploring similar legislative pathways.
Two main aspects deserve close observation. The first is the OSA’s hybrid regulatory approach, which blends elements of public and private governance, and how it will be put into practice. Ofcom will not be involved in individual content vetting or removal decisions, while platforms will face some constraints in designing their content moderation systems. For example, Category 1 platforms are obliged to protect “content of democratic importance”, recognizing their substantial role in the digital public sphere. This model aims to ensure that neither the state nor corporations hold exclusive control over online content. It seeks to address concerns about state intervention affecting users’ fundamental rights while acknowledging that holding platforms accountable is crucial to a resilient democracy. Nonetheless, whether this model of engagement between public and private entities – similar to frameworks found in sectors like financial services regulation – will prove effective in regulating digital platforms remains to be seen.
The second is about future-proofing the law, and the extent to which the OSA is equipped to address challenges arising from the proliferation of new forms of communication. A critical question is whether the OSA encompasses newer services such as LLM chatbots like ChatGPT, which are neither explicitly included nor excluded. In Europe, scholars argue that the scope of the DSA will need to be expanded to cover LLMs. In the UK, the government has suggested that certain functions of LLM chatbots fall under the new legislation, arguing that content generated by AI bots would be in scope “where it interacts with user-generated content, such as on Twitter” and that “search services using AI-powered features will also be in scope of the search duties”. However, it is uncertain whether the OSA offers adequate mechanisms to protect users, particularly children, from potentially harmful content generated directly by these services. For instance, conversational chatbots used for emotional support or as spaces for sharing thoughts and feelings might pose significant risks to more vulnerable users – potentially undermining the primary objective of the new law.
As Ofcom noted, the quick rollout of generative AI shows that “the sectors the Act tasks Ofcom with regulating are dynamic and fast paced” meaning the regulator’s response will have to constantly evolve. It remains to be seen whether the OSA’s less prescriptive approach will afford the law the necessary flexibility to adapt to technological advancements and whether upcoming AI regulations in the UK will effectively address the OSA’s blind spots.