01 November 2022

Fundamental rights impact assessments in the DSA

Human rights and the risk-based approach of the new EU regulations on the digital society

As our lives become increasingly intertwined with our online actions and behaviours (see Floridi (Ed.)), the rights and freedoms we enjoy in the offline world must be reaffirmed in the online context and in its digital ramifications.

The long-standing substitute role of data protection in the defence of human dignity and individual rights in the digital environment cannot be extended any further. Rather than a catch-all notion of data protection, specific attention must be paid to human rights in their variety and specificity.

As has been the case with the right to privacy in the past, the peculiar nature of the online environment must be taken into account, considering new potential threats to human rights. This requires appropriate forms of assessment and mitigation of potential risks to these rights.

In line with this approach, the Digital Services Act (DSA) draws specific attention to the risks stemming from the design, functioning and use of digital services, considering their adverse effects on fundamental rights. Following the common approach to the protection of human rights, it adopts an ex ante strategy centred on risk assessment.

The following sections will discuss, briefly and without the ambition of an in-depth analysis, this approach adopted by the EU legislator. Both the main elements of the risk-based framework as set out in the DSA and the approach to risk assessment will be considered.

The scope of the risk-based approach in the DSA

The EU legislator’s intention to combine the protection of fundamental rights and market interests is evident in the DSA. As is common in early generations of industrial risk regulation, the solutions adopted in the DSA confine the risk-based approach to the most challenging areas, which for our purposes here are the “very large online platforms” or “VLOPs”.

Compared to another challenging piece of risk-based EU regulation for the digital sector still under discussion, namely the AI Act proposal, the DSA has adopted a size-based criterion rather than a properly risk-focused model (i.e., the high-risk threshold of the AI Act proposal). While the size of a platform may affect the level of risk exposure, it has less influence on the other relevant variables in risk assessment, i.e., the probability of adverse consequences and their severity. However, the specificity of the applications considered – predominantly platforms – and their common features may justify a size-based approach in the DSA: in a context centred on the network effect, the size of a platform is a proxy for risk levels, which is not necessarily true in other fields – such as AI – where a wider variety of applications is possible.

Looking at the structure of the DSA, Articles 34, 35, and 37 are the relevant provisions dealing with risk management: Article 34 focuses on assessment, Article 35 on mitigation, and Article 37 on the complementary role of independent audits.

The DSA’s risk-based approach covers different categories of risks, not only those related to adverse effects on fundamental rights. The following five main categories can be identified: (i) illegal content; (ii) negative effects on fundamental rights; (iii) negative effects on civic discourse and electoral processes; (iv) public security; (v) negative effects in relation to gender-based violence, public health protection, the protection of minors, and serious negative consequences to a person’s physical and mental well-being.

This variety of sources and types of potential risk may make it difficult to define appropriate and coordinated assessment tools. While specific tools have been developed to counter illegal content and there is some experience in assessing the impact on fundamental/human rights (see the work of the Danish Institute for Human Rights), the evaluation is more complicated with regard to the negative effects on civic discourse and electoral processes, as well as in relation to public security, which is a rather broad category in the case of global platforms.

The inclination of the EU legislator to accommodate a broad spectrum of different demands concerning the mitigation of potential risks is also evident in the fifth and last category, which brings together different situations relating to subjective status (minors), conduct (gender-based violence), collective interests (public health) and individual interests (physical and mental well-being).

The main result of this inclination is a fragmented generalisation of a case-based approach grounded in past experience, at the expense of a holistic view of potential risks. This mix of different risk categories does not provide the clear framework needed in a future-proof regulation. The consequences of this fragmentation are even greater for assessment tools, as it entails the development of a variety of specific instruments.

A binary model based on illicit content1) and prejudice to fundamental rights – which may encompass many of the other risks listed – could thus have been a more straightforward solution, leaving room for case-by-case interpretation, as is usual in the civil law tradition in the field of tort law. By contrast, the detailed list adopted shows a kind of didactic intent but leads to a more rigid and complicated model, opening the door to conflicting interpretations.

Similarly, Article 34 (1)(b) provides a detailed list of potentially affected rights, with explicit references to the Charter of Fundamental Rights of the European Union. Although the rights mentioned in this non-exhaustive list – namely human dignity, respect for private and family life, the protection of personal data, freedom of expression and information, including the freedom and pluralism of the media, the prohibition of discrimination, the rights of the child and consumer protection – are those most at risk in the context under consideration, this detailed approach seems superfluous: a general reference to fundamental rights would have been not only sufficient but even more comprehensive.

Finally, regarding the factual elements of the assessment, Article 34 (2) provides a non-exhaustive list of key aspects to be considered. This provision could have been drafted in line with Recital 57, which states that “When assessing such systemic risks, providers of very large online platforms should focus on the systems or other elements that may contribute to the risks” (emphasis added). This formulation is preferable since it includes contextual elements other than the features of the system. Indeed, the prejudice to fundamental rights is not limited to the design and functioning of platforms, but also concerns the context in which a given system is used (e.g., the level of education of users, digital literacy, or the level of access to services among different groups and communities).

Assessment and complementary tools

With regard to risk assessment methodologies, the provisions in the DSA offer limited input, as mentioned above. Following the pattern of industrial production regulation, a periodical (annual) assessment is preferred to the continuous assessment more common in human rights practice. This is mitigated by the obligation to conduct such an assessment “in any event prior to deploying functionalities that are likely to have a critical impact on the risks identified” (Article 34 (1)). However, the focus is on system functionalities, overlooking external changes that may affect already implemented functions (e.g., new forms of disinformation campaigns and techniques).

When assessing the use of large platforms from a human rights perspective, it should be kept in mind that their impact is not limited to the design of their recommender systems or the other system features listed in Article 34 (2). Impact assessments should include the overall effects of the ‘platformisation’ of social interaction and its consequences for the enjoyment of fundamental rights and freedoms:2) this point is missing from the framework outlined by the EU legislator in the DSA.

In this regard, the DSA is more in line with the security approach to risk mitigation, focused on processes and products, than with environmental or human rights impact assessments, where the emphasis is also on what lies outside the system and how technology is likely to affect and change it. While design is necessarily an internal component of platforms, risk is the result of both internal and external factors, and focusing mainly on the former may preclude a holistic perspective.

This is even more true when, as in the DSA, the model adopted is based on self-assessment, which is usually characterised by an internal perspective on potential side-effects. In addition, although soft-law instruments (guidelines, best practices, and recommendations) are provided for in the DSA, the competence required to carry out an impact assessment and the way it is to be conducted remain unclear. Recital 59 refers to “the involvement of representatives of the recipients of the service, representatives of groups potentially impacted by their services, independent experts and civil society organisations” and highlights the importance of integrating “such consultations into their methodologies for assessing the risks and designing mitigation measures, including, as appropriate, surveys, focus groups, round tables, and other consultation and design methods”. As several cases in the digital environment have demonstrated, the role of experts in performing self-assessment risk analysis is crucial, as is the participation of rightsholders and stakeholders.3) The DSA should therefore have paid more attention to defining these elements and their requirements.

Other experts, in turn, are involved in the audit process set out in Article 37, and in this case a specific condition of independence is required. But the auditors are not explicitly tasked with reviewing the fundamental rights impact assessment carried out by the platform. Although Article 37 is not clear on this point, referring to compliance with “the obligations and with the commitments”, such a review seems possible. However, given the variety of impacts a platform may have on fundamental rights, the effort required to re-assess the risk may lead to a narrower interpretation excluding re-assessment (see Buri and van Hoboken, p. 37). Under this second interpretation, the role of audits risks being more formal than substantive, all the more so in a model centred on self-assessment. A clear interpretation of this provision is therefore needed, hopefully in favour of re-assessment.

Finally, Article 35 deals with the obligations resulting from risk assessment, focusing on risk mitigation. The option in favour of mitigation (which means that a residual risk persists) rather than risk prevention is in line with the recent approach of the EU legislator. As in the AI Act proposal, this focus on mitigation is based on the idea that some uses of technology in the digital society are characterised by an endemic risk that cannot be fully prevented; we accept this risk given the potential benefits provided by the technology. This is in line with the legal approach already used in the regulation of the risk society,4) although it departs from the stronger position adopted in the GDPR, where no high-risk applications are permitted.

Since the DSA does not set any risk threshold and only refers to the reasonableness and proportionality of the measures adopted, we can conclude that – as in the AI Act proposal – high-risk uses are permissible, provided they are accompanied by mitigation measures, without requiring the risk to fall below the high-risk threshold. On the other hand, compared to the AI Act proposal, the absence of a list of high-risk uses leaves more room for compliance assessment by the competent authorities (see also the guidelines provided for in Article 35 (3)).

In this context, research organisations can play a role in detecting, identifying, and understanding systemic risks (Article 40 (4)). This may be an important contribution from academia, although the requirement of independence from commercial interests needs some clarification in an academic environment characterised by a growing number of research funding programmes sponsored by large platforms, which may affect the actual independence of beneficiaries in their future research.

Concluding remarks

The attention to fundamental rights in the new wave of EU digital regulation, confirmed in the DSA, is a significant step towards a more articulated and appropriate framework for protecting people in a context characterised by pervasive technologies that are often developed without adequate consideration of their impact on society. However, the emphasis on the risk-based approach and accountability in the DSA, as well as in the AI Act proposal, is not supported by adequate models for conducting impact assessments, and existing practices in human rights impact assessment show some limitations when extended to the digital context. For this reason, referring to commonly used risk assessment parameters (severity, probability, likelihood, scale, and reversibility, see Recital 56) is not sufficient, and a specific methodology is needed to operationalise them in the context of digital societies (see Mantelero).
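To give a sense of what operationalising such parameters might involve, the following sketch is a purely hypothetical illustration, not drawn from the DSA or from any existing methodology: the ordinal scales, weights and example values are assumptions chosen for demonstration only, and the point is simply that combining the parameters into a comparable indicator already requires methodological choices that the DSA leaves open. Any real assessment would need to be far more context-sensitive and qualitative.

```python
from dataclasses import dataclass

# Hypothetical ordinal scales (1 = low, 5 = high). These scales and the
# weights below are illustrative assumptions, not taken from the DSA,
# its recitals, or any established assessment standard.
@dataclass
class RiskFactors:
    severity: int       # gravity of the impact on the affected right
    probability: int    # likelihood that the adverse impact materialises
    scale: int          # breadth of the people and groups potentially affected
    reversibility: int  # 1 = easily reversible harm, 5 = irreversible harm

def risk_score(f: RiskFactors) -> float:
    """Combine the factors into a single indicative score between 1 and 5.

    A weighted average is used here purely for illustration; severity and
    reversibility are weighted more heavily on the assumption that serious,
    hard-to-reverse harms matter most from a fundamental rights perspective.
    """
    weights = {"severity": 0.35, "probability": 0.25,
               "scale": 0.20, "reversibility": 0.20}
    return (weights["severity"] * f.severity
            + weights["probability"] * f.probability
            + weights["scale"] * f.scale
            + weights["reversibility"] * f.reversibility)

# Hypothetical example: a widely used recommender feature with serious but
# partly reversible effects on freedom of expression and information.
example = RiskFactors(severity=4, probability=3, scale=5, reversibility=2)
print(f"indicative risk score: {risk_score(example):.2f}")  # prints 3.55
```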

Although, in dealing with these issues, the DSA suggests giving “due regard to relevant international standards for the protection of human rights”, the important reference to the UN Guiding Principles on Business and Human Rights (Recital 47) does not solve the practical issues concerning the development of risk assessment models. While the Guiding Principles may play a role in countries where the level of human rights protection is low – although their actual impact has been questioned5) –, the influence of these principles is more limited in EU countries, where human rights principles are already largely covered by EU and national provisions.

Like other pieces of the new wave of EU regulation of the digital society, the DSA thus represents an important contribution to the development of a more human-centred technology, in which the protection of human dignity and fundamental rights plays a crucial role, but a major implementation effort will be needed.

References

1 See also Ilaria Buri and Joris van Hoboken, ‘The Digital Services Act (DSA) Proposal: A Critical Overview’ (Institute for Information Law (IViR), University of Amsterdam 2021) <https://dsa-observatory.eu/wp-content/uploads/2021/11/Buri-Van-Hoboken-DSA-discussion-paper-Version-28_10_21.pdf> accessed 25 September 2022, p. 34. Although most of the alleged illegal content will simply be removed on the basis of notice and takedown procedures, without a more extensive legal assessment, this is of limited relevance from the point of view of risk assessment, the latter being a different ex ante evaluation based on potential risks. The critical issue is thus not the lack of a more extensive legal assessment, but the difficulty in defining the illegal nature of some content and, consequently, the content monitoring systems to be adopted for risk mitigation, as several aspects are contextual (e.g., culture-dependent aspects related to defamation or context-dependent legitimate use of copyrighted materials).
2 See Ellen Goodman and Julia Powles, ‘Urbanism Under Google: Lessons from Sidewalk Toronto’ (2019) 88 Fordham Law Review 457–498; Mantelero, Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI, 76-82.
3 See Mantelero, Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI, 104-109 and 127-130.
4 See, e. g., Guido Calabresi and Philip Bobbitt, Tragic Choices (Norton 1978).
5 See Surya Deva, ‘Treating Human Rights Lightly: A Critique of the Consensus Rhetoric and the Language Employed by the Guiding Principles’ in David Bilchitz and Surya Deva (eds), Human Rights Obligations of Business: Beyond the Corporate Responsibility to Respect? (Cambridge University Press 2013) <https://www.cambridge.org/core/books/human-rights-obligations-of-business/treating-human-rights-lightly-a-critique-of-the-consensus-rhetoric-and-the-language-employed-by-the-guiding-principles/20E6A9EC8600D94AE7D7900FB4FAAAF3> accessed 31 August 2022.

SUGGESTED CITATION  Mantelero, Alessandro: Fundamental rights impact assessments in the DSA, VerfBlog, 2022/11/01, https://verfassungsblog.de/dsa-impact-assessment/, DOI: 10.17176/20221101-220006-0.
