04 April 2022

The Council of Europe as an AI Standard Setter

On 4 April 2022, Member States of the Council of Europe (CoE) commenced negotiations on the world’s first binding international legal instrument in the field of artificial intelligence (AI). In recent weeks, the CoE was in the headlines more prominently for expelling Russia from its membership (see here, here, here and here) than for regulating AI. Yet the CoE has a large reservoir of both experience and expertise in standard setting as far as its three key priorities are concerned: promoting human rights, democracy, and the rule of law. Given the undisputed need for regulating AI activities (see the Verfassungsblog symposium on ‘The Rule of Law versus the Rule of the Algorithm’), the CoE appears to be a prime candidate for this undertaking.

This story is not without parallels. Back in the early 1980s, the CoE was a standard setter in the field of data protection law, which at the time was still in its infancy. The so-called Data Protection Convention (or ‘Convention 108’) of 1981 was the world’s first international treaty on data protection. It had a large influence on EU regulation, with the former Data Protection Directive drawing inspiration from Convention 108. In the long run, however, EU law proved more powerful, leading to a reversal in influence: after the modernisation of EU data protection law through the enactment of the General Data Protection Regulation (GDPR), the CoE, too, felt that it was time for an overhaul of Convention 108. This led to the adoption of the so-called ‘Convention 108+’, which largely draws inspiration from the new GDPR. One major difference between the EU and CoE instruments, however, is their territorial scope: whilst the GDPR as an act of EU secondary law binds the 27 EU Member States, Convention 108 is binding not only on all 46 CoE Member States but also on nine States from Asia, Africa, and the Americas. This is due to Convention 108 being a so-called ‘open’ treaty, i.e. a treaty that is not tied to CoE membership but open to all States worldwide. With this, Convention 108/108+ has the potential to become a truly international standard in data protection law.

A similar scenario could also unfold in the field of AI. In 2021, the European Commission published its proposal for what is commonly known as the ‘AI Act’, currently under deliberation in the European Parliament. While activities at UNESCO and OECD level led to the adoption of soft law instruments, the CoE in 2019 set up the so-called Ad hoc Committee on Artificial Intelligence (‘the CAHAI’), with the aim of preparing the draft for a legally binding treaty on AI. By the end of 2021, the CAHAI, having conducted broad multi-stakeholder consultations, agreed on ‘possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law’. These results were initially kept confidential and were declassified only recently; the aim of this blogpost is therefore to take a first glance at them.

Possible Elements of an AI treaty

The proposal for a ‘legally binding transversal instrument’ aims to set ‘minimum standards for AI development’ (para 11). This makes sense for various reasons. First, the proposal opts for an open treaty, so the end product should not be restricted to CoE membership but be open to States worldwide (para 7). The standard to be established must therefore not be over-ambitious, because otherwise acceptance might be low. Second, given the activities at EU, OECD and UNESCO level, the CAHAI warns of the ‘risks of unwarranted duplication or fragmentation’ (para 15). This means that the CAHAI took due notice in its work of the developments under way at these different levels without, however, losing sight of the CoE core priorities. Third, and perhaps most importantly, the proposal takes a cautious approach, formulating the general rule that ‘the development and design of, as well as the research in, AI systems should be carried out freely’ but adding as a caveat ‘with due consideration for safety and security, and in full compliance with the Council of Europe standards on human rights’ (para 23).

The CAHAI further proposed that the concept of ‘human dignity’ should underpin the proposed binding legal instrument, given the acceptance this concept has gained worldwide (para 16). I am somewhat hesitant in this regard, as far as the genuinely legal impact is concerned. Undoubtedly, AI activities have, or at least can have, strong repercussions on human dignity. In legal terms, however, it should not be forgotten that human dignity is not explicitly protected as a right under the ECHR, notwithstanding the fact that the Strasbourg Court has acknowledged the overall importance of this concept for the interpretation of Convention guarantees. Furthermore, experience from German constitutional law shows that it is very difficult to agree on the exact contours of human dignity, which in a way is the basis of all fundamental rights. So, while I would have no objections against using human dignity as an overarching principle in the proposed binding legal instrument, I would be more sceptical about its usefulness in concrete and practical terms.

Commission Proposal and CAHAI Proposal compared

Very much in line with the proposed AI Act of the European Commission, the CAHAI pleads for dealing with AI systems according to a risk-based assessment (para 19). As may be recalled, the Commission proposal basically differentiates between ‘high-risk AI systems’ and ‘non-high-risk AI systems’. The CAHAI proposal further identifies certain ‘red lines’, such as ‘AI systems using biometrics to identify, categorise or infer characteristics or emotions of individuals, in particular if they lead to mass surveillance, and AI systems used for social scoring to determine access to essential services’ (para 21). In the Draft AI Act, prohibited AI practices are similarly spelled out in Article 5.

A major difference between the CAHAI proposal and the Commission proposal would seem to be the former’s emphasis on the use of AI in the public sector (paras 32 et seq.). The proposed AI Act, by contrast, follows a decidedly market-oriented approach: it applies to all ‘providers placing on the market or putting into service AI systems in the Union’ (Article 2(1)(a) Draft AI Act), be they natural or legal persons, public authorities, agencies or other bodies (Article 3(2) Draft AI Act). In the CAHAI proposal, some requirements relate to the use of AI in general, such as robustness, safety and cybersecurity, transparency, explainability, auditability and accountability (para 30). Based on the assumption, however, that the proposed instrument should be ‘general in nature’, the CAHAI ‘recommends that such instrument should focus on the potential risks emanating from the development, design, and application of AI systems for the purposes of law enforcement, the administration of justice, and public administration’ (para 33).

A remarkable exception in this regard is formed by the guarantees related to judicial protection, which are explicitly recommended to apply to all applications of AI systems (para 39). These include: ‘the right to an effective remedy before a national authority (including judicial authorities) …; the right to be informed about the application of an AI system in the decision-making process; and the right to choose interaction with a human in addition to or instead of an AI system, and the right to know that one is interacting with an AI system rather than with a human’ (para 40). Discussing the multitude of problems associated with these requirements would clearly go beyond the scope of this blogpost.

Finally, it should be mentioned that the CAHAI underlined the possible ‘need to ensure that all Parties share a common basic approach to civil liability in relation to AI’ while at the same time acknowledging that the ‘application of AI systems would in general be covered by existing domestic law of the Parties’ (para 42).

An additional non-binding mechanism

Beyond the proposed binding legal instrument briefly sketched out above, the CAHAI also proposed to introduce what it called ‘HUDERIA’ (Human Rights, Democracy and Rule of Law Impact Assessment, paras 45 et seq.). The main elements of this non-binding mechanism are the following (para 50):

‘(1) Risk Identification: Identification of relevant risks for human rights, democracy and the rule of law;

(2) Impact Assessment: Assessment of the impact, taking into account the likelihood and severity of the effects on those rights and principles;

(3) Governance Assessment: Assessment of the roles and responsibilities of duty-bearers, rights holders and stakeholders in implementing and governing the mechanisms to mitigate the impact;

(4) Mitigation and Evaluation: Identification of suitable mitigation measures and ensuring a continuous evaluation.’

Conclusion

With the CAHAI proposal, the CoE has entered the scene of international AI regulation. It has to square the circle of being open enough to gain broad acceptance on a worldwide basis while at the same time upholding CoE standards in the field of human rights, democracy, and the rule of law. The fact that a piece of legislation is simultaneously under way at EU level will possibly complicate proceedings. During the negotiations on what later became Convention 108+, the EU exerted major influence to bring the outcome in line with the GDPR. Although the EU is not a CoE member itself, it can be assumed that it will endeavour to bring the negotiated treaty in line with its AI Act once the latter is adopted. By and large, the two proposals would seem to be complementary to each other, but of course views might be divided on specific questions. One major issue could be whether the CAHAI’s concentration on the use of AI in the public sector leads to stricter standards compared to the market approach of the Commission proposal. Conversely, the CAHAI’s emphasis on minimum standards could also lead to more lenient standards. In any event, it is good to see the human individual placed at the centre of AI regulation.


SUGGESTED CITATION  Breuer, Marten: The Council of Europe as an AI Standard Setter, VerfBlog, 2022/4/04, https://verfassungsblog.de/the-council-of-europe-as-an-ai-standard-setter/, DOI: 10.17176/20220405-011301-0.


