24 January 2023

The Council of Europe Creates a Black Box for AI Policy

The Council of Europe (CoE) Committee on AI has made a startling decision to carry forward future work on the development of an international convention on AI behind closed doors, despite the Council’s call for the Democratic Governance of Artificial Intelligence in a 2020 resolution. It is a surprising move from an international organization that has been at the forefront of efforts to promote greater transparency and accountability for the technology that is transforming the world. The decision is at odds with the Terms of Reference for the Committee on AI (CAI) and it is of particular concern to those who are closely following the impact of AI policy on democratic institutions.

The Meaning of Transparency for AI

The Center for AI and Digital Policy report Artificial Intelligence and Democratic Values (AIDV) set out to assess national AI policies and practices on a consistent methodology that could guide future policymaking. The aim is to provide the basis for both comparative assessment and longitudinal analysis. AIDV also provides an early warning system when countries take a sharp turn from democratic values. The alarm bells have just gone off.

Several of the metrics in the AIDV report consider the importance of transparency for the protection of democratic values. One factor is whether countries have established a formal right to “examine the logic of automated systems.” These words can be found in the original EU Data Protection Directive, as well as the modernized Council of Europe Privacy Convention, “108+.” In the report’s methodology, positive scores are assigned to those countries subject to these laws and other similar provisions.

The work to date at the CoE reflects a clear commitment to transparency in automated decision-making. Building on this work, and following an open and inclusive process, the final report of the Council of Europe Ad hoc Committee on AI (the CAHAI) called for strong provisions to ensure accountability in the proposed AI convention. As the expert committee explained, ‘the concepts of “transparency”, “explainability” and “accountability” are considered by the CAHAI to be of paramount importance for the protection of the rights of individuals in the context of AI systems.’

Transparency is the cornerstone of accountability in digital systems. Without transparency, it is not possible to assess outcomes or maintain explainability. And as the Court of Justice has recently determined, systems that do not allow meaningful transparency, such as AI systems that incorporate Machine Learning techniques, may be simply incompatible with the protection of fundamental rights.

Transparency is also the foundation of democratic governance. That is the second meaning set out in the methodology of the AIDV report – to assess whether governments had created a meaningful process for public participation and whether governments made relevant documents available to the public. This is the essence of democratic governance. Over the last several years, many countries developed national AI strategies to promote innovation and growth, and also to address ethical issues and social impacts. Many countries created inclusive processes that gathered input from academics, civil society, educators, social scientists, and others. The national strategies reflected these diverse inputs.

Not all governments followed this path. In the first edition of the report, the United States received low scores on transparency because the secretive National Security Commission on AI met behind closed doors, under the leadership of tech CEOs and the defense industry. That Commission, ignoring the concerns of the American public, which is more worried than enthusiastic about the AI future, put forward recommendations to advance the military-industry alliance and ignored proposals to establish necessary safeguards. In recent years, the US has taken a more open and inclusive approach to national AI policy. Identifying these changes in the practices of national governments and international organizations is critical to assess the impact of AI policy making on democratic institutions.

Back to the CoE

So, what are we to say of the recent developments at the Council of Europe? In brief, the CoE is moving in precisely the wrong direction. For two years, as observers of the CoE expert group on AI, civil society participated in all of the meetings, contributed suggestions, and worked with others, including national delegations and the secretariat, to help ensure that the mandate was fulfilled. As in all political processes, civil society did not achieve everything it hoped for, but there was substantial progress and areas for further work were clearly identified. The final report of the CAHAI set out the basic elements of the Convention to follow. But now the Committee on AI, at the urging of non-Member States, has chosen to move the process behind closed doors.

Why did this happen? Part of the problem is that the CoE looked to a precedent created by the drafting of the Cybercrime Convention (2001), which took place shortly after 9/11, when law enforcement cooperation was paramount. The CoE became the vehicle for an international convention to promote cooperation among police agencies. By way of contrast, the drafting of the more recent modernized Privacy Convention, focused on updates to protect fundamental rights in the realm of digital technologies, was open and inclusive. Civil society was welcome at the table and helped shape outcomes.

Part of the problem is also that the work of the CoE became entangled with similar work at the EU. The CoE began work on the AI Treaty at the same time the EU started up the EU AI Act, a risk-based framework to establish coherent regulations for the internal market. In the past, these two institutions have moved forward on parallel tracks without much conflict. That was the experience, for example, with the development of the GDPR and the Council of Europe Modernized Privacy Convention.

In this instance, Brussels decided that the EU AI Act should take priority. The European Commission’s ambition is to internationalize the EU AI Act through the CoE so that it will have the same influence as the GDPR, widely regarded as establishing the Brussels Effect and giving the EU global influence in the realm of digital policy. But the success of the GDPR came about without the need to co-opt the CoE process. The GDPR reflected the leadership of the Parliament, the close collaboration of the EU institutions, and the hard work of the advocacy community on a matter of global concern. The challenge to replicate the success of the GDPR will remain with the Commission regardless of the outcome at the Council of Europe.

Nonetheless, late last year the Commission told its delegates to Strasbourg to slow the work on the Convention until the EU AI Act was completed. This was a risky strategy for both the EU and CoE member states as many were eager to see the international convention move forward, and the EU AI Act itself is subject to delay as the incoming Swedish Presidency of the EU Council recently indicated.

The entanglement creates other problems as well. The attempt to reflect the work of the Commission at the CoE Committee on AI means that many of the delegates at the CoE now hold the economic portfolio: they managed country positions in the risk-based framework negotiations for the internal market, and they are not the experts in the human rights mandate of the CoE that one would typically expect to see in Strasbourg. Here again, the CoE rules of procedure, which anticipate the participation of high-level officials with the relevant expertise, are stretched to accommodate Brussels and the non-Member States.

This is not the outcome that should result. Jan Kleijssen, the outgoing director of the CoE Directorate on the Information Society, repeatedly emphasized the need for “complementarity” to align the work of the CoE and the EU and to preserve the mandate of the two.

The Path Ahead

All is not lost. The final report of the CAHAI provides a good basis for the text of the Convention. Civil society organizations have urged the CoE to build on this foundation. In a public statement to the Committee on AI, civil society organizations also urged the Committee to ensure that the Convention reinforces existing human rights frameworks and does not undermine human rights standards applicable to AI systems.

In a widely regarded report, the former UN High Commissioner for Human Rights urged a moratorium on the deployment of AI techniques that fail to comply with international human rights standards. UNESCO has recommended a ban on the use of AI techniques for mass surveillance and social scoring. The recent judgment of the Court of Justice in the Ligue des droits humains case raises the very real possibility that machine learning techniques are incompatible with the protection of fundamental rights, and in a statement to the UN Digital Envoy, CAIDP identified a dozen provisions of the ICCPR that could be adversely impacted by AI techniques. At a minimum, these human rights benchmarks for AI should be incorporated in the final text of the CoE Convention on AI.

The CoE must also pay particular attention to the structural gender, racial and socio-economic inequities embedded in datasets, and the impact of AI on marginalized communities, particularly those who are already subject to amplification of bias in the deployment of AI systems. Data analytic techniques provide powerful tools to uncover and correct bias in automated decision-making systems. But outputs derived from opaque systems challenge fundamental concepts of fairness and transparency.

Civil society organizations also urged the CAI to ensure that the Convention includes within its scope national security and dual-use applications of AI. These recommendations follow from growing awareness that AI-powered drone swarms pose enormous concerns for democratic societies, as do facial surveillance techniques that are introduced in the name of national security but quickly become the default when mission creep and authoritarian mandates converge.

These are the points that civil society groups were prepared to debate and discuss at the upcoming meetings of the Drafting Committee. Although civil society is no longer in the room, it will be important to monitor the outcomes closely. Leaders at the CoE should turn their attention to this work to ensure that the mandate is fulfilled and the democratic governance of AI is established. The CoE must ‘remain the benchmark for human rights, the rule of law and democracy in Europe’.

SUGGESTED CITATION  Hickok, Merve, Rotenberg, Marc; Caunes, Karine: The Council of Europe Creates a Black Box for AI Policy, VerfBlog, 2023/1/24, https://verfassungsblog.de/coe-black-box-ai/, DOI: 10.17176/20230124-220003-0.
