Automating International Human Rights Adjudication in Strasbourg and Beyond
Celebrating the past 75 years of the European Convention on Human Rights also requires us to think about its future. The rapid progress made with digital technologies during the past decade – and especially the widespread use of AI – promises to reshape the work of the European Court of Human Rights (ECtHR). In fact, international human rights courts and quasi-judicial bodies are increasingly turning to a variety of automated decision-making (ADM) tools to address crushing backlogs and improve the efficiency of reviewing individual complaints. These courts and review bodies have, however, yet to fully address the profound legal, normative, and practical issues raised by these technologies.
In a recently published article, Automating International Human Rights Adjudication, we offer a comprehensive and balanced assessment of the benefits and challenges associated with introducing ADM technologies, including AI, into the work of international human rights complaints mechanisms such as the ECtHR. We argue for a selective approach to automation, endorsing ADM tools that preserve discretion by generating recommendations that human adjudicators are free to accept, reject, or modify, while strongly rejecting the use of such tools to fully automate judicial or quasi-judicial decision-making. In a time of rapid technological change, a broader goal of our article is to foster more in-depth conversations between legal actors in the international human rights system and technology experts to ensure that any tools that are adopted are both fit for purpose and accountable to litigants and to the public.
The distinctive aspects of international human rights adjudication
Adjudication before the ECtHR, as before any international human rights complaint mechanism, differs considerably from ordinary domestic litigation. First, international human rights bodies act as a forum of last resort. They provide a final opportunity to hold states accountable after domestic mechanisms have failed to prevent or remedy violations. This corrective function implies a responsibility to examine each claim with care, a value that ADM tools could potentially undermine. Second, the decisions of these international bodies are, with rare exceptions, final and unreviewable, making the quality and integrity of the decision-making process paramount. Third, decisions of international human rights bodies have significant cross-border effects. Judgments not only affect the obligations of the respondent state but also have an impact on other states and international bodies that view them as persuasive authority. Finally, international courts and review bodies have a longstanding commitment to the dynamic and evolutive interpretation of human rights treaties as living instruments, allowing them to expand protections in response to new contexts and progressive trends – a principle that clashes with the status quo bias of data-driven algorithms.
Types and purposes of automated decision-making (ADM)
We next review six functions that ADM tools could perform in human rights adjudication:
- Content capture: Digitizing hard copy documents to replace repetitive manual tasks and reduce docket congestion.
- If-then calculations: Applying fixed, rule-based criteria to simple registration and admissibility decisions, such as time limits or jurisdiction (see the sketch after this list).
- Decision tree and precedent guidance: Providing new staff and judges with a structured, step-by-step decision-making process, recommending relevant precedents and identifying older ones for potential review.
- Summarization and translation: Using software to summarize lengthy complaints or translate key texts, freeing up staff time.
- Similarity clustering: Identifying highly repetitive cases or complaints alleging similar violations against the same state (e.g., for pilot judgments) to improve efficiency and consistency.
- Prediction: Using algorithms to predict the outcome of a case, such as whether a state is likely to be found in violation of the European Convention.
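To make the second of these functions concrete, here is a minimal sketch of an if-then registration check in Python. Everything in it is a simplified assumption of ours – the Application fields, the registration_check helper, and the reduction of the ECtHR's four-month time limit to a fixed day count – rather than a description of any court's actual rules or software.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative stand-in for a fixed filing deadline (the ECtHR's current limit
# is four months); real admissibility rules are more nuanced than a day count.
FILING_DEADLINE = timedelta(days=122)

@dataclass
class Application:
    final_domestic_decision: date   # date of the last domestic ruling
    lodged: date                    # date the application reached the registry
    domestic_remedies_exhausted: bool

def registration_check(app: Application) -> tuple[bool, list[str]]:
    """Apply fixed if-then criteria; anything the rules cannot resolve
    is simply passed on to a human case lawyer."""
    issues = []
    if app.lodged - app.final_domestic_decision > FILING_DEADLINE:
        issues.append("lodged outside the filing time limit")
    if not app.domestic_remedies_exhausted:
        issues.append("domestic remedies not exhausted")
    return (len(issues) == 0, issues)

# Example: an application lodged roughly five months after the final decision
ok, issues = registration_check(Application(date(2024, 1, 10), date(2024, 6, 15), True))
print(ok, issues)  # False ['lodged outside the filing time limit']
```

The design point such a sketch illustrates is that rule-based checks of this kind only sort the clear cases; anything ambiguous falls through to human review.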
The suitability of ADM: preserving judicial discretion
Having identified the range of potential functions, we next ask what types of ADM are suitable for international human rights adjudication. The suitability framework we propose assesses whether an ADM tool preserves the essential attributes of human judging, including discretion, agency, dignity, fairness, and individualized justice. We strongly endorse the use of ADM to improve the speed and consistency of digitization and case management tasks, such as content capture, if-then calculations for initial registration decisions, and similarity clustering for identifying repetitive cases.
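To illustrate the last of these endorsed uses, the following is a minimal sketch of similarity clustering over complaint texts. It is our own illustration, not the Court's tooling: the complaint summaries are invented, and the TF-IDF representation and cosine-similarity threshold are simplistic assumptions that a production system, facing multilingual filings, would have to go well beyond.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical complaint summaries; in practice these would come from the
# registry's case-management system after content capture.
complaints = [
    "Applicant alleges excessive length of civil proceedings (Article 6).",
    "Complaint about unreasonable duration of civil litigation under Article 6.",
    "Alleged ill-treatment in pre-trial detention (Article 3).",
]

vectors = TfidfVectorizer().fit_transform(complaints)
similarity = cosine_similarity(vectors)

# Flag pairs above an illustrative threshold as candidates for joint processing.
THRESHOLD = 0.3
for i in range(len(complaints)):
    for j in range(i + 1, len(complaints)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible repetitive pair: #{i} and #{j} (score {similarity[i, j]:.2f})")
```

Crucially, a tool like this only surfaces candidate pairs; whether cases are genuinely repetitive, and whether they warrant joint treatment, remains a human judgment.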
However, we draw a firm line against using ADM to predict legal concepts, relevant precedents, and violation determinations, especially for tasks closely linked to final judgments on the merits. In particular, we reject on legal, normative, and practical grounds the use of algorithms or AI to predict whether a state has violated a treaty. Such decisions must remain in the hands of human adjudicators, both to preserve judicial discretion and to honour the principle of individualized justice.
The challenge of minimizing cognitive bias
Even ADM limited to semi-automated, supportive roles implicates two major types of cognitive bias: biases inherent in the data and biases arising from human-machine interaction. As our article explains, these biases must be minimized if algorithms and AI are to remain suitable for international human rights adjudication.
Status quo bias. Algorithms learn from previous jurisprudence, meaning their predictions tend to align with the status quo and reproduce prior outcomes. This poses a direct challenge to the expansive interpretation and living instrument doctrine that are fundamental to the ECtHR and other international human rights bodies.
Selection bias. The selection and quality of training data are critical. Imbalanced datasets, such as an over-representation of cases against a single state (e.g., Ukraine in early ECtHR studies), can skew results and reduce an algorithm’s utility and accuracy for analyzing human rights concerns in other countries.
Biases that arise from human-machine interaction include automation bias (over-reliance on an algorithmic recommendation that in fact warrants scepticism) and discounting bias (the inappropriate rejection of an accurate algorithmic recommendation). The tension is that we expect humans to supervise machines even when machines, on average, outperform human decision makers. Studies have shown that training, disclosures, and cautionary instructions have only a limited impact on reducing these biases or improving human oversight.
Enhancing accountability
The persistence of cognitive biases reinforces the need to manage the risks of supportive or semi-automated ADM through a robust accountability framework. Our article conceptualises accountability as comprising two essential elements: institutional oversight and explainability.
Institutional oversight
Institutional oversight requires international human rights tribunals to adopt a structured process for evaluating, designing, developing, implementing, and monitoring ADM tools. This process begins with an initial decision: before adoption, the tribunal must determine whether an ADM tool is compatible with the principle of individualized justice and meets the suitability standards for specific decision-making tasks.
For ADM tools that contribute to admissibility or merits decisions, accountability demands a public review process and potentially the creation of an external oversight body. This review should involve consultations with key stakeholders, including states, lawyers, litigants, and civil society groups, to discuss the selection of tools, how they will be introduced, and their foreseeable benefits and risks.
Crucially, oversight must be continuous. The ECtHR and other international human rights complaints mechanisms must monitor the ADM tool’s performance by examining metrics like accuracy and consistency. Based on this analysis, the court or body can recalibrate the tool as necessary.
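Our article does not prescribe specific metrics, but a minimal sketch of one plausible monitoring signal – the rate at which adjudicators accept or override a tool's recommendations – might look like the following. The audit_log structure and monitoring_report helper are hypothetical illustrations, not an existing system.

```python
from collections import Counter

# Hypothetical audit log: each entry records the tool's recommendation and the
# final human decision for one application (values are invented for illustration).
audit_log = [
    {"recommended": "inadmissible", "decided": "inadmissible"},
    {"recommended": "inadmissible", "decided": "admissible"},
    {"recommended": "admissible", "decided": "admissible"},
]

def monitoring_report(log: list[dict]) -> dict[str, float]:
    """Summarise how often adjudicators accept or override the tool's recommendation."""
    outcomes = Counter(
        "accepted" if entry["recommended"] == entry["decided"] else "overridden"
        for entry in log
    )
    total = len(log)
    return {label: count / total for label, count in outcomes.items()}

print(monitoring_report(audit_log))  # share of accepted vs overridden recommendations
```

Notably, such a metric cuts both ways: a very high acceptance rate may indicate an accurate tool, but it may equally signal the automation bias discussed above, which is precisely why human interpretation of the monitoring data remains essential.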
Explainability
The second component of accountability, explainability, focuses on the information provided to the public and to litigants after an ADM tool is introduced. This information can be divided into two distinct categories:
- Systemic, ex ante explainability: This is aimed at the public and provides a high-level, systemic overview of each automation tool. Tribunals should inform the public about how the tool was selected, vetted, and integrated into the adjudication process; why the automation is expected to improve decision-making outcomes; and why those outcomes are considered accurate and trustworthy.
- Case-specific, ex post explainability: These explanations are provided to litigants when an algorithm or AI assists with a substantive decision (e.g., a decision on inadmissibility or merits). At minimum, the parties should be informed which type of ADM tool was used in their case and, critically, whether the tribunal accepted, modified, or rejected the automated recommendation.
By adopting this dual framework of oversight and explainability, international human rights complaints mechanisms such as the ECtHR can leverage the efficiencies of ADM while preserving the legitimacy and integrity of the adjudication process.
Conclusion
The introduction of ADM to international human rights courts and review bodies is a double-edged sword. While it promises relief for overburdened judges, registries, and secretariats, along with increased efficiency and improved consistency, it also raises fundamental questions about the essential nature of rights adjudication, including the preservation of judicial discretion and the accountability of international institutions.
A central claim of our article is that the future lies in the adoption and refinement of semi-automated tools – AI and algorithms that issue recommendations that human decision-makers are free to accept, reject, or modify – supported by robust institutional oversight and explainability. Full automation of decisions, especially those pertaining to treaty violations, is never appropriate. As automation tools continue to evolve, we expect that more fundamental changes to the process of international human rights adjudication – including at the ECtHR – may ultimately be required.