08 December 2025

Artificial Intelligence and Human Rights Courts

Understanding the Challenges

Artificial intelligence (AI), especially generative AI, is becoming increasingly embedded in almost every aspect of life. From user experience to management of industrial processes and education, AI is everywhere. In this context, it is particularly challenging to assess how the use and deployment of AI are already affecting the work of courts, both at the domestic and international levels. Human Rights Courts (HRCs) are no exception, and their work will be increasingly shaped by this technology. To better understand the challenges these institutions face, particularly those posed by the rise of generative AI, this piece proposes to distinguish between two aspects of AI and its use that are likely to influence the work of HRCs: first, how HRCs will use and deploy AI in their operations, and second, how they will engage with cases involving AI usage. Rather than offering solutions, this piece aims to lay the groundwork that will help start a much-needed conversation.

Artificial Intelligence and the Judiciary

AI is an umbrella term that refers to a range of processes and technologies enabling computers to support or substitute for tasks traditionally performed by humans (UN GA A/73/348, p. 3). Several types of AI can accordingly be distinguished, including traditional or predictive AI, which forecasts outcomes based on historical data (Caballar; Microsoft), and generative AI, which performs human-like tasks by generating original content such as text, video, audio or images (Stryker and Scapicchio; Microsoft).

The rapid deployment of AI, especially generative AI, in judicial settings is quite unsettling. Courts, especially at the domestic level, are already employing AI despite the lack of a comprehensive understanding of the technology's consequences, particularly its legal implications (UN GA A/80/169). Although HRCs have their own characteristics, several of the questions concerning the responsible adoption of AI raised at the domestic level, such as those of efficiency, transparency, bias, and the risk of errors, also apply to these courts.

At the domestic level, efficiency is often cited as the main reason for deploying AI in courts. In overburdened court systems, the promise of automating repetitive or time-consuming routine tasks, and thereby increasing the volume of cases processed, is very attractive. And that is understandable: if the tools make daily work easier, why not use them?

In the HRC setting, efficiency will likely be (and is already, as Veronika Fikfak and Laurence Helfer will show in their contribution to this symposium) a driver for AI adoption, as the high volume of cases is also a pressing issue. For example, the European Court of Human Rights (ECtHR) had more than 60,000 pending cases by the end of 2024. In this context, automating routine tasks is a promising use of AI. Several questions nonetheless deserve consideration: What does efficiency mean from a human rights perspective? How might discourses of efficiency influence the adjudication of human rights? What is gained and what is lost in the process? How do the benefits of efficiency balance against the obligation to protect rights? Could safeguards be overlooked in the name of automation? For instance, processing large volumes of data could create the risk of overlooking data protection rules, particularly in regions that lack strong data protection frameworks comparable to the European Union's General Data Protection Regulation.

Another concern raised at the domestic level is bias. The use of AI, especially generative AI, in the judiciary has the potential to reproduce biases present in the datasets used to train the models. This is all the more concerning with the available commercial models, as it is nearly impossible to identify the sources of their training data. As a result, the models might reproduce, amplify and propagate those biases, producing content that reinforces societal biases, as Nydia Remolina and David Socol de la Osa have shown. In the HRC setting, according to Daniel Moeckli, this is particularly troubling, as non-discrimination is one of the core principles of human rights law. When considering the deployment of AI, HRCs should assess how they will account for biases and which measures they will implement to counter the resulting problems, so as to ensure that their decisions are fair and trustworthy.

There is also the risk of errors, which can arise for several reasons. In the context of generative AI, 'hallucinations' and over-reliance on the tools are particularly problematic. Hallucinations occur when the model produces information that is false, inaccurate, or nonsensical and presents it as factual (IBM; Google), while over-reliance occurs when the user places too much trust in the model and accepts its outputs even when they are incorrect (IBM; Passi and Vorvoreanu). Both risks can severely impact human rights.

In addition to the issues just mentioned, and to better understand the challenges that HRCs are facing with this technology, this piece considers AI's influence in two main areas. The first refers to the use of AI by HRCs themselves in their daily operations. This involves concerns both about how HRCs will deploy AI and about the purposes of such deployment. The second relates to how HRCs will address cases involving AI, and how this intersects with the broader issue of AI governance.

First Challenge: Using and Deploying AI in Human Rights Courts

The first challenge concerns the use and deployment of AI by HRCs. The question is not whether HRCs will use and deploy AI in their daily operations, but when (and, by extension, how) they will do so. Indeed, the Inter-American Court of Human Rights (IACtHR) has already done so, introducing an AI-powered platform that allows systematic and organized searches of the Court's jurisprudence (Themis AI).

There are several areas where AI tools might help the usually overloaded workflows of HRCs. AI could, for instance, be usefully employed in translation and document summarization, research support, and even drafting assistance. However, the responsible adoption of AI, particularly generative AI, requires balancing the perceived benefits (particularly those related to efficiency: more cases in less time) against the risks that this technology poses to human rights.

At the domestic level, the deployment of AI usually follows one of two pathways: a more formal institutional framework or a more informal, ad-hoc, practice-driven approach. The institutional pathway requires that the adoption of AI be supported by the court itself, often including dedicated rules or normative frameworks to govern its deployment and use. The informal approach, by contrast, usually follows a bottom-up process in which judicial staff use AI tools (particularly commercial large language models) on an individual basis, often without a proper legal or ethical framework in place.

These two domestic approaches have different implications and are likely to influence how HRCs adopt AI. While a more formal approach appears more consistent with a human rights legal framework, it could also hinder innovation and slow the adoption of AI within HRCs. A more informal approach is more flexible and could ease AI adoption by HRCs, but it is also problematic given the sensitivity and complexity of the cases involved. For instance, informal uses tend to foster a lack of transparency concerning the employment of AI, which may conflict with the very rights that HRCs are tasked to uphold (for example, by violating the non-discrimination principle or by conflicting with due-process guarantees).

Additionally, there is another ethical dimension that must be considered: is it ethical for HRCs to employ tools that carry significant environmental costs? AI development, including computing and data storage, is resource-intensive and, in some cases, directly affects local communities. Hence, another salient question emerges: is it acceptable for HRCs to use tools that might have negative impacts on communities already affected by environmental degradation?

Whatever approach HRCs choose, it is clear that the responsible adoption of AI requires these institutions to think carefully about adequate legal frameworks that balance necessary innovation with the protection of human rights.

Second Challenge: Engaging with AI in Human Rights Cases

In a context where the use of AI has increased significantly in recent years, it is only a matter of time before HRCs are confronted with cases in which AI is involved. How HRCs will address such disputes constitutes the second challenge that AI poses to their work. This is a compelling dimension for understanding the intersection between technology and human rights law and for addressing the impact of AI on human rights.

This second challenge involves two different dimensions, each with its own implications. The first dimension concerns the level of technical capacity that HRCs must possess to meaningfully evaluate cases in which AI plays a central role. These cases may involve, for example, mass surveillance systems, facial and biometric recognition technologies, or algorithmic decision-making in immigration or welfare settings.

These cases pose serious questions for judicial work. To what extent can judges and legal officers be expected to understand the underlying technical architecture of AI systems? How will judges and legal officers be trained to engage critically with AI-related evidence and claims, such as biases, opacity, and accuracy?

The second dimension concerns the extent to which judgments delivered by HRCs may influence broader AI governance frameworks, both at the domestic and international levels. The question is not whether HRCs will directly participate in shaping these frameworks, but how their rulings may influence them. Although HRCs are not rule-makers, their jurisprudence can shape how future rules are interpreted, designed, and implemented.

This is particularly important in a regulatory landscape where many emerging AI normative frameworks expressly rely on human rights norms (see here and here). In this context, HRCs' reasoning may indirectly define the scope of permissible State action or of State obligations, which will in turn affect AI development and deployment. For instance, if an HRC interprets State obligations in environmental matters, that interpretation may shape how regulators design AI frameworks to accommodate those same obligations.

Several questions arise: Should HRCs consider these broader implications when deciding cases? Do they have a duty to think beyond the individual case and consider how their decisions contribute to or shape emerging norms on AI? There is no simple answer, but these questions reveal how AI challenges traditional assumptions about judicial work.

Final Thoughts

AI is not going away; it is becoming an embedded feature of virtually every aspect of society, including the judiciary. HRCs are no exception, and they will be called upon to address the responsible adoption of AI within their own operations as well as its increasing presence in the cases before them. The key question for HRCs is not whether they should adopt AI, but how and when they will do so, and, more importantly, why. In this context, the discussion of normative frameworks for responsible AI adoption should begin with a simple but nonetheless fundamental question: what do HRCs want these tools for? What purpose should they serve within a human rights framework?

The adoption of AI in HRCs' operations offers opportunities to improve efficiency and access, but it also poses significant challenges. When implementing these tools, HRCs must ensure that they do not compromise the very rights they are tasked to protect. At the same time, HRCs will increasingly engage with cases involving AI and will need to develop a greater awareness of the technology's complex implications for human rights. The responsible adoption of AI will require careful consideration and a balanced approach, grounded in a nuanced understanding of the technical architectures underlying AI systems.


SUGGESTED CITATION  Pilar Llorens, Maria: Artificial Intelligence and Human Rights Courts: Understanding the Challenges, VerfBlog, 2025/12/08, https://verfassungsblog.de/artificial-intelligence-and-human-rights-courts/, DOI: 10.17176/20251209-172102-0.
