Law students often cite poor math scores as a reason for choosing their course of study. Refugees from a world increasingly dominated by numbers and number-crunchers, jurists often wear the adage “iudex non calculat” as a badge of honour. The jurists’ legendary allergy to quantitative arguments, though, has been said to work to the detriment of good judicial decision making. The scale of partisan gerrymandering in the United States is seen as fundamentally undermining the principles of democratic accountability. Although both parties have sought to redefine electoral districts for their own political profit, Republicans have been taking advantage of the coincidence of a favourable political cycle with the constitutionally mandated decennial census to redraw the boundaries of congressional districts. Partisan gerrymandering is claimed to have influenced the outcome of the 2016 US elections, and it could affect the results of the upcoming midterm elections. This is in a context where the electoral system already shows a marked Republican bias: Democrats have won a plurality of the vote in four of the last five elections but secured a majority of congressional seats in only one of them.

Many have pinned their hopes on the Supreme Court as the institution best positioned to address this dysfunction of American democracy. But in the two cases that came before the Court this last term, Gill v Whitford and Benisek v Lamone, the Justices were criticized for failing to engage with quantitative measures of gerrymandering. During oral argument, Chief Justice John Roberts called these measures “sociological gobbledygook”. The Supreme Court ended up dodging the issue, although the question may well come back before it.
Overcoming the discipline’s allergy to numbers could do some good not just to constitutional judges but also to the scholarship that concerns itself both with the constitutional texts judges are supposed to apply and with the decisions they churn out. Some may object that quantitative methods are poorly suited to the analysis of legal arguments and textual content. But an expanding body of empirical constitutional research demonstrates that this view is wrong.
Start with constitutional texts. Mila Versteeg has systematically coded the content of all human rights catalogues enacted worldwide since World War II, including all the revisions they have undergone. Together with David Law, she has used these data to shed light on fascinating trends in the evolution of global constitutionalism. Among them are the phenomenon of “rights creep” – the fact that an ever larger number of constitutions enshrine an ever larger number of rights – and the rise of “generic” human rights provisions, i.e. boilerplate rights provisions that now appear in virtually all constitutional charters of the world. This work also documents the diffusion of constitutional review over the last 70 years. More impressive still, the Comparative Constitutions Project has assembled an incredible wealth of data on all constitutions and constitutional amendments enacted since 1789. The resulting database compiles information on more than 600 indicators, covering all aspects of constitution-making, from the composition of the legislature to the rights of individuals and minority groups or the content of the preamble. The constitution of any state that has been independent at some point over the past 200 years is included (so you will find Bavaria for the stretch of time during which it functioned as an independent monarchy). This dataset has already served to test a wide range of hypotheses, from the factors driving the global spread of judicial review to the environmental and design causes of the longevity of constitutions or the incorporation of international law into domestic legal orders.
It is sometimes said that these research projects only consider the “law in the books”. Constitutional law is more than the sum of all constitutional texts. The 7,762 words of the US Constitution are dwarfed by the gloss Supreme Court judges have put on them through hundreds of rulings. The same holds for the 27,739 words of the German Basic Law in comparison to the voluminous case law of the German Federal Constitutional Court. Yet quantitative methods can be just as illuminating when it comes to investigating judicial output. Much of constitutional scholarship (and possibly most of the contributions to this Blog) consists of case analyses which attempt to evaluate the significance of a ruling by relating it to previous decisions. Case citation dynamics is an example of an area where a technique known as “network analysis” has been fruitfully applied, as shown by work on the European Court of Justice and the European Court of Human Rights. At least in places where judges do refer to previous rulings, the web of precedents in which a decision is (or is not) embedded can reveal a lot not only about the general direction of a court’s jurisprudence but also about the role of precedents in constitutional reasoning, especially in hard cases where the need to fend off criticism is felt particularly strongly. In the same vein, we made a stab at mapping argumentation strategies in landmark constitutional cases in a recent book. We did so by bringing together 20 scholars from five continents to investigate constitutional reasoning in 18 constitutional courts (included are the European Court of Human Rights and the European Court of Justice as well as the major constitutional tribunals of the democratic world).
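To make the citation-network idea concrete, here is a minimal sketch of its simplest ingredient: treating each citation as a directed edge and counting how often each precedent is cited. The case identifiers and citation pairs are invented for illustration; actual studies of the ECJ and ECtHR case law use far richer network metrics (centrality, hub and authority scores) on tens of thousands of edges.

```python
from collections import defaultdict

# Hypothetical citation edges: (citing case, cited case).
# All case identifiers are invented for this illustration.
citations = [
    ("C-2010/4", "C-1964/6"),
    ("C-2010/4", "C-1978/106"),
    ("C-2015/62", "C-1964/6"),
    ("C-2015/62", "C-2010/4"),
    ("C-2018/12", "C-1964/6"),
]

# In-degree: how often each precedent is cited -- a first, crude
# proxy for its centrality in the web of case law.
in_degree = defaultdict(int)
for citing, cited in citations:
    in_degree[cited] += 1

ranked = sorted(in_degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # most-cited precedents first
```

In-degree is only a starting point; the appeal of network analysis is precisely that it can go further, asking, for instance, whether a decision draws on a tightly knit cluster of precedents or sits isolated at the periphery of the jurisprudence.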
Instead of simply reporting the received scholarly wisdom, each contributor based her analysis on the 40 “most important” constitutional cases of the respective court, selected according to a carefully designed procedure involving five additional local experts. All 760 judgments thus collected were then systematically annotated following a detailed conceptual map spanning 40 categories. Does the opinion make use of plain meaning, teleological interpretation or precedent? Does it cite legal scholarship? What is the structure of the argumentation? Is the case about competence or about human rights? Using this methodology, the book finds a great variety of argumentation patterns across cases. This finding is consistent with previous research suggesting that judges tend to be more creative in more salient cases. While casting doubt on the reality of a common law vs civil law divide often taken for granted, the book also finds that purposive (i.e. teleological) and precedent-based arguments have been gaining in popularity, whereas textualist arguments and arguments based on (original) intent have been losing traction. We tried to rationalize these findings by linking them to the rise of judicial power: leading judgments capture major episodes of law creation in which judges choose their arguments strategically, according to their audience and political environment.
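The annotation step described above boils down to a simple data structure: each judgment carries a set of category labels, which can then be aggregated into frequency counts. The sketch below uses three invented judgments and four of the categories mentioned in the text; the book's actual conceptual map spans 40 categories across 760 judgments.

```python
from collections import Counter

# Hypothetical annotated judgments: each record lists the argument
# types found in the opinion (labels follow the categories named in
# the text; the data itself is invented).
judgments = [
    {"id": "J1", "arguments": ["teleological", "precedent"]},
    {"id": "J2", "arguments": ["plain_meaning", "precedent"]},
    {"id": "J3", "arguments": ["teleological", "scholarship", "precedent"]},
]

# Aggregate annotations into frequencies per argument type.
freq = Counter(arg for j in judgments for arg in j["arguments"])
print(freq.most_common())
```

Comparing such frequency tables across courts, or across time, is what allows claims like “precedent-based arguments are gaining in popularity” to be stated as measured trends rather than impressions.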
Limitations and Opportunities
To be sure, none of these research projects, our own empirical inquiry into constitutional reasoning included, is free of limitations. Like any piece of research, quantitative research involves design choices, for which there is not always a single right answer. To what extent is the concept of a “most important constitutional ruling” intersubjectively verifiable? What should a comprehensive typology of constitutional arguments look like? Different choices will likely generate different results. Yet even when the research is imperfect, engaging with it critically and constructively – which implies resisting the temptation to dismiss it as “sociological gobbledygook” without further ado – can be a great source of insights, forcing researchers to think more deeply about concepts and assumptions that are routinely invoked in constitutional debates but lack analytical sharpness. Just as engaging with quantitative research on gerrymandering may help judges develop a workable standard for what is and what is not acceptable in a representative democracy, grappling with the challenges and puzzles raised by quantitative empirical scholarship can help constitutional scholars form a more accurate picture of constitutional law.
What is more, we predict that the work mentioned here is only the beginning of a more far-reaching revolution in constitutional research, as scholars begin to leverage the wide-ranging possibilities offered by Big Data, machine learning and natural language processing. Mapping issue attention across large corpora of judicial decisions and predicting constitutional rulings are just two examples of what the future may hold.
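In its most basic form, mapping issue attention means tracking how often a court's decisions touch on a given issue over time. The toy sketch below does this with a hand-made keyword lexicon over three invented decision snippets; real projects would instead use topic models or trained classifiers over full-text corpora.

```python
import re
from collections import defaultdict

# Toy corpus of (year, decision text) pairs -- all texts invented.
corpus = [
    (2001, "The right to privacy limits state surveillance."),
    (2010, "Data retention interferes with privacy and data protection."),
    (2019, "Automated data processing raises equality concerns."),
]

# Hypothetical issue lexicon: issue label -> indicative keywords.
issues = {"privacy": {"privacy", "surveillance"}, "equality": {"equality"}}

# Count keyword hits per issue per year -- a crude issue-attention map.
attention = defaultdict(lambda: defaultdict(int))
for year, text in corpus:
    tokens = re.findall(r"[a-z]+", text.lower())
    for issue, keywords in issues.items():
        attention[year][issue] += sum(t in keywords for t in tokens)

for year in sorted(attention):
    print(year, dict(attention[year]))
```

Scaled up to thousands of rulings, the same idea yields time series showing when an issue, say data protection, enters a court's agenda and how quickly it crowds out older concerns.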
This new scholarship, we believe, points the way to an exciting future for comparative constitutional law. Of course, the emergence of empirical methods will not make the normative debates redundant. Questions that are fundamentally normative in nature cannot, by definition, be reduced to empirical ones. Still, empirical approaches potentially represent a great enrichment to the normative discussion. By sharpening our understanding of how constitutional law operates in its global diversity, studies based on empirical social science methods promise to foster a more informed and, we believe, more interesting normative discussion.
You have read this long post all the way to the end. Thanks, much obliged! Now, let me ask you something: Do you enjoy reading Verfassungsblog? If you do, please support us so that we can keep up our work and stay independent.
All the best, Max Steinbeis