This article belongs to the debate » The Rule of Law versus the Rule of the Algorithm
31 March 2022

Global Inequities in Algorithms

The Impact of Algorithms, Machine Learning and Automated Decision-making on the Most Vulnerable Populations

Algorithms can seem like an esoteric subject, often relegated to the realm of engineers and technology companies given the technical nature of algorithmic design. Yet when applied, algorithms take on a social character that invites us to peer beneath the hood to understand both their function and their application. Given the growing ubiquity of algorithms in our daily lives, policymakers are looking to capture them within regulatory mechanisms. Against the backdrop of these developments, algorithms that are presented as “neutral” are anything but; rather, they have the potential to amplify societal inequalities and bolster existing systems of power. This article seeks to understand the inequalities that undergird algorithmic applications, in order to inform how we regulate these systems.

Dismantling Techno-optimism

There has been a great deal of techno-optimism surrounding the use of algorithms, and often concurrently Artificial Intelligence, given their ability to sort, process and parse through large volumes of data. This optimism, though, glosses over many of the issues algorithms could pose to our present and future(s). Interventions by critical scholars and activists have underscored the fact that algorithms are not “neutral” but very much products of economic, social and political systems, and can have disparate impacts on different communities, particularly disadvantaging marginalized groups and identities. This claim to neutrality is important to dismantle, since it can make discrimination difficult to guard against, as individual intention is distilled through technological systems. Ruha Benjamin addresses this in her anthology “Captivating Technology”:

“These “default settings” encompass legal, economic, and now computer codes, and move past an individual’s intention to discriminate, by focusing analysis on how technoscience reflects and reproduces social hierarchies, whether wittingly or not. From credit-scoring algorithms to workplace monitoring systems, novel techniques and devices are shown to routinely build upon and deepen inequality”.1)

Technologies have the ability to obscure both individual and structural biases under claims to neutrality. These claims are even more potent when they are backed by colonial logics of Western objectivity, often providing cover to inequalities that inhere beneath the surface. In many countries, Pakistan for instance, where corruption is trumpeted as the foremost political problem and impediment to progress, there has been a tendency to speak in techno-optimistic terms that lend themselves to the deployment of algorithms to perform tasks humans are deemed too “flawed” to perform. These solutions are often bolstered by development and aid agencies as well.2)

Benjamin astutely points out that this tendency to look towards the technoscientific to ““fix” the problem of human bias when it comes to a wide range of activities” is misplaced, given that there is now ample literature establishing that algorithms have the potential to replicate societal biases and, in some cases, amplify them.3) Interventions by researchers such as Safiya Umoja Noble4) have gone a long way in documenting racist, sexist, ageist and ableist algorithms. Many developers will posit that algorithms are only as good as the data fed into them and that the solution lies with better data – machine learning will eventually correct itself, and biases are merely growing pains. However, data itself is marked by gaps and biases that are underpinned by discriminatory structures. These issues raise important questions for anti-discrimination law: How do we hold a machine or algorithm liable for bias and discrimination? What would the application of existing human rights and accountability frameworks look like in these cases?

Global Inequalities in Tech

These biases are bound to be exacerbated in many parts of the Global South, which are often “recipients” of ready-made technologies and algorithms rather than the sites where these technologies are built. This plays into larger issues of global inequities and the extractive nature of global capital. It is unlikely that big tech companies will design and train algorithms on data from each and every context they serve. The indifference of many platforms to developing context-specific content moderation is only the tip of the iceberg.5)

When it comes to homegrown technologies, the lack of capital has meant that these technologies are either not built in the first place or are unable to compete with algorithmic solutions from the more “developed world”. Even when measures are taken to ensure more context-specific tech, there is little room for conversations on bias. In Pakistan, the Presidential Initiative for Artificial Intelligence & Computing (PIAIC), for instance, has been launched to deliver courses on a range of topics such as AI, Cloud and Blockchain technologies to students and professionals, with the aim of capacity-building within the country. However, such homegrown programs have the potential to replicate approaches adopted elsewhere – building solutions without ensuring that ethics, privacy and non-discrimination are built into these systems.

The uncritical embrace of algorithmic solutions is prevalent in areas such as policing, as smart city projects mushroom across the world as a way to ensure better city management as well as crime control. Many of these technologies and systems are being imported into countries that lack safeguards regarding privacy and the rule of law. The use of algorithms in policing is bound to amplify the pre-existing biases of the legal system, which holds systemic biases along lines of gender, race, ethnicity, nationality, ability and class. There is a dangerous trend towards the global export of facial recognition technology, which employs algorithms to identify “suspects”, from countries such as Israel, the United States, Canada and China, prompting international bodies to call for a moratorium on the global trade of surveillance tech “until rigorous human rights safeguards are put in place to regulate such practices and guarantee that governments and non-State actors use the tools in legitimate ways.”6) These developments have been accelerated by the Covid-19 pandemic, as governments and businesses turn to technology as the “solution”. To illustrate, in Pakistan the government deployed a contact-tracing surveillance application, which was made mandatory for those travelling to Pakistan.7) The application, however, ran into many privacy issues because its developers failed to address security concerns at the design stage.8)

AI and the Law

All these issues raise the question: How do we begin to regulate algorithms? This question is particularly urgent for countries that lack constitutional and human rights safeguards that can be transposed onto emerging technologies. Pakistan currently does not have a data protection law, though a Personal Data Protection Bill has been developed by the Ministry of Information Technology and Telecommunications. However, the latest publicly available draft has faced criticism over its data localization requirements and vague terminology.9)

Algorithmic transparency, for instance, is posited as a way to force developers, be they businesses or states, to be open about how an algorithm makes decisions, so that biases can be spotted and accountability can be directed towards those responsible. However, algorithmic transparency is difficult to practice in contexts where rule of law mechanisms are weak. Furthermore, applying algorithmic regulation to tech and businesses developed “elsewhere” is a legal and jurisdictional challenge in the absence of international frameworks to ensure cross-border accountability. Facebook’s inability, and more damningly its unwillingness, to adapt its content moderation algorithms and policies to other contexts, for instance, has led to devastating consequences, the most egregious being the genocide in Myanmar.

Additionally, legal interventions can only do so much in contexts where infrastructural problems inhere in “building, maintaining, and appropriating data-driven technologies”.10) These problems can hardly be addressed legally; they stem from where the centers of global capital are located, which leads to the needs of one population being prioritized over those of another. A strictly legal approach would be mere window-dressing without serious investment in understanding and developing indigenous systems of knowledge geared towards context-specific technologies that are both developed locally and accountable to the populations they impact. The discourse around the legislation and regulation of technologies is oriented towards the West as the “gold standard”. This forecloses the possibility of developing more context-specific systems (and laws!) that speak to the needs and understandings of local populations, while presupposing that Western systems are objectively better, when they are far from perfect even in their own context. Such a techno-legal and context-specific approach is necessary for ensuring algorithmic justice for all.

References
1 Ruha Benjamin, “Captivating Technology”, p. 2.
2 Horlane Mbayo, “Data and Power: AI and Development in the Global South,” Oxford Insights, October 2, 2020, https://www.oxfordinsights.com/insights/2020/10/2/data-and-power-ai-and-development-in-the-global-south.
3 Ibid.
4 Safiya Umoja Noble, “Algorithms of Oppression,” NYU Press, 2018.
5 Chinmayi Arun, “AI and the Global South: Designing for Other Worlds,” in The Oxford Handbook of Ethics of AI, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, July 2020, DOI: 10.1093/oxfordhb/9780190067397.013.38.
6 Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, “Report on the adverse effect of the surveillance industry on freedom of expression,” May 29, 2019, A/HRC/41/35, https://www.undocs.org/A/HRC/41/35.
7 Amir Saeed, “Pakistan rolls out coronavirus surveillance app for incoming travelers,” Arab News, July 11, 2020, https://www.arabnews.com/node/1703361/world.
8 Ramsha Jahangir, “Govt’s Covid-19 app sparks furore over security flaws,” Dawn, June 11, 2020, https://www.dawn.com/news/1562767.
9 Kalbe Ali, “Pakistan personal data protection bill termed vague,” Dawn, October 24, 2021, https://www.dawn.com/news/1653675.
10 Ranjit Singh, “Mapping AI in the Global South,” Data and Society, January 26, 2021, https://points.datasociety.net/ai-in-the-global-south-sites-and-vocabularies-e3b67d631508.

SUGGESTED CITATION  Khan, Shmyla: Global Inequities in Algorithms: The Impact of Algorithms, Machine Learning and Automated Decision-making on the Most Vulnerable Populations, VerfBlog, 2022/3/31, https://verfassungsblog.de/roa-global-inequities-in-algorithms/, DOI: 10.17176/20220401-011214-0.
