16 May 2024

Gaza, Artificial Intelligence, and Kill Lists

The Israeli army has developed an artificial intelligence-based targeting system called “Lavender”. This approach promises faster and more accurate targeting; however, human rights organizations such as Human Rights Watch (HRW) and the International Committee of the Red Cross (ICRC) have warned of accountability gaps for violations of International Humanitarian Law (IHL). In the following, we examine these concerns and show how responsibility for violations of IHL remains attributable to a state that uses automated or semi-automated systems in warfare.
07 February 2024

Examining the EU’s Artificial Intelligence Act

Consensus has finally been reached on the EU Artificial Intelligence Act. The academic community is thus in a position to provide a (slightly) more definitive evaluation of the Act’s potential to protect individuals and societies from the harms of AI systems. This blog post contributes to that discussion by illustrating and commenting on the final compromises on some of the most controversial and talked-about aspects of the AI Act: its rules on high-risk systems, its stance on General Purpose AI, and its system of governance and enforcement.
13 December 2023

What’s Missing from the EU AI Act

The AI Act negotiators may still have been recovering from the political deal struck during the night of December 8 to 9 when, two days later, Mistral AI, the French startup, open-sourced its potent new large language model, Mixtral 8x7B. Though much smaller in size, it rivals and even surpasses GPT-3.5 on many benchmarks thanks to a cunning architecture combining eight different expert models. While a notable technical feat, this release epitomizes the most pressing challenges in AI policy today and starkly highlights the gaps the AI Act leaves unaddressed: mandatory basic AI safety standards; the conundrum of open-source models; the environmental impact of AI; and the need to accompany the AI Act with far more substantial public investment in AI.
05 October 2023

Automated Decision-Making and the Challenge of Implementing Existing Laws

Who loves the latest shiny thing? Children, maybe? Depends on the kid. Cats and dogs, perhaps? Again, it probably depends. What about funders, publishers, and researchers? Now that is an easier question to answer. Whether in talks delivered by the tax-exempt ‘cult of TED’ or in open letters calling for a moratorium, the attention digital technologies receive today is extensive, especially those labelled ‘artificial intelligence’. This noise comes with calls for a new ad hoc human right against being subject to automated decision-making (ADM). While there is merit in adopting new laws dedicated to so-called AI, the procedural mechanisms that implement existing law require strengthening. The perceived need for new substantive rules to govern new technology is questionable at best and distracting at worst. Here we emphasise the importance of implementing existing law more effectively in order to better regulate ADM. Improving procedural capacities across the legal frameworks on data protection, non-discrimination, and human rights is imperative in this regard.
18 August 2023

One Act to Rule Them All

Soon Brussels' newest big thing - the Artificial Intelligence Act - will enter the Trilogues. In order to better understand what is at stake, who the main actors are and what motivates them, and how to make up one's mind about all the conflicting claims, we need to dive into the legal, economic, and political aspects of the AI Act. The aim of this piece is to contextualize major milestones in the negotiations, showcase some of the Act's critical features and flaws, and present the challenges it may pose in the near future to people affected by “smart” models and systems.
25 July 2023

A Scandal on AI in Administration, Again

After the infamous Dutch benefits scandal, the Netherlands is yet again the scene of a wrongful application of an algorithm by the government. This time, the main actor is the Dienst Uitvoering Onderwijs (DUO), the Dutch agency responsible for the allocation and payment of student loans to those enrolled in Dutch higher education. Specifically, DUO used an algorithm in its enforcement task, namely to verify whether student loans had been rightfully allocated. In 2012, DUO commenced the use of this ‘in-house’ algorithm, which the Minister of Education – under whose responsibility DUO falls – halted on 23 June. The developments in the Netherlands epitomize the promises and pitfalls of further integrating automated decision-making (ADM) into public administration. On the one hand, ADM – sometimes labelled ‘artificial intelligence’ – is cheap and promises efficiency gains. On the other hand, ADM systems may be error-prone when facing the complex realities of societal life and legal ambiguity.
07 April 2023

Squaring the Circle

The Italian Data Protection Authority banned ChatGPT for violating EU data protection law. As training and operating large language models like ChatGPT requires massive amounts of (personal) data, AI's future in Europe, to an extent, hinges upon the GDPR.
24 January 2023

The Council of Europe Creates a Black Box for AI Policy

The Council of Europe Committee on AI has made a startling decision to carry forward future work on the development of an international convention on AI behind closed doors, despite the Council’s call for the Democratic Governance of Artificial Intelligence in a 2020 resolution. It is a surprising move from an international organization that has been at the forefront of efforts to promote greater transparency and accountability for the technology that is transforming the world.
19 August 2022

Compute and Antitrust

Compute, or computing power, refers to a software and hardware stack, such as in a data centre or computer, engineered for AI-specific applications. We argue that the antitrust and regulatory literature to date has failed to pay sufficient attention to compute, despite compute being a key input to AI progress and services, the potentially substantial market power of companies in the supply chain, and the advantages of compute as a ‘unit’ of regulation in terms of detection and remedies.
18 August 2022

Effective Enforceability of EU Competition Law Under Different AI Development Scenarios

This post examines whether competition law can remain effective in prospective AI development scenarios by looking at six variables of AI development: the capability of AI systems, the speed of development, key inputs, technical architectures, the number of actors, and the nature of and relationships between these actors. For each of these, we analyse how different scenarios could affect effective enforceability. In some scenarios, EU competition law would remain a strong lever of control; in others it could be significantly weakened. We argue that, despite challenges to regulators' ability to detect and remedy breaches, the effective enforceability of EU competition law remains strong in many future scenarios.