Artificial Intelligence Must Be Used According to the Law, or Not at All
Democracy requires strengthening the Rule of Law wherever public or private actors use algorithmic systems. The law must set out the requirements for AI that are necessary in a democratic society and organize appropriate accountability and oversight. To this end, the European Commission has made several legislative proposals. Beyond the discussion of how to use algorithmic systems lawfully, the question of when it is beneficial to use them at all deserves more attention.
Nobody should hide their responsibility behind automation
In August 2020, when hundreds of students gathered and shouted “f**k the algorithm” in front of the UK’s Department for Education, they did not rage against the machine. Their rage was directed against the government that had decided to use a tool they perceived as unjust. Even though media headlines and articles sometimes claim that “algorithms already rule our lives”, the students were not confused about who was responsible. Even if certain decision-making processes are automated, a “rule of the algorithm” neither should nor does exist. There must always be someone, either a legal or natural person, who uses the algorithm and can be held responsible.

Such use can, however, challenge the respect and enforcement of applicable legislation. Where a system is used without adequate safeguards and quality controls, whether to automate or support decision-making processes or for activities such as surveillance, it may violate the rights of individuals. Such violations can occur at great scale, depending on how broadly a system is used, and they can be difficult to prevent or detect when the system is not sufficiently transparent or people remain unaware of its use. For example, the automated inference of information about people can affect their privacy and data protection rights. Likewise, bias in the algorithms or training data of AI systems can lead to unjust and discriminatory outcomes.
The use of automated systems can also affect many other rights laid down in the EU Charter of Fundamental Rights, such as the rights to human dignity, good administration, consumer protection, social security and assistance, freedom of expression, freedom of assembly, education, asylum, collective bargaining and action, fair and just working conditions, access to preventive care, and cultural and linguistic diversity. If such systems are used in law enforcement or the judiciary, they can also affect the presumption of innocence and the right to a fair trial and defense.
The inaccessibility or non-existence of relevant information about automated systems impedes the effective enforcement of fundamental rights obligations and secondary law, as well as access to legal remedies. In addition, we sometimes see a tendency to design systems for purposes of deception and for evading responsibility. For example, certain car makers designed an algorithm that recognized when a car was being tested for exhaust emissions and produced favorable test results that did not correspond to the real emissions in normal traffic. And in the Right to be Forgotten/Google Spain case before the European Court of Justice, Google tried to dissociate itself from responsibility for the performance of its search algorithm, arguing that the search was fully automated and that Google was therefore not responsible for the results, either as a company or as a controller under EU data protection law (then Directive 95/46/EC, the predecessor of the General Data Protection Regulation (GDPR)). Thankfully, the European Court of Justice did not accept this early attempt to create a new “irresponsibility defense”.