18 February 2025

The De-Regulatory Turn of the EU Commission

Current events in the US, especially the takeover of executive-branch functions by the unelected private citizen Elon Musk, have left legal scholars and other constitutional experts in a state of shocked disbelief. From a European perspective, many consider such a development unthinkable. However, we should not be too certain about that. The EU Commission's recent decision to carry out a "de-regulatory turn" illustrates how strongly a narrative of technical innovation, one that has contributed to the success of individuals like Musk and their corporate conglomerates, is catching on globally. Continue reading >>
14 February 2025

Rethinking Remembrance

Can commemorative practices such as memorials, museums, and national remembrance days effectively transform attitudes and behaviours to deter violence? Despite the proliferation of memorialisation practices globally, their tangible impact on reducing violence or fostering reconciliation and healing is often assumed rather than rigorously demonstrated. Continue reading >>
23 January 2025

Banning AI for Political Campaigns

On 2 January 2025, the Indonesian Constitutional Court banned the use of Artificial Intelligence by political candidates to design campaign portraits, citing ethical concerns and a violation of the constitutional "honest principle." This post explores the cultural context behind this unique decision, focusing on how Indonesia’s communal values and emphasis on outward appearance shape both the Court’s reasoning and the petitioner’s arguments. Continue reading >>
12 December 2024

Integrating Artificial Intelligence in Ukraine’s Courts

This post examines Ukraine’s recent steps toward AI integration in the courts, highlighting initiatives and plans for the future. While these efforts reflect a growing recognition of AI’s potential, they also reveal limitations. Concerns surrounding AI, such as data security and confidentiality, reliability, transparency, explainability, accountability, fairness, and bias, are just as significant in judicial contexts as they are in other areas. Continue reading >>
10 December 2024

AI Act and the Prohibition of Real-Time Biometric Identification

Remote biometric identification (RBI) systems are increasingly becoming part of our daily lives. The most prominent example is the use of facial recognition technologies in public spaces (e.g. CCTV cameras). The AI Act regulates the use of RBI systems, distinguishing between real-time and post RBI systems. While one of the main aims of the AIA was to ban real-time RBI systems, the Regulation failed to do so in an effective manner. Instead, it can be argued that the AIA still allows for a broad use of such systems. Continue reading >>
09 December 2024

The EU AI Act’s Impact on Security Law

The process of integrating European security law is imperfect and unfinished, and given the constraints posed by the European Treaties, it is likely to remain that way for the foreseeable future. This inevitable imperfection, lamentable as it may be, creates opportunities for legal scholarship. Legal scholars are needed to explore the gaps and cracks in this new security architecture and ultimately to develop proposals for how to fix them. This debate series, a product of VB Security and Crime, takes the recently adopted AI Act as an opportunity to do just that: it brings together legal scholars, both German and international, to explain, analyze and criticize the EU AI Act's impact on security law from both an EU and a German national law perspective. Continue reading >>
27 November 2024

Who Let the Bots Out

As artificial intelligence revolutionizes modern warfare, systems like Israel's Lavender and Ukraine's use of Clearview AI are transforming combat with precision and efficiency. This advancement has sparked an urgent debate on the responsible use and governance of AI in the military, with 57 countries signing the Political Declaration on AI's military applications, urging adherence to international law. Central to this debate is accountability: who is responsible when AI systems violate the law? This blog post argues that state responsibility for AI violations remains viable within existing legal frameworks. Continue reading >>
12 November 2024

Frisch gewagt ist nur halb gewonnen

There is no success without training and good training material. What has long held true for humans is no different for artificial intelligence ("AI"). AI requires datasets of high quantity and quality in order to generate human-like creative output. These datasets include copyrighted works (such as photos or texts), which companies also use without first obtaining the authors' consent. A ruling by the Regional Court of Hamburg (LG Hamburg) now attempts to resolve this tension, but it succeeds only in part. Continue reading >>
03 November 2024

Of Artificial Intelligence and Fundamental Rights Charters

The Council of Europe has adopted the Framework Convention on Artificial Intelligence, the first of its kind. Notably, the Framework Convention includes provisions specifically tailored to enable the EU's participation. At the same time, the EU has developed its own framework around AI. I argue that the EU should ratify the Framework Convention, an essential first step toward integrating the protection of fundamental rights under the EU Charter. Ultimately, this should create a common constitutional language and bridge the EU and the Council of Europe to strengthen fundamental rights in Europe. Continue reading >>
03 June 2024

Deepfakes, the Weaponisation of AI Against Women and Possible Solutions

In January 2024, social media platforms were flooded with intimate images of pop icon Taylor Swift, quickly reaching millions of users. However, the abusive images were not real; they were deepfakes, synthetic media generated by artificial intelligence (AI) to depict a person's likeness. But the threat goes beyond celebrities. Virtually anyone can be a victim of non-consensual intimate deepfakes (NCID), with women being disproportionately targeted. Although most agree that companies must be held accountable for disseminating potentially extremely harmful content like NCIDs, effective legal responsibility mechanisms remain elusive. This article proposes concrete changes to content moderation rules as well as enhanced liability for AI providers that enable such abusive content in the first place. Continue reading >>