03 June 2024
Deepfakes, the Weaponisation of AI Against Women and Possible Solutions
In January 2024, social media platforms were flooded with intimate images of pop icon Taylor Swift, quickly reaching millions of users. The abusive content was not real, however; the images were deepfakes – synthetic media generated by artificial intelligence (AI) to depict a person’s likeness. But the threat goes beyond celebrities. Virtually anyone can be a victim of non-consensual intimate deepfakes (NCID), with women being disproportionately targeted. Although most agree that companies must be held accountable for disseminating extremely harmful content like NCIDs, effective legal responsibility mechanisms remain elusive. This article proposes concrete changes to content moderation rules as well as enhanced liability for the AI providers that enable such abusive content in the first place. Continue reading >>
15 November 2023
Biden, Bletchley, and the emerging international law of AI
Everyone is talking about AI at the moment: Biden has issued an Executive Order, the EU is hammering out its AI Act, and world and tech leaders have met in the UK to discuss AI. The significance of Biden’s Executive Order can only be understood by taking a step back and considering this growing global AI regulatory landscape. In this blogpost, I argue that an international law of AI is slowly starting to emerge, pushing countries to stake out their own position on this technology in the international regulatory arena before others do so for them. Biden’s Executive Order should be read with exactly this purpose in mind. Continue reading >>