Deepfakes, the Weaponisation of AI Against Women and Possible Solutions
In January 2024, social media platforms were flooded with intimate images of pop icon Taylor Swift, quickly reaching millions of users. However, the abusive content was not real; it consisted of deepfakes – synthetic media generated by artificial intelligence (AI) to depict a person’s likeness. But the threat goes beyond celebrities. Virtually anyone (with women being disproportionately targeted) can become a victim of non-consensual intimate deepfakes (NCID). Although most agree that companies must be held accountable for disseminating potentially extremely harmful content like NCIDs, effective legal responsibility mechanisms remain elusive. This article proposes concrete changes to content moderation rules as well as enhanced liability for the AI providers that enable such abusive content in the first place.
Continue reading >>
A Primer on the UK Online Safety Act
The Online Safety Act (OSA) has now become law, marking a significant milestone in platform regulation in the United Kingdom. The OSA introduces fresh obligations for technology firms to address illegal online content and activities, including child sexual exploitation, fraud, and terrorism. It adds the UK to the growing array of jurisdictions that have recently introduced online safety and platform accountability regulations. However, the OSA is notably short on specifics. In this post, we dissect key aspects of the OSA's structure and draw comparisons with similar legislation, including the EU Digital Services Act (DSA).
Continue reading >>