<?xml version="1.0" encoding="UTF-8"?>
<mods xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.loc.gov/mods/v3" version="3.7" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-7.xsd">
  <titleInfo>
    <title>Deepfakes, the Weaponisation of AI Against Women and Possible Solutions</title>
  </titleInfo>
  <name type="personal" usage="primary">
    <namePart>Kira, Beatriz</namePart>
    <role>
      <roleTerm type="text">Author</roleTerm>
    </role>
    <role>
      <roleTerm authority="marcrelator" type="code">aut</roleTerm>
    </role>
  </name>
  <typeOfResource/>
  <genre authority="rdacontent">Text</genre>
  <originInfo>
    <place>
      <placeTerm type="code" authority="marccountry">xx#</placeTerm>
    </place>
    <dateIssued encoding="marc">2024</dateIssued>
  </originInfo>
  <originInfo eventType="publisher">
    <place>
      <placeTerm type="text"/>
    </place>
    <publisher>Verfassungsblog</publisher>
    <dateIssued>2024-06-03</dateIssued>
  </originInfo>
  <language>
    <languageTerm authority="iso639-2b" type="code">eng</languageTerm>
  </language>
  <physicalDescription>
    <form authority="marccategory">electronic resource</form>
    <form authority="marcsmd">remote</form>
    <form type="media" authority="rdamedia">computer</form>
    <form type="carrier" authority="rdacarrier">online resource</form>
  </physicalDescription>
  <abstract displayLabel="Summary">In January 2024, social media platforms were flooded with intimate images of pop icon Taylor Swift, quickly reaching millions of users. The abusive content was not real, however: the images were deepfakes – synthetic media generated by artificial intelligence (AI) to depict a person’s likeness. But the threat goes beyond celebrities. Virtually anyone can be a victim of non-consensual intimate deepfakes (NCID), with women being disproportionately targeted. Although most agree that companies must be held accountable for disseminating extremely harmful content like NCIDs, effective legal responsibility mechanisms remain elusive. This article proposes concrete changes to content moderation rules as well as enhanced liability for the AI providers that enable such abusive content in the first place.</abstract>
  <accessCondition type="use and reproduction">CC BY-SA 4.0</accessCondition>
  <note type="statement of responsibility">Kira, Beatriz</note>
  <subject>
    <topic>AI</topic>
  </subject>
  <subject>
    <topic>AI Regulation</topic>
  </subject>
  <subject>
    <topic>Deepfake</topic>
  </subject>
  <subject>
    <topic>Misinformation</topic>
  </subject>
  <subject>
    <topic>Platform Regulation</topic>
  </subject>
  <classification authority="ddc" edition="23">342</classification>
  <location>
    <url displayLabel="raw object" usage="primary display">https://verfassungsblog.de/deepfakes-ncid-ai-regulation/</url>
  </location>
  <relatedItem type="host">
    <titleInfo>
      <title>Verfassungsblog</title>
    </titleInfo>
    <identifier type="issn">2366-7044</identifier>
    <name>
      <namePart>Max Steinbeis Verfassungsblog gGmbH</namePart>
    </name>
  </relatedItem>
  <identifier type="doi">10.59704/9987d92e2c183c7f</identifier>
  <recordInfo>
    <recordCreationDate encoding="marc">240603</recordCreationDate>
    <recordIdentifier source="DE-Verfassungsblog">10.59704/9987d92e2c183c7f</recordIdentifier>
    <recordOrigin>Converted from MARCXML to MODS version 3.7 using MARC21slim2MODS3-7.xsl (Revision 1.140 20200717)</recordOrigin>
  </recordInfo>
</mods>
