<?xml version="1.0" encoding="UTF-8"?>
<mods xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.loc.gov/mods/v3" version="3.7" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-7.xsd">
  <titleInfo>
    <title>We Don’t Need No Education? Teaching Rules to Large Language Models through Hybrid Speech Governance</title>
  </titleInfo>
  <name type="personal" usage="primary">
    <namePart>Schulz, Wolfgang</namePart>
    <role>
      <roleTerm type="text">Author</roleTerm>
    </role>
    <role>
      <roleTerm authority="marcrelator" type="code">aut</roleTerm>
    </role>
  </name>
  <name type="personal">
    <namePart>Ollig, Christian</namePart>
    <role>
      <roleTerm type="text">Author</roleTerm>
    </role>
    <role>
      <roleTerm authority="marcrelator" type="code">aut</roleTerm>
    </role>
  </name>
  <typeOfResource>text</typeOfResource>
  <genre authority="rdacontent">Text</genre>
  <originInfo>
    <place>
      <placeTerm type="code" authority="marccountry">xx#</placeTerm>
    </place>
    <dateIssued encoding="marc">2023</dateIssued>
  </originInfo>
  <originInfo eventType="publisher">
    <publisher>Verfassungsblog</publisher>
    <dateIssued>2023-11-09</dateIssued>
  </originInfo>
  <language>
    <languageTerm authority="iso639-2b" type="code">eng</languageTerm>
  </language>
  <physicalDescription>
    <form authority="marccategory">electronic resource</form>
    <form authority="marcsmd">remote</form>
    <form type="media" authority="rdamedia">computer</form>
    <form type="carrier" authority="rdacarrier">online resource</form>
  </physicalDescription>
  <abstract displayLabel="Summary">Artificial Intelligence doesn't know what is 'true'. Generative AI models such as chatbots, in particular, veer from the truth, i.e. “hallucinate”, quite regularly: chatbots simply invent information at least 3 percent of the time, and in some cases as often as 27 percent of the time. Given the (future) use of such systems in nearly all domains, we might want them to follow more stringent rules of accuracy. And those truth-related rules are not the only rules for AI systems that warrant societal scrutiny; how those systems are trained will be crucial. In this blog post, we argue that a new perspective is key to tackling this challenge: “Hybrid Speech Governance”.</abstract>
  <accessCondition type="use and reproduction">CC BY-SA 4.0</accessCondition>
  <note type="statement of responsibility">Schulz, Wolfgang; Ollig, Christian</note>
  <subject>
    <topic>AIA</topic>
  </subject>
  <subject>
    <topic>DSA</topic>
  </subject>
  <subject>
    <topic>generative AI</topic>
  </subject>
  <subject>
    <topic>large generative AI models</topic>
  </subject>
  <subject>
    <topic>LLM</topic>
  </subject>
  <subject>
    <topic>speech governance</topic>
  </subject>
  <classification authority="ddc" edition="23">342</classification>
  <location>
    <url displayLabel="raw object" usage="primary display">https://verfassungsblog.de/we-dont-need-no-education/</url>
  </location>
  <relatedItem type="host">
    <titleInfo>
      <title>Verfassungsblog</title>
    </titleInfo>
    <identifier type="issn">2366-7044</identifier>
    <name>
      <namePart>Max Steinbeis Verfassungsblog gGmbH</namePart>
    </name>
  </relatedItem>
  <identifier type="doi">10.59704/2a94578e42e943d2</identifier>
  <recordInfo>
    <recordCreationDate encoding="marc">231109</recordCreationDate>
    <recordIdentifier source="DE-Verfassungsblog">10.59704/2a94578e42e943d2</recordIdentifier>
    <recordOrigin>Converted from MARCXML to MODS version 3.7 using MARC21slim2MODS3-7.xsl (Revision 1.140 20200717)</recordOrigin>
  </recordInfo>
</mods>
