Artificial Constitutionalism?
Testing the Boundaries of Freedom of Speech in the Age of Generative AI
Large language models (LLMs) are rapidly becoming embedded in everyday life, serving functions that range from professional assistance to entertainment and even emotional support. As their popularity and adoption grow, so do the legal questions surrounding their use – especially when interactions with individuals result in harm. A crucial threshold issue in establishing the legal framework applicable to LLMs, including the responsibilities of their developers, is whether their outputs – often resembling human expression – can receive constitutional protection as “speech”. The question is also key to determining the applicable legal regime, the liability of AI developers for such content, and its potential consequences for individuals.
A recent decision by a US court has begun to address this issue, offering preliminary insights into a seemingly trivial yet potentially contentious topic. In this piece, we explore the reasoning behind the court’s decision and argue that, despite the District Court’s negative answer in the case, algorithmic “speech” may nonetheless fall under the protection of the US First Amendment – even if such coverage does not necessarily extend to every output generated by AI. We then discuss the implications of granting constitutional protection to AI-generated speech for regulatory efforts, particularly those related to content moderation and liability. Although this debate is largely centred in the US, its legal implications could extend significantly to Europe.
The case of Garcia v. Character Technologies, Inc.
Character A.I. is a generative AI application that allows users to interact with various chatbots, referred to as “Characters”. These range from fictional persons to celebrities, with some even portraying a therapist. Characters are designed to respond in ways that mimic human mannerisms, including stuttering, emotive language, and the appearance of typing pauses, all of which contribute to their anthropomorphic nature. With 20 million active users as of January 2025, Character A.I. swiftly gained popularity as a tool for immersive and highly customised interaction.
One of these users was Sewell Setzer III, a teenager from Florida who started engaging with Game of Thrones-based Characters on the app. According to reports, the AI Characters expressed love, closeness, and romantic attachment during these discussions, which purportedly took on emotional and even sexual connotations. Over time, the teenager grew more emotionally reliant on these conversations and increasingly reclusive. Despite his parents’ efforts to help him, which included psychological support and taking away his phone, he eventually took his own life shortly after writing a farewell message to one of the Characters.
After the incident, the teen’s mother, Megan Garcia, filed a lawsuit against Character Technologies, as well as its founders and associated companies, including Google LLC, before the US District Court for the Middle District of Florida – Orlando Division. She claims that the design and operation of Character A.I. were dangerously defective. The lawsuit includes allegations that the app was psychologically manipulative, failed to verify users’ ages, encouraged obsessive usage, and presented itself in ways that led vulnerable users to perceive its responses as real and emotionally meaningful. Her claims include product liability, negligence, wrongful death, and deceptive trade practices.
In their motion to dismiss, the defendants argued that all of the plaintiff’s claims are barred by the First Amendment because the output of Character A.I. constitutes protected speech, which Character A.I.’s users have a right to receive. They analogise interactions with Character A.I.’s Characters to interactions with non-player characters (NPCs) in video games and with other people on social media platforms – both of which have received First Amendment protection.
In its order dated 21 May 2025, the court considered that the defendants had not adequately advanced such analogies, being unable to articulate “why words strung together by an LLM are speech”. According to the court, the question lies not in the mere similarity between Character A.I. and other protected mediums, but in whether the output itself qualifies as constitutionally protected speech – namely, whether it conveys ideas or messages in a meaningful way. Here, referring to Justice Barrett’s concurrence in Moody v. NetChoice, the court observed that, unlike an algorithm implementing a platform’s inherently expressive choice to remove posts supporting a particular position from its social media site, LLMs such as Character A.I.’s do not reflect human expressive choices because of their autonomous functioning.
The order constitutes the first documented precedent that specifically tackles the qualification of LLMs’ output as speech. While abstaining from providing a definitive answer at this early stage of the proceedings, the court did not exclude the possibility that AI-generated output may amount to speech.
Algorithmic speech: the new frontier of American exceptionalism?
So far, the debate on algorithmic speech has been predominantly addressed by US scholars investigating the possible extension of the First Amendment, which prohibits any abridgment of speech, to AI-generated outputs (Kaminski and Jones 2024; Salib 2024; Sunstein 2023; Volokh, Lemley and Henderson 2023; Massaro, Norton and Kaminski 2017). This is unsurprising, considering how freedom of speech has emerged as one of the most salient expressions of American exceptionalism – a term referring to the unique and unusually strong protection of values such as freedom of speech in the US compared to other liberal democracies (Schauer 2005). Indeed, freedom of speech has been characterised as ‘the paramount right within the American constellation of constitutional rights’ (Rosenfeld and Sajó 2007), with its scope of protection being broadly interpreted by US courts. This has resulted in the recognition of First Amendment protection for a broad range of expressive outputs – including video games and, more recently, social media feed curation, as highlighted in the defendants’ motion to dismiss – coupled with a general prohibition of content-based regulation of speech that results in viewpoint discrimination.
In contrast, Europe has historically followed a different path, rooted in a genuinely divergent and more limited understanding of the right to freedom of expression enshrined in the European Convention on Human Rights and in the EU Charter of Fundamental Rights. The European perspective stresses the variety of rights to which individuals are entitled and conceptualises freedom of expression without prioritising its protection over other competing fundamental freedoms (Haupt 2017). Furthermore, the interpretation of freedom of expression by European (supranational and national) courts has been consistent with a narrower coverage and scope of protection, in contrast to First Amendment expansionism.
However, some scholars have recently begun exploring algorithmic speech from a European perspective as well: first in relation to technological developments such as search engine results and autocomplete suggestions (Sears 2020), and more recently in the context of AI-generated content (Bassini 2025; de Vries 2021).
First Amendment constitutionalism has largely shaped the global debate on the impact of digital technologies on fundamental rights. This influence was already evident in the late 1990s, when the US Supreme Court, in the landmark Reno v. ACLU decision, portrayed cyberspace as the new marketplace of ideas. It remains clear today, with the US Supreme Court’s judgment in Moody v. NetChoice reasoning on the constitutional protection as speech of the curated compilations of content offered by social media platforms. Artificial Intelligence, the latest technological revolution to become part of everyday life, will likely reaffirm US exceptionalism in the area of freedom of speech. The next key test for this expansive interpretation of the First Amendment will concern the legal qualification of AI-generated output as protected speech.
AI-generated content and the right of listeners to receive information
In the case of Garcia v. Character Technologies, the court acknowledged that the US First Amendment protects not just speakers but also listeners. Notably, freedom of speech is a constitutional right protecting first and foremost the ability of individuals to communicate ideas, thoughts, and opinions without undue government interference. As such, it is attached to a speaker. In the case of AI-generated outputs, however, identifying a speaker is problematic, since generative AI systems can create utterances – in the form of text, images, or sounds – with minimal or no human involvement. Importantly, these systems can originate content in ways that users and even developers cannot fully control or anticipate.
But freedom of speech also has a passive dimension, which encompasses people’s right to receive information. As users increasingly turn to LLMs to access and aggregate information, the question of protecting these sources against undue government interference emerges: what if governments wish to suppress or otherwise limit the output that LLMs can generate? Some scholars argue that this suspicion of government control, together with the protection of listeners, is the most persuasive line of reasoning in this debate. Along similar lines, the US Supreme Court has extended First Amendment protection to video game content even when it is not clearly linked to a specific speaker: the aim was to protect the people interacting with the content, not the content itself. The same rationale could support protecting AI-generated content (Sunstein 2023).
In fact, this twofold nature of freedom of speech reflects the European understanding of freedom of expression. Article 10 of the European Convention on Human Rights, among others, highlights this inherent ambivalence, suggesting that European courts, too, may base constitutional protection on the individual’s right to receive information. However, when it comes to determining the scope of constitutional protection, the United States’ First Amendment exceptionalism may indeed make a difference. In its ruling, the District Court pointed to the difference between pure speech and expressive conduct to determine whether First Amendment rights were implicated in the case. While ultimately rejecting the defendants’ comparisons, the Court recalled the key importance attached to the expressive conduct test, whose purpose “is to determine whether conduct is sufficiently similar to speech so as to warrant First Amendment protections”. According to this test, the speaker must intend to convey a particular message, and there must be a high likelihood that the message will be understood by listeners. Rather than engaging in any broader exercise of “digital constitutionalism”, the District Court simply applied the well-established test set out by the Supreme Court in Spence v. Washington.
Challenges in recognising algorithmic speech
Far from being the final word on a highly debated and under-explored issue, the order of the District Court does not preclude future developments; rather, it paves the way for them in possible future litigation. It is important to note, though, that even if AI-generated content enjoys constitutional coverage, this does not necessarily imply that every output is worthy of protection. Legislators could still pass regulations requiring that generative AI output not include illegal content, such as defamatory statements, or banning the use of LLMs in specific contexts, such as school examinations. However, any such intervention would need to pass First Amendment scrutiny based on its impact on the public’s right to receive information (Bassini 2025), just like other comparable interferences.
As the case demonstrates, the existence of constitutional coverage for AI outputs may have significant implications, ranging from the liability of deployers or providers of AI systems for illegal or harmful content to the existence of content moderation obligations. Such coverage would practically restrict the ability of legislators to interfere with what LLMs can and cannot say; at the same time, it would do little to protect users from harmful yet lawful content.