Every Fake You Make
Blanket Deepfake Bans Are the Next Level in Asia’s War on Fake News
In Asia, the war on fake news is reaching the next level. Several countries have tightened regulatory demands over the past few years and introduced criminal liability for publishing false content online. These laws effectively make truth a matter of legal definition: anything that does not align with the legal standard of what counts as true may be found unlawful.
This blog post scrutinizes new legislation in South Korea and Singapore. Both jurisdictions are criminalizing deepfakes per se during election periods. The post situates these laws in the broader context of legal efforts to tighten control over digital communication in Asia and beyond.
Under previous laws penalizing fake news, not every false message was punishable: liability arose only where a message was likely to affect specific protected interests. In other words, prior rules penalized probable harm, not fake news as such. Now, in the run-up to an election, every deepfake becomes a crime in South Korea and Singapore. The new provisions do not require proof of even probable harm. They allow criminal punishment for conduct alone, for every election-related deepfake, without the need to substantiate any likely consequence.
While the aim of these offenses is to protect the integrity of elections, I expect the removal of the probable-harm requirement to become a preferred criminalization model in the war on fake news more generally. This is because blanket bans appear increasingly practical in the face of rapidly rising numbers of deepfakes. But without distinctions between harmful and harmless content, governments gradually evolve from agents of the public interest into arbiters of truth, assuming additional powers to define reality.
Blanket deepfake bans in South Korea and Singapore
Asia has been at the forefront of the global war on fake news. As I noted in previous posts, new and amended legislation across the region has established novel truth regimes that rely on various tools, including the criminal law. Among them: Malaysia’s Anti-Fake News Act (AFNA) of 2018, Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA) of 2019, and Thailand’s 2017 amendment of the Computer Crime Act. More recent laws are now introducing broad criminal provisions to function as blanket deepfake bans during election periods.
In late December 2023, the National Assembly of the Republic of Korea enacted amendments to the Public Officials Elections Act (in Korean) that entered into force in late January 2024.
A new Article 82-8, titled “Election Campaigning Using Deepfake Videos and Similar Methods”, prohibits, for the period from 90 days before an election until election day, the production, editing, distribution, screening and posting of virtual sounds, images or videos (“deepfake videos and similar content”) that are difficult to distinguish from reality and that are created using artificial intelligence technology or other means for election campaigning (para. 1). Violations are punishable with imprisonment of up to seven years or a fine of between 10 and 50 million won, roughly 6,800 to 34,000 EUR (Article 255 para. 5).
Outside the 90-day period, anyone who produces, edits, distributes, screens, or posts deepfake videos and similar content for election campaigning must clearly indicate that the information is virtual content created using artificial intelligence technology or other means, in accordance with regulations issued by the National Election Commission (Article 82-8 para. 2). Here, violations are punishable with a fine not exceeding 10 million won (Article 261 para. 3 No. 4).
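To make the two-track structure explicit, the following minimal sketch models Article 82-8 as a simple decision rule. The election date, function names and penalty strings are illustrative assumptions for this post, not an implementation of Korean law.

```python
from datetime import date

# Illustrative reference date only: Korea's April 2024 general election.
ELECTION_DAY = date(2024, 4, 10)

def article_82_8_status(publication_date: date, labelled_as_ai: bool) -> str:
    """Schematic model of Article 82-8: an outright ban inside the
    90-day window, a disclosure duty outside it."""
    days_before = (ELECTION_DAY - publication_date).days
    if 0 <= days_before <= 90:
        # Para. 1: within 90 days of the vote, election-campaign deepfakes
        # are prohibited outright, labelled or not.
        return "prohibited (up to 7 years or a 10-50 million won fine)"
    if not labelled_as_ai:
        # Para. 2: outside the window, content must clearly disclose that
        # it is virtual and AI-generated.
        return "unlawful without disclosure (fine up to 10 million won)"
    return "permitted with disclosure"

# Even a clearly labelled deepfake is banned inside the window:
print(article_82_8_status(date(2024, 2, 15), labelled_as_ai=True))
```

The point the sketch makes is that labelling, which matters outside the window, becomes irrelevant inside it.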
More recently, on 9 September 2024, the government of Singapore tabled the Elections (Integrity of Online Advertising) (Amendment) Bill in Parliament. Currently under deliberation, it is expected to be passed without major revisions. The bill introduces amendments to Singapore’s Parliamentary Elections Act 1954 and the Presidential Elections Act 1991.
According to draft Section 61MA of the Parliamentary Elections Act, it is an offence, during an election period (from the issuance of the writ to the close of polling), to publish (or cause to be published) election-related online advertising that contains an audio, visual or audiovisual representation of a candidate saying or doing something that they did not in fact say or do, where “the representation is realistic enough such that it is likely that some members of the general public would, if they heard or saw the representation, reasonably believe that the candidate said or did that thing”, and where the advertising was generated or manipulated at least in part using digital means.
Draft Section 61MA distributes the burden of proof between prosecutors and suspects: Prosecutors must prove the physical elements of the offence and that the suspect knew or ought reasonably to have known that the content is or includes online election advertising (draft Section 61MA para. 1). Suspects, however, carry the burden of proving that they did not know and had no reason to believe that the candidate did not in fact say or do what the advertising depicted (draft Section 61MA para. 3).
Violations will be punishable by a fine not exceeding S$1,000 (roughly 700 EUR), imprisonment of up to 12 months, or both (draft Section 61MA para. 2). In addition, the responsible authority (the Returning Officer) can take all reasonable steps to remove, disable access to, or stop or reduce such communications (Section 61N). However, the new deepfake provision would apply neither to private or domestic electronic communications nor to news publications by authorised news agencies (draft Section 61MA para. 4).
Similar changes will be introduced in Singapore’s Presidential Elections Act (draft Section 42LA).
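Purely as an illustration of how the elements and the reversed burden interlock, draft Section 61MA can be sketched as follows. Every predicate name is hypothetical and deliberately coarse; the sketch is a reading aid, not a statement of Singapore law.

```python
from dataclasses import dataclass

@dataclass
class AllegedDeepfakeAd:
    # Elements the prosecution must prove (draft s. 61MA para. 1):
    is_online_election_advertising: bool
    depicts_candidate_saying_or_doing_something: bool
    candidate_did_not_in_fact_say_or_do_it: bool
    realistic_enough_to_be_reasonably_believed: bool
    generated_or_manipulated_by_digital_means: bool
    publisher_knew_or_ought_to_have_known_it_was_election_ad: bool
    # Defence the suspect must prove (draft s. 61MA para. 3):
    publisher_reasonably_believed_content_was_genuine: bool = False

def offence_made_out(ad: AllegedDeepfakeAd) -> bool:
    prosecution_case = all([
        ad.is_online_election_advertising,
        ad.depicts_candidate_saying_or_doing_something,
        ad.candidate_did_not_in_fact_say_or_do_it,
        ad.realistic_enough_to_be_reasonably_believed,
        ad.generated_or_manipulated_by_digital_means,
        ad.publisher_knew_or_ought_to_have_known_it_was_election_ad,
    ])
    # The reversed burden: once the prosecution's case stands, it is for
    # the suspect to establish the lack-of-knowledge defence.
    return prosecution_case and not ad.publisher_reasonably_believed_content_was_genuine
```

Notably absent from the list of elements is any requirement of harm, actual or probable.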
How is this the next level?
In Southeast Asia, the previous approach to fake-news crimes has been to address the communication of false information if it was “likely” to result in harm to specified public interests such as national security, public order, international relations, orderly elections, or trust in state institutions. Most laws thus required proof of probable harm.
While such ex post assessments of likelihood are problematic for reasons of their own, the new laws against deepfakes remove probability requirements entirely. For the lawmakers in South Korea and Singapore, all election-related deepfakes are criminal in the run-up to the vote. The actual content of an individual piece is irrelevant. The blanket bans target a particular form of content generation, presuming harm to the integrity of elections in every case.
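The structural shift can be stated almost mechanically. In the rough sketch below, with all predicates hypothetical, the older harm-based model and the new blanket-ban model differ by exactly one element: the probability-of-harm conjunct is gone.

```python
def harm_based_offence(is_false: bool, likely_to_harm_protected_interest: bool) -> bool:
    # Older model (e.g. POFMA-style): falsity alone is not enough; the
    # prosecution must also show probable harm to a protected interest.
    return is_false and likely_to_harm_protected_interest

def blanket_ban_offence(is_election_deepfake: bool, within_election_period: bool) -> bool:
    # New model: conduct during the period suffices; harm, actual or
    # probable, is presumed rather than proven.
    return is_election_deepfake and within_election_period
```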
Legal developments are thus accelerating in a sector-specific way. The integrity of elections has become the first sector where each and every deepfake is declared unlawful. Any sound, image or video that does not reasonably represent reality is prohibited. This all-or-nothing approach is the next level in Asia’s war on fake news.
While prior requirements of likely harm reflected some balancing between public interests and freedom of expression, blanket bans opt for a no-balancing solution. For the case of Singapore, this arguably reflects the idea that proportionality “has never been part of Singapore law” (Chee Siok Chin v Minister for Home Affairs [2005], at 87) and illustrates the “relative weightlessness of rights” (Thio, 11.154) in this jurisdiction.
An all-out deepfake ban is also much more practical: After all, there has to date not been a single criminal conviction under POFMA (enforcement statistics here). This may well reflect Singapore’s calibrated coercion, but it may also have something to do with the practical difficulties of proving probable harm beyond reasonable doubt, which leads me to what these developments may entail more generally.
An outlook on the future of anti-falsehood legislation
The pace of technological advancement continues to challenge legal categories. In the face of seemingly ubiquitous misinformation, legislators react with increasing vigour. Consequently, South Korean and Singaporean laws against election-related deepfakes restrict online speech even in cases where the content in question causes no harm, where the integrity of elections is neither actually nor probably compromised.
The resolute responses in these Asian settings differ from European approaches, where aspects of proportionality carry comparably heavier weight. The European Union’s most recent legislative act against deepfakes is the Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) that was adopted earlier this year. It imposes on deployers of deepfake-generating AI systems the duty to disclose the fact that the content has been artificially generated or manipulated, with possible adaptations for evidently artistic, creative, satirical, fictional or analogous works or programmes (Article 50 para. 4). Non-compliance can result in administrative fines of up to 15,000,000 EUR or 3 percent of an undertaking’s total worldwide annual turnover (Article 99 para. 4(g)).
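Rendered in the same schematic style as the sketches above, the EU duty attaches to the missing label rather than to the content itself. The field names are hypothetical, and the adaptation for artistic works is heavily simplified.

```python
from dataclasses import dataclass

@dataclass
class DeployedContent:
    is_deepfake: bool
    disclosed_as_artificially_generated: bool
    evidently_artistic_or_satirical: bool

def transparency_duty_breached(c: DeployedContent) -> bool:
    """Loose paraphrase of the Article 50 para. 4 disclosure duty."""
    if not c.is_deepfake:
        return False
    if c.evidently_artistic_or_satirical:
        # Article 50 para. 4 adapts (rather than removes) the duty for such
        # works; this sketch simplifies the adaptation to "no breach".
        return False
    # The content itself stays lawful; only the missing disclosure
    # triggers liability.
    return not c.disclosed_as_artificially_generated
```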
The EU’s AI Act limits itself to transparency obligations and addresses only “deployers”, a definition that excludes all cases where an “AI system is used in the course of a personal non-professional activity” (Article 3 para. 4). The European approach thus reflects a delicate balance between conflicting interests, and EU lawmakers apparently continue to trust in the ability of end-users to distinguish between innocent and harmful deepfakes. But for how long?
Respondents in a recent report by the World Economic Forum considered mis- and disinformation the most severe risks of the next two years, ahead of extreme weather events. The AI-powered ease of production will certainly result in a rapidly increasing volume of such content. But while AI may learn to recognize deepfakes, the technology will have difficulties helping us separate the good from the bad.
Blanket bans might therefore become more frequent, not only in Asia, simply out of a feeling of practical necessity. But if legal systems resort to conduct crimes that require proof of neither actual nor probable consequences, the application of the criminal law, a sharp sword in the anti-fake-news toolbox, will lead to broader and deeper restrictions of free speech and create chilling effects. The catch-all prohibitions will also provide the legal basis for blocking and removal orders on a larger scale than is already practiced.
The move away from distinguishing between problematic and harmless content further contributes to the emancipation of truth as a separate legal interest. In this scenario, truth ceases to function as a mere intermediary that facilitates the protection of legitimate public and private interests. Rather, it becomes an end in itself. Where the war against fake news reaches this level, governments evolve from protectors of the common good to simultaneous arbiters of truth – a vision that may be feared or welcomed, depending on perceived threat levels and public trust in state institutions.
Dear Mr. Schuldt,
I fail to see the issue, that is to say the threat to democracy, in that case. The very nature of a deepfake is that you use AI to create the impression that something happened (or was said) which did not happen. Why would someone want to create a deepfake of something that already exists? And if it does not exist, why should it be legal to create it if it necessarily carries misinformation about another person? Depending on the exact case, a deepfake can therefore be slander, identity theft or much more. In any case, it means the victim suffers a loss of authority over his or her ability to communicate.
Also, the law does not forbid creating a deepfake per se; you just have to ensure transparency about its origin and nature. The only thing I find very critical is the fact that you can be prosecuted for distribution: as the intention behind a deepfake is often that it cannot be told apart from reality, a lot of people could theoretically be prosecuted for something they did unintentionally.
Dear PTM, many thanks for your comment. I fully agree that harmful (e.g. slanderous, fraudulent) deepfakes should be addressed by effective and proportionate measures. The novel character of the laws I introduce in this post, however, lies in the fact that they do not require proof of actual or probable harm. Rather, every deepfake that is related to an election is unlawful (it is a crime) throughout the election period. This differs from previous laws in Asian countries that aimed to target only harmful falsehoods.