04 November 2022

Regulating influence, timidly

“We are proposing a new set of EU Digital Principles to shape our European society […] and your views matter”, declares an advertisement paid for by the official page of the European Commission on Facebook. This message, soliciting participation in a public consultation, was produced by the European Commission and circulated over Facebook advertising platforms to over a million people.

Advertisement of the European Commission captured on Facebook advertising platforms. Image credit: Persuasion Lab

Like many other institutions, the Commission uses advertising on digital platforms to reach people, garner support and generally maintain public relations. Facebook advertising platforms enable an influential body such as the Commission to stay relevant by maintaining an active, public-facing digital presence. Even as the institution spends millions of euros advertising on these platforms every year, the Commission passed a different regulation, the Digital Services Act (DSA), that attends to the question of regulating online advertising, or influence.

The DSA treats advertising and recommender systems as deserving of regulatory attention rather than as immutable facets of the online world. But even as the regulation advances current standards of disclosure around online advertising, it insulates advertising business models and consolidates platform efforts to sidestep the operative question of online advertising: how and why, in concrete terms, advertisements reach the people they reach.

Transparency and other obligations

Advertising business models have been at the heart of many of the persistent issues with technology practices and products. The circulation of advertisements over digital platforms is determined, secretly, by private mega-corporations incentivized by profit and growth. Bringing online advertising and recommender systems within coherent regulatory frameworks is long overdue, and the DSA addresses these systems from a fundamental rights and collective ‘societal harms’ perspective.

Practically, the DSA imposes transparency obligations upon ‘online platforms’ like Facebook to declare when an advertisement is being displayed, on whose behalf it was paid for, and to provide ‘meaningful information’ about why the advertisement reaches a particular person (Article 26). Beyond this, the regulation requires ‘very large online platforms’ (VLOPs) to make advertising transparency data available through Application Programming Interfaces (Article 39), a form that allows systematic analysis of data at scale.

Two large deceptions perpetrated by platforms’ self-regulatory initiatives seem to have been addressed in the regulation. First, companies such as Meta and its group of advertising platforms presently offer transparency information only about a small class of advertisements deemed to be ‘political’. These advertisements make up less than 1% of the company’s total advertising revenue, by its own admission. The distinction of ‘political ads’ rests on a forced binary between commerce and advocacy, where only the latter is deemed political. Second, transparency information is often made available in forms such as graphical interfaces that give the impression of informing but preclude assessment of the information at scale.

Beyond transparency requirements, the DSA imposes due diligence obligations upon VLOPs to identify, analyse, assess and mitigate certain categories of systemic risks: (i) illegal content, (ii) actual or foreseeable negative effects on the exercise of fundamental rights as protected by the Charter, (iii) actual or foreseeable negative effects on civic discourse, electoral processes and public security, and (iv) actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and of minors, and serious negative consequences for a person’s physical and mental well-being (Articles 34 and 35). Yet beyond acknowledging the breadth of risks posed by VLOPs, these provisions buy into platforms’ promise of continued refinement, effectively insulating them from any definitive consequence for proven or potential harm. Broad exemptions from public disclosure of systemic risks (Article 42(5)) leave the provisions with no teeth.

The Commission will facilitate and encourage the development and application of voluntary codes of conduct for online advertising (Article 46) by online platforms and other entities. Codes of conduct carry the danger of serving, for platforms and regulators alike, as accessories that signal the operation of the rule of law in the online realm without much substance. Besides this, independent audits for compliance (Article 37) and access to and scrutiny of data by vetted researchers (Article 40) are mandated for VLOPs. On the procedural side, the DSA introduces mechanisms to charge VLOPs an annual supervisory fee to fund the resources required to carry out its supervisory functions (Article 43).

It is clear that the DSA is both ambitious about squaring the digital advertising ecosystem with fundamental rights and cognizant of the variety of risks posed by advertising and recommender systems. Nevertheless, its regulatory approaches follow the Big Tech transparency playbook and go little further than where technology businesses themselves have ventured. The interventions in the regulation are constrained in their imagination by the forms and substance of platforms’ self-regulatory efforts.

Predatory inclusion and discriminatory exclusion

Advertisements on online platforms are typically delivered based on a determination of ‘relevance’ for users (among other factors). While ‘relevance’ is an abstraction, viewership of advertisements is determined through predatory inclusion and discriminatory exclusion. At a collective level, these circulatory logics have translated into polarized publics, election-related information asymmetries, etcetera. So while companies like Facebook are considered online platforms because they perform a function of ‘dissemination to the public’ (Article 3(k)), in practice they craft calculated audiences, the opposite of making information available ‘to a potentially unlimited number of third parties’.

In the form and substance of transparency requirements, the DSA entrenches existing orientations of technology development. The two areas in which the DSA requires transparency disclosures are content moderation, and advertising/recommender systems. But the insights sought are able to address any harms only after the fact. Insofar as the DSA seeks to complement transparency measures with other obligations, such as risk assessment and mitigation methods, it is far-fetched to expect companies to be either perceptive or forthcoming about the systemic risks of their own products.

It is worth recounting that online advertisement transparency emerged as a self-regulatory response to a crisis of accountability in the operation of online platforms. The power of platforms to influence elections became a flashpoint for broader concerns about online advertising and algorithmic systems of information curation. The voluntary commitments that emerged in the aftermath were inspired by regulation for a previous media era, where only a particular class of advertisements (‘political ads’) were understood as worthy of scrutiny. Within this class, the substance of transparency information is focused away from platform workings.

Transparency for a different media era

As advertisement space was used for political campaigns in print, TV, radio and other older forms of media, political advertisements became a subject of regulation. The substance of political advertisement transparency in previous media eras corresponded to the dangers specific to those media: the business models of media companies, the characteristics of different media, and more shaped how regulations were created. For example, political parties investing in print ads in newspapers were required to disclose advertising expenditure. Because advertisement revenue sustained many newspapers, the transparency mandate was designed to minimize the risk of newspapers reporting in a partisan manner to maintain relations with political parties. Regulations and guidelines about online advertising have tried to extend this transparency response from the past era to digital advertising without accounting for the divergences in business models and the different kinds of publics they create and address. In the present information society, the forms in which political messages are received, the diverse motives for advertising, the computational methods of delivery, etcetera create vastly different conditions from those of their mass media predecessors.

Transparency of abstractions

In the past, advertisers had to specify the exact nature of the audiences they desired to reach (through interests, behavior and demographic information). Today, advertising systems on social media platforms function with no more than a clear definition of the desired results (e.g., 100 app installs, 5,000 page views, 1 million impressions). Based on the outcomes sought, platforms balance the demand for attention (advertisements) and the supply (users’ impressions) in the most ‘optimal’ or profitable distribution, without requiring advertisers to spell out any predatory forms of targeting. Advertisers are able to derive audiences that are niche and hyper-local, or global and transnational, and everything in between.

Meanwhile, in their transparency initiatives, platforms have managed to abstract away the kinds of determinations used in matching advertisements with their viewership. Information is offered in the form of broad demographic breakdowns of the audience for each ad, sidestepping entirely the determinations the platform made in assigning matches between ads and people.

By leaving ‘meaningful information’ (Article 26) open for interpretation and allowing platforms the power to determine the degree of abstraction by which they make their advertising operations legible to the public (breakdown by age brackets, gender, etc.), the DSA will allow companies to disclose only as much non-threatening information as their business models permit.

Contrast this with the more than 2 million data points Facebook uses to determine why a particular ad is chosen for a particular user’s attention, and the indeterminate number and type of factors used by the machine learning algorithms that curate audiences for ads. While broad demographic information about targeted groups is still meaningful to an extent, the DSA effectively treats people like market segments from a different media era. Given the liberties of abstraction available in the DSA, the state of play will not shift towards pre-empting the hidden infractions of rights that these technologies might embed.
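To make the abstraction gap concrete, consider the following minimal sketch. It is purely illustrative: the field names and values are invented for this post and do not reflect any platform’s actual API or the data format the DSA prescribes. It contrasts the coarse, after-the-fact breakdown a platform might disclose with the far richer signals a delivery system can draw upon.

```python
# Hypothetical illustration only: neither structure reflects a real platform API
# or the precise data format required under the DSA.

# The sort of coarse disclosure a transparency interface might offer for an ad:
# who, in broad strokes, the ad reached.
disclosed_breakdown = {
    "ad_id": "example-001",
    "paid_for_by": "Example Advertiser",
    "audience_summary": {
        "age_bracket": "25-34",
        "gender": "all",
        "location": "DE",
    },
}

# A stylised sketch of the signals an ad-delivery model could consume when
# deciding to show that same ad to one particular person: behavioural and
# inferred features, none of which appear in the disclosure above.
delivery_signals = {
    "inferred_interests": ["fitness", "budget travel", "crypto"],
    "engagement_history": {"clicks_last_30d": 214, "video_minutes_watched": 1831},
    "lookalike_similarity_to_seed_audience": 0.92,
    "predicted_conversion_probability": 0.031,
    # ...and a very large number of further data points
}

# The disclosure answers "who saw this ad, roughly"; it says nothing about
# why the delivery system chose this person over others.
```

The particular fields do not matter; the asymmetry does. Disclosure operates at the level of market segments, while delivery operates at the level of individual predictions, and it is precisely this asymmetry that ‘meaningful information’ under Article 26, as currently framed, leaves untouched.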

Influencing regulation

In a sense, the Cambridge Analytica scandal galvanized technology businesses into making a demonstrable effort at accountability for their services. Transparency initiatives took the form of nominal accountability for the activities on the platform, rather than accountability for the workings of the platform. As these voluntary commitments arrived on the heels of a massive shift in public opinion about the platforms, they were equally crafted as public relations campaigns. What has followed in the form of transparency initiatives has become the ceiling of platform accountability standards. The form and substance of transparency has skirted around any possible threat to the business models of these companies. In following the breadcrumbs laid by self-regulatory initiatives, regulation runs the risk of becoming a legal spectacle, where compliance is mere performance.


SUGGESTED CITATION  Ranganathan, Nayanatara: Regulating influence, timidly, VerfBlog, 2022/11/04, https://verfassungsblog.de/dsa-regulating-influence/, DOI: 10.17176/20221104-215642-0.




