This article belongs to the debate » Final Call for Digital Workers Rights in the EU
19 July 2023

How the Platform Work Directive Protects Workers’ Data

The Commission’s proposal for the new platform labour directive came with a core promise to platform workers in the EU: to recognize the impact that algorithmic management has on their working conditions. In doing so, the directive seeks to clarify and strengthen the data rights of workers, regardless of whether they are classified as employees or not.

So far, debates about ‘data rights’ at work have focused on how the privacy and data of individual workers could be better protected. This logic is rooted in the General Data Protection Regulation, but it was never really helpful for workers. Most workers do not have a problem with providing data at work; their problem is with the decisions about working conditions that their algorithmic bosses make on the basis of that data.

This essay argues that the main achievement of the proposed Directive is to clarify and reframe existing norms about automated decision-making in a way that shifts attention from data to working conditions. While the specific proposed provisions do not go far beyond norms already established in the General Data Protection Regulation, they are reframed in a way that clarifies that digital labour platforms have the responsibility to ensure fairness, transparency and accountability when making decisions that rely on algorithms.

The ambition of the Platform Work Directive, however, goes beyond better protection of platform workers. In addressing challenges faced by platform workers, it seeks to pave the way for stronger regulation of technologies in other types of workplaces. This aim resonates with a recent shift in public discourse which, captivated by the risks and benefits of tools such as ChatGPT, has moved away from the apocalyptic image of robots stealing jobs towards a more nuanced reflection on how AI-powered tools are likely to change every job that exists.

Indeed, the Council restates this ambition by identifying the Directive as “the first proposal for legislation at Union level regulating the use of artificial intelligence at the workplace” and one that “might serve as precursor for a legislation with a much broader application.” Yet, in the upcoming trilogues, the topic of algorithmic management is likely to be eclipsed by the disagreements about the ‘presumption of employment’ (see Kocher’s blogpost). This is perhaps because there is a consensus that existing ‘data rights’ can be restated in a way that makes them easier for workers to access.

From ‘data subjects’ to ‘platform workers’

The Commission’s proposal marks a real shift in the public discourse about the role of technology at the workplace. In the last decade, concerns about data protection, privacy and surveillance became part of mainstream public discourse and helped raise awareness of the risks we face as consumers of digital products. This coincided with the landmark General Data Protection Regulation, which granted data protection rights to all users of such products. The GDPR, however, does not differentiate between consumers, students, patients – or workers, for that matter. They are all considered simply ‘data subjects’ who have an individual right to know what happens with the data they provide.

No one could have anticipated that it would be platform workers who would bring into focus the fact that the GDPR is not only about protecting data, but also about clarifying when and how data may be used to make decisions. Cases brought by platform workers, challenging decisions made by platforms purportedly based solely on automated processing of data (Art. 22 GDPR), revealed uncertainties about the application of the norm in the employment context.

Indeed, research shows that workers often see algorithmic decisions as unfair, opaque and random, and cannot predict how their own actions will affect their working conditions. Platform workers, whether their job is to give someone a ride or to translate an article, do not even know whether their standing on the platform depends only on the algorithmic ‘black box’ or whether platform decisions were also reviewed by people. They realize very quickly, though, what losing good standing means: earning less money or having no job at all.

The proposed Platform Work Directive puts the human impact of these automated decisions at the very heart of the provisions included in Chapter III (entitled ‘Algorithmic management’ in the Commission’s proposal). After all, the main concern of workers has never been the processing of the data they generate during their working time, but fair and predictable working conditions. This is perhaps why workers were never really attracted to the idea that they would be able to access their rights by identifying themselves as ‘data subjects’.

The main achievement of the Directive is precisely this reframing, which acknowledges the direct link between automated decisions and working conditions. The data rights included in the proposal are not only about what data is collected and how it is collected in the first place (although red lines on the collection of certain types of data are redrawn). Data rights are also about access to information on who makes decisions and how they are made. As such, they are in principle about rebalancing power between workers and platforms, which have consistently added ‘information asymmetry’ to the repertoire of means of indirect control at employers’ disposal.

If the Directive delivers on its promises to platform workers to improve access to information about algorithmic management systems, it might inspire other workers to use the GDPR to the same end – or to advocate for an even more far-reaching understanding of ‘data rights’ in the employment context.

From ‘black boxes’ to workers’ rights

It makes sense that the thrust of the provisions about “algorithmic management” included in Chapter III is to oblige platforms to give workers more information about what and who has influence on their working conditions. While research about algorithmic management systems has exploded in recent years, it is important to keep in mind how little we actually know about how these systems work. Since platforms shield access to technical evidence about the role algorithms play in work organization, most investigations focus on how these opaque managerial systems are experienced and perceived by those who work with them. In fact, uncertainty about which decisions (if any) were taken automatically is the unifying social experience of workers across a wide variety of platforms.

Although reliance on opaque automated decision-making systems is what defines the social experience of platform workers, including it in the legal definition of digital labour platforms is imprudent. The Council suggested that a platform will only be considered a digital labour platform if it meets an additional criterion: “(d) it involves the use of automated monitoring or decision-making systems” (Art 2(1)(1)). It is unclear what the intentions behind this amendment were, but it is unnecessary and potentially counterproductive. So far, platforms have revealed evidence about algorithmic systems only when directed to do so by courts. The logic proposed by the Council implies that the existence of an algorithmic management system would have to be established a priori for a platform to fall within the scope of the Directive. Yet, it is precisely platform workers’ lack of access to such evidence that the Directive is trying to remedy.

The proposal clarifies when and how workers should obtain information about algorithmic management – both to individually better predict the outcomes of their actions, and to collectively address the ‘information asymmetry’ that is a key source of platforms’ control and understand its impact on general working conditions. First, before starting to work, each worker should receive a written document that explains which categories of actions are turned into data, and which parameters are used to make automated decisions. Second, if a ‘significant decision’ was made automatically, for example to withhold wages or terminate the contract, the worker would have the right to a review and to contest it. Third, if a platform is planning to make changes to algorithmic management, workers and their representatives could request to be informed and consulted about them.

In addition to providing workers with rights to know more about what and who affects their working conditions, the proposal sets important limitations, tailored to the workplace context, on the types of data that can be processed automatically. Platforms are prohibited from using, as inputs for algorithmic management, data on a worker’s emotional or psychological state, private conversations (including exchanges with workers’ representatives), and data generated while the person was not performing work. The Council’s proposal stops short of restating red lines about health data, as originally proposed by the Commission, as well as about data on racial or ethnic origin or sexual orientation, as proposed by the European Parliament – which is concerning, given that processing such data is already restricted by the GDPR. Moreover, the Council disregarded the calls from workers and the European Parliament to ban automated processing of biometric data.

Finally, the proposal makes an even more explicit link between algorithmic management and its impact on working conditions by singling out the risks it can pose to workers’ health and safety. The case of platform workers reveals that working for an algorithmic boss can result in serious risks to a person’s mental health, but can also contribute to the risk of serious injury or death when people feel pushed to work beyond their limits. By making it clear that it is the responsibility of platforms to evaluate and mitigate these risks, the Directive shifts the larger debates away from the increasingly complex ‘black boxes’ to their evident impact on workers.

Conclusion

Hopefully, the promises entailed in the Directive can also be made evident during the process of transposition into the diverse national legislative and institutional contexts of each member state. If the harmonizing objective fails, platform workers working for the same platform might have different chances of obtaining information and accessing their rights. On the other hand, transposition opens the ground to design and test diverse measures for platform workers’ data rights that can also be applied to other types of workplaces that rely on algorithmic management.

However, there is a real concern that the Directive might fail to achieve its aims if platforms strategically misinterpret the new provisions, for example by citing insurmountable technical challenges or the purported needs of workers or customers. In fact, it is surprising that there was so little resistance from platforms on this front. Perhaps the new rules are seen as unlikely to undermine their core business, or perhaps it is expected that, in the absence of strong and speedy enforcement, the provisions of the Directive will result in nothing more than a meaningless box-ticking exercise.

One thing is certain: policymakers have been on a steep learning curve about the legal, technical and social aspects of regulating technologies in the workplace. Digital labour platforms inadvertently exposed the risks of automated decision-making systems. Platform workers helped us understand that these risks could one day apply to all workers. By capturing and solidifying these lessons from platform work, the Directive paves the way for stronger ‘data rights’ for all.

Acknowledgements

Special thanks to Eva Kocher and Michael Silberman for their continuous support and valuable feedback on this essay.

 


SUGGESTED CITATION  Bronowicka, Joanna: How the Platform Work Directive Protects Workers’ Data, VerfBlog, 2023/7/19, https://verfassungsblog.de/how-the-platform-work-directive-protects-workers-data/, DOI: 10.17176/20230719-132129-0.
