17 May 2023

Personalized Law and Social Media

Laws apply in context. A dark rainy night requires greater care by drivers. A sale of a dangerous product requires more pronounced warnings. And the sanction for a criminal act depends, among other things, on the harm it caused. The more circumstances the law counts as relevant in issuing specific commands, the more granular and contextualized it is.

But while counting many situational circumstances as relevant to the legal outcome, laws rarely count the identity of a person and their subjective characteristics. Interpersonal variance is not only ignored; it is regarded as an illegitimate ground for differential treatment by law. For millennia, laws have announced their aspiration to uniformity. Justitia, the Goddess of Justice whose blindfolded image adorns the facades of courthouses, assures the public: judges are impartial. They will not treat you differently based on who you are.

In 2021, Ariel Porat and I published a book that challenged law’s interpersonal uniformity axiom. In Personalized Law: Different Rules for Different People, we argued that, rather than being blindfolded, the law ought to know everything that’s relevant about people and use this information to tailor personalized legal commands. Many goals of many laws could be better achieved if one-size-fits-all rules were replaced by a scheme that recognizes relevant differences between people. Personalization — a paradigm that has been widely and successfully embraced in other areas of human activity, and primarily on social media — may be ready for the law. If medicine, education, or even parenting can treat, teach, or nurture better when personalized, the law too may be a candidate for a radical transformation, and reap the benefits of personalization.

Our book tried to show that it would be more just and efficient for the law to impose duties that vary person by person. Tort law, for example, would impose stricter standards of due care on people who create greater risks. In a personalized negligence regime, duties would be tailored not to the “reasonable person” but instead to a novel normative metric — the “reasonable you.” Dangerous drivers would have to comply with more exacting traffic laws, drive slower, and perhaps pay higher fines. Similarly for tailored rights: why not bestow greater consumer protections on consumers who need them more? The entire arsenal of protections, from the most exacting (like mandatory warranties and rights to withdraw) to the most permissive (like default rules and information rights), could be personalized. Consumers who are less sophisticated, experienced, educated, affluent, or cognitively sharp need stronger protections, and under personalized law they would receive them.

In this contribution to the symposium, I revisit the question of where the data for personalizing legal commands might come from. How would lawmakers and judges measure the relevant differences between people? Specifically, I suggest one rich source of data — social media — but then immediately qualify it. The gist of my argument is this: social media elicits from people numerous quick and thoughtless decisions. A personal profile emerging from this environment is suitable only for legal areas that seek to personalize rules for similarly spur-of-the-moment irrational actions (like driving); it is unsuitable for regulating environments characterized by people’s “slow,” more reflective decisions (like borrowing).

How Could the Law Be Personalized?

The list of regulatory techniques that could be personalized is almost unlimited. Criminal sanctions could be personalized (as they sometimes already are) in a manner that would potentially diminish the existing distortions of the criminal justice system.

Food labels or drug warnings could be designed to show each person a different subset of information, more relevant to their diet and health. A statutory age of capacity — to drive, purchase alcohol, or pilot a plane — could be based on each person’s individual safety score. Methods applied by auto insurers to classify and predict each policyholder’s risk could be adopted by governmental licensing bureaus, which would vary the age of capacity and the licensing restrictions based on each person’s risk profile.

Personalized law is a novel jurisprudential template with many “moving parts.” One design question is how “precise” the tailoring ought to be. This question is closely related to, and its answer derived from, how granular the information fed to the screening model is. Personalized law could be, and sometimes already is, crude. For example, personalized speed limits for drivers could be “high,” “medium,” and “low,” and personalized rights to withdraw could create “long” and “short” duration categories. Such crudeness would be a sensible design choice when the information about people’s differences is less refined. And, conversely, personalized law could be maximally granular. Fueled by big data and implemented by algorithms, the scheme may account for numerous differences between people and issue commands along a continuum, to each citizen their own rule. It is this radical limit case that our book imagined. Each person would be “fitted” with a personal legal regime. It would be based on vast personal data about the person’s preferences, skills, risks, needs, and experience. The data would be processed with the help of statistical and machine learning models to generate commands that advance the objective underlying the law.
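The contrast between crude and maximally granular tailoring can be made concrete with a purely illustrative sketch. Nothing here comes from the book or from any existing legal system: the risk score, the tier thresholds, and the linear formula are all hypothetical placeholders standing in for whatever screening model a lawmaker might actually use.

```python
# Illustrative sketch only: two ways a personalized speed limit could be
# derived from a (hypothetical) 0-to-1 driver risk score. All names,
# thresholds, and formulas are invented for illustration.

def crude_speed_limit(risk_score: float) -> str:
    """Crude personalization: map the score to one of three coarse tiers."""
    if risk_score < 0.33:
        return "high"    # low-risk drivers get the most permissive limit
    if risk_score < 0.66:
        return "medium"
    return "low"         # high-risk drivers face the strictest limit

def granular_speed_limit(risk_score: float, base_kmh: int = 130) -> int:
    """Maximally granular personalization: a continuous, per-person limit.
    Higher risk scores linearly reduce the permitted speed toward a floor."""
    floor_kmh = 80  # hypothetical minimum limit
    return round(base_kmh - (base_kmh - floor_kmh) * risk_score)

print(crude_speed_limit(0.2))     # prints "high"
print(granular_speed_limit(0.5))  # prints 105
```

The crude version needs only rough information (which tier a driver falls into); the granular version presupposes a refined score for each individual, which is exactly why, as the text notes, the choice between the two designs follows from the granularity of the available data.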

This prototype of personalized law has weighty challenges and problems. It might seem to conflict with fundamental values of distributive justice and equal protection. Is it fair to treat people so differently? Shouldn’t “equal protection” be equal? It is also quite possible that personalized law would have unintended effects, such as stigmatizing people who are singled out as “risky” or “needy,” or chilling people’s incentives for personal improvement, for example when the acquisition of skill and knowledge would reduce the personalized legal protections they are granted.

Some of the most pressing concerns hanging over personalized law have to do with information. The optimal administration of any law requires information, and the first order of business for any legal reform is to soberly recognize information costs and constraints. Personalized law needs information about the relevant differences between people, but where would this information come from?

The Data Needed for Personalized Law

All around us we are witnessing a massive growth of personalized environments that rely on personal data collected by digital services and platforms. People read their news online, shop in e-commerce stores, stream their entertainment, meet through social media sites, and drive connected cars, and the data footprint they leave behind when engaged in these digitally supplied activities makes it possible for commercial parties to personalize the information, the products, the recreation, the social environments, and the insurance premiums they offer. Imagine if the government were able to acquire some of these databases and use them to personalize legally mandated disclosures and product warranties, licensing requirements, and duties of care. This might send chills up your spine. You might be worried about privacy and abuse of power. In the hands of the government such vast data are dangerous. The Chinese social credit system, which collects personal data to generate civic reputation scores and is used to silence dissent and to deny people access to primary services, is a startling warning of governmental abuse of data-driven personalized law.

But the issue I want to comment on here is the quality of the data and their suitability for designing legal commands. Let’s assume, for a moment, that legal rules could be personalized by a benevolent government with data from social media. Imagine, that is, that information about what people like, choose, buy, read, say, and visit can be used in tailoring legal commands. Platforms and advertisers find such information invaluable in advancing their commercial goals; would it also be valuable for the law? If so, for what purpose?

It might seem, at least at first glance, that social media data is ideally relevant for the law. It provides a rich perspective on each person, obviating the use of demographic proxies. For example, social media postings could identify some people as vulnerable to “status spending” or other disastrous but avoidable expenditures, and the law could require sellers and lenders to target red-flag warnings only to these people. Assembling snippets of evidence about people’s behaviors, preferences, regrets, and risks could be relevant to the personalization of licenses, rights, warnings, and duties. Social media could provide a rich body of such snippets.

This is an argument Porat and I made in the book, but I now tend to think that it may have been a bit hasty. Social media may indeed expose some truths about people. However, it may also — and quite often — display and even heighten the thoughtless, impulsive, and biased sides of their actions. This exposed gap between “preferences” and “choices” presents both an opportunity and a critical limit for personalized law.

Regulating “Fast” versus “Slow” Decisions

We know that many choices people make on social media are impulsive. They have thirty seconds in an elevator to check their “feeds” and react, and so it is not surprising that the output of this meditation does not produce filtered, reflective decisions. On social media, people instinctively “like” some content, click on news feeds based on momentary temptation, are drawn to the sensational, and say things that they later regret. The profile of a person that emerges from the sum of these infinitesimal manifestations of uninhibitedness may be very different from who the person really is. Just think of some of your thoughtful, introspective friends who parade injudicious social media avatars. Tailoring an environment for an individual based on the sorrier half of their personality — especially a legal environment — could get things very wrong.

What we learn about a person from their social media profile could be relevant for some laws but not others. It is relevant for laws that address the impulsive and thoughtless side of their conduct. For example, branches of consumer protection law that protect buyers from the consequences of rash and reckless purchases, such as cooling off laws, are particularly useful for people who are prone to such behavior, and this propensity could be reflected in, and inferred from, social media. In contrast, that same information is less relevant for laws that govern thoughtful and slow choices, such as mortgages, insurance, or the writing of wills. It would be silly to predict how a person would want to bequeath their estate (for the purpose of a personalized intestate allocation rule) by observing who they “friend” on Facebook.

Put differently, in areas of deliberate choice, where so-called “system two” thinking is active, the law wants to help people make good choices that serve their deep-rooted preferences, but here information from social media would be quite useless. The tools the law uses on these occasions aid individuals in overcoming poor information or lack of expertise. Here, personalized law needs information about people’s real preferences, not their impulsive ones. On social media, what people choose may not be what they truly prefer, since the surrounding choice architecture drives and manipulates them into gut responses based on instantaneous emotional allure. An algorithm trained on such mindless behaviors infers people’s preferences with great error.

The information such an algorithm processes, and the predictions it makes, could be valuable for other areas of law—those that address behaviors governed by people’s thoughtless and automated “system one” processes. Driving is the ultimate system-one operation, and laws of the road are designed to address the dangers associated with the mindlessness with which drivers create risks. Different people have different tendencies for imprudent driving, and these attributes are likely correlated with degrees of imprudence on social media. Granted, there are better information sources for predicting drivers’ idiosyncratic tendencies for risky maneuvers; tracking data collected by auto insurers come to mind. The general point, however, holds. Social media data are informative in analyzing people’s fast decisions; many risks that the law regulates emerge from fast decisions, and they could be addressed in a personalized manner with the aid of these data.


In sum, people who display more offensive and thoughtless behavior on social media are, all else equal, destined to make other poor heat-of-the-moment decisions, such as driving intoxicated, speeding dangerously, buying expensive things they cannot afford, or falling prey to online scams. If such correlations are strong, then social media data could predict the personal propensities that are relevant to the regulation of such activities. Using these data in the design of personalized standards of due care, personalized ages of capacity, or personalized cooling-off periods could save lives, money, and hardship.

SUGGESTED CITATION  Ben-Shahar, Omri: Personalized Law and Social Media, VerfBlog, 2023/5/17, https://verfassungsblog.de/personalized-law/, DOI: 10.17176/20230518-020305-0.

