My colleagues Jim Harper and Julian Sanchez have been having a friendly debate over privacy regulations, which Julian summarizes over at TLF. Jim eschews his customary blogging parsimony in favor of a lengthy treatise on online privacy. The heart of their disagreement is captured in this paragraph from Julian’s post:
Evolutionary mechanisms are great, but they’re also slow, incremental, and in the case of the common law typically parasitic on the parallel evolution of broader social norms and expectations. That makes it an uneasy fit with novel and rapidly changing technological platforms for interaction. The tradeoff is that, while it’s slow, the discovery process tends to settle on efficient rules. But sometimes having a clear rule is actually more important—maybe significantly more important—than getting the rule just right. These features seem to me to weigh in favor of allowing Congress, not to say what standards of privacy must look like, but to step in and lay down public default rules that provide a stable basis for informed consumers and sellers to reach their own mutually beneficial agreements.
I think Jim’s response gets this exactly right:
As so many have before him, Julian asks for an “ordinary-language explanation” of what is going on. But we don’t yet have a reliable and well-understood language for describing all the things that happen with data. Much less do we know what features of data use are salient to consumers. Many blame corporate obfuscation for long, confusing privacy policies, but just try describing what happens to information about you when you walk down the street and the difficulty of writing privacy policies becomes clear.
To put this slightly differently, I think Julian is wrong to think that common law is somehow special in its dependence on “social norms and expectations.” In reality, all laws are dependent on underlying social norms. A law that’s out of touch with them will be ignored and evaded regardless of how it emerged.
Julian suggests that the common law process is a poor fit for “novel and rapidly changing technological platforms,” but I think just the opposite is true. He wants “ordinary-language explanations” of what websites do with personal data, but I think he’s underestimating how difficult that is. Users’ expectations, and their “ordinary-language” vocabulary for talking about those expectations, evolve in parallel with the technology itself. Predicting the evolution of language or culture relating to a technology is no easier than predicting the evolution of the technology itself. The typical policymaker in 1994 would not have been able to predict the vocabulary we now use to talk about social networking or webmail any more than he would have been able to predict the emergence of Facebook or GMail themselves.
If you doubt that developing a vocabulary for privacy is difficult, I encourage you to read about the history of the P3P project. I’m old enough to remember when P3P was an up-and-coming standard with broad industry and academic support. The idea was to agree on a standard, machine-readable format for describing websites’ privacy policies. Once this was accomplished, browser manufacturers would be able to add mechanisms to automatically notify users when they visited a site whose privacy policy didn’t live up to the user’s standards.
P3P is now effectively dead. Its failure is a complex, multi-faceted story, but one of the most important factors was that encoding meaningful privacy disclosures in a precise, machine-readable format turned out to be a lot harder than people expected. There is an almost unlimited number of permutations of the ways a website might use a customer’s data, and describing them in a finite number of standard categories necessarily meant lumping together a lot of different behaviors under the same category. In essence, the P3P team was trying to drain all the nuance out of what was still a complex and rapidly changing set of social norms. It didn’t end well.
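To make the difficulty concrete, here is a minimal, hypothetical sketch in Python of the kind of matching P3P envisioned: a site’s practices are reduced to a small, fixed vocabulary of purposes, and the browser compares that disclosure against the user’s stated preferences. The category names and data elements below are illustrative stand-ins, not the actual P3P schema; the point is how much real-world nuance a catch-all bucket like “other” has to absorb.

```python
# Illustrative sketch of the P3P idea: practices are forced into a small,
# fixed vocabulary, and the browser flags anything the user didn't agree to.
# These category names are hypothetical, not the real P3P element set.

PURPOSES = {"current", "admin", "tailoring", "telemarketing", "other"}

# A site's machine-readable disclosure: each data element mapped to the
# purposes it may be used for. Note how "other" swallows a lot of nuance.
site_policy = {
    "email": {"current", "telemarketing"},
    "clickstream": {"tailoring", "other"},
}

# The user's preferences, expressed in the same coarse vocabulary.
user_limits = {
    "email": {"current"},
    "clickstream": {"current", "tailoring"},
}

def violations(policy, limits):
    """Return the (data, purpose) pairs the user did not agree to."""
    return [
        (data, purpose)
        for data, purposes in policy.items()
        for purpose in purposes
        if purpose not in limits.get(data, set())
    ]

for data, purpose in violations(site_policy, user_limits):
    print(f"Warn: site may use {data!r} for {purpose!r}")
```

The mechanical comparison is the easy part; the hard part, and the one that sank the project, is whether a handful of fixed categories can say anything meaningful about what a site actually does with your data.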
A federal disclosure mandate isn’t as bad an idea as comprehensive privacy regulations. But it would still run afoul of the same basic problem: legislators will need to guess what future technologies will look like, and they’re likely to guess wrong. Once the federal government has declared which facts must be disclosed and what the acceptable categories are, website operators will be required to disclose those facts whether or not consumers find them useful. And even if they also disclose other aspects of their privacy policies that users do find useful, those facts will be buried in boilerplate. Worst of all, statutory codification of privacy rules will warp the evolution of common-law rules, so that if public norms do finally gel, it will take longer for the law to catch up with them.
Your post tantalizingly raises the issue of privacy norms before regulation. That is precisely what should be discussed; however, most of what I read tends to be repetitive anti-regulatory ranting, so the issue of someone’s “right” to invade your privacy never even gets discussed. This unfortunate silence suggests an implicit “right” of anyone to invade your privacy.
It is my understanding that Libertarian thought, in part, is based on responsibility. There is a saying that your freedom to swing your fist ends where my nose begins. While unsolicited intrusions into your privacy are not as onerous as a fist, they are nevertheless unwarranted intrusions into your personal space. My point is that the social norm of privacy is that privacy belongs to the intended recipient of a message, NOT the instigator. Therefore, I find the discussion of privacy to be deceptive, since it focuses on hyping “regulation” as an onerous restraint on freedom rather than on the responsibility of the instigator to exhibit a level of self-control. Before regulation there is responsibility.
Please see my post: Misplaced Regulatory Blame II