September 06, 2018 by Christopher
The General Data Protection Regulation (GDPR) is designed to support the individual’s interest in informational privacy, which the EU recognizes as a fundamental right. Under that law, the collection, use, and transfer of personal information are prohibited unless done with the consent of the individual. The regulation leaves only a de minimis legitimating role for social or business purposes; generally, if the individual revokes consent, processing of the information must stop, and often the information itself must be deleted.
The U.S. works from a different paradigm. We certainly value privacy as necessary to both personal dignity and a free and functioning society. But we focus our privacy laws on the prevention and remediation of harm, not on consent. United States privacy law grew out of the common-law privacy torts: defamation, intrusion upon seclusion, disclosure of private facts, false light, and the right of publicity. Thus, for example, the tort of disclosure of private facts requires that the facts be offensive to a reasonable person of ordinary sensibilities, not have been previously revealed to the public, and not be newsworthy. Truth is not a defense to this kind of action, but context is.
Outside of the common law, federal and state statutes have focused on the prevention of harm in specific sensitive contexts. For example, the Video Privacy Protection Act (VPPA) prohibits the dissemination of a consumer’s video rental list. The Health Insurance Portability and Accountability Act (HIPAA) protects against the unauthorized disclosure of covered health information. The Fair Credit Reporting Act safeguards the transmission of consumer information for certain purposes like employment, insurance, and credit reporting. And the Electronic Communications Privacy Act protects against the interception of communications without the consent of at least one party. These statutes (and their state analogs) are designed to protect individuals from harm (or the risk of harm) caused by the release of accurate information that would hurt recognized privacy interests.
States have now begun to pass their own privacy laws that cover “personal information” writ large, without any connection to harm or sensitive contexts. California, for example, just passed a landmark privacy law that (among other things) requires publishers, upon the request of the consumer, to delete that consumer’s personal information and to cease any further transmission or dissemination of it.
The definition of covered information is a broad one. The California law defines “personal information” as “information capable of being associated, or [that] could reasonably be linked, directly or indirectly, with a particular consumer or household.” The definition excludes “publicly available information,” defined as information obtained from government records, but then states that information is not “publicly available” if it is used for a purpose that is “not compatible with the purpose for which the information is maintained and made available in government records or for which it is publicly maintained.” The practical problems with this approach are numerous. You would not want, for example, a father refusing to pay child support to be able to “opt out” of a database used by the courts or government agencies to track him down. And the definition sweeps in information that is widely available and known, such as facts gleaned from press clippings.
Even more fundamentally, however, the legislation as a whole depends on a novel and dangerous premise: that the government can reel in lawfully acquired information that it has given to the public. The government has never had this power because our balance of interests is different from Europe’s.
The collection, use, and dissemination of information is speech. And the First Amendment requires that, in order to regulate speech, the government tailor the legislation to the important interest it claims to be advancing. This is especially clear in the case of public records, which are by definition available to the public. It seems at best dissonant for the government to claim that it’s protecting privacy on the one hand while making the information available to all comers on the other. It’s exactly this kind of dissonance that has caused the downfall of many a speech-restrictive statute.
The tailoring of most existing U.S. privacy statutes to a substantial or compelling state interest is one reason they would likely withstand constitutional challenge. That foundational requirement has to be considered when looking to expand existing privacy law or create a new federal privacy statute. Governments can, and should, pass legislation to deal with particular privacy problems, as the federal government has with eavesdropping, video rental records, and health and financial information. But in each case, Congress identified a specific kind of harm (or at the very least a risk of harm) in a specific sector and drafted legislation designed to address that harm.
And that’s a crucial difference between the U.S. and Europe. Europe allows broad privacy rules without any speech-protective tie to a compelling state interest. The U.S. Constitution, in contrast, lets information flow freely unless the government restricts it in a narrowly tailored way to serve a specific privacy interest or other important government interest.
In another post, I’ll get a bit more into the legal weeds of this, and address what one law professor has called “information fiduciaries.”

Christopher Mohr is General Counsel and VP, Intellectual Property Policy & Enforcement at SIIA.