How New Data Protection Laws Affect Your Non-Transactional Website


Good news! Regulatory agencies are taking privacy policies and data protection more seriously than ever.

Bad news! Regulatory agencies are taking privacy policies and data protection more seriously than ever.

The increased regulatory activity is certainly good news for all of us as consumers. As marketers, that silver lining can be overshadowed by the cloud of fear, uncertainty, and doubt — to say nothing of the potentially enormous fines — attached to these new regulations. Let’s take a look at what your responsibilities are (or are likely to become) as privacy regulations become more widely adopted.

Before we begin: I’m not a lawyer. You should absolutely consult one, as there are so many ways the various regulations may or may not apply to your firm. Many of the regulations are regional in nature — GDPR applies to the EU, CCPA to California residents, the SHIELD Act to New York State — but the “placelessness” of the Internet means those regulations may still apply to you, if you do business with residents of those jurisdictions (even though you’re located elsewhere).

Beyond Credit Cards and Social Security Numbers

With the latest round of rules, regulators are taking a broader view of what constitutes personally identifiable information, or “PII.” This is why the regulations now apply even to non-transactional websites.

We are clearly beyond the era when the only data that needed to be safeguarded was banking information and social security numbers. Now, even a site visitor’s IP address may be considered PII. In short, you are now responsible for data and privacy protection on your website, regardless of that website’s purpose.
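If even an IP address can be PII, one practical mitigation is to truncate addresses before they land in logs or analytics. A minimal sketch in Python (the function name and prefix lengths are illustrative choices, not requirements of any regulation):

```python
import ipaddress

def anonymize_ip(addr: str) -> str:
    """Truncate an IP address so it no longer pinpoints one visitor.

    IPv4 keeps the first three octets (a /24 network); IPv6 keeps the
    first 48 bits. Both are common, conservative truncation choices.
    """
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)
```

Applied at ingestion time, this keeps logs useful for rough traffic analysis while discarding the portion that identifies an individual connection.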

Though a burden for site owners, it’s not hard to understand why this change is a good thing. With so much data living online now, the danger isn’t necessarily in exposing any particular data point, but in being able to piece so many of them together.

Fortunately, the underlying principles are nearly as simple as the regulations themselves are confusing.

SSL Certificates

Perhaps the most basic element of data protection is an SSL certificate. Though it isn’t directly related to the new regulatory environment, it’s a foundational component of solid data handling. You probably already have an SSL certificate in place; if not, that should be your first order of business. They’re inexpensive — there are even free versions available — and they have the added benefit of improving search engine performance.

Get Consent

Second on your list of good data-handling practices is getting visitor consent before gathering information. Yes, opt-in policies are a pain. Yes, double opt-in policies are even more of a pain — and can drive down engagement rates. Both are necessary to adhere to some of the new regulations.

This includes not only information you gather actively — like email addresses for gated content — but also more passive information, like the use of cookies on your website.
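As a sketch of the double opt-in flow described above (the storage and function names here are hypothetical; a real site would persist tokens in a database and send the confirmation link by email):

```python
import secrets

# In-memory stores for illustration only; a real site would use a database.
PENDING: dict[str, str] = {}   # confirmation token -> email address
SUBSCRIBERS: set[str] = set()

def request_signup(email: str) -> str:
    """Opt-in step: record the request and mint a confirmation token.

    In practice the token would be embedded in a link emailed to the
    address, so only its owner can complete the signup.
    """
    token = secrets.token_urlsafe(16)
    PENDING[token] = email
    return token

def confirm_signup(token: str) -> bool:
    """Double opt-in step: add the address only after the link is used."""
    email = PENDING.pop(token, None)
    if email is None:
        return False
    SUBSCRIBERS.add(email)
    return True
```

Nothing reaches the mailing list until the second step completes, which is exactly the property the stricter regulations are after.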

Give Options

Perhaps the biggest shift we’re seeing is toward giving site visitors more options over how their PII is being used: for example, the ability to turn cookies off when visiting a site.

You should also provide a way for consumers to see what information you have gathered and associated with their name, account, or email address.

Including the Option to Be Forgotten

Even after giving consent, consumers should have the right to change their minds. As marketers, that means giving them the ability to delete the information we’ve gathered.
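The access and deletion rights described in the last two sections reduce to a pair of small handlers. A hedged sketch over an assumed record store keyed by email address (the store and function names are illustrative, not any particular vendor’s API):

```python
# Illustrative record store keyed by email address; in production this
# would span every system that holds the consumer's data.
RECORDS: dict[str, dict] = {
    "visitor@example.com": {"name": "Pat", "downloads": ["whitepaper.pdf"]},
}

def export_data(email: str) -> dict:
    """Access request: show a consumer everything tied to their address."""
    return dict(RECORDS.get(email, {}))

def forget(email: str) -> bool:
    """Erasure request: delete the record; True if anything was removed."""
    return RECORDS.pop(email, None) is not None
```

The hard part in practice isn’t the handlers but the inventory: knowing every system where a given consumer’s data lives so both requests can be honored completely.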

Planning Ahead: Responsibilities for Data Breaches

Accidents happen, new vulnerabilities emerge, and you can’t control every aspect of your data handling as completely as you’d like. Being prepared for the possibility of a data breach is as important as doing everything you can to prevent one in the first place.

What happens when user information is exposed will depend on the data involved, your location, and what your privacy and data retention policies have promised, as well as which regulations you are subject to.

Be prepared with a plan of action for addressing all foreseeable data breaches. In most cases, you’ll need to alert those who have been or may have been affected. There may also be timeframes in which you must send alerts and possibly remediation in the form of credit or other monitoring.

A Small Investment Pays Off

As a final note, I’ll circle back to the “I’m not a lawyer” meme. A lawyer with expertise in this area is going to be an important part of your team. So, too, will a technology lead who is open to changing how he or she has thought about data privacy in the past. For those who haven’t dealt with transactional requirements in the past, this can be brand new territory which may require new tools and even new vendors.

All of this comes at a price, of course, but given the stakes — not just the fines, but the reputational losses, hits to employee morale, and lost productivity — it’s a small investment for doing right by your prospects and customers.

The Sustainability of Consumer Trust


I refuse to jump on the privacy “scandal” bandwagon.

It is rough listening in this week to certain lawmakers fail to recognize the absolute benefits accrued by consumers through the responsible collection and use of data for commerce, advertising and innovation. Yes, data handling requires stewardship — but that doesn’t mean “data” in and of itself — constitutes anything close to being a harm that needs to be regulated, as if there were no rules already in place.

Whether I, as a voter, saw Russian-administered content online, placed in an alleged bid to stir up controversy and division among the American electorate, is an understandable concern. It should be investigated — because we need to isolate and diminish any and all “malicious” actors in our digital economy and democracy. Fraud, too, is a harm that must be isolated, identified and eradicated. You’ll have no debate from me here.

But let’s not conflate malicious actors with bona fide relevant advertising and content being presented to, and engaged by, consumers — a wholly beneficial outcome that public policy must acknowledge. Data, commerce and advertising are not dirty words — they are engines for job growth, innovation and — yes, even government revenue through taxes. Ads finance content freely available and useful to consumers and other businesses — and pay for journalism, too, and a diversity of content on the Net.

But all of this responsible data collection and use are useless if there is no consumer trust.

I argue that the U.S. approach to data regulation gets the balance right: legal restriction where sensitive personal information is collected, transferred and applied, through sector-specific laws concerning health data, certain government IDs, personal finance, children’s data and credit; and Fair Information Principles applied for other data categories that do not have such propensity for harm. Consumers are extremely well-served by this regimen. It is far more harmful to have a breach of credit data, for example, than to have business entities share consumers’ advertising profiles in non-sensitive categories — the latter data use being wholly beneficial. U.S. data protections thus are wisely targeted and measured.

Ads pay the freight — and the economy grows as a direct result. Better for an ad to be relevant to an audience. I believe this is superior (at least for Americans) to other highly prescriptive regimes (read, Europe) that make little distinction between data categories and require “opt-in” permissions for any and all types of data sharing, including that related to advertising and marketing. If consumers are inundated with opt-in requests for less sensitive types of data use, even for categories of use where they directly benefit, then inertia and fatigue will prevail and useful data flows won’t happen. I’m hoping businesses handling European citizen data will find ways to prove me wrong!

Is the motivation of such draconian restrictions in Europe really privacy protection as a fundamental human right? Or is it somehow questioning commerce, competition, diversity, trade and innovation, and other important social aims financed by advertising — all sacrificed on the altar of permission? Yes, consumer control should be a default with ad data — but affirmative consent should be reserved for data categories where the propensity for harm is real. This point of view, in Europe at least, is rhetorical — because the law is the law. And as of May 25, the General Data Protection Regulation is enforceable on all global entities touching EU citizen data. God Save the Queen and her data subjects, and perhaps all of us.

At a recent privacy conference, one of the primary architects of the European Union’s ePrivacy Regulation (a follow-up to GDPR) said, “What we are aiming at is to abolish surveillance-driven advertising.”

“Surveillance-driven” advertising and “surveillance capitalism” are Europe- and privacy-academic speak that seeks to link or conflate interest-based advertising with government surveillance of citizens. I mean, really? Ad tech may include companies not known to the consumer; brands enlist outside ad tech companies to help make advertising inventory more relevant (and useful) to site and app users, and they disclose such data sharing through enhanced notice mechanisms and privacy policies. Yet somehow this still constitutes “surveillance,” with all of its negative connotations. Let’s not forget that the intended result of these investments in consumer engagement is a more relevant ad, not a dossier!

We can see where some (important) heads may be motivated.

Back to the U.S.

Even for something as innocuous as an ad served to a device, we have plenty of guidance for privacy rules of the road: Fair Information Practice Principles that are global, Data & Marketing Association Data Standards 2.0, Digital Advertising Alliance Principles and the YourAdChoices program (a client), IAB technical standards … and an active Federal Trade Commission (and other agencies, in certain data categories) overseeing the ecosystem and enforcing privacy expectations. That guidance is both self-regulatory and legal.

The idea that the United States is a laissez-faire data free-for-all is pointedly not a correct assessment. In the digital world, we have 20-plus years of self-regulation, based on nearly 60 years of self-regulation offline. All of this is premised on building and bolstering consumer trust. We have federal and state law in important data sectors. Through all of these decades, we’ve had an FTC minding the advertising/marketing store, growing the market, and enforcing privacy and security of data through meaningful enforcement actions and consent decrees that serve as teaching tools to other businesses.

Instead of grandstanding and undermining years of hard-earned consumer trust, more policymakers — and perhaps industry leaders, too — should recognize how responsible data flows serve the consumer. Thus, they should back existing industry regimes that promote stewardship and governance — and hold us to it. Serving consumers, earning their trust, after all, is a goal shared by all responsible stakeholders.

Privacy or Trade Barrier? Searching for a New ‘Safe Harbor’

The Court of Justice of the European Union has ruled that the European Union-United States “Safe Harbor” Agreement, which allowed U.S. entities to collect E.U. citizen data by certifying that they provided an adequate level of privacy protection, is no longer valid.

First, there is no legal advice in this blog post (there never is) … just a little bit of reporting.

The Court of Justice of the European Union on October 6 ruled that the European Union-United States “Safe Harbor” Agreement, operating since 2000, was no longer valid. The “Safe Harbor” had enabled cross-border flows of EU citizens’ data to the United States even though the U.S. was deemed to lack adequate privacy protections under the EU Data Protection Directive of 1995 (which took effect in 1998); the agreement provided the needed cover. That is no longer the case.

In its decision, the Court also ruled that individual data protection authorities in 28 EU member states have new powers to deem any cross-border data transfer mechanism as non-EU regulation compliant — even if the European Commission may feel otherwise.

According to a recent Webinar (October 9), the nullification of the Safe Harbor affects more than 4,000 U.S. companies that have relied on it. While the Court reportedly wants data to continue to flow between the world’s two largest markets, it sees an immediate need for a new baseline of privacy protection in the United States, and is committed to providing guidance as soon as possible on how such protections can be afforded and data flows and data processing reinstated. The rub is not with U.S. companies per se – the trouble originates with U.S. government surveillance and law enforcement agencies in the wake of Edward Snowden’s 2013 revelations.

As one Professor wrote:
The Court reiterates even more clearly that mass surveillance is inherently a problem, regardless of the safeguards in place to limit its abuse. Indeed, as noted already, the Court ruled that mass surveillance of the content of communications breaches the essence of the right to privacy and so cannot be justified at all. (Surveillance of content which is targeted on suspected criminal activities or security threats is clearly justifiable, however).
—Ars Technica, Oct. 15, 2015

In the wake of the decision, privacy advocates reportedly have allowed three months for a new U.S.-EU “Safe Harbor 2.0” agreement to be reached. Otherwise, they will seek coordinated action by EU data protection commissioners against individual companies operating under the previous Safe Harbor, which again is immediately invalid. Alternatively, businesses are left to use model contract clauses or binding agreements with national data protection authorities — not challenged by the court’s decision — to maintain (where present) or reinstate (where newly concluded) personal data flows outside the EU. Risk assessors must be busy.

U.S. and European governments have been working on a new Safe Harbor 2.0 for at least two years, according to Andrea Glorioso, counselor, digital economy/cyber, Delegation of the European Union to the United States. No one is certain when such a revised Safe Harbor agreement may be finalized, but, given the ramifications of the EU court’s decision, it’s in no one’s interest to let this carry on for long.

And a little bit of opinion: Mass surveillance by government and law enforcement — to combat crime and terrorism, for example — and responsible data collection and use by the private sector in the pursuit of economic growth are not the same subject, and should not be linked. Let’s hope a new Safe Harbor will differentiate the two — and not just for Europeans. It’s not as if American citizens are free from worry about what European governments may be up to, and that’s a concern that extends inside our own borders, too.

White House ‘Big Data’ Review Recognizes Innovation and Self-Regulation


When the White House announced its intent to study the rise of “Big Data” and its impact on business, commerce, government and consumers’ everyday lives, with privacy protection as an underlying theme, I have to admit I was bracing myself.

As a citizen, I guessed there might be a lot to say about government surveillance, public safety and terrorism, in light of Snowden. As a consumer, I suspected there might be a lot of attention to data breaches, in light of the recent Target incident among others.

As a working individual whose livelihood depends on data access and use for more relevant marketing, I was nervous there might not be a practical discussion of how information sharing and privacy protection can be (and is) successfully provided through a combination of peer regulation, enterprising technology and sector-specific legal regulation, where information protection and security are niche-based and designed to prevent harm from data error or misuse (credit, financial and health data, for example).

Then the report, titled “Big Data: Seizing Opportunities, Preserving Values” (pdf), was released.

As a citizen, I was left wanting. Government surveillance of law-abiding U.S. citizens is parked for another report, another day. Some reforms have already been announced. Perhaps this is a blessing—there never should have been a link made between government spying and private sector use of data for commercial purposes anyway.

As a consumer, I was glad to see a call for a single national data breach notification standard. A few years back, I received several notices of “my” data being breached in a few months’ span—two of which offered a year’s worth of identity theft and fraud protection (which I continued to purchase on my own). Whether by luck or design, those notices have declined in number—I’ve had none in the past year. As I hear and read about more recent major data breaches, I haven’t been directly affected (to my knowledge), and maybe—just maybe—some organizations and brands in which I’m involved have gotten better about security. (Indirectly, we all pay for fraud—in higher prices for products and services, insurance, bank fees and the like—and perhaps in our collective loss of trust and peace of mind.)

As a marketer, I have to say I was happily surprised at the clear-headed conveyance of facts and reporting of opinion in this report—and, importantly, the steer-clearance of political grandstanding. I will leave it to our trade associations to comment on the policy recommendations, but as one of our industry’s leading practitioners stated in Adweek, “If anyone of my clients wants a 101 on big data, I’m going to send them this report. This report is very relevant because a lot of what drives this business is programmatic media buying. There are millions of places to advertise on the Web, so an algorithm will decide what your likely audience will be.”

The report either cited or recognized such industry initiatives as the Data-Driven Marketing Institute’s “Value of Data Sharing” report, the Digital Advertising Alliance (disclosure, a client) and its own recent research on data sharing’s role in increasing advertising’s value, as well as DAA’s YourAdChoices.com site and consumer opt-out program for online interest-based advertising. There was care to note—even in the report’s title—that innovation is one of the benefits made possible by big data, and that this economic and social value needs to be enabled, if not fully supported and facilitated.

The report did raise red flags about commercial redlining, eligibility issues connected to employment, healthcare, finance and insurance, and data security (as noted)—but these important areas for consumer protection largely are already regulated, and even have industry backing for further regulation in certain areas such as breach notification. Most of these topics don’t have much to do with smarter marketing, even if some privacy advocates and academics hypothesize about that stretch.

Where do we go from here? The report did make several policy recommendations—and while there were some seeking to codify in law Fair Information Practices Principles (a Consumer Privacy Bill of Rights), there was no attempt to call for an omnibus privacy protection law that treats all data and all data usage the same. If you haven’t had the chance, give it a read—I actually learned from it, and avoided tears and rage.