Big Data Must Get Smaller

Like many folks who have worked in the data business for a long time, I don’t even like the words “Big Data.” Yeah, data is big now, I get it. But so what? Faster and bigger have been the theme in the computing business since the first calculator was invented. In fact, I don’t appreciate the common definition of Big Data that is often expressed in the three Vs: volume, velocity and variety. So, if any kind of data is big and fast, it’s all good? I don’t think so. If you have lots of “dumb” data all over the place, how does that help you? About as much as all the clutter that’s been piling up in your basement since 1971. It may yield some profit on an online auction site one day. Who knows? Maybe some collector will pay good money for some obscure Coltrane or Moody Blues albums that you haven’t even touched since your last turntable (Ooh, what is that?) died on you. Those oversized album jackets were really cool though, weren’t they?

Seriously, the word “Big” only emphasizes the size element, and that is a sure way to miss the essence of the data business. And many folks are missing even that little point by calling all decision-making activities that involve even small-sized data “Big Data.” It is entirely possible that this data stuff seems all new to someone, but the data-based decision-making process has been with us for a very long time. If you use that “B” word to differentiate the old-fashioned data analytics of yesteryear from the ridiculously large datasets of the present day, yes, that is a proper usage of it. But we all know most people do not mean it that way. One side benefit of this bloated and hyped-up buzzword is that data professionals like myself no longer have to spend 20 minutes explaining what we do for a living; we simply utter the words “Big Data,” though that is a lot like a grandmother declaring that all her grandchildren work on computers for a living. Better yet, that magic “B” word sometimes opens doors to new business opportunities (or at least a chance to grab a microphone at non-data-related meetings and conferences) that data geeks of the past never dreamed of.

So, I guess it is not all that bad. But lest we forget, all hype leads to overinvestment, all overinvestment leads to disappointment, and all disappointment leads to the purging of related personnel and vendors who bear that hyped-up dirty word in their titles or division names. If this Big Data stuff does not yield significant profit (or reduction in cost), I am certain that those investment bubbles will burst soon enough. Yes, some data folks may be lucky enough to milk it for another two or three years, but brace for impact if all those collected data do not lead to some serious dollar signs. I know how much storage and processing costs have decreased in recent years, but they ain’t totally free, and the related man-hours aren’t exactly cheap, either. Also, if this whole data business is a new concept to an organization, any money spent on the promise of Big Data easily becomes a liability to the reluctant bunch.

This is why I open my speeches and lectures with this question: “Have you made any money with this Big Data stuff yet?” Surely, you didn’t spend all that money just to provide faster toys and nicer playgrounds for IT folks? Maybe the head of IT had some fun with it, but let’s ask that question of CFOs, not CTOs, CIOs or CDOs. I know some colleagues (i.e., fellow data geeks) who are already thinking about a new name for this—“decision-making activities, based on data and analytics”—because many of us will still be doing that “data stuff” even after the judgment day when Big Data ceases to be cool. Yeah, that Gangnam Style dance was fun for a while, but who still jumps around like a horse?

Now, if you ask me (though nobody has yet), I’d say Big Data should have been called “Smart Data,” “Intelligent Data” or something to that effect. Because data must provide insights. Answers to questions. Guidance to decision-makers. To data professionals, piles of data—especially ones that are fragmented, unstructured and unformatted, no matter what kind of fancy names the operating system and underlying database technology may bear—are just a good start. For non-data-professionals, unrefined data—whether big or small—will remain distant and obscure. Offering mounds of raw data to end-users is like providing a painting kit when someone wants a picture on the wall. Bragging about the size of the data with impressive-sounding new measurements that end in “bytes” is like counting grains of rice in California in front of a hungry man.

Big Data must get smaller. People want yes/no answers to their specific questions. If such clarity is not possible, probability figures to such questions should be provided; as in, “There’s an 80 percent chance of thunderstorms on the day of the company golf outing,” “An above-average chance to close a deal with a certain prospect” or “Potential value of a customer who is repeatedly complaining about something on the phone.” It is about easy-to-understand answers to business questions, not a quintillion bytes of data stored in some obscure cloud somewhere. As I stated at the end of my last column, the Big Data movement should be about (1) Getting rid of the noise, and (2) Providing simple answers to decision-makers. And getting to such answers is indeed the process of making data smaller and smaller.

In my past columns, I talked about the benefits of statistical models in the age of Big Data, as they are the best way to compact big and complex information into simple answers (refer to “Why Model?”). Models built to predict (or point out) who is more likely to be into outdoor sports, to be a risk-averse investor, to go on a cruise vacation, to be a member of a discount club, to buy children’s products, to be a big-time donor or to be a NASCAR fan all provide specific answers to specific questions, while each model score is the result of a serious reduction of information, often compressing thousands of variables into one answer. That simplification process in itself provides incredible value to decision-makers, as most wouldn’t know where to cut out unnecessary information to answer specific questions. Using mathematical techniques, we can cut down the noise with conviction.

In model development, “Variable Reduction” is the first major step after the target variable is determined (refer to “The Art of Targeting”). It is often the most rigorous and laborious exercise in the whole model development process, and it is where the character of a model is often determined, as each statistician has his or her own unique approach to it. Now, I am not about to start a debate about the best statistical method for variable reduction (I haven’t met two statisticians who completely agree with each other on methodologies), but I happen to know that many effective statistical analysts separate variables by data type and treat them differently. In other words, not all data variables are created equal. So, what are the major types of data that database designers and decision-makers (i.e., non-mathematical types) should be aware of?
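To make the idea concrete, here is a minimal sketch of one common variable-reduction pass: rank candidate variables by the strength of their correlation with the target and drop the weak ones. All data, variable names and the 0.3 cutoff are illustrative assumptions, not a prescription; practicing statisticians use far more sophisticated methods.

```python
# Minimal variable-reduction sketch: keep only candidate variables whose
# absolute Pearson correlation with the target clears a cutoff.
# Data, names and the 0.3 cutoff are illustrative assumptions.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 1 = target (say, a known golfer), 0 = non-target
target = [1, 0, 1, 1, 0, 0, 1, 0]

candidates = {
    "golf_purchases":   [3, 0, 2, 4, 0, 1, 3, 0],
    "household_income": [90, 40, 85, 95, 50, 45, 80, 55],
    "random_noise":     [7, 2, 9, 1, 8, 3, 4, 6],  # carries no real signal
}

CUTOFF = 0.3
kept = [name for name, values in candidates.items()
        if abs(pearson(values, target)) >= CUTOFF]
```

Real variable reduction also weighs multicollinearity, missing rates and business sense, which is one reason two statisticians rarely agree on a single recipe.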

In the business of predictive analytics for marketing, the following three types of data make up three dimensions of a target individual’s portrait:

  1. Descriptive Data
  2. Transaction Data / Behavioral Data
  3. Attitudinal Data

In other words, if we get to know all three aspects of a person, it will be much easier to predict what the person is about and/or what the person will do. Why do we need these three dimensions? If an individual has a high income and is living in a highly valued home (demographic element, which is descriptive); and if he is an avid golfer (behavioral element often derived from his purchase history), can we just assume that he is politically conservative (attitudinal element)? Well, not really, and not all the time. Sometimes we have to stop and ask what the person’s attitude and outlook on life is all about. Now, because it is not practical to ask everyone in the country about every subject, we often build models to predict the attitudinal aspect with available data. If you got a phone call from a political party that “assumes” your political stance, that incident was probably not random or accidental. Like I emphasized many times, analytics is about making the best of what is available, as there is no such thing as a complete dataset, even in this age of ubiquitous data. Nonetheless, these three dimensions of the data spectrum occupy a unique and distinct place in the business of predictive analytics.

So, in the interest of obtaining, maintaining and utilizing all possible types of data—or, conversely, reducing the size of data with conviction by knowing what to ignore—let us dig a little deeper:

Descriptive Data
Generally, demographic data—such as people’s income, age, number of children, housing size, dwelling type, occupation, etc.—fall under this category. For B-to-B applications, “Firmographic” data—such as number of employees, sales volume, year started, industry type, etc.—would be considered descriptive data. It is about what the targets “look like” and, generally, it is frozen in the present time. Many prominent data compilers (or data brokers, as the U.S. government calls them) collect, compile and refine the data and make hundreds of variables available to users in various industry sectors. They also fill in the blanks using predictive modeling techniques. In other words, the compilers may not know the income range of every household, but using statistical techniques and other available data—such as age, home ownership, housing value and many other variables—they provide their best estimates where values are missing. People often have an allergic reaction to such data compilation practices, citing privacy concerns, but these types of data are not about looking up one person at a time; they are about analyzing and targeting groups (or segments) of individuals and households. In terms of predictive power, they are quite effective, and the results are very consistent. The best part is that most of the variables are available for every household in the country, whether actual or inferred.

Other types of descriptive data include geo-demographic data, and the Census Data from the U.S. Census Bureau fall under this category. These datasets are organized by geographic denominations such as Census Block Group, Census Tract, County or ZIP Code Tabulation Area (ZCTA, much like postal ZIP codes, but not exactly the same). Although they are not available at an individual or household level, the Census data are very useful in predictive modeling, as every target record can be enhanced with them, even when name and address are not available, and the data themselves are very stable. The downside is that, while the datasets are free through the Census Bureau, the raw datasets contain more than 40,000 variables. Plus, due to budget cuts and changes in survey methods during the past decade, the sample size (yes, they sample) has decreased significantly, rendering some variables useless at lower geographic denominations, such as the Census Block Group. There are professional data companies that have narrowed down the list of variables to manageable sizes (300 to 400 variables) and filled in the missing values. Because they are geo-level data, variables come in the form of percentages, averages or median values of elements such as gender, race, age, language, occupation, education level, real estate value, etc. (as in percent male, percent Asian, percent white-collar professionals, average income, median school years, median rent, etc.).

There are many instances where marketers cannot pinpoint the identity of a person due to privacy issues or challenges in data collection, and the Census Data play the role of an effective substitute for individual- or household-level demographic data. In predictive analytics, duller variables that are available nearly all the time are often more valuable than precise information with limited availability.

Transaction Data/Behavioral Data
While descriptive data are about what the targets look like, behavioral data are about what they actually did. Often, behavioral data come in the form of transactions, so many just call them transaction data. What marketers commonly refer to as RFM (Recency, Frequency and Monetary) data fall under this category. In terms of predictive power, they are truly at the top of the food chain. Yes, we can build models to guess who potential golfers are with demographic data, such as age, gender, income, occupation, housing value and other neighborhood-level information, but if you get to “know” that someone buys a box of golf balls every six weeks or so, why guess? Further, models built with transaction data can even predict the nature of future purchases, in terms of monetary value and frequency intervals. Unfortunately, many who have access to RFM data use them only for rudimentary filtering, as in “select everyone who spent more than $200 in a gift category during the past 12 months,” or something like that. But we can do so much more with rich transaction data in every stage of the marketing life cycle: prospecting, cultivating, retaining and winning back.
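As a rough illustration of how RFM variables come out of raw transactions, here is a sketch that rolls a small purchase log up into recency, frequency and monetary values per customer, then applies the kind of rudimentary filter mentioned above. The records, field names and the $200 cutoff are all made up for illustration.

```python
from datetime import date
from collections import defaultdict

# Illustrative transaction log: (customer_id, purchase_date, amount)
transactions = [
    ("A", date(2024, 5, 1), 120.0),
    ("A", date(2024, 6, 12), 95.0),
    ("B", date(2023, 11, 3), 310.0),
    ("C", date(2024, 6, 20), 45.0),
    ("C", date(2024, 4, 2), 60.0),
    ("C", date(2024, 2, 14), 55.0),
]
AS_OF = date(2024, 7, 1)  # "today" for recency purposes

by_cust = defaultdict(list)
for cust, d, amt in transactions:
    by_cust[cust].append((d, amt))

# Roll transactions up to one RFM row per customer
rfm = {}
for cust, rows in by_cust.items():
    last_purchase = max(d for d, _ in rows)
    rfm[cust] = {
        "recency_days": (AS_OF - last_purchase).days,
        "frequency": len(rows),
        "monetary": sum(a for _, a in rows),
    }

# The rudimentary filter from the text: total spend over $200
big_spenders = {c for c, v in rfm.items() if v["monetary"] > 200}
```

The point of the rollup is that each customer row is already a small act of "making data smaller": many raw transactions compressed into three predictive numbers.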

Other types of behavioral data include non-transaction data, such as click data, page views, abandoned shopping baskets or movement data. This type of behavioral data is getting a lot of attention as it is truly “big.” The data have been out of reach for many decision-makers before the emergence of new technology to capture and store them. In terms of predictability, nevertheless, they are not as powerful as real transaction data. These non-transaction data may provide directional guidance, as they are what some data geeks call “a-camera-on-everyone’s-shoulder” type of data. But we all know that there is a clear dividing line between people’s intentions and their commitments. And it can be very costly to follow every breath you take, every move you make, and every step you take. Due to their distinct characteristics, transaction data and non-transaction data must be managed separately. And if used together in models, they should be clearly labeled, so the analysts will never treat them the same way by accident. You really don’t want to mix intentions and commitments.

The trouble with behavioral data is that (1) they are difficult to compile and manage, (2) they get big; sometimes really big, (3) they are generally confined within divisions or companies, and (4) they are not easy to analyze. In fact, most of the examples that I have used in this series are about transaction data. Now, No. 3 here can be really troublesome, as it equates to availability (or lack thereof). Yes, you may know everything that happened with your customers, but do you know where else they are shopping? Fortunately, there are co-op companies that can answer that question, as they compile transaction data across multiple merchants and sources. And combined data can be exponentially more powerful than data in silos. Now, because transaction data are not always available for every person in a database, analysts often combine behavioral data and descriptive data in their models. Transaction data usually become the dominant predictors in such cases, while descriptive data play the supporting role, filling in the gaps and smoothing out the predictive curves.

As I have stated repeatedly, predictive analytics in marketing is all about finding out (1) whom to engage, and (2) if you decide to engage someone, what to offer to that person. Using carefully collected transaction data for most of their customers, some supermarket chains have achieved 100 percent customization rates for their coupon books. That means no two coupon books are exactly the same, which is quite an impressive accomplishment. And that is all transaction data in action, and it is a great example of “Big Data” (or rather, “Smart Data”).

Attitudinal Data
In the past, attitudinal data came from surveys, primary research and focus groups. Now, basically all social media channels function as gigantic focus groups. Through virtual places, such as Facebook, Twitter or other social media networks, people freely volunteer what they think and feel about certain products and services, and many marketers are learning how to “listen” to them. Sentiment analysis falls under this category of analytics, and many automatically think of this type of analytics when they hear “Big Data.”

The trouble with social data is:

  1. We often do not know who’s behind the statements in question, and
  2. They are in silos, and it is not easy to combine such data with transaction or demographic data, due to lack of identity of their sources.

Yes, we can see that a certain political candidate is trending high after an impressive speech, but how would we connect that piece of information to the people who will actually donate money to the candidate’s causes? If we can find out “where” the target is via an IP address and related ZIP codes, we may be able to connect the voter to geo-demographic data, such as the Census. But, generally, personally identifiable information (PII) is accessible only to the data compilers, if they even bothered to collect it.

Therefore, most such studies are on a macro level, citing trends and directions, and the analysts in that field are quite different from the micro-level analysts who deal with behavioral data and descriptive data. Now, the former provide important insights regarding the “why” part of the equation, which is often the hardest thing to predict, while the latter provide answers to “who, what, where and when.” (“Who” is the easiest to answer, and “when” is the hardest.) That “why” part may dictate the product development part of the decision-making process at the conceptual stage (as in, “Why would customers care for a new type of dishwasher?”), while “who, what, where and when” are more about selling the developed products (as in, “Let’s sell those dishwashers in the most effective ways.”). So, it can be argued that these different types of data call for different types of analytics at different cycles in the decision-making process.

Obviously, there are more types of data out there. But for marketing applications dealing with humans, these three types of data complete the buyers’ portraits. Now, depending on what marketers are trying to do with the data, they can prioritize where to invest first and what to ignore (for now). If they are early in the marketing cycle, trying to develop a new product for the future, they need to understand why people want something and behave in certain ways. If signing up as many new customers as possible is the immediate goal, finding out who and where the ideal prospects are becomes the most pressing task. If maximizing customer value is the ongoing objective, then you’d better start analyzing transaction data more seriously. If preventing attrition is the goal, then you will have to line up the transaction data in time-series format for further analysis.

The business goals must dictate the analytics, the analytics call for specific types of data to meet the goals, and the supporting datasets should be in “analytics-ready” formats. Not the other way around, where businesses are dictated by the limitations of analytics, and analytics are hampered by inadequate, cluttered data. That type of business-oriented hierarchy should be the main theme of effective data management, and with clear goals and a proper data strategy, you will know where to invest first and what data to ignore as a decision-maker, not necessarily as a mathematical analyst. And that is the first step toward making Big Data smaller. Don’t be impressed by the mere size of the data, as it often blurs the big picture, and not all data are created equal.

Data Deep Dive: The Art of Targeting

Even if you own a sniper rifle (and I’m not judging), if you aim at the wrong place, you will never hit the target. Obvious, right? But that happens all the time in the world of marketing, even when advanced analytics and predictive modeling techniques are routinely employed. How is that possible? Well, the marketing world is not like an Army shooting range where the silhouette of the target is conveniently hung at the predetermined location, but it is more like the “Twilight Zone,” where things are not what they seem. Marketers who failed to hit the real target often blame the guns, which in this case are targeting tools, such as models and segmentations. But let me ask, was the target properly defined in the first place?

In my previous columns, I talked about the importance of predictive analytics in modern marketing (refer to “Why Model?”) for various reasons, such as targeting accuracy, consistency, deeper use of data and, most importantly in the age of Big Data, the concise nature of model scores, where tons of data are packed into ready-for-use formats. Now, even the marketers who bought into these ideas often make the mistake of relinquishing the important duty of target definition solely to analysts and statisticians, who do not necessarily possess the power to read the marketers’ minds. Targeting is often called “half-art and half-science.” And it should be looked at from multiple angles, starting with the marketer’s point of view. Therefore, even marketers who are slightly (or, in many cases, severely) allergic to mathematics should come one step closer to the world of analytics and modeling. Don’t be too scared, as I am not asking you to be a rifle designer or a sniper here; I am only talking about hanging the target in the right place so that others can shoot at it.

Let us start by reviewing what statistical models are: A model is a mathematical expression of the “differences” between dichotomous groups, which, in marketing, are often referred to as “targets” and “non-targets.” Let’s say a marketer wants to target “high-value customers.” To build a model to describe such targets, we also need to define “non-high-value customers.” In marketing, popular targets are often expressed as “repeat buyers,” “responders to certain campaigns,” “big-time spenders,” “long-term, high-value customers,” “troubled customers,” etc., for specific products and channels. Now, for all those targets, we also need to define “bizarro” or “anti-” versions of them. One may think that they are just the “remainders” of the target. But, unfortunately, it is not that simple; the definition of the whole universe must be set first to even bring up the concept of remainders. In many cases, defining “non-buyers” is much more difficult than defining “buyers,” because a lack of purchase information does not guarantee that the individual in question is indeed a non-buyer. Maybe the data collection was never complete. Maybe he used a different channel to respond. Maybe his wife bought the item for him. Maybe you don’t have access to the entire pool of names that represents the “universe.”

Remember T, C, & M
That is why we need to examine the following three elements carefully when discussing statistical models with marketers who are not necessarily statisticians:

  1. Target,
  2. Comparison Universe, and
  3. Methodology.

I call them “TCM” for short, so that I don’t leave out any element in exploratory conversations. Defining a proper target is the obvious first step. Defining and obtaining data for the comparison universe is equally important, but it can be challenging. Without it, though, you’d have nothing against which to compare the target. Again, a model is an algorithm that expresses the differences between two non-overlapping groups. So, yes, you need both Superman and Bizarro-Superman (who always seems more elusive than his counterpart). And the one important variable that differentiates the target and non-target is called the “Dependent Variable” in modeling.
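In practice, setting up the dependent variable amounts to flagging every record in the modeling universe as target (1) or comparison (0). A minimal sketch, assuming a "high-value customer" is defined as $500 or more in lifetime value; the field names and cutoff are hypothetical.

```python
# Sketch of building the Dependent Variable: every record in the eligible
# universe is tagged 1 (target) or 0 (comparison). Field names and the
# $500 "high-value" cutoff are illustrative assumptions.
customers = [
    {"id": 1, "lifetime_value": 820.0},
    {"id": 2, "lifetime_value": 150.0},
    {"id": 3, "lifetime_value": 640.0},
    {"id": 4, "lifetime_value": 90.0},
]

HIGH_VALUE_CUTOFF = 500.0  # the marketer's definition of "high-value"

for c in customers:
    # 1 = target ("high-value customer"), 0 = comparison (everyone else
    # in the eligible universe). A model then learns what separates them.
    c["dependent"] = 1 if c["lifetime_value"] >= HIGH_VALUE_CUTOFF else 0

targets = [c["id"] for c in customers if c["dependent"] == 1]
```

Notice that the 0s are defined just as deliberately as the 1s; change who sits in the comparison group and you have changed the model.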

The third element in our discussion is the methodology. I am sure you may have heard of terms like logistic regression, stepwise regression, neural net, decision trees, CHAID analysis, genetic algorithm, etc., etc. Here is my advice to marketers and end-users:

  • State your goals and use cases clearly, and let the analyst pick the proper methodology that suits your goals.
  • Don’t be a bad patient who walks into a doctor’s office demanding a specific prescription before the doctor even examines you.

Besides, for all intents and purposes, the methodology itself matters the least compared with an erroneously defined target or comparison universe. Differences between methodologies are often measured in fractions. A combination of a wrong target and a wrong universe definition ends up as a shotgun, if not an artillery barrage. That doesn’t sound so precise, does it? We should be talking about a sniper rifle here.

Clear Goals Leading to Definitions of Target and Comparison
So, let’s roll up our sleeves and dig deeper into defining targets. Allow me to use an example, as you will be able to picture the process better that way. Let’s just say that, for general marketing purposes, you want to build a model targeting “frequent flyers.” One may ask whether that means for business or for pleasure, but let’s just say that such data are hard to obtain at this moment. (Finding the “reasons” is always much more difficult than counting the number of transactions.) And it was collectively decided that it would be beneficial just to know who is more likely to be a frequent flyer, in general. Such knowledge could be very useful for many applications, not just in the travel industry, but in affiliated services, such as credit cards or publications. Plus, analytics is about making the best of what you’ve got, not waiting for some perfect dataset.

Now, here is the first challenge:

  • When it comes to flying, how frequent is frequent enough for you? Five times a year, 10 times, 20 times or even more?
  • Over how many years?
  • Would you consider actual miles traveled, or just number of issued tickets?
  • How large are the audiences in those brackets?

If you decided that five times a year is a not-so-big or not-so-small target (yes, sizes do matter) that also fits the goal of the model (you don’t want to target only super-elites, as they could be too rare or too distinct, almost like outliers), to whom are they going to be compared? Everyone who flew less than five times last year? How about people who didn’t fly at all last year?

Actually, one option is to compare people who flew more than five times against people who didn’t fly at all last year, but wouldn’t that model be too much like a plain “flyer” model? Or, will that option provide more vivid distinction among the general population? Or, one analyst may raise her hand and say “to hell with all these breaks and let’s just build a model using the number of times flown last year as the continuous target.” The crazy part is this: None of these options are right or wrong, but each combination of target and comparison will certainly yield very different-looking models.

Then what should a marketer do in a situation like this? Again, clearly state the goal and what is more important to you. If this is for general travel-related merchandising, then the goal should be more about distinguishing likely frequent flyers from the general population; therefore, comparing five-plus flyers against non-flyers—ignoring the one-to-four-time flyers—makes sense. If this project is for an airline to target potential gold or platinum members, using people who don’t even fly as the comparison makes little or no sense. Of course, in a situation like this, the analyst in charge (or data scientist, the way we refer to them these days) must come halfway and prescribe exactly what target and comparison definitions would be most effective for that particular user. That requires lots of preliminary data exploration, and it is not all science, but half art.
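The target/comparison choice for the general travel-merchandising case above can be sketched as a simple partition of the universe: five-plus flyers become the target, non-flyers the comparison, and one-to-four-time flyers are set aside. The names and flight counts are illustrative.

```python
# Sketch of the chosen target/comparison definition: five-plus flyers
# vs. non-flyers, with one-to-four-time flyers excluded from both groups.
# All names and counts are invented for illustration.
flights_last_year = {"ann": 8, "bob": 0, "cal": 2, "dee": 12, "eve": 0, "fay": 4}

target, comparison, excluded = [], [], []
for person, n in flights_last_year.items():
    if n >= 5:
        target.append(person)      # "frequent flyer" per the chosen break
    elif n == 0:
        comparison.append(person)  # non-flyers
    else:
        excluded.append(person)    # 1-4 flights: left out of this model
```

Swap the `elif n == 0` rule for `elif n < 5` and you get the "plain flyer model" variant discussed above, built from the very same data but answering a different question.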

Now, if I may provide a shortcut in defining the comparison universe: just draw a representative sample from “the pool of names that are eligible for your marketing efforts.” The key word here is “eligible.” For example, many businesses operate within certain areas with certain restrictions or predetermined targeting criteria. It would make no sense to use a U.S. population sample for models for supermarket chains, telecommunications companies or utility companies with designated footprints. If the business in question sells female apparel items, first eliminate the male population from the comparison universe (but I’d leave “unknown” genders in the mix, so that the model can work its magic in that shady ground). You must remember, however, that all this means you need different models when you change the prospecting universe, even if the target definition remains unchanged. Because the model algorithm is the expression of the difference between T and C, you need a new model if you swap out the C part, even if you leave the T alone.

Multiple Targets
Sometimes it gets twisted the other way around, where the comparison universe is relatively stable (i.e., your prospecting universe is stable) but there could be multiple targets (i.e., multiple Ts, like T1, T2, etc.) in your customer base.

Let me elaborate with a real-life example. A while back, we were helping a company that sells expensive auto accessories for luxury cars. The client, following his intuition, casually told us that he only cares for big spenders whose average order sizes are more than $300. Now, the trouble with this statement is that:

  1. Such a universe could be too small to be used effectively as a target for models, and
  2. High spenders do not tend to purchase often, so we may end up leaving out the majority of the potential target buyers in the whole process.

This is exactly why some type of customer profiling must precede the actual target definition. A series of simple distribution reports clearly revealed that this particular client was dealing with a dual-universe situation, where the first group (or segment) was made up of infrequent but high-dollar spenders whose average orders were even greater than $300, and the second group was made up of very frequent buyers whose average order sizes were well below the $100 mark. If we had ignored this finding, or worse, neglected to run the preliminary reports and just relied on our client’s wishful thinking, we would have created a “phantom” target, which is just an average of these dual universes. A model designed for such a phantom target will yield phantom results. The solution? If you find two distinct targets (as in T1 and T2), just bite the bullet and develop two separate models (T1 vs. C and T2 vs. C).
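A distribution report of the kind that exposed this dual-universe situation can be sketched as follows. The customer records and the frequency breaks are invented for illustration; the point is that the per-bucket averages tell a very different story from the single blended average (the "phantom" target).

```python
from collections import defaultdict

# Illustrative records: (customer_id, number_of_orders, average_order_size)
customers = [
    ("c1", 1, 350.0), ("c2", 2, 410.0), ("c3", 1, 330.0),  # rare, big orders
    ("c4", 9, 70.0), ("c5", 12, 85.0), ("c6", 11, 60.0),   # frequent, small
]

# Bucket by order frequency, then average the order sizes in each bucket
buckets = defaultdict(list)
for _, n_orders, avg_order in customers:
    key = "infrequent (1-2)" if n_orders <= 2 else "frequent (9+)"
    buckets[key].append(avg_order)

report = {k: sum(v) / len(v) for k, v in buckets.items()}

# A single blended average is the "phantom" target the text warns about:
# it describes neither group.
phantom_average = sum(a for _, _, a in customers) / len(customers)
```

Here the blended figure lands between the two real segments, describing a customer who does not actually exist; the per-bucket figures are what justify building T1 and T2 separately.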

Multi-step Approach
There are still other reasons why you may need multiple models. Let’s talk about the case of “target within a target.” Some may relate this idea to a “drill-down” concept, and it can be very useful when the prospecting universe is very large, and the marketer is trying to reach only the top 1 percent (which can be still very large, if the pool contains hundreds of millions of people). Correctly finding even the top 5 percent in any universe is difficult enough. So what I suggest in this case is to build two models in sequence to get to the “Best of the Best” in a stepwise fashion.

  • The first model would be more like an “elimination” model, where obviously not-so-desirable prospects would be removed from the process, and
  • The second-step model would be designed to go after the best prospects among survivors of the first step.

Again, models are expressions of differences between targets and non-targets, so if the first model eliminates the bottom 80 percent to 90 percent of the universe and leaves the rest as the new comparison universe, you need a separate model—for sure. And lots of interesting things happen at the later stage, where new variables start to show up in algorithms or important variables in the first step lose steam in later steps. While a bit cumbersome during deployment, the multi-step approach ensures precision targeting, much like a sniper rifle at close range.
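A toy sketch of the two-step sequence, with stand-in scoring functions in place of real models:

```python
# Hypothetical two-step sketch: an "elimination" model trims the bottom
# of the universe, then a second model ranks only the survivors. The
# scoring functions here are stand-ins, not actual model algorithms.
def elimination_score(p):   # step 1: weed out not-so-desirable prospects
    return p["recency"]

def precision_score(p):     # step 2: rank the survivors
    return p["spend"]

universe = [{"id": i, "recency": i % 10, "spend": i * 3 % 7}
            for i in range(100)]

# Step 1: keep roughly the top 20 percent by the elimination score.
survivors = sorted(universe, key=elimination_score, reverse=True)
survivors = survivors[: len(universe) // 5]

# Step 2: re-rank only the survivors with the second-step model.
best = sorted(survivors, key=precision_score, reverse=True)[:5]
print(len(survivors), len(best))  # → 20 5
```

Because the second model sees only the survivors as its comparison universe, it is genuinely a different model, which is exactly the point made above.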

I also suggest this type of multi-step process when clients are attempting to use the result of segmentation analysis as a selection tool. Segmentation techniques are useful as descriptive analytics. But as a targeting tool, they are just too much like a shotgun approach. It is one thing to describe groups of people such as “young working mothers,” “up-and-coming,” and “empty-nesters with big savings” and use them as references when carving out messages tailored toward them. But it is quite another to target such large groups as if the population within a particular segment is completely homogeneous in terms of susceptibility to specific offers or products. Surely, the difference between a Mercedes buyer and a Lexus buyer ain’t income and age, which may have been the main differentiator for segmentation. So, in the interest of maintaining a common theme throughout the marketing campaigns, I’d say such segments are good first steps. But for further precision targeting, you may need a model or two within each segment, depending on the size, channel to be employed and nature of offers.

Another case where the multi-step approach is useful is when the marketing and sales processes are naturally broken down into multiple steps. For typical B-to-B marketing, one may start the campaign by mass mailing or email (I’d say that step also requires modeling). And when responses start coming in, the sales team can take over and start contacting responders through more personal channels to close the deal. Such sales efforts are obviously very time-consuming, so we may build a “value” model measuring the potential value of the mail or email responders and start contacting them in a hierarchical order. Again, as the available pool of prospects gets smaller and smaller, the nature of targeting changes as well, requiring different types of models.

This type of funnel approach is also very useful in online marketing, as the natural steps involved in email or banner marketing go through lifecycles, such as blasting, delivery, impression, clickthrough, browsing, shopping, investigation, shopping basket, checkout (Yeah! Conversion!) and repeat purchases. Obviously, not all steps require aggressive or precision targeting. But I’d say, at the minimum, initial blast, clickthrough and conversion should be looked at separately. For any lifetime value analysis, yes, the repeat purchase is a key step; which, unfortunately, is often neglected by many marketers and data collectors.

Inversely Related Targets
More complex cases are when some of these multiple response and conversion steps are “inversely” related. For example, many responders to invitation-to-apply type credit card offers are often people with not-so-great credit. Well, if one has a good credit score, would all these credit card companies have left them alone? So, in a case like that, it becomes very tricky to find good responders who are also credit-worthy in the vast pool of a prospect universe.

I wouldn’t go as far as saying that it is like finding a needle in a haystack, but it is certainly not easy. Now, I’ve met folks who go after the likely responders with potential to be approved as a single target. It really is a philosophical difference, but I much prefer building two separate models in a situation like this:

  • One model designed to measure responsiveness, and
  • Another to measure likelihood to be approved.

The major benefit of having separate models is that each model will be able to employ different types and sources of data variables. A more practical benefit for the users is that the marketers will be able to pick and choose what is more important to them at the time of campaign execution. They will obviously go to the top corner bracket, where both scores are high (i.e., potential responders who are likely to be approved). But as they dial the selection down, they will be able to test responsiveness and credit-worthiness separately.
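The “pick and choose” selection from the two scores might be sketched like this; the score names and cutoffs are assumptions for illustration:

```python
# Hypothetical sketch of dual-score selection: one score measuring
# responsiveness, another measuring likelihood to be approved, each
# dialed up or down independently at campaign time.
prospects = [
    {"id": 1, "response_score": 0.9, "approval_score": 0.8},
    {"id": 2, "response_score": 0.9, "approval_score": 0.2},
    {"id": 3, "response_score": 0.3, "approval_score": 0.9},
]

def top_corner(pool, resp_cut, appr_cut):
    """Prospects who clear both thresholds: likely responders
    who are also likely to be approved."""
    return [p["id"] for p in pool
            if p["response_score"] >= resp_cut
            and p["approval_score"] >= appr_cut]

print(top_corner(prospects, 0.8, 0.7))  # → [1]
```

Lowering either cutoff independently is how the marketer tests responsiveness and credit-worthiness separately.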

Mixing Multiple Model Scores
Even when multiple models are developed with completely different intentions, mixing them up will produce very interesting results. Imagine you have access to scores for “High-Value Customer Model” and “Attrition Model.” If you cross these scores in a simple 2×2 matrix, you can easily create a useful segment in one corner called “Valuable Vulnerable” (a term that my mentor created a long time ago). Yes, one score is predicting who is likely to drop your service, but who cares if that customer shows little or no value to your business? Take care of the valuable customers first.
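A minimal sketch of crossing the two scores in a 2×2 matrix; the score values and cutoff below are made up for illustration:

```python
# Hypothetical sketch: crossing a "High-Value Customer" score with an
# "Attrition" score in a 2x2 matrix to find the "Valuable Vulnerable"
# corner. Scores and the 0.5 cutoff are illustrative assumptions.
def quadrant(value_score, attrition_score, cut=0.5):
    value = "high-value" if value_score >= cut else "low-value"
    risk = "vulnerable" if attrition_score >= cut else "stable"
    return f"{value}/{risk}"

customers = {
    "anne": (0.9, 0.8),   # valuable AND likely to leave: act on her first
    "bill": (0.2, 0.9),   # likely to leave, but little value at stake
    "cara": (0.8, 0.1),   # valuable and stable
}
labels = {name: quadrant(v, a) for name, (v, a) in customers.items()}
print(labels["anne"])  # → high-value/vulnerable
```

The retention budget then goes to the high-value/vulnerable corner first, as the paragraph above argues.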

This type of mixing and matching becomes really interesting if you have lots of pre-developed models. During my tenure at a large data compiling company, we built more than 120 models for all kinds of consumer characteristics for general use. I remember the real fun began when we started mixing multiple models, like combining a “NASCAR Fan” model with a “College Football Fan” model; a “Leaning Conservative” model with an “NRA Donor” model; an “Organic Food” one with a “Cook for Fun” model or a “Wine Enthusiast” model; a “Foreign Vacation” model with a “Luxury Hotel” model or a “Cruise” model; a “Safety and Security Conscious” model or a “Home Improvement” model with a “Homeowner” model, etc., etc.

You see, no one is one dimensional, and we proved it with mathematics.

No One is One-dimensional
Obviously, these examples are just excerpts from a long playbook for the art of targeting. My intention is to emphasize that marketers must consider target, comparison and methodologies separately; and a combination of these three elements yields the most fitting solutions for each challenge, way beyond what some popular toolsets or new statistical methodologies presented in some technical conferences can accomplish. In fact, when the marketers are able to define the target in a logical fashion with help from trained analysts and data scientists, the effectiveness of modeling and subsequent marketing campaigns increases dramatically. Creating and maintaining an analytics department or hiring an outsourcing analytics vendor aren’t enough.

One may be concerned about the idea of building multiple models so casually, but let me remind you that it is the reality in which we already reside, anyway. I am saying this, as I’ve seen too many marketers who try to fix everything with just one hammer, and the results weren’t ideal—to say the least.

It is a shame that we still treat people with one-dimensional tools, such as segmentations and clusters, in this age of ubiquitous and abundant data. Nobody is one-dimensional, and we must embrace that reality sooner rather than later. That calls for rapid model development and deployment, using everything that we’ve got.

Arguing about how difficult it is to build one or two more models here and there is so last century.

Chicken or the Egg? Data or Analytics?

I just saw an online discussion about the role of a chief data officer, whether it should be more about data or analytics. My initial response to that question is “neither.” A chief data officer must represent the business first. And I had the same answer when such a title didn’t even exist and CTOs or other types of executives covered that role in data-rich environments. As soon as an executive with a seemingly technical title starts representing the technology, that business is doomed. (Unless, of course, the business itself is about having fun with the technology. How nice!)

Nonetheless, if I really have to pick just one out of the two choices, I would definitely pick the analytics over data, as that is the key to providing answers to business questions. Data and databases must be supporting that critical role of analytics, not the other way around. Unfortunately, many organizations are completely backward about it, where analysts are confined within the limitations of database structures and affiliated technologies, and the business owners and decision-makers are dictated to by the analysts and analytical tool sets. It should be the business first, then the analytics. And all databases—especially marketing databases—should be optimized for analytical activities.

In my previous columns, I talked about the importance of marketing databases and statistical modeling in the age of Big Data; not all repositories of information are necessarily marketing databases, and statistical modeling is the best way to harness marketing answers out of mounds of accumulated data. That begs the next question: Is your marketing database model-ready?

When I talk about the benefits of statistical modeling in data-rich environments (refer to my previous column titled “Why Model?”), I often encounter folks who list reasons why they do not employ modeling as part of their normal marketing activities. If I may share a few examples here:

  • Target universe is too small: Depending on the industry, the prospect universe and customer base are sometimes very small in size, so one may decide to engage everyone in the target group. But do you know what to offer to each of your prospects? Customized offers should be based on some serious analytics.
  • Predictive data not available: This may have been true years back, but not in this day and age. Either there is a major failure in data collection, or collected data are too unstructured to yield any meaningful answers. Aren’t we living in the age of Big Data? Surely we should all dig deeper.
  • 1-to-1 marketing channels not in plan: As I repeatedly said in my previous columns, “every” channel is, or soon will be, a 1-to-1 channel. Every audience is secretly screaming, “Entertain us!” And customized customer engagement efforts should be based on modeling, segmentation and profiling.
  • Budget doesn’t allow modeling: If the budget is too tight, a marketer may opt in for some software solution instead of hiring a team of statisticians. Remember that cookie-cutter models out of software packages are still better than someone’s intuitive selection rules (i.e., someone’s “gut” feeling).
  • The whole modeling process is just too painful: Hmm, I hear you. The whole process could be long and difficult. Now, why do you think it is so painful?

Like a good doctor, a consultant should be able to identify root causes based on pain points. So let’s hear some complaints:

  • It is not easy to find “best” customers for targeting
  • Modelers are fixing data all the time
  • Models end up relying on a few popular variables, anyway
  • Analysts are asking for more data all the time
  • It takes too long to develop and implement models
  • There are serious inconsistencies when models are applied to the database
  • Results are disappointing
  • Etc., etc…

I often get called in when model-based marketing efforts yield disappointing results. More often than not, the opening statement in such meetings is that “The model did not work.” Really? What is interesting is that in more than nine out of 10 such cases, the models are the only elements that seem to have been done properly. Everything else—from pre-modeling steps, such as data hygiene, conversion, categorization, and summarization; to post-modeling steps, such as score application and validation—often turns out to be the root cause of all the troubles, resulting in the pain points listed here.

When I speak at marketing conferences, talking about the subject of this “model-ready” environment, I always ask if there are statisticians and analysts in the audience. Then I ask what percentage of their time goes into non-statistical activities, such as data preparation and remedying data errors. The absolute majority of them say they spend 80 percent to 90 percent of their time fixing the data, devoting the rest to the model development work. You don’t need me to tell you that something is terribly wrong with this picture. And I am pretty sure that none of those analysts got their PhDs and master’s degrees in statistics to spend most of their waking hours fixing the data. Yeah, I know from experience that, in this data business, the last guy who happens to touch the dataset always ends up being responsible for all errors made to the file thus far, but still. No wonder it is often quoted that one of the key elements of being a successful data scientist is programming skill.

When you provide datasets filled with unstructured, incomplete and/or missing data, diligent analysts will devote their time to remedying the situation and making the best out of what they have received. I myself often tell newcomers that analytics is really about making the best of what you’ve got. The trouble is that such data preparation work calls for a different set of skills that have nothing to do with statistics or analytics, and most analysts are not that great at programming, nor are they trained for it.

Even if they were able to create a set of sensible variables to play with, here comes the bigger trouble: what they have just fixed is just a “sample” of the database, when the models must be applied to the whole thing later. Modern databases often contain hundreds of millions of records, and no analyst in his or her right mind uses the whole base to develop any models. Even if the sample is as large as a few million records (overkill, for sure), that would hardly be the entire picture. The real trouble is that no model is useful unless the resultant model scores are available on every record in the database. It is one thing to fix a sample of a few hundred thousand records. Now try to apply that model algorithm to 200 million entries. You see all those interesting variables that analysts created and fixed in the sample universe? All that should be redone in the real database with hundreds of millions of lines.

Sure, it is not impossible to include all the instructions of variable conversion, reformat, edit and summarization in the model-scoring program. But such a practice is the No. 1 cause of errors, inconsistencies and serious delays. Yes, it is not impossible to steer a car with your knees while texting with your hands, but I wouldn’t call that the best practice.
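One way to reduce those errors and inconsistencies is to keep every derived variable in a single shared function, so the modeling sample and the full-database scoring run use identical logic rather than re-implemented instructions inside the scorer. A hypothetical sketch, with made-up variable names and weights:

```python
# Hypothetical sketch: one shared transformation used both for sample
# preparation and for scoring the full base. Names/weights are made up.
WEIGHTS = {"sqrt_income": 0.01, "has_email": 0.5}

def transform(record):
    """The single source of truth for derived variables."""
    income = record.get("income")
    return {
        "sqrt_income": 0.0 if not income else round(income ** 0.5, 2),
        "has_email": 1 if record.get("email") else 0,
    }

def score(record):
    v = transform(record)  # identical logic everywhere it is applied
    return sum(WEIGHTS[k] * v[k] for k in WEIGHTS)

sample = {"income": 40000, "email": "a@b.com"}
print(score(sample))  # → 2.5
```

Whether the input is a 200,000-record sample or 200 million entries, the conversion rules live in one place instead of being copied into the scoring program by hand.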

That is why marketing databases must be model-ready, where sampling and scoring become a routine with minimal data transformation. When I design a marketing database, I always put the analysts on top of the user list. Sure, non-statistical types will still be able to run queries and reports out of it, but those activities should be secondary as they are lower-level functions (i.e., simpler and easier) compared to being “model-ready.”

Here is a list of prerequisites for being model-ready (which will be explained in detail in my future columns):

  • All tables linked or merged properly and consistently
  • Data summarized to consistent levels such as individuals, households, email entries or products (depending on the ranking priority by the users)
  • All numeric fields standardized, where missing data and zero values are separated
  • All categorical data edited and categorized according to preset business rules
  • Missing data imputed by standardized set of rules
  • All external data variables appended properly

Basically, the whole database should be as pristine as the sample datasets that analysts play with. That way, sampling should take only a few seconds, and applying the resultant model algorithms to the whole base would simply be the computer’s job, not some nerve-racking, nail-biting, all-night baby-sitting suspense for every update cycle.
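As a small illustration of one of those rules (keeping missing data separate from true zero values), here is a hypothetical sketch:

```python
# Hypothetical sketch of one "model-ready" rule: standardize a numeric
# field while keeping missing data distinct from true zeros, instead of
# collapsing both into 0 and losing the difference forever.
def standardize(value):
    """Return (value, missing_flag); missing stays distinguishable."""
    if value is None or value == "":
        return 0.0, 1   # imputed placeholder, flagged as missing
    return float(value), 0

raw = [120.5, 0, None, "", 88]
clean = [standardize(v) for v in raw]
print(clean)
# → [(120.5, 0), (0.0, 0), (0.0, 1), (0.0, 1), (88.0, 0)]
```

A zero balance and an unknown balance mean very different things to a model, and the flag preserves that distinction through every update cycle.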

In my co-op database days, we designed and implemented the core database with this model-ready philosophy, where all samples were presented to the analysts on silver platters, with absolutely no need for fixing the data any further. Analysts devoted their time to pondering target definitions and statistical methodologies. This way, each analyst was able to build about eight to 10 “custom” models—not cookie-cutter models—per “day,” and all models were applied to the entire database with more than 200 million individuals at the end of each day (I hear that they are even more efficient these days). Now, for the folks who are accustomed to a 30-day model implementation cycle (I’ve seen as long as six-month cycles), this may sound like total science fiction. And I am not even saying that all companies need to build and implement that many models every day, as that would hardly be a core business for them, anyway.

In any case, this type of practice has been in use way before the words “Big Data” were even uttered by anyone, and I would say that such discipline is required even more desperately now. Everyone is screaming for immediate answers for their questions, and the questions should be answered in forms of model scores, which are the most effective and concise summations of all available data. This so-called “in-database” modeling and scoring practice starts with “model-ready” database structure. In the upcoming issues, I will share the detailed ways to get there.

So, here is the answer for the chicken-or-the-egg question. It is the business posing the questions first and foremost, then the analytics providing answers to those questions, where databases are optimized to support such analytical activities including predictive modeling. For the chicken example, with the ultimate goal of all living creatures being procreation of their species, I’d say eggs are just a means to that end. Therefore, for a business-minded chicken, yeah, definitely the chicken before the egg. Not that I’ve seen too many logical chickens.

‘Big Data’ Is Like Mining Gold for a Watch – Gold Can’t Tell Time

It is often quoted that 2.5 quintillion bytes of data are collected each day. That surely sounds like a big number, considering 1 quintillion bytes (or exabytes, if that sounds fancier) are equal to 1 billion gigabytes. Looking back only about 20 years, I remember my beloved 386-based desktop computer had a hard drive that could barely hold 300 megabytes, which was considered to be quite large in those ancient days. Now, my phone can hold about 65 gigabytes; which, by the way, means nothing to me. I just know that figure equates to about 6,000 songs, plus all my personal information, with room to spare for hundreds of photos and videos. So how do I fathom the size of 2.5 quintillion bytes? I don’t. I give up. I’d rather count the number of stars in the universe. And I have been in the database business for more than 25 years.

But I don’t feel bad about that. If a pile of data requires a computer to process it, then it is already too “big” for our brains. In the age of “Big Data,” size matters, but emphasizing the size element is missing the point. People want to understand the data in their own terms and want to use them in decision-making processes. Throwing the raw data around to people without math or computing skills is like galleries handing out paint and brushes to people who want paintings on the wall. Worse yet, continuing to point out how “big” the Big Data world is to them is like quoting the number of rice grains on this planet in front of a hungry man, when he doesn’t even care how many grains are in one bowl. He just wants to eat a bowl of “cooked” rice, and right this moment.

To be a successful data player, one must be the master of the following three steps:

  • Collection;
  • Refinement; and
  • Delivery.

Collection and storage are obviously important in the age of Big Data. However, that in itself shouldn’t be the goal. I hear lots of bragging about how much data can be collected and stored, and how fast the data can be retrieved.

Great, you can retrieve any transaction detail going back 20 years in less than 0.5 seconds. Congratulations. But can you now tell me who is more likely to be a loyal customer for the next five years, with annual spending potential of more than $250? Or who is more likely to quit using the service in the next 60 days? Who is more likely to be on a cruise ship leaving the dock on the East Coast heading for Europe between Thanksgiving and Christmas, with onboard spending potential greater than $300? Who is more likely to respond to emails with free shipping offers? Where should I open my next store selling fancy children’s products? What do my customers look like, and where do they go between 6 and 9 p.m.?

Answers to these types of questions do not come from the raw data, but they should be derived from the data through the data refinement process. And that is the hard part. Asking the right questions, expressing the goals in a mathematical format, throwing out data that don’t fit the question, merging data from a diverse array of sources, summarizing the data into meaningful levels, filling in the blanks (there will be plenty—even these days), and running statistical models to come up with scores that look like an answer to the question are all parts of the data refinement process. It is a lot like manufacturing gold watches, where mining gold is just an important first step. But a piece of gold won’t tell you what time it is.

The final step is to deliver that answer—which, by now, should be in a user-friendly format—to the user at the right time in the right format. Often, lots of data-related products only emphasize this part, as it is the most intimate one to the users. After all, it provides an illusion that the user is in total control, being able to touch the data so nicely displayed on the screen. Such tool sets may produce impressive-looking reports and dazzling graphics. But, lest we forget, they are only representations of the data refinement processes. In addition, no tool set will ever do the thinking part for anyone. I’ve seen so many missed opportunities where decision-makers invested obscene amounts of money in fancy tool sets, believing they will conduct all the logical and data refinement work for them, automatically. That is like believing that purchasing a top-of-the-line Fender Stratocaster will guarantee that you will play like Eric Clapton in the near future. Yes, the tool sets are important as delivery mechanisms of refined data, but none of them replace the refinement part. Doing so would be like skipping guitar practice after spending $3,000 on a guitar.

Big Data business should be about providing answers to questions. It should be about humans who are the subjects of data collection and, in turn, the ultimate beneficiaries of information. It’s not about IT or tool sets that come and go like hit songs. But it should be about inserting advanced use of data into everyday decision-making processes by all kinds of people, not just the ones with statistics degrees. The goal of data players must be to make it simple—not bigger and more complex.

I boldly predict that missing these points will make “Big Data” a dirty word in the next three years. Emphasizing the size element alone will lead to unbalanced investments, which will then lead to disappointing results with not much to show for them in this cruel age of ROI. That is a sure way to kill the buzz. Not that I am that fond of the expression “Big Data”; though, I admit, one benefit has been that I don’t have to explain what I do for living for 10 minutes any more. Nonetheless, all the Big Data folks may need an exit plan if we are indeed heading for the days when it will be yet another disappointing buzzword. So let’s do this one right, and start thinking about refining the data first and foremost.

Collection and storage are just so last year.

Will There Be a ‘Snowden Effect’ on Marketing Data?

I didn’t even want to write this headline or blog post, given the fault-filled linkages some people make between marketing and something completely different from marketing.

But it never seems to fail: Whenever some big news event captures the media’s attention, politicians’ attention surely follows. And when it has to do with consumer privacy, the results for the private sector—and use of marketing information in particular—are rarely favorable. This is true even when the responsible use of marketing data has NOTHING to do with the scenarios presented in the news.

U.S. legislative history is strewn with such evidence, linking (erroneously) marketing with some sensational occurrence other than marketing. Here are just three of them:

  • An actress is murdered in Los Angeles (1989). It turns out the murderer hired a private investigator to get her address from the state motor vehicle department, and then stalked and killed her. A bevy of state and federal anti-stalking laws are passed—but Congress passes an additional one, the Driver’s Privacy Protection Act (1994). Would you believe, state motor vehicle registration and license data is curtailed for marketing purposes (data that had been worth millions to the states, never mind losing the beneficial impact to automotive and insurance marketers and consumers), even though such data had nothing to do with the crime?
  • A child is kidnapped and killed, again in California (1993). A grieving father goes on a publicity rampage against the presence of children in marketing databases—even though the horrible crime had nothing to do with marketing, and even with state law enforcement officials testifying in public hearings following the crime that perpetrators of crimes against children most often stalk their victims physically (from an era prior to social media). Nonetheless, California and national media go after compilers of marketing data related to children. The stage is set later that decade for new privacy restrictions for children’s marketing data online.
  • Judge Robert Bork is nominated by President Reagan for the U.S. Supreme Court (1987). An enterprising reporter manages to publish a list of video titles rented by the nominee (all of them benign, by the way). A concerned Congress—no doubt thinking of its members’ own video rental history—passes the Video Privacy Protection Act (1988), shutting down marketing access to video titles from customer rentals/purchases.

And this summer, we have the National Security Agency revelations from Edward Snowden regarding public surveillance of U.S. citizens in the name of anti-terrorism. Now, we can only guess at what potential debilitating effects may be ahead for marketers, but you can bet some politicians or regulators are beating the drums for a response.

Privacy law in America should be about protecting individual liberty from abuse of information by the public sector—and leave the private sector alone, except in cases where there are demonstrable or probable harms from data misuse or errors. Such is the case with personal financial, credit and health data, for example, where the U.S. government wisely has taken a sectoral, pragmatic approach.

But Snowden’s government surveillance revelations could very well have a “chilling” effect on more broad marketing data collection and use, too. Politicians, in the name of protecting consumer privacy, may very well rush to curb data-driven marketing activity, rather than tackling the much-harder and real culprit, that is, spying on innocent Americans (and government acquiescence of such activity).

Concurrent to the NSA revelations, the Federal Trade Commission increasingly is vocal on “data brokers” and marketing activity—and trying to link data collection for marketing purposes to non-marketing purposes. Yet, it is dishonest, disingenuous and spurious to do so—and doing so fans fear and hypotheticals, instead of rational thought. There is no relationship between responsible data collection for marketing purposes—which only delivers benefits to the economy, and tax revenue, too—and data used for insurance and premiums, hiring purposes, and certainly the federal government’s activities to monitor internet and telecommunications in order to profile or detect would-be terrorists.

Marketers—for 40 years—have operated under a successful self-regulation code of notice, consumer choice, security and enforcement—and central to this is the use of marketing data for marketing purposes only. That’s as true online as offline. Where would we be without consumer trust in this process?

It may be very appropriate here to legislate what government may access—and how they may access—when it comes to personally identifiable information for surveillance or anti-terrorist purposes. But don’t even utter the word “marketing” in the same sentence. Let marketers continue with self-regulation: We offer consumers notice and opt-out, we focus strictly on marketing purposes only—and everyone benefits in the process.

Privacy in the Age of Big Data

For some people, the line has already been crossed. When Edward Snowden revealed information about the government’s data collection policies, he presented it as a scandal. But, in many ways, what the NSA does differs mainly in scope from what many private marketers do. A German politician recently went to court to force T-Mobile to release the full amount of metadata that it collects from his cellphone behavior. The results highlighted just how much a company can know from this data—not just about an individual’s behavior and interests, but also about his or her friends, and who among them is most influential. Even for marketers who strongly believe in the social utility that this enables, it highlights just how core an issue privacy has become.

So how can marketers get the most from data without alarming consumers? Transparency and value. For some consumers, there’s really no good use of personal data, so opt-outs have to be clear and easy to use. The best way to collect and use data is if the value to the consumer is so clear that he or she will opt in to a program.

One company that has framed its data collection as a service that’s worth joining is Waze, which Google recently bought for over a billion dollars. Google beat out rivals Facebook and Apple because high-quality maps are one of the most important infrastructure tools for the big mobile players. Well before Google bought the company, CEO Noam Bardin said, “Waze relies on the wisdom of crowds: We haven’t spent billions of dollars a year, we’ve cooperated with millions of users. Google is the No. 1 player. But, a few years out, there’s no excuse that we wouldn’t pass them.”

Drivers sign up for the app to find out about traffic conditions ahead: inherently useful information. Once they’re logged in, they automatically send information about their speed and location to Waze. Waze invites active participation, too, encouraging users to fix map errors and report accidents, weather disruptions, police and gas stations. Users get points for using the service and more points for actively reporting issues. With 50 million users, this decentralized data entry system is incredibly efficient at producing real-time road conditions and maps. Even Google’s own map service can’t match the refresh speed of Waze maps.

Points for check-ins don’t fully explain why people are so invested in Waze. Dynamic graphics help, with charming icons and those de rigueur 3D zooming maps. Even more important than a great interface, however, is that Waze serves a real-time need while making users feel part of a community working together to solve problems. Waze is part of a broader movement to crowdsource solutions that rely on consumers or investors who believe in the mission of a company, not just its utility. People contribute to Waze because they want to help fellow travelers, as well as speed their own journeys. It’s shared self-interest.

Waze has a relatively easy task of proving the worth of a data exchange. Other companies need to work harder to show that they use data to enhance user experiences—but the extra effort is not optional.

As marketers become better at using data, they will need to prove the value of the data they use and be transparent about how they’re using it. If they don’t, marketers will have their own Snowdens to worry about.

Trade Associations Bond on Industry Self-Regulation

With the threat of an online privacy bill looming, some of the nation’s largest media and marketing trade associations released self-regulatory principles on July 2 to protect consumer privacy in ad-supported interactive media.

The seven principles will require advertisers and Web sites to clearly inform consumers about data collection practices and enable them to exercise control over that information. The collaboration includes the American Association of Advertising Agencies, the Association of National Advertisers, the Direct Marketing Association and the Interactive Advertising Bureau.

The Council of Better Business Bureaus is also part of the effort and has agreed, along with the DMA, to implement accountability programs to promote widespread adoption of the principles.

This is a big deal: Taken collectively, the participating associations represent more than 5,000 U.S. companies, and the task force represents the first time all advertising and marketing industry associations have come together to develop self-regulatory principles.

And it should be a big deal, quite frankly. Concerns about government regulation of the use and collection of data on the Internet have been swelling in the industry over the past few years as the medium has become all-encompassing. What’s more, House Communications Subcommittee Chairman Rick Boucher (D-Va.) is preparing an online privacy bill right now that may contain an “opt-in” provision that would prevent companies from targeting consumers without their explicit permission.

The seven principles

The self-regulatory program is expected to be implemented at the beginning of 2010. The process, however, started in January, when the task force announced it was working on developing these principles in direct response to calls by the Federal Trade Commission.

The principles are designed to address consumer concerns about the use of personal information and interest-based advertising, while preserving the robust advertising that supports free online content and the ability to deliver relevant advertising to consumers.

The seven principles include the following:

  • The Education Principle, which calls for organizations to educate individuals and businesses about online behavioral advertising. Along these lines, a major campaign — expected to exceed 500 million online advertising impressions — will be launched over the next 18 months to educate consumers about online behavioral advertising, the benefits of these practices and the means to exercise choice.
  • The Transparency Principle, which calls for clearer and easily accessible disclosures to consumers about data collection and use practices associated with online behavioral advertising.
  • The Consumer Control Principle, which requires Internet access service providers and providers of desktop applications software such as Web browser toolbars to obtain the consent of users before engaging in online behavioral advertising, and take steps to de-identify the data used for such purposes.
  • The Data Security Principle, which calls for organizations to provide reasonable security for, and limited retention of, data collected and used for online behavioral advertising purposes.
  • The Material Changes Principle, which calls on organizations to obtain consent before applying any material change in their online behavioral advertising data collection and use policies and practices to data collected prior to such change.
  • The Sensitive Data Principle, which recognizes that data collected from children and used for online behavioral advertising merits heightened protection, and requires parental consent for behavioral advertising to consumers known to be under 13 on child-directed Web sites. This principle also provides heightened protections for certain health and financial data when attributable to a specific individual.
  • The Accountability Principle, which calls for the development of programs to further advance these principles, including programs to monitor and report instances of uncorrected noncompliance with these principles to appropriate government agencies.

Sounds like a plan to me. What do you think? Do you think this initiative will stave off government regulation for good, or should these trade groups be doing more? Leave a comment here, and let us know how you feel.