Sex and the Schoolboy: Predictive Modeling – Who’s Doing It? Who’s Doing it Right?

Forgive the borrowed interest, but predictive modeling is to marketers as sex is to schoolboys. They’re all talking about it, but few are doing it. And among those who are, fewer are doing it right. In customer relationship marketing (CRM), predictive modeling uses data to predict the likelihood of a customer taking a specific action. It’s a three-step process:
1. Examine the characteristics of the customers who took a desired action

2. Compare them against the characteristics of customers who didn’t take that action

3. Determine which characteristics are most predictive of the customer taking the action and the value or degree to which each variable is predictive
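The three steps above can be sketched in a few lines of Python. The customer records and characteristics here are purely illustrative:

```python
# Hypothetical mini-dataset: each customer has characteristics and a
# "responded" flag. All names and numbers are invented for illustration.
customers = [
    {"age_under_40": True,  "has_children": True,  "responded": True},
    {"age_under_40": True,  "has_children": False, "responded": True},
    {"age_under_40": False, "has_children": True,  "responded": False},
    {"age_under_40": False, "has_children": False, "responded": False},
    {"age_under_40": True,  "has_children": True,  "responded": True},
    {"age_under_40": False, "has_children": True,  "responded": False},
]

def rate(group, trait):
    """Share of a group that has the given characteristic."""
    return sum(c[trait] for c in group) / len(group)

# Steps 1 and 2: split customers by outcome and compare characteristics.
responders = [c for c in customers if c["responded"]]
others = [c for c in customers if not c["responded"]]

# Step 3: a characteristic whose prevalence differs sharply between the
# two groups is a candidate predictor; one with no gap is not.
for trait in ("age_under_40", "has_children"):
    gap = rate(responders, trait) - rate(others, trait)
    print(f"{trait}: responders {rate(responders, trait):.0%}, "
          f"non-responders {rate(others, trait):.0%}, gap {gap:+.0%}")
```

In this toy data, `age_under_40` separates the two groups completely while `has_children` shows no gap at all, which is exactly the kind of contrast step 3 is looking for.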

Predictive modeling is useful in allocating CRM resources efficiently. If a model predicts that certain customers are less likely to respond to a specific offer, then fewer resources can be allocated to those customers, freeing more resources for those who are more likely to respond.
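As a quick illustration of that allocation logic (the customer names and scores are made up; in practice the scores would come from a fitted model):

```python
# Illustrative only: in practice these scores come from a predictive model.
scores = {"cust_a": 0.72, "cust_b": 0.08, "cust_c": 0.41, "cust_d": 0.03}
budget_contacts = 2  # we can only afford to mail two customers

# Rank customers by predicted likelihood and spend the budget on the top.
ranked = sorted(scores, key=scores.get, reverse=True)
mail_to = ranked[:budget_contacts]
print(mail_to)  # → ['cust_a', 'cust_c']
```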

Data Inputs
A predictive model will only be as good as the input data that’s used in the modeling process. You need the data that define the dependent variable; that is, the outcome the model is trying to predict (such as response to a particular offer). You’ll also need the data that define the independent variables, or the characteristics that will be predictive of the desired outcome (such as age, income, purchase history, etc.). Attitudinal and behavioral data may also be predictive, such as an expressed interest in weight loss, fitness, healthy eating, etc.

The more variables that are fed into the model at the beginning, the more likely the modeling process will identify relevant predictors. Modeling is an iterative process, and those variables that are not at all predictive will fall out in the early iterations, leaving those that are most predictive for more precise analysis in later iterations. The danger in not having enough independent variables to model is that the resultant model will only explain a portion of the desired outcome.

For example, a predictive model created to determine the factors affecting physician prescribing of a particular brand was inconclusive, because there weren’t enough independent variables to explain the outcome fully. In a standard regression analysis, the number of Rxs written in a specific timeframe was set as the dependent variable. Only three independent variables were available: sales calls, physician samples and direct mail promotions to physicians. And while each of the three variables turned out to have a positive effect on prescriptions written, the R-squared value of the regression equation was only 0.44, meaning that these variables explained just 44 percent of the variance in Rxs. The other 56 percent of the variance came from factors that were not included in the model input.
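Here is a rough simulation of that situation. The coefficients and noise term are invented, with the noise standing in for all the factors the model never saw; it shows how a regression on too few variables leaves much of the variance unexplained:

```python
import numpy as np

# Synthetic stand-in for the physician example: three promotion variables
# plus unobserved noise representing everything not in the model.
# All numbers are invented for demonstration.
rng = np.random.default_rng(0)
n = 200
calls   = rng.poisson(5, n)            # sales calls per physician
samples = rng.poisson(10, n)           # product samples left
mail    = rng.poisson(3, n)            # direct mail pieces
noise   = rng.normal(0, 6, n)          # the "missing variables"
rx = 2.0 * calls + 0.5 * samples + 1.0 * mail + noise

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), calls, samples, mail])
beta, *_ = np.linalg.lstsq(X, rx, rcond=None)
pred = X @ beta

# R-squared: share of variance in Rxs explained by the three variables.
r2 = 1 - ((rx - pred) ** 2).sum() / ((rx - rx.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")  # well below 1: promotions explain only part
```

Even though all three coefficients come out positive, the R-squared stays well under 1.0, because the noise term (the omitted factors) drives much of the outcome.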

Sample Size
Larger samples will produce more robust models than smaller ones. Some modelers recommend a minimum data set of 10,000 records, 500 of those with the desired outcome. Others report acceptable results with as few as 100 records with the desired outcome. But in general, size matters.

Regardless, it is important to hold out a validation sample from the modeling process. That allows the model to be applied to the hold-out sample to validate its ability to predict the desired outcome.
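A minimal sketch of holding out a validation sample, assuming a simple 80/20 split:

```python
import random

# Hypothetical: reserve 20% of records as a hold-out before any modeling.
random.seed(42)
records = list(range(10_000))      # stand-ins for customer records
random.shuffle(records)

cut = int(len(records) * 0.8)
modeling_sample = records[:cut]    # used to build the model
holdout_sample  = records[cut:]    # untouched until validation time

print(len(modeling_sample), len(holdout_sample))  # 8000 2000
```

The key discipline is that the hold-out records never influence model building; they exist only to check whether the finished model predicts the outcome on data it has not seen.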

Important First Steps

1. Define Your Outcome. What do you want the model to do for your business? Predict likelihood to opt-in? Predict likelihood to respond to a particular offer? Your objective will drive the data set that you need to define the dependent variable. For example, if you’re looking to predict likelihood to respond to a particular offer, you’ll need to have prospects who responded and prospects who didn’t in order to discriminate between them.

2. Gather the Data to Model. This requires tapping into several data sources, including your CRM database, as well as external sources where you can get data appended (see below).

3. Set the Timeframe. Determine the time period for the data you will analyze. For example, if you’re looking to model likelihood to respond, the start and end points for the data should be far enough in the past that you have a sufficient sample of responders and non-responders.

4. Examine Variables Individually. Some variables will not be correlated with the outcome, and these can be eliminated prior to building the model.
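One simple way to screen variables individually is a correlation check against the outcome. The variables, values and cutoff below are invented for illustration:

```python
# Drop candidate variables whose correlation with the outcome is
# negligible. Data and the 0.3 threshold are illustrative only.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

outcome = [1, 0, 1, 1, 0, 0, 1, 0]          # 1 = took the action
candidates = {
    "income_band": [4, 1, 3, 4, 2, 1, 3, 2],  # tracks the outcome
    "zodiac_sign": [3, 3, 1, 4, 4, 1, 2, 2],  # pure noise
}

keep = {name for name, vals in candidates.items()
        if abs(pearson(vals, outcome)) >= 0.3}
print(keep)  # only income_band survives the cutoff
```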

Data Sources
Independent variable data may include:

  • In-house database fields
  • Data overlays (demographics, HH income, lifestyle interests, presence of children,
    marital status, etc.) from a data provider such as Experian, Epsilon or Acxiom.

Don’t Try This at Home
You can run regression analysis in Microsoft Excel, but if you’re going to invest serious promotion budget in the outcome, leave the number crunching to the professionals. Expert modelers know how to analyze modeling results and make adjustments where necessary.

Chicken or the Egg? Data or Analytics?

I just saw an online discussion about the role of a chief data officer, whether it should be more about data or analytics. My initial response to that question is “neither.” A chief data officer must represent the business first. And I had the same answer when such a title didn’t even exist and CTOs or other types of executives covered that role in data-rich environments. As soon as an executive with a seemingly technical title starts representing the technology, that business is doomed. (Unless, of course, the business itself is about having fun with the technology. How nice!)

Nonetheless, if I really have to pick just one out of the two choices, I would definitely pick the analytics over data, as that is the key to providing answers to business questions. Data and databases must be supporting that critical role of analytics, not the other way around. Unfortunately, many organizations are completely backward about it, where analysts are confined within the limitations of database structures and affiliated technologies, and the business owners and decision-makers are dictated to by the analysts and analytical tool sets. It should be the business first, then the analytics. And all databases—especially marketing databases—should be optimized for analytical activities.

In my previous columns, I talked about the importance of marketing databases and statistical modeling in the age of Big Data: not all repositories of information are marketing databases, and statistical modeling is the best way to harness marketing answers from mounds of accumulated data. That begs the next question: Is your marketing database model-ready?

When I talk about the benefits of statistical modeling in data-rich environments (refer to my previous column titled “Why Model?”), I often encounter folks who list reasons why they do not employ modeling as part of their normal marketing activities. If I may share a few examples here:

  • Target universe is too small: Depending on the industry, the prospect universe and customer base are sometimes very small in size, so one may decide to engage everyone in the target group. But do you know what to offer to each of your prospects? Customized offers should be based on some serious analytics.
  • Predictive data not available: This may have been true years back, but not in this day and age. Either there is a major failure in data collection, or collected data are too unstructured to yield any meaningful answers. Aren’t we living in the age of Big Data? Surely we should all dig deeper.
  • 1-to-1 marketing channels not in plan: As I repeatedly said in my previous columns, “every” channel is, or soon will be, a 1-to-1 channel. Every audience is secretly screaming, “Entertain us!” And customized customer engagement efforts should be based on modeling, segmentation and profiling.
  • Budget doesn’t allow modeling: If the budget is too tight, a marketer may opt for a software solution instead of hiring a team of statisticians. Remember that cookie-cutter models out of software packages are still better than someone’s intuitive selection rules (i.e., someone’s “gut” feeling).
  • The whole modeling process is just too painful: Hmm, I hear you. The whole process could be long and difficult. Now, why do you think it is so painful?

Like a good doctor, a consultant should be able to identify root causes based on pain points. So let’s hear some complaints:

  • It is not easy to find “best” customers for targeting
  • Modelers are fixing data all the time
  • Models end up relying on a few popular variables, anyway
  • Analysts are asking for more data all the time
  • It takes too long to develop and implement models
  • There are serious inconsistencies when models are applied to the database
  • Results are disappointing
  • Etc., etc…

I often get called in when model-based marketing efforts yield disappointing results. More often than not, the opening statement in such meetings is, “The model did not work.” Really? What is interesting is that in more than nine out of 10 such cases, the model is the only element that seems to have been done properly. Everything else, from pre-modeling steps (data hygiene, conversion, categorization and summarization) to post-modeling steps (score application and validation), often turns out to be the root cause of all the trouble, resulting in the pain points listed here.

When I speak at marketing conferences about this “model-ready” environment, I always ask whether there are statisticians and analysts in the audience. Then I ask what percentage of their time goes into non-statistical activities, such as data preparation and remedying data errors. The absolute majority say they spend 80 to 90 percent of their time fixing the data, devoting the rest to actual model development. You don’t need me to tell you that something is terribly wrong with this picture. And I am pretty sure that none of those analysts earned their PhDs and master’s degrees in statistics to spend most of their waking hours fixing data. Yeah, I know from experience that, in this data business, the last guy who happens to touch the dataset always ends up being responsible for all the errors made to the file thus far, but still. No wonder it is often said that one of the key elements of being a successful data scientist is programming skill.

When you provide datasets filled with unstructured, incomplete and/or missing data, diligent analysts will devote their time to remedying the situation and making the best out of what they have received. I myself often tell newcomers that analytics is really about making the best of what you’ve got. The trouble is that such data preparation work calls for a different set of skills that have nothing to do with statistics or analytics, and most analysts are not that great at programming, nor are they trained for it.

Even if they were able to create a set of sensible variables to play with, here comes the bigger trouble: what they have fixed is just a “sample” of the database, while the model must later be applied to the whole thing. Modern databases often contain hundreds of millions of records, and no analyst in his or her right mind uses the whole base to develop a model. Even a sample as large as a few million records (overkill, for sure) would hardly be the entire picture. The real trouble is that no model is useful unless the resulting model scores are available on every record in the database. It is one thing to fix a sample of a few hundred thousand records; now try applying that model algorithm to 200 million entries. All those interesting variables that analysts created and fixed in the sample universe? Every one of them must be redone in the real database with hundreds of millions of rows.

Sure, it is not impossible to include all the instructions of variable conversion, reformat, edit and summarization in the model-scoring program. But such a practice is the No. 1 cause of errors, inconsistencies and serious delays. Yes, it is not impossible to steer a car with your knees while texting with your hands, but I wouldn’t call that the best practice.
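One way to avoid re-implementing variable conversions inside the scoring program is to keep a single transformation routine that both sampling and scoring share. A minimal sketch, with invented field names and coefficients:

```python
# One shared transformation used both for model-development samples and
# for full-base scoring, so the scoring step never re-implements the
# variable edits. Field names and coefficients are invented.

def transform(rec):
    """The single source of truth for variable creation."""
    return {
        "tenure_yrs": rec["tenure_days"] / 365.0,
        "is_multi_buyer": int(rec["orders"] > 1),
    }

def score(rec, coefs):
    """Apply a linear scoring algorithm to one transformed record."""
    feats = transform(rec)
    return sum(coefs[k] * v for k, v in feats.items())

COEFS = {"tenure_yrs": 0.4, "is_multi_buyer": 1.1}

# Stand-in for a large database, streamed record by record through the
# exact same code path that produced the modeling sample.
database = ({"tenure_days": i % 4000, "orders": i % 5}
            for i in range(100_000))

top = max(score(rec, COEFS) for rec in database)
print(round(top, 2))  # highest score found in the full base
```

Because `transform` is defined once, the sample the analyst modeled on and the full base being scored are guaranteed to see identical variable definitions, which is the consistency the paragraph above is asking for.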

That is why marketing databases must be model-ready, where sampling and scoring become a routine with minimal data transformation. When I design a marketing database, I always put the analysts on top of the user list. Sure, non-statistical types will still be able to run queries and reports out of it, but those activities should be secondary as they are lower-level functions (i.e., simpler and easier) compared to being “model-ready.”

Here is a list of prerequisites for being model-ready (which I will explain in detail in future columns):

  • All tables linked or merged properly and consistently
  • Data summarized to consistent levels such as individuals, households, email entries or products (depending on the ranking priority by the users)
  • All numeric fields standardized, where missing data and zero values are separated
  • All categorical data edited and categorized according to preset business rules
  • Missing data imputed by a standardized set of rules
  • All external data variables appended properly
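A toy sketch of a few of these rules, with made-up field names and business rules; note in particular that zero and missing are kept distinct:

```python
# Sketch of a few "model-ready" rules, with invented field names.
# Missing numeric values are flagged separately instead of being
# silently treated as zero; categories follow a preset lookup table.
RAW = [
    {"hh_income": "85000", "state": "ny"},
    {"hh_income": "",      "state": "CA"},
    {"hh_income": "0",     "state": "tx"},
]

# Preset business rule for categorizing a raw state code.
REGION_RULES = {"NY": "northeast", "CA": "west", "TX": "south"}

def make_model_ready(rec):
    income_raw = rec["hh_income"].strip()
    return {
        # zero and missing are separated, per the rules above
        "hh_income": float(income_raw) if income_raw else None,
        "income_missing": int(income_raw == ""),
        # categorical data edited according to the preset rules
        "region": REGION_RULES.get(rec["state"].upper(), "unknown"),
    }

clean = [make_model_ready(r) for r in RAW]
print(clean[1])  # income None, flagged missing, region 'west'
```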

Basically, the whole database should be as pristine as the sample datasets that analysts play with. That way, sampling should take only a few seconds, and applying the resulting model algorithms to the whole base would simply be the computer’s job, not some nerve-wracking, nail-biting, all-night babysitting suspense for every update cycle.

In my co-op database days, we designed and implemented the core database with this model-ready philosophy, where all samples were presented to the analysts on silver platters, with absolutely no need to fix the data any further. Analysts devoted their time to pondering target definitions and statistical methodologies. This way, each analyst was able to build about eight to 10 “custom” models (not cookie-cutter models) per day, and all models were applied to the entire database of more than 200 million individuals at the end of each day (I hear they are even more efficient these days). Now, for folks accustomed to a 30-day model implementation cycle (I’ve seen cycles as long as six months), this may sound like total science fiction. And I am not even saying that all companies need to build and implement that many models every day; that would hardly be a core business for them, anyway.

In any case, this type of practice was in use well before the words “Big Data” were uttered by anyone, and I would say such discipline is needed even more desperately now. Everyone is screaming for immediate answers to their questions, and those questions should be answered in the form of model scores, which are the most effective and concise summations of all available data. This so-called “in-database” modeling and scoring practice starts with a “model-ready” database structure. In upcoming issues, I will share detailed ways to get there.

So, here is the answer for the chicken-or-the-egg question. It is the business posing the questions first and foremost, then the analytics providing answers to those questions, where databases are optimized to support such analytical activities including predictive modeling. For the chicken example, with the ultimate goal of all living creatures being procreation of their species, I’d say eggs are just a means to that end. Therefore, for a business-minded chicken, yeah, definitely the chicken before the egg. Not that I’ve seen too many logical chickens.