How Well Do You Know Your Customer Data?

Some marketers seem to keep their distance from customer data. When I ask what kind of customer information they are working with, I hear things like, “Oh, Mary is in charge of our data. I leave it to her.” This is unfortunate. I realize that the marketing profession may attract people who prefer to focus on “softer” functions like research, competitive strategy, and value propositions. But these days, it’s a real disadvantage, professionally and personally, to shun data. So, let me offer some painless steps to up your comfort level.

In this context, I am thinking about customer data at its most basic level: the customer or prospect record, which is usually found in a marketing database or a CRM system. This record contains the contact information and the descriptive and behavioral data elements we know about the customer. For B2B marketers, it will describe the account as well as the individual contacts. This subject arose in my mind recently as I read Steven Hayes’s interesting article called “Do Marketers Really Want to be Data Scientists?” in Oracle’s Modern Marketing blog. Hayes correctly concluded that marketers don’t need to do the science — build the models, run the experiments — but they do need to be familiar with the variables that drive customer behavior, in order to apply the science to marketing decision-making.

So, it behooves marketers to be deeply familiar with the customer record, which is where these variables are housed. This information serves as what I like to call “the recorded memory of the customer relationship,” and it reveals all sorts of insights into the nature of the customers and competitors, what they value, and how to communicate with them effectively.
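To make that record a little more tangible, here is a minimal sketch of what a B2B customer record might look like as a data structure. This is only an illustration; the field names (industry, total_orders_12m, and so on) are hypothetical, not a prescription for any particular CRM or database.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Contact:
        # Individual-level contact information (hypothetical fields)
        name: str
        email: str
        title: Optional[str] = None

    @dataclass
    class CustomerRecord:
        # Account-level record: contact, descriptive and behavioral elements (all hypothetical)
        account_id: str
        company_name: str
        industry: Optional[str] = None            # descriptive element
        annual_revenue: Optional[float] = None    # descriptive element
        last_purchase_date: Optional[str] = None  # behavioral element
        total_orders_12m: int = 0                 # behavioral element
        contacts: List[Contact] = field(default_factory=list)

    # One record -- the "recorded memory" of a single customer relationship
    record = CustomerRecord(
        account_id="A-1001",
        company_name="Acme Corp",
        industry="Manufacturing",
        total_orders_12m=4,
        contacts=[Contact(name="Jane Doe", email="jane@acme.example", title="VP Operations")],
    )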

So, how do you get familiar with your customer records, and make them your friends? Here are three steps to consider.

1. Take Mary — or whoever manages your customer data — out to lunch. Demonstrate your interest in understanding her world, her challenges and her interests. This puts a personal face to the data, and also makes Mary an ally and mentor in your quest.

2. Examine a handful of customer records. You’ll find all kinds of interesting things: What do we know about this person? Any ideas on how better to communicate and sell to him/her? How complete and accurate is the record? What additional data would help you develop even better ideas on how to treat the customer? Set aside an hour on your calendar every quarter or so to repeat the process, becoming familiar with records from various types of customers and prospects. (A small scripted version of this spot-check follows this list.)

3. Launch an initiative to develop a data strategy for your department or your company as a whole. This means a written policy that identifies the data elements you should collect on each customer, where each element will come from, and how you will use it to drive business value.
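Here is the spot-check from step 2 as a few lines of code, for anyone who wants to go one step beyond eyeballing records. It is a sketch only, assuming the records sit in a flat file called customer_records.csv; the file and its columns are hypothetical.

    import pandas as pd

    # Hypothetical extract of customer records
    records = pd.read_csv("customer_records.csv")

    # Share of records with each field populated -- a rough completeness report
    completeness = records.notna().mean().sort_values()
    print(completeness)

    # Pull a handful of records to read through, as suggested in step 2
    print(records.sample(5, random_state=1))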

I guarantee, if you dive into the data records, your comfort level will rise dramatically. And so will your insight, and your skill as a marketer.

A version of this article appeared in Biznology, the digital marketing blog.

Watch the Attitude, Data Geeks

One-dimensional techies will be replaced by machines in the near future. So what if they’re the smartest ones in the room? If decision-makers can’t use data, does the information really exist?


Data do not exist just for data geeks and nerds. All of these data activities are inevitably funded by people who want to harness business value out of data. Whether it is about increasing revenue or reducing cost, in the end, the data game is about creating tangible value in the form of dollars, pounds, euros or yuan.

It really has nothing to do with the coolness of the toolsets or latest technologies, but it is all about the business — plain and simple. In other words, the data and analytics field is not some playground reserved for math or technology geeks, who sometimes think that belonging to exclusive clubs with secret codes and languages is a goal in itself. At the risk of sounding like an unapologetic capitalist, data don’t flow if money stops flowing. If you doubt me, watch where the budgets get cut first when the going gets rough.

Data and analytics folks may feel secure, as they know things that non-technical people may not be well-versed in during this age of Big Data. Maybe their bosses leave techies alone in a corner, as technical details and math jargon give them headaches. Their jobs may indeed be secure, for as long as the financial value coming out of the unit is net positive. Others may tolerate some techie talk, condescending attitudes, or mathematical dramas, for as long as data and analytics help them monetarily. Otherwise? Buh-bye, geeks!

I am writing this piece to provide a serious attitude adjustment to some data players. If data and analytics are not for geeks, but for the good of businesses (and all of the decision-makers who may not be technical), what does useful information look like?

Allow me to share some ideas for all the beneficiaries of data, not a select few who speak the machine language.

  • Data Must Be in Forms That Are Easy to Understand without mathematical or technical expertise. It should be as simple and easy to understand as a weather report. That means all of the data and statistical modeling to fill in the gaps must be done before the information reaches the users.
  • Data Must Be Small, not mounds of unfiltered and unstructured information. Useful data must look like answers to questions, not something that comes with a 500-page data dictionary. Data players should never brag about the size of the data or speed of processing, as users really don’t care about such details.
  • Data Must Be Accurate. Inaccurate information is worse than not having any at all. Users also must remember that not everything that comes out of computers is automatically accurate. Conversely, data players are responsible for fixing all of the previous mistakes that were made to datasets before the data even reached them. Not fair, but that’s the job.
  • Data Must Be Consistent. It can be argued that consistency is even more important than sheer accuracy. Often, being consistently off may be more desirable than having large fluctuations, as even a dead clock is completely accurate twice a day. This is especially true for information that is inferred via statistical work.
  • Data Must Be Applicable Most of the Time, not just for limited cases. Too many data are locked in silos serving myopic purposes. Data become more powerful when they are consolidated properly, reaching broader audiences.
  • Data Must Be Accessible to users through devices of their choice. Even good information that fits the above criteria becomes useless if it does not reach decision-makers when needed. Data players’ jobs are not done until data are delivered to the right people, in the right format and in a timely manner.

Who are these data players who should be responsible for all of this, and where do they belong? They may have titles such as Chief Data Officer (who would be in charge of data governance); Data Strategist or Analytics Strategist; Data Scientist; Statistical Analyst or Program Developer. They may belong to IT, marketing, or a separate data or analytics department. No matter. They must be translators of information for the benefit of users, speaking the languages of both business and technology fluently. They should never be just guard dogs of information. Ultimately, they should represent the interests of the business first, not wave some fictitious IT or data rules at users.

So-called specialists, who habitually spit out reasons why certain information must be locked away somewhere and why it should not be made available to users in a more user-friendly form, must snap out of their technical, analytical or mathematical comfort zone, pronto.

Techies who are that one-dimensional will be replaced by a machine in the near future.

The future belongs to people who can connect dots among different worlds and paradigms, not to some geeks with limited imaginations and skill sets that could become obsolete soon.

So, if self-preservation is an instinct that techies possess, they should figure out who is paying the bills, including their salaries and benefits, and make things absolutely easy for those end-users in all the ways listed here. If not for altruistic reasons, then for their own benefit in this results-oriented business world.

If information is not used by decision-makers, does the information really exist?

Chicken or the Egg? Data or Analytics?

I just saw an online discussion about the role of a chief data officer, whether it should be more about data or analytics. My initial response to that question is “neither.” A chief data officer must represent the business first. And I had the same answer when such a title didn’t even exist and CTOs or other types of executives covered that role in data-rich environments. As soon as an executive with a seemingly technical title starts representing the technology, that business is doomed. (Unless, of course, the business itself is about having fun with the technology. How nice!)

Nonetheless, if I really have to pick just one of the two, I would definitely pick analytics over data, as that is the key to providing answers to business questions. Data and databases must support that critical role of analytics, not the other way around. Unfortunately, many organizations have it completely backward: analysts are confined within the limitations of database structures and affiliated technologies, and the business owners and decision-makers are dictated to by the analysts and their analytical toolsets. It should be the business first, then the analytics. And all databases—especially marketing databases—should be optimized for analytical activities.

In my previous columns, I talked about the importance of marketing databases and statistical modeling in the age of Big Data; not all repositories of information are necessarily marketing databases, and statistical modeling is the best way to harness marketing answers out of mounds of accumulated data. That begs the next question: Is your marketing database model-ready?

When I talk about the benefits of statistical modeling in data-rich environments (refer to my previous column titled “Why Model?”), I often encounter folks who list reasons why they do not employ modeling as part of their normal marketing activities. If I may share a few examples here:

  • Target universe is too small: Depending on the industry, the prospect universe and customer base are sometimes very small in size, so one may decide to engage everyone in the target group. But do you know what to offer to each of your prospects? Customized offers should be based on some serious analytics.
  • Predictive data not available: This may have been true years back, but not in this day and age. Either there is a major failure in data collection, or collected data are too unstructured to yield any meaningful answers. Aren’t we living in the age of Big Data? Surely we should all dig deeper.
  • 1-to-1 marketing channels not in plan: As I repeatedly said in my previous columns, “every” channel is, or soon will be, a 1-to-1 channel. Every audience is secretly screaming, “Entertain us!” And customized customer engagement efforts should be based on modeling, segmentation and profiling.
  • Budget doesn’t allow modeling: If the budget is too tight, a marketer may opt for some software solution instead of hiring a team of statisticians. Remember that cookie-cutter models out of software packages are still better than someone’s intuitive selection rules (i.e., someone’s “gut” feeling).
  • The whole modeling process is just too painful: Hmm, I hear you. The whole process could be long and difficult. Now, why do you think it is so painful?

Like a good doctor, a consultant should be able to identify root causes based on pain points. So let’s hear some complaints:

  • It is not easy to find “best” customers for targeting
  • Modelers are fixing data all the time
  • Models end up relying on a few popular variables, anyway
  • Analysts are asking for more data all the time
  • It takes too long to develop and implement models
  • There are serious inconsistencies when models are applied to the database
  • Results are disappointing
  • Etc., etc…

I often get called in when model-based marketing efforts yield disappointing results. More often than not, the opening statement in such meetings is that “The model did not work.” Really? What is interesting is that in more than nine out of 10 such cases, the models are the only elements that seem to have been done properly. Everything else—from pre-modeling steps, such as data hygiene, conversion, categorization, and summarization; to post-modeling steps, such as score application and validation—often turns out to be the root cause of all the troubles, resulting in the pain points listed here.
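One cheap sanity check on the post-modeling side, for instance, is to compare the score distribution on the modeling sample against the scores applied to the full database; a large gap usually means some conversion or categorization step was not reproduced faithfully at scoring time. A minimal sketch, assuming the scores already exist in two hypothetical files:

    import pandas as pd

    # Hypothetical files: scores on the modeling sample vs. scores on the full base
    sample_scores = pd.read_csv("sample_scores.csv")["score"]
    database_scores = pd.read_csv("database_scores.csv")["score"]

    # Compare score deciles; big gaps point to a pre-modeling step that was not reproduced
    quantiles = [i / 10 for i in range(1, 10)]
    comparison = pd.DataFrame({
        "sample": sample_scores.quantile(quantiles),
        "database": database_scores.quantile(quantiles),
    })
    print(comparison)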

When I speak at marketing conferences, talking about this subject of the “model-ready” environment, I always ask if there are statisticians and analysts in the audience. Then I ask what percentage of their time goes into non-statistical activities, such as data preparation and remedying data errors. The absolute majority of them say they spend 80 percent to 90 percent of their time fixing the data, devoting the rest to the model development work. You don’t need me to tell you that something is terribly wrong with this picture. And I am pretty sure that none of those analysts got their PhDs and master’s degrees in statistics to spend most of their waking hours fixing the data. Yeah, I know from experience that, in this data business, the last guy who happens to touch the dataset always ends up being responsible for all errors made to the file thus far, but still. No wonder it is often said that one of the key elements of being a successful data scientist is programming skill.

When you provide datasets filled with unstructured, incomplete and/or missing data, diligent analysts will devote their time to remedying the situation and making the best out of what they have received. I myself often tell newcomers that analytics is really about making the best of what you’ve got. The trouble is that such data preparation work calls for a different set of skills that have nothing to do with statistics or analytics, and most analysts are not that great at programming, nor are they trained for it.

Even if they were able to create a set of sensible variables to play with, here comes the bigger trouble: what they have just fixed is only a “sample” of the database, when the models must be applied to the whole thing later. Modern databases often contain hundreds of millions of records, and no analyst in his or her right mind uses the whole base to develop any models. Even if the sample is as large as a few million records (overkill, for sure), that would hardly be the entire picture. The real trouble is that no model is useful unless the resultant model scores are available on every record in the database. It is one thing to fix a sample of a few hundred thousand records. Now try to apply that model algorithm to 200 million entries. You see all those interesting variables that analysts created and fixed in the sample universe? All of that must be redone in the real database, with its hundreds of millions of lines.

Sure, it is not impossible to include all the instructions for variable conversion, reformatting, editing and summarization in the model-scoring program. But such a practice is the No. 1 cause of errors, inconsistencies and serious delays. Yes, it is not impossible to steer a car with your knees while texting with your hands, but I wouldn’t call that a best practice.

That is why marketing databases must be model-ready, where sampling and scoring become routine with minimal data transformation. When I design a marketing database, I always put the analysts at the top of the user list. Sure, non-statistical types will still be able to run queries and reports out of it, but those activities should be secondary, as they are lower-level functions (i.e., simpler and easier) compared to being “model-ready.”

Here is a list of prerequisites for being model-ready (which will be explained in detail in my future columns), with a toy code sketch after the list:

  • All tables linked or merged properly and consistently
  • Data summarized to consistent levels such as individuals, households, email entries or products (depending on the ranking priority by the users)
  • All numeric fields standardized, where missing data and zero values are separated
  • All categorical data edited and categorized according to preset business rules
  • Missing data imputed by a standardized set of rules
  • All external data variables appended properly
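As an illustration only, here is what a few of those prerequisites might look like in code. The file names, column names and rules (the median imputation, the channel mapping) are all made up; real business rules would obviously be richer than this.

    import pandas as pd

    # Hypothetical person-level table and transaction table; column names are made up
    base = pd.read_csv("marketing_base.csv")
    transactions = pd.read_csv("transactions.csv")

    # Edit a categorical field according to a preset business rule (toy mapping)
    base["channel"] = base["channel"].str.strip().str.lower().replace({"retail": "store"})

    # Keep "unknown" distinct from a legitimate zero, then impute by a standardized rule
    base["income_known"] = base["income"].notna().astype(int)
    base["income"] = base["income"].fillna(base["income"].median())

    # Summarize transactions to a consistent level -- one row per individual
    spend = transactions.groupby("individual_id")["amount"].agg(
        total_spend="sum", num_orders="count"
    ).reset_index()

    # Link tables consistently, and treat true non-buyers as zeros rather than missing
    model_ready = base.merge(spend, on="individual_id", how="left")
    model_ready[["total_spend", "num_orders"]] = (
        model_ready[["total_spend", "num_orders"]].fillna(0)
    )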

Basically, the whole database should be as pristine as the sample datasets that analysts play with. That way, sampling should take only a few seconds, and applying the resultant model algorithms to the whole base would simply be the computer’s job, not some nerve-racking, nail-biting, all-night baby-sitting suspense for every update cycle.

In my co-op database days, we designed and implemented the core database with this model-ready philosophy, where all samples were presented to the analysts on silver platters, with absolutely no need to fix the data any further. Analysts devoted their time to pondering target definitions and statistical methodologies. This way, each analyst was able to build about eight to 10 “custom” models—not cookie-cutter models—per day, and all models were applied to the entire database of more than 200 million individuals at the end of each day (I hear that they are even more efficient these days). Now, for the folks who are accustomed to a 30-day model implementation cycle (I’ve seen cycles as long as six months), this may sound like total science fiction. And I am not even saying that all companies need to build and implement that many models every day, as that would hardly be a core business for them, anyway.

In any case, this type of practice was in use well before the words “Big Data” were even uttered by anyone, and I would say that such discipline is required even more desperately now. Everyone is screaming for immediate answers to their questions, and the questions should be answered in the form of model scores, which are the most effective and concise summations of all available data. This so-called “in-database” modeling and scoring practice starts with a “model-ready” database structure. In the upcoming issues, I will share the detailed ways to get there.

So, here is the answer to the chicken-or-the-egg question. It is the business posing the questions first and foremost, then the analytics providing answers to those questions, with databases optimized to support such analytical activities, including predictive modeling. As for the chicken example: with the ultimate goal of all living creatures being the procreation of their species, I’d say eggs are just a means to that end. Therefore, for a business-minded chicken, yeah, definitely the chicken before the egg. Not that I’ve seen too many logical chickens.

Why Model?

Why model? Uh, because someone is ridiculously good looking, like Derek Zoolander? No, seriously, why model when we have so much data around?

The short answer is because we will never know the whole truth. That would be the philosophical answer. Physicists construct models to make new quantum field theories more attractive theoretically and more testable physically. If a scientist already knows the secrets of the universe, well, then that person is on a first-name basis with God Almighty, and he or she doesn’t need any models to describe things like particles or strings. And the rest of us should just hope the scientist isn’t one of those evil beings in “Star Trek.”

Another answer to “why model?” is because we don’t really know the future, not even the immediate future. If some object is moving toward a certain direction at a certain velocity, we can safely guess where it will end up in one hour. Then again, nothing in this universe is just one-dimensional like that, and there could be a snowstorm brewing up on its path, messing up the whole trajectory. And that weather “forecast” that predicted the snowstorm is a result of some serious modeling, isn’t it?

What does all this mean for the marketers who are not necessarily masters of mathematics, statistics or theoretical physics? Plenty, actually. And the use of models in marketing goes way back to the days of punch cards and mainframes. If you are too young to know what those things are, well, congratulations on your youth, and let’s just say that it was around the time when humans first stepped on the moon using a crude rocket ship equipped with less computing power than a modern, inexpensive passenger car.

Anyhow, in that ancient time, some smart folks in the publishing industry figured that they would save tons of money if they could correctly “guess” who the potential buyers were “before” they dropped any expensive mail pieces. Even with basic regression models—and they only had one or two chances to get it right with glacially slow tools before the all-too-important Christmas season came around every year—they could safely cut the mail quantity by 80 percent to 90 percent. The savings added up really fast by not talking to everyone.

Fast-forward to the 21st Century. There is still beauty in knowing who the potential buyers are before we start engaging anyone. As I wrote in my previous columns, analytics should answer:

1. To whom you should be talking; and
2. What you should offer once you’ve decided to engage someone.

At least the first part will be taken care of by knowing who is more likely to respond to you.

But in these days, when the cost of contacting a person through various channels is dropping rapidly, deciding to whom to talk can’t be the only reason for all this statistical work. Of course not. There are plenty more reasons why being a statistician (or a data scientist, nowadays) is one of the best career choices in this century.

Here is a quick list of benefits of employing statistical models in marketing. Basically, models are constructed to:

  • Reduce cost by contacting prospects more wisely
  • Increase targeting accuracy
  • Maintain consistent results
  • Reveal hidden patterns in data
  • Automate marketing procedures by being more repeatable
  • Expand the prospect universe while minimizing the risk
  • Fill in the gaps and summarize complex data into an easy-to-use format—a must in the age of Big Data
  • Stay relevant to your customers and prospects

We talked enough about the first point, so let’s jump to the second one. It is hard to argue about the “targeting accuracy” part, though there still are plenty of non-believers in this day and age. Why are statistical models more accurate than someone’s gut feeling or sheer guesswork? Let’s just say that in my years of dealing with lots of smart people, I have not met anyone who can think about more than two to three variables at the same time, not to mention potential interactions among them. Maybe some are very experienced in using RFM and demographic data. Maybe they have been reasonably successful with choices of variables handed down to them by their predecessors. But can they really go head-to-head against carefully constructed statistical models?

What is a statistical model, and how is it built? In short, a model is a mathematical expression of “differences” between dichotomous groups. Too much of a mouthful? Just imagine two groups of people who do not overlap. They may be buyers vs. non-buyers; responders vs. non-responders; credit-worthy vs. not-credit-worthy; loyal customers vs. attrition-bound, etc. The first step in modeling is to define the target, and that is the most important step of all. If the target is hanging in the wrong place, you will be shooting at the wrong place, no matter how good your rifle is.

And the target should be expressed in mathematical terms, as computers can’t read our minds, not just yet. Defining the target is a job in itself (a small sketch of one possible definition follows this list):

  • If you’re going after frequent flyers, how frequent is frequent enough for you? Five times a year or 10 times a year? Or somewhere in between? Or should it remain continuous?
  • What if the target is too small or too large? What then?
  • If you are looking for more valuable prospects, how would you express that? In terms of average spending, lifetime spending or sheer number of transactions?
  • What if there is an inverse relationship between frequency and dollar spending (i.e., high spenders shopping infrequently)?
  • And what would be the borderline number to be “valuable” in all this?
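For example, a “frequent flyer” target might be pinned down like this. The file, the column and the 10-trips cutoff are all hypothetical; the point is only that the fuzzy business notion has to become a concrete, binary flag before any modeling can start.

    import pandas as pd

    # Hypothetical flyer-level summary file
    flyers = pd.read_csv("flyer_summary.csv")

    # The cutoff is a business decision, not a statistical one
    FREQUENT_CUTOFF = 10  # trips per year

    flyers["target_frequent"] = (flyers["trips_last_12m"] >= FREQUENT_CUTOFF).astype(int)

    # Check whether the resulting target is too small or too large before modeling
    print(flyers["target_frequent"].mean())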

Once the target is set, after much pondering, then the job is to select the variables that describe the “differences” between the two groups. For example, I know how much marketers love to use income variables in various situations. But if that popular variable does not explain the differences between the two groups (target and non-target), the mathematics will mercilessly throw it out. This rigorous exercise of examining hundreds or even thousands of variables is one of the most critical steps, during which many variables go through various types of transformations. Statisticians have different preferences in terms of ideal numbers of variables in a model, while non-statisticians like us don’t need to be too concerned, as long as the resultant model works. Who cares if a cat is white or black, as long as it catches mice?

Not all selected variables are equally important in model algorithms, either. More powerful variables are assigned higher weights, and the sum of these weighted values is what we call the model score. Now, non-statisticians who have been slightly allergic to math since the third grade only need to know that the higher the score, the more likely the record in question is to be like the target. To make matters even simpler, let’s just say that you want higher scores over lower scores. If you are a salesperson, just call the high-score prospects first. And would you care how many variables are packed into that score, as long as you get the good “Glengarry Glen Ross” leads on top?
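For the curious, here is a minimal sketch of that “sum of weighted values” idea, with a logistic regression standing in for whatever technique your statisticians actually prefer. The file and predictor names are hypothetical, and a real project would involve far more variable transformation and validation than this.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical modeling sample with a pre-defined binary target column
    sample = pd.read_csv("modeling_sample.csv")
    predictors = ["recency_days", "num_orders_12m", "avg_order_value"]

    # The fitted coefficients are the "weights"; the score is their weighted combination
    model = LogisticRegression()
    model.fit(sample[predictors], sample["target"])
    sample["score"] = model.predict_proba(sample[predictors])[:, 1]

    # A salesperson simply works the list from the top score down
    print(sample.sort_values("score", ascending=False).head(10))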

So, let me ask again. Does this sound like something a rudimentary selection rule with two to three variables can beat when it comes to identifying the right target? Maybe someone can get lucky once or twice, but not consistently.

That leads to the next point, “consistency.” Because models do not rely on a few popular variables, they are far less volatile than simple selection rules or queries. In this age of Big Data, there are more transaction and behavioral data in the mix than ever, and they are far more volatile than demographic and geo-demographic data. Put simply, people’s purchasing behavior and preferences change much faster than family composition or income, and that volatility calls for more statistical work. Plus, all facets of marketing are now more about measurable results (ah, that dreaded ROI, or “Roy,” as I call it), and businesses call for consistent hitters over one-hit wonders.

“Revealing hidden patterns in data” is my favorite. When marketers are presented with thousands of variables, I see a majority of them just sticking to a few popular ones all the time. Some basic recency and frequency data are there, and among hundreds of demographic variables, the list often stops after income, age, gender, presence of children, and some regional variables. But seriously, do you think that the difference between a luxury car buyer and an SUV buyer is just income and age? You see, these variables are just the ones that human minds are accustomed to. Mathematics does not have such preconceived notions. Sticking to a few popular variables is like children repeatedly using three favorite colors out of a whole box of crayons.

I once saw a neighborhood-level U.S. Census variable called “% Households with Septic Tanks” in a model built for a high-end furniture catalog. Really, the variable was “percentage of houses with septic tanks in the neighborhood.” Then I realized it made a lot of sense. That variable was revealing how far away that neighborhood was from populous city centers: the higher the percentage of septic tanks, the farther away the residents were from the city center. And maybe the folks who live in sparsely populated areas were more likely to shop for furniture through catalogs than the folks who live closer to commercial areas.

This is where we all have that “aha” moment. But you and I would never pick that variable in anything that we do, not in a million years, no matter how effective it may be in finding the target prospects. The word “septic” may scare some people off at “hello.” In any case, modeling procedures reveal hidden connections like that all of the time, and that is a very important function in data-rich environments. Otherwise, we will not know what to throw out without fear, and the databases will continuously become larger and more unusable.

Moving on to the next points, “Repeatable” and “Expandable” are somewhat related. Let’s say a marketer has been using a very innovative selection logic that she came across almost by accident. In pursuing special types of wealthy people, she stumbled upon a piece of data called “owner of swimming pool.” Now, she may have even had a few good runs with it, too. But eventually, that success will lead to the questions of:

1. Having to repeat that success again and again; and
2. Having to expand that universe, when the “known” universe of swimming pool owners becomes depleted or saturated.

Ah, the chagrin of a one-hit-wonder begins.

Use of statistical models, with the help of multiple variables and scalable scoring, would avoid all of those issues. You want to expand the prospect universe? No trouble. Just dial down the scores on the scale a little further. We can even measure the risk of reaching into the lower-scoring groups. And you don’t have to worry about coverage issues related to a few variables, as those won’t be the only ones in the model. Want to automate the selection process? No problem there, as using a score, which is a summary of key predictors, is far simpler than having to carry a long list of data variables into any automated system.
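As a rough illustration of “dialing down the scores,” a common practice is to rank the scored universe into deciles and watch how response decays as you reach deeper into the file. Again, the file and column names here are made up, and this is just one of many ways to do it.

    import pandas as pd

    # Hypothetical scored universe with a past response flag for measurement
    scored = pd.read_csv("scored_universe.csv")

    # Rank into deciles: decile 1 holds the highest scores
    scored["decile"] = pd.qcut(scored["score"], 10, labels=list(range(10, 0, -1)))

    # Measure the risk of expanding into lower-scoring deciles
    print(scored.groupby("decile", observed=True)["responded"].mean())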

Now, that leads to the next point, “Filling in the gaps and summarizing the complex data into an easy-to-use format.” In the age of ubiquitous and “Big” data, this is the single most important point, way beyond the previous examples for traditional 1-to-1 marketing applications. We are definitely going through massive data overloads everywhere, and someone had better refine the data and provide some usable answers.

As I mentioned earlier, we build models because we will never know the whole truth. I believe that the Big Data movement should be all about:

1. Filtering the noise from valuable information; and
2. Filling the gaps.

“Gaps,” you say? Believe me, there are plenty of gaps in any dataset, big or small.

When information continues to get piled on, the resultant database may look big. And it is physically large. But in marketing, as I repeatedly emphasized in my previous columns, the data must be realigned to “buyer-centric” formats, with every data point describing each individual, as marketing is all about people.

Sure, you may have tons of mobile phone-related data. In fact, it could be quite huge in size. But let me turn that upside down for you (more like sideways-up, in practice). Now, try to describe everyone in your footprint in terms of certain activities. Say, “every smartphone owner who used more than 80 percent of his or her monthly data allowance on average for the past 12 months, regardless of the carrier.” Hey, don’t blame me for asking these questions just because it’s inconvenient for data handlers to answer them. Some marketers would certainly benefit from information like that, and no one cares about just bits and pieces of data, other than for some interesting tidbits at a party.
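Here is a sketch of what that “sideways-up” realignment might look like, assuming a hypothetical usage log with one row per person per month; the whole point is to end up with one row per person, with the question answered as a simple flag.

    import pandas as pd

    # Hypothetical usage log: one row per person per month
    usage = pd.read_csv("monthly_usage.csv")
    usage["pct_of_allowance"] = usage["data_used_gb"] / usage["data_allowance_gb"]

    # Realign to one row per person: average usage over the past 12 months
    per_person = (usage.groupby("person_id")["pct_of_allowance"]
                  .mean()
                  .reset_index(name="avg_pct_allowance_12m"))

    # "Used more than 80 percent of the monthly allowance, on average"
    per_person["heavy_data_user"] = (per_person["avg_pct_allowance_12m"] > 0.8).astype(int)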

Here’s the main trouble when you start asking buyer-related questions like that. Once we try to look at the world from the “buyer-centric” point of view, we will realize there are tons of missing data (i.e., a whole bunch of people with not much information). It may be that you will never get this kind of data from all carriers. Maybe not everyone is tracked this way. In terms of individuals, you may end up with less than 10 percent in the database with mobile information attached to them. In fact, many interesting variables may have less than 1 percent coverage. Holes are everywhere in so-called Big Data.

Models can fill in those blanks for you. For all those data compilers who sell age and income data for every household in the country, do you believe that they really “know” everyone’s age and income? A good majority of the information is based on carefully constructed models. And there is nothing wrong with that.

If we can’t truly “know” something, we can get to a “likelihood” score—of “being like” that something. And in that world, every measurement is on a scale, with no missing values. For example, the higher the score of a model built for a telecommunications company, the more likely the prospect is to use a high-speed data plan, or international long-distance services, depending on the purpose of the model. Or the more likely the person will buy sports packages via cable or satellite. Or the more likely the person will subscribe to premium movie channels. Etc., etc. With scores like these, a marketer can initiate the conversation with—not just talk to—a particular prospect, with customized product packages in hand.

And that leads us to the final point in all this, “Staying relevant to your customers and prospects.” That is what Big Data should be all about—at least for us marketers. We know plenty about a lot of people. And they are asking us why we are still so random about marketing messages. With all these data that are literally floating around, marketers can do so much better. But not without statistical models that fill in the gaps and turn pieces of data into marketing-ready answers.

So, why model? Because a big pile of information doesn’t provide answers on its own, and that pile has more holes than Swiss cheese if you look closely. That’s my final answer.