Marketers Find the Least-Wrong Answers Via Modeling

Why do marketers still build models when we have ample amounts of data everywhere? Because we will never have every piece of data about everything. We just don’t know what we don’t know.

Okay, then — we don’t get to know about everything, but what are the data that we possess telling us?

We build models to answer that question. Even scientists who wonder about the mysteries of the universe and multiverses use models for their research.

I have been emphasizing the importance of modeling in marketing through this column for a long time. If I may briefly summarize a few benefits here:

  • Models Fill in the Gaps, covering those annoying “unknowns.” We may not know for sure if someone has an affinity for luxury gift items, but we can say that “Yes, with data that we have, she is very likely to have such an affinity.” With a little help from the models, the “unknowns” turn into “potentials.”
  • Models Summarize Complex Data into simple-to-use “scores.” No one has time to dissect hundreds of data variables every time we make a decision. Model scores provide simple answers, such as “Someone likely to be a bargain-seeker.” Such a model may include 10 to 20 variables, but the users don’t need to worry about those details at the time of decision-making. Just find suitable offers for the targets, based on affinities and personas (which are just forms of models).
  • Models are Far More Accurate Than Human Intuition. Even smart people can’t reliably reason about interactions among just two or three variables in their heads. Complex multivariate interaction detection is a job for a computer.
  • Models Provide Consistent Results. Human decision-makers may get lucky once in a while, but it is hard to keep that up against machines. Mathematics does not fluctuate much in performance, provided it is fed consistent and accurate data.
  • Models Reveal Hidden Patterns in data. When faced with hundreds of data variables, humans often resort to what they are accustomed to (often fewer than four to five factors). Machines indiscriminately find new patterns, relentlessly searching for the most suitable answers.
  • Models Help Expand the Targeting Universe. If you want a broader target, just go after slightly lower-score targets. You can even measure the risk factors while in such an expansion mode. That is simply not possible with man-made rules.
  • When Done Right, Models Save Time and Effort. Marketing automation gets simpler, too, as even machines can tell high and low scores apart easily. But the keywords here are “when done right.”
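Two of the points above, simple scores and universe expansion, can be sketched in a few lines of Python. Everything here is hypothetical: the variable names, the weights, and the cutoffs are invented for illustration only, whereas a real model would learn its weights from historical response data.

```python
import math

# Hypothetical "bargain-seeker" model: many variables collapse into one score.
# The weights and bias below are invented for illustration only.
WEIGHTS = {"discount_redemptions": 0.9, "full_price_orders": -0.5, "coupon_page_visits": 0.7}
BIAS = -1.0

def propensity(customer):
    """Weighted sum of the inputs, squashed into a 0-1 score."""
    z = BIAS + sum(w * customer.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

customers = [
    {"id": 1, "discount_redemptions": 4, "full_price_orders": 0, "coupon_page_visits": 3},
    {"id": 2, "discount_redemptions": 0, "full_price_orders": 5, "coupon_page_visits": 0},
    {"id": 3, "discount_redemptions": 1, "full_price_orders": 1, "coupon_page_visits": 1},
]

def target(threshold):
    """Users only see the score cutoff, not the variables behind it."""
    return [c["id"] for c in customers if propensity(c) >= threshold]

print(target(0.8))  # tight universe: strongest bargain-seekers only -> [1]
print(target(0.5))  # expanded universe: lower the cutoff to reach more -> [1, 3]
```

Note how the decision-maker touches only the threshold; the 10 to 20 underlying variables stay hidden inside the score, which is the whole point of the second bullet above.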

There are many benefits of modeling, even in the age of abundant data. The goal of any data application is to help in the decision-making process, not to aid in hoarding the data and bragging about it. Do you want to get to accurate, consistent, and simple answers — fast? Don’t fight modeling; embrace it. Try it. And if it doesn’t work, try it another way, as the worst model often beats man-made rules, easily.

But this time, I’m not writing this article just to promote the benefits of modeling again. Assuming that you embrace the idea already, let’s now talk about its limitations. With any technique, users must be fully aware of its downsides.

It Mimics Existing Patterns

By definition, models identify and mimic the patterns in the existing data. That means that if the environment changes drastically, all models built in the old world are rendered useless.

For example, if there are significant changes in the supply chain in a retail business, product affinity models built for old lines of products won’t work anymore (even if products may look similar). More globally, if there were major disruptions, such as a market crash or proliferation of new technologies, none of the old assumptions would continue to be applicable.

The famous economics phrase “Ceteris paribus” — all other things being equal — governs conventional modeling. If you want your models to be far more adaptive, consider fully automating the modeling process through machine learning. But I still suggest building a few test models the old-fashioned way before going into full automation mode.

If the Target Is Off, Everything Is Off

If the target mark is hung on the wrong spot, no sharpshooter will be able to hit the real target. A missile without a proper guidance system is worse than no missile at all. Setting the right target for a model is the most critical and difficult part of the whole process, requiring not only technical knowledge, but also a deep understanding of the business at stake, the nature of the available data, and the deployment mechanism at the application stage.

This is why modeling is often called “half science, half art.” A model is only as accurate as the target definition of the model. (For further details on this complex subject, refer to “Art of Targeting”).

The Model Is Only as Good as the Input Data

No model can be saved if there are serious errors or inconsistencies in the data. It is not just a matter of blatantly wrong data. If the nature of the data is not consistent between the model development sample and the practical pool of data (where the model will be applied and used), the model in question will be useless.

This is why the “Analytics Sandbox” is important. Such a sandbox environment is essential — not just for simplification of model development, but also for consistent application of models. Most mishaps happen before or after the model development stage, mostly due to data inconsistencies in terms of shapes and forms, and less due to sheer data errors (not that erroneous data is acceptable).

The consistency factor matters a lot: If some data variables are “consistently” off, they may still possess some predictive power. I would even go as far as stating that consistency matters more than sheer accuracy.
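A toy illustration of that last point, with made-up numbers: if a spend variable is under-reported by the same factor for everyone, the ordering of customers — which is what most scoring models actually rely on — survives intact.

```python
# Made-up figures: true spend vs. spend that is "consistently off"
# (under-reported by 30%, plus a fixed offset, for every customer).
true_spend = [120, 340, 80, 560, 210]
reported   = [0.7 * x + 10 for x in true_spend]

def ranking(values):
    """Indices of customers ordered from lowest to highest value."""
    return sorted(range(len(values)), key=lambda i: values[i])

# The systematic error changes every number, but not who outranks whom,
# so a model that ranks customers by spend is unaffected.
print(ranking(true_spend) == ranking(reported))  # True
```

A randomly wrong variable, by contrast, would scramble that ordering and destroy whatever predictive power it had — which is why consistency can matter more than sheer accuracy.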

Accuracy Is a Relative Term

Users often forget this important fact, but model scores aren’t pinpoint accurate all of the time. Some models are sharper than others, too.

A model score is just the best estimate with the existing data. In other words, we should take model scores as the least-wrong answers in a given situation.

So, when I say a model is accurate, I mean that it is more accurate than human intuition based on a few basic data points.

Therefore, the user must always consider the risk of being wrong. Now, being wrong about “Who is more likely to respond to this 15% discount offer?” is a lot less grave than being wrong about “Who is more likely to be diabetic?”

In fact, if I personally faced such a situation, I wouldn’t even recommend building the latter model, as the cost of being wrong is simply too high. (People are very sensitive about their medical information.) Some things just should not be estimated.
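The asymmetry can be made concrete with back-of-the-envelope arithmetic. All figures below are invented for illustration: the same 20% error rate carries wildly different consequences depending on what a wrong call costs.

```python
def expected_error_cost(n_targets, error_rate, cost_per_error):
    """Expected total cost of acting on wrong predictions."""
    return n_targets * error_rate * cost_per_error

# Same model quality, very different stakes (all numbers hypothetical).
mailer = expected_error_cost(10_000, 0.20, 0.50)   # wasted 15%-off mailers
health = expected_error_cost(10_000, 0.20, 500.0)  # cost of a wrong health label
print(mailer)  # 1000.0
print(health)  # 1000000.0
```

Identical model, identical error rate, a thousandfold difference in downside — which is why the decision of whether to model at all belongs to the business, not to the algorithm.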

Even with innocuous models, such as product affinities and user propensities, users should never treat the scores as facts. Don’t act like you “know” the target simply because some model scores are available to you. Always approach your target with a gentle nudge; as in, “I don’t know for sure if you would be interested in our new line of skin care products, but would you want to hear more about it?” Such a gentle approach always sounds friendlier than acting like you “know” something about them for sure, which just seems rude to the person on the receiving end; recipients of blunt messages may even find you creepy.

Users sometimes make bold moves with an illusion that data and analytics always provide the right answers. Maybe the worst fallacy in the modern age is the belief that anything a computer spits out is always correct.

Users Abuse Models

Last month, I shared seven ways users abuse models and ruin the results (refer to “Don’t Ruin Good Models by Abusing Them”). As an evangelist of modeling techniques, I always try to prevent abuse cases, but they still happen in the application stages. All good intentions of models go out the window if they are used for the wrong reasons or in the wrong settings.

I am not at all saying that anyone should back out of using models in their marketing practices because of the shortfalls that I listed here. Nonetheless, to be consistently successful, users must be aware of the limitations of models as well — especially if you are about to go into full marketing automation. With improper application of models, you may end up automating bad or wrong practices really fast. For the sake of the customers on the receiving end — not just for the safety of your position in the marketing industry — please be careful with this sharp-edged tool called modeling.

Author: Stephen H. Yu

Stephen H. Yu is a world-class database marketer. He has a proven track record in comprehensive strategic planning and tactical execution, effectively bridging the gap between the marketing and technology world with a balanced view obtained from more than 30 years of experience in best practices of database marketing. Currently, Yu is president and chief consultant at Willow Data Strategy. Previously, he was the head of analytics and insights at eClerx, and VP, Data Strategy & Analytics at Infogroup. Prior to that, Yu was the founding CTO of I-Behavior Inc., which pioneered the use of SKU-level behavioral data. “As a long-time data player with plenty of battle experiences, I would like to share my thoughts and knowledge that I obtained from being a bridge person between the marketing world and the technology world. In the end, data and analytics are just tools for decision-makers; let’s think about what we should be (or shouldn’t be) doing with them first. And the tools must be wielded properly to meet the goals, so let me share some useful tricks in database design, data refinement process and analytics.” Reach him at stephen.yu@willowdatastrategy.com.
