Why Many Marketing Automation Projects Go South

As a data and analytics consultant, I often get called in when things do not work out as planned or expected. I guess my professional existence is justified by someone else’s problems. If everyone followed the right path from the beginning and everything went smoothly all of the time, I would not have much to clean up after.

In that sense, maybe my role model should be Mr. Wolf in the movie “Pulp Fiction.” Yeah, that guy who thinks fast and talks fast to help his clients get out of trouble pronto.

So, I get to see all kinds of data, digital, and analytical messes. The keyword in the title of this series “Big Data, Small Data, Clean Data, Messy Data” is definitely not “Big” (as you might have guessed already), but “Messy.” When I enter the scene, I often see lots of bullet holes created by blame games and traces of departed participants of the projects. Then I wonder how things could have gone so badly.

There are so many ways to mess up data or analytics projects, may they be CDP, Data Lake, Digital Transformation, Marketing Automation, or whatever sounds cool these days. First off, none of these items are simple to develop, or something that you just buy off the shelf. Even if you did, someone would have to tweak more than a few buttons to customize the toolset to meet your unique requirements.

What did I say about those merchants of buzzwords? I don’t remember the exact phrase, but I know I wouldn’t have used those words.

Like a veteran cop, I’ve developed some senses to help me figure out what went wrong. So, allow me to share some common traps that many marketing organizations fall into.

No Clear Goal or Blueprint

Surprisingly, a great many organizations get into complex data or analytics projects with only vague ideas or wish lists. Imagine constructing a building without any clear purpose or a blueprint. What is the building for? For whom, and for what purpose? Is it a residential building, an office building, or a commercial property?

Just like a building is not a simple sum of raw materials, databases aren’t sums of random piles of data, either. But do you know how many times I’ve sat in on a meeting where “putting every data source together in one place” was the goal in itself? I admit that would be better than data scattered all over the place, but the goal should be defined much more precisely: how the data are going to be used, by whom, for what, through what channel, using what types of toolsets, etc. Otherwise, it just becomes a monster that no one wants to get near.

I’ve even seen so-called data-oriented companies go out of business thanks to monstrous data projects. Like any major development project, what you leave out is as important as what you put in. In other words, a summary of absolutely everyone’s wish list is no blueprint at all, but the first step toward the inevitable demise of the project. The technical person in charge must be business-oriented, and be able to say “no” to some requests, looking 10 steps down the line. Let’s just say that I’ve seen too many projects get hopelessly stuck thanks to features that would barely matter in practice (as in “You want what in real-time?!”). Might as well design a car that flies, too.

No Predetermined Success Metrics

Sometimes, the project goes well, but executives and colleagues still define it as a failure. For instance, a predictive model, no matter how well it is constructed mathematically, cannot single-handedly overcome bad marketing. Even with effective marketing messages, it cannot just keep doubling the performance level indefinitely. Huge jumps in KPI (e.g., doubling the response rate) may be possible for the very first model ever (as it would be, compared to the previous campaigns without any precision targeting), but no one can expect such improvement year after year.

Before a single byte of data is manipulated, project champions must determine the success criteria for the project, in terms of coverage, accuracy, speed of execution, engagement level, revenue improvement (by channel), etc. Yes, it would be harder to sell the idea with lots of disclaimers attached to the proposal, but not starting the project at all may be better than being called a failure after spending lots of precious time and money.

Some goals may conflict with each other, too. For instance, response rate is often inversely related to the value of the transaction. So, if the blame game starts, how are you going to defend a predictive model that was designed primarily to drive the response rate, not necessarily the revenue per transaction? Set clear goals in numeric form, and more importantly, share the disclaimers upfront. Otherwise, “something” will look wrong to someone.
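To make the conflict concrete, here is a minimal sketch with entirely made-up campaign numbers, showing how a model that wins on response rate can still lose on revenue per contact:

```python
# Hypothetical illustration: two targeting models, with made-up numbers.
# Model A is tuned for response rate; Model B targets bigger spenders.

def revenue_per_contact(response_rate, avg_order_value):
    """Expected revenue generated per person contacted."""
    return response_rate * avg_order_value

model_a = revenue_per_contact(0.05, 40.0)    # 5% response, $40 average order
model_b = revenue_per_contact(0.02, 150.0)   # 2% response, $150 average order

# Model A "wins" on response rate, but Model B earns more per contact.
print(f"Model A: ${model_a:.2f} per contact")
print(f"Model B: ${model_b:.2f} per contact")
```

Which model counts as “successful” depends entirely on which metric was declared upfront; that is the disclaimer to share before the campaign runs.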

But what if your scary boss wants to boost the rate of acquisition, customer value, and loyalty all at the same time, no matter what? Maybe you should look for an exit.

Top-Down Culture

By nature, analytics-oriented companies are flatter and less hierarchical in structure. In such places, data and empirical evidence win the argument, not the organizational rank of the speaker. It gets worse when the highest-ranking officer has very little knowledge of data or analytics in general. In a top-down culture, no one would question that C-level executive in a nice suit. Foremost, the executive wouldn’t question his own gut feelings, as those gut feelings put him in that position in the first place. How can he possibly be wrong?

Trouble is that the world is rapidly changing around any organization. And monitoring the right data from the right place is the best way to keep informed and take actions preemptively. I haven’t encountered any gut-feeling — including my own — that stood the test of time better than data-based decision-making.

Now, sometimes the top-down culture is a good thing. If the organizational goals are clearly set, and if the top executive supports a big data project (no pun intended here) instead of launching blame games, then countless inter-departmental conflicts will be mitigated upfront (as in, “Hey, everyone, we are doing this, alright?”).

Conflicts Among Teams — No Buy-in, No Use

But no amount of executive force can eliminate all infighting that easily. Some may say “Yeah, yeah, yeah” in front of the CEO or CMO, but sabotage the whole project behind the scenes. In fact, I’ve seen many IT departments get in the way of the noble idea of “Customer-360.”

Why? It could be a data ownership issue, security concerns, or a lack of understanding of 1:1 marketing or advanced analytics. Maybe they just want the status quo, or see any external influence on data-related matters as a threat. In any case, imagine a situation where the very people who hold the key to the source data are NOT cooperating with data or analytics projects meant to benefit other departments. Or worse, maybe you have “seen” such cases, as they are so common.

Another troublesome example would be on the user side. Imagine a situation where sales or marketing personnel do not buy into any new way of doing things, such as using model scores to understand the target better. Maybe they got burned by bad models in the past. Or maybe they just don’t want to change things around, like those old-school talent scouts in the movie “Moneyball.” Regardless: no buy-in, no use. So much for that shiny marketing automation project that sucked up a seven-figure budget to develop and deploy.

Every employee puts their prolonged employment status over any dumb or smart project. Do not underestimate people’s desire to keep their jobs with minimal changes.

Players Haven’t Seen Really Messy Situations Before

As you can see, data or analytics projects are not just about technologies or mathematics. Further, data themselves can be a hindrance. I’ve written many articles about “good” data, but they are indeed quite rare in real life. Data must be accurate, consistent, up-to-date, and applicable in most cases, without an excessive amount of missing values. And keeping them that way is a team sport, not something a lone tech genius can handle.

Unfortunately, most graduates with degrees in computer science or statistics don’t get to see a real bloody mess before they get thrown onto a battlefield. In school, problems are nicely defined by the professors, and the test data are always in pristine condition. But I don’t think I have seen such clean and error-free data since my school days, which were indeed a lifetime ago.

Dealing with organizational conflicts, vague instructions, and messy data is part of the job for any data professional. It requires quite a balancing act to provide “the least wrong answers” consistently to constituents who have vastly different interests. If the balance is even slightly off, you may end up with a technically sound solution that no one adopts into their practices. Forget about full automation of anything in that situation.

Already Spent Money on Wrong Things

This one is a heart-breaker for me, personally. I get onto the scene, examine the case, and provide step-by-step solutions to get to the goal, only to find out that the client company spent money on the wrong things already and has no budget left to remedy the situation. We play with data to make money, but playing with data and technology costs money, too.

There are so many snake oil salespeople out there, over-promising left and right with lots of sweet-to-the-ears buzzwords. Yeah, if you buy this marketing automation toolset armed with state-of-the-art machine-learning features, you will get actionable insights out of any kind of data in any form through any channel. Sounds too good to be true?

Marketing automation is really about the “combination” of data, analytics, digital content, and display technologies (for targeted messaging). It is not just one thing, and there is no silver bullet. Even if some other companies may have found one, will it be applicable to your unique situation, as is? I highly doubt it.

The Last Word on How to Do Marketing Automation Right

There are so many reasons why marketing automation projects go south (though I don’t understand why going “south” is a bad thing). But one thing is for sure. Marketing automation — or any data-related project — is not something that one or two zealots in an organization can achieve single-handedly with some magic toolset. It requires organizational commitment to get it done, get it utilized, and get improved over time. Without understanding what it should be about, you will end up automating the wrong things. And you definitely don’t want to get to the wrong answer any faster.

Machine Learning? I Don’t Think Those Words Mean What You Think They Mean

I find more and more people use the term “machine learning” when they really mean to say “modeling.” I guess that is like calling all types of data activities — with big and small data — “Big Data.” And that’s OK.

Languages are developed to communicate with other human beings more effectively. If most people use the term to include broader meanings than the myopic definition of the words in question, and if there is no trouble understanding each other that way, who cares? I’m not here to defend the purity of the meaning, but to monetize big or small data assets.

The term “Big Data” is not even a thing in most organizations with ample amounts of data anymore, but there are many exceptions, too. I visit other countries for data and analytics consulting, and those two words still work like “open sesame” to some boardrooms. Why would I blame words for having multiple meanings? The English dictionary is filled with such colloquial examples.

I recently learned that the famous magic words “Hocus Pocus” came from the Latin phrase “hoc est corpus,” which means “This is the body (of Christ),” as spoken during Holy Communion in Roman Catholic churches. So much for the olden-day priests speaking only in Latin to sound holier; ordinary people understood the process as magic — turning a piece of bread into the body of Christ — and started applying the phrase to all kinds of magic tricks.

However, if such transformations of words start causing confusion, we all need to be more specific. Especially when the words are about specific technical procedures (not magic). Going back to my opening statement, what does “machine learning” mean to you?

  • If spoken among data scientists, I guess that could mean a very specific way to describe modeling techniques that include Supervised Learning, Unsupervised Learning, Reinforcement Learning, Deep Learning, or any other types of Neural Net modeling, indicating specific methods to construct models that serve predetermined purposes.
  • If used by decision-makers, I think it could mean that the speaker wants minimal involvement of data scientists or modelers in the end, and automate the model development process as much as possible. As in “Let’s set up Machine Learning to classify all the inbound calls into manageable categories of inquiries,” for instance. In that case, the key point would be “automation.”
  • If used by marketing or sales, well, now we are talking about a really broad set of meanings. It could mean that the buyers of the service will require minimal human intervention to achieve goals. That the buyer doesn’t even have to think too much (as the toolset would just work). Or, it could mean that it will run faster than existing ways of modeling (or pattern recognition). Or, they meant to say “modeling,” but they somehow thought that it sounded antiquated. Or, it could just mean that “I don’t even know why I said Machine Learning, but I said it because everyone else is saying it” (refer to “Why Buzzwords Suck”).

I recently interviewed a candidate fresh out of a PhD program for a data scientist position, whose resume was filled with “Machine Learning.” But when we dug a little deeper into the actual projects he finished for schoolwork or internship programs, I found out that most of his models were indeed good, old regression models. So I asked why he substituted words like that, and his answer was staggering: he said his graduate school guided him that way.
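For what it’s worth, the “good, old regression” behind many such resumes can be sketched in a few lines of plain Python: a one-variable ordinary least-squares fit on made-up data, no exotic machinery required.

```python
# A plain one-variable least-squares regression -- the "good, old"
# modeling that often hides behind the "Machine Learning" label.
# The data points below are made up purely for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]             # e.g., contacts per customer (hypothetical)
ys = [2.1, 3.9, 6.0, 8.1, 9.9]   # e.g., observed responses (hypothetical)
a, b = fit_line(xs, ys)
print(f"intercept = {a:.2f}, slope = {b:.2f}")
```

Whether you call this “machine learning” or “modeling,” the math underneath is the same; the label mostly signals the speaker’s audience.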

Why Marketers Need to Know What Words Mean

Now, I’m not even sure whom to blame in a situation like this, where even academia has fallen under the weight of buzzwords. After all, the schools are just trying to help their students get high-paying jobs before the summer is over. I guess, then, the blame is on the hiring managers who are trying to recruit candidates based on buzzwords, not necessarily knowing what they should look for in the candidates.

And that is a big problem. This is why even non-technical people must understand the basic meanings of the technical terms they are using, especially when they are hiring employees or procuring outsourcing vendors to perform specific tasks. Otherwise, some poor souls would spend countless hours finishing things that don’t mean anything for the bottom line. In a capitalistic economy, we play with data for only two reasons:

  1. to increase revenue, or
  2. to reduce cost.

If it’s all the same for the bottom line, why should a non-technician care about the “how the job is done” part?

Why It Sucks When Marketers Demand What They Don’t Understand

I’ve been saying that marketers or decision-makers should not be bad patients. Bad patients won’t listen to doctors; and further, they will actually command doctors to prescribe certain medications without testing or validation. I guess that is one way to kill themselves, but what about the poor, unfortunate doctor?

We see that in the data and analytics business all of the time. I met a client who just wanted our team to build neural net models for him. Why? Why not insist on a random forest method? I think he thought that “neural net” sounded cool. But when I heard his “business” problems out, he definitely needed something different as a solution. He didn’t have the data infrastructure to support any automated solutions; he wanted to know what went on in the modeling process (neural net models are black boxes, by definition); he didn’t have enough data to implement such things at the beginning stage; and the projected gains (by employing models) wouldn’t cover the cost of such an implementation for the first couple of years.

What he needed was a short-term proof of concept, where data structure must be changed to be more “analytics-ready.” (It was far from it.) And the models should be built by human analysts, so that everyone would learn more about the data and methodology along the way.

Imagine a junior analyst fresh out of school, whose resume is filled with buzzwords, meeting with a client like that. He wouldn’t fight back, but would take the order verbatim and build neural net models, whether they helped in achieving the business goals or not. Then the procurer of the service would still blame the concept of machine learning itself, because bad patients never blame themselves.

Even advanced data scientists sometimes lose the battle with clients who insist on implementing Machine Learning when the solution is something else. And such clients are generally the ones who want to know every little detail, including how the models are constructed. I’ve seen data scientists who’d implemented machine learning algorithms (for practical reasons, such as automation and speed gain), and reverse-engineered the models, using traditional regression techniques, only to showcase what variables were driving the results.

One can say that such is the virtue of a senior-level data scientist. But then what if the analyst is very green? Actually, some decision-makers may like that, as a more junior-level person won’t fight back too hard. Only after a project goes south will those “order takers” be blamed (as in, “those analysts didn’t know what they were doing”).

Conclusion

Data and analytics businesses will continually evolve, but the math and the human factors won’t change much. What will change, however, is that we will have fewer and fewer middlemen between the decision-makers (who are not necessarily well-versed in data and analytics) and human analysts or machines (who are not necessarily well-versed in sales or marketing). And it will all be in the name of automation, or more specifically, Machine Learning or AI.

In that future, the person who orders the machine around — ready or not — will be responsible for bad results and ineffective implementations. That means everyone needs to be more logical. Maybe not as much as a Vulcan, but somewhere between a hardcore coder and a touchy-feely marketer. And they must be more aware of the capabilities and limitations of technologies and techniques; and, more importantly, they should not blindly trust machine-based solutions.

The scary part is that those who say things like “Just automate the whole thing with AI, somehow” will be the first in line to be replaced by the machines. That future is not far away.

3 Top 2018 Marketing Posts That Predict 2019 Outcomes

As a regular contributor to Target Marketing, I thought I would use my last post of 2018 to take stock of the marketing posts I did throughout the year. Being data-driven, I began by looking at the data to find the most-read posts.

A clear lesson for me is that the wonkier my post, the less popular. (I know! I am as surprised as you. I have so much technical and boring perspective to give!)

Nevertheless, below are two posts that the wisdom of the market indicated were my better contributions to the marketing world. I also added my closing thoughts for the year for both posts. Lastly, I also include my personal-favorite post, which I file under the “business fiction” category — for the benefit of the Pulitzer Prize Board.

Data and the Decline of Sears

My top post for 2018 discusses how the downfall of Sears was not about its refusal to adopt new technology and embrace data. In fact, since 2005, Sears strongly embraced a data-driven culture.

Rather, the problem was that Sears’ leadership did not show visionary boldness, and focused its data-driven energies on mostly tactical wins.

I would like to emphasize that data-driven thinking was not the downfall of Sears. In fact, it yielded great results where applied. Rather, it was the narrow-minded application of data-driven thinking that resulted in the downfall. This is an important lesson for those who believe that transforming into a data-driven culture is an inoculation from obsolescence.

Marketing Strategy: Nike’s #JustDoIt Campaign and Kaepernick

The second-most popular post hypothesized what the long-run game plan was behind Nike’s campaign featuring Colin Kaepernick. There were three hypotheses.

  • First, that Nike is simply focused on the issue of racial justice and not looking to weigh in on all of politics.
  • Second, that Nike is trying to drive dialog by alternating between liberal and conservative talking points, and the Kaepernick ads were the starting point.
  • Finally, that Nike is actively seeking to become a brand associated with left-leaning politics.

It is the last hypothesis that worried me the most. Not because of my political beliefs. Rather, I think it is bad for the country if companies also join the hyper-polarized state of American politics.

To my personal relief, it seems since then that Nike is focused on the specific issue of race. Their follow-up campaign, featuring professional soccer player Raheem Sterling, addresses the need to speak out against racism — even if it isn’t easy to do so.

Looking at 2019 and beyond, I think Nike has wisely positioned itself on the right side of history.

‘Nobody’ Knows the Trouble I’ve Seen

My personal favorite post, which came in sixth, is a fun read. It features a dialog with a fictional consultant named “Nobody.” It distills, through dialog, a recurring theme in most of my posts: data and analytics cannot replace managerial courage.

To 2019 and Beyond!

If there is a prediction for 2019 I would like to make, it is that we will begin (just begin) to see data and analytics become accepted as valuable tools and not a replacement for decisive action.

For a concrete example of this, I would refer the reader to my top post of the year regarding Sears.

Best wishes to all for a happy and prosperous 2019!

Replacing Unskilled Data Marketers With AI

People react to words like “machine learning” or “artificial intelligence” very differently, depending on their interests and levels of understanding of technology. Some get scared, and among them are smart people like Elon Musk or the late Stephen Hawking. Others, including data marketers who lack strategic skills, may react based on a vague fear of becoming irrelevant, thinking that a machine will replace them in the job market soon.

On the contrary, I find that most marketers welcome terms like machine learning. Many think that, in the near future, computers will automatically perform all the number-crunching and just tell them what to do. In marketing environments where “do more with less” is the norm, the idea of machines making decisions for them may sound attractive. How great would it be if some super-duper computer did all of the hard work for us? The trouble is that the folks who think like that will be the first ones to be replaced by the machines.

Modern marketing is closely tied into the world of data and analytics (the operative word being “modern,” as there are plenty of marketers still going with their gut feelings). There are countless types of data and analytics applications influencing operations management, R&D or even training programs for world-class athletes, but most of the funding for analytical activities is indeed related to marketing. I’d go even further and claim that most of data-related work is profit-driven; either to make more money for organizations or to cut costs in running businesses. In other words, without the bottom-line profit, why bother with any of this geeky stuff?

Yet many marketers aren’t interested in analytics, and some even fear having lots of numbers thrown at them. A set of numbers that would excite analytical minds scares off many marketers. For the record, I blame such attitudes on school systems and jock cultures that have been devaluing the importance of mathematics. It is no accident that most “nerdy” analysts nowadays are from foreign places, where people who are really good at math are not ridiculed by other teenage students but praised or even worshiped.

The joke is that those geeky analysts will be replaced by machines first, as any semi-complex analytical work is delegated to them already. Or will they?

I find it ironic that marketers who have a strong aversion to words like “advanced analytics” or “modeling” would freely embrace machine learning or AI. That is like saying you don’t like music unless it is played by machines. What do they think machine learning is? Some “thinking slave” that will do all of the work without complaining or asking too many questions?

Machine learning is one of many ways of modeling, whether for prediction or pattern recognition. It just became more attractive to the business community as computing power increased over time to accommodate heavy iterations of calculations, and as words like “neural net models” were replaced by the easier-sounding “machine learning.”

To wield such machines, nonetheless, one must possess “some” idea of how they work and what they require. Otherwise, it would be like a musically illiterate person trying to produce a piece of music fully automatically. Yes, I’ve heard that there are now algorithms that can compose music or write novels on their own, but I would argue that such formulaic music will be filler in a hotel elevator, at best. If emotionally moving another human being is the goal, one can’t eliminate all human factors from the equation.

Machines exist to automate things that humans already know how to do. And it takes ample amounts of man-hours to train a machine, even for the relatively simple task of telling the difference between dogs and cats in pictures. And some other human will have decided that such a task would be meaningful for other humans. Of course, once the machines are set up to learn on their own, a huge momentum will kick in and millions of pictures will be sorted automatically.

And as such evolution goes on, a whole lot of people may lose their jobs. But not the ones who know how to set the machines up and give them purposes for such work.

Let’s Take a Breath Here

Dialing back to something much simpler: Operations. In automating reports and creating custom messages for target audiences, the goals must be set by stakeholders and machines must be tweaked for such purposes at the beginning. Someday soon, AI will reach the level where it can operate with very general guidelines; but at least for now, requesters must provide logical instructions.

Let’s say a set of reports comes out of the computer for use in marketing analysis. “What reports to show”-type decisions are still being made by humans, but producing useful intelligence in an automated fashion isn’t a difficult task these days. Then what? The users still have to make sense out of all of those reports. Then they must decide what to do about the findings.

There are folks who hope that the machine will tell them exactly what to do with such intel. The first part may come close to their expectations sometime soon, if not already for some. Producing tidbits like “Hi, human: It looks like over 80% of your customers who shopped last year never came back,” or “The top 10% of your customers, in terms of lifetime spending level, account for over 70% of your yearly revenue, but about half of them show days between transactions far longer than a year.” By the way, mimicking human speech isn’t easy, but if all these numbers are sitting somewhere in the computer, yes, it is possible to expect something like this out of machines.
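If the underlying numbers are already sitting in a database, generating a tidbit like the first one takes only a few lines. A minimal sketch, using a made-up transaction table with a hypothetical (customer, year, amount) schema:

```python
# Sketch of an auto-generated "tidbit," using made-up transactions.
# Each record is (customer_id, year, amount) -- a hypothetical schema.
transactions = [
    ("c1", 2017, 500), ("c1", 2018, 300),
    ("c2", 2017, 40),
    ("c3", 2017, 25),
    ("c4", 2017, 30),
    ("c5", 2017, 20),
]

last_year = {c for c, year, _ in transactions if year == 2017}
this_year = {c for c, year, _ in transactions if year == 2018}
lapsed_rate = len(last_year - this_year) / len(last_year)

print(f"Hi, human: {lapsed_rate:.0%} of last year's customers never came back.")
```

The arithmetic is the easy part; knowing which of the hundreds of possible tidbits actually matters to the business is where human judgment still comes in.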

The hard part for the machines would be picking the five or six most important tidbits out of hundreds, if not thousands, of other “facts,” as that requires an understanding of business goals. But we can fake even that type of decision-making by assuming most businesses are about “increasing revenue by acquiring new valuable customers, and retaining them for as long as possible.”

Then the really hard part would be deciding what to do about it. What should you do to make your valuable customers come back? Answering that type of question requires not only an analytical mindset, but also a deep understanding of human psychology and real business acumen. Analytics consultants are generally multi-dimensional thinkers, and the one-trick ponies who just spit out formulaic answers do not last too long. The same rule would apply to machines, and we may call those one-dimensional machines “posers,” too (refer to “Don’t Hire Data Posers”).

But let’s say that by feeding thousands of business cases, with final solutions and results, into machines as a training set, we finally get such machine intelligence. Would we be free from having to “think” even a bit?

The short answer is that, like I said in the beginning, the folks who don’t want to analyze anything will become irrelevant even sooner. Why would we need illogical people when machines are much cheaper and smarter? Besides, even the future computers shown in science fiction movies require “logical” inquiries to function properly. “Asking the right question” will remain a human function, even in the faraway future. And a logical mindset is the result of mathematical training plus some aptitude for it, much like musical ability.

The word “illiterate” used to mean folks who didn’t know how to read and write. In the age of machines, “logic” is the new language. So, dear humans, do not give up on math, if self-preservation is an instinct that you possess. I am not asking everyone to get a degree in mathematics, but I am insisting that we all learn scientific approaches to problem-solving and logical methods of defining inquiries. In the future, people who can wield machines will be in secure places — whether they are coders or not — while a new breed of logically illiterate people will be replaced by the machines, one by one.

So, before you freely invite advanced thinking machines into your marketing operations, think carefully about whether you are the one who gives purpose to such machines (by understanding what’s at stake, and what all those numbers mean), or the one who can train machines to solve those pre-defined (by humans) problems.

I am not talking about some doomsday scenario of machines killing people to take over the world; but like any historical event that is described as a “revolution,” this machine revolution will have a real impact on our lives. And like anything, it will be good for some and bad for others. I am saying that data illiterates who say things like, “I don’t understand what all those numbers mean,” may be ignored by machines — just like they are by smartass analysts (but maybe without the annoying attitudes).

Marketing Success Sans ‘Every Breath They Take, Every Move They Make’

Last month, I talked about how to measure success when there are many conflicting goals and available metrics flying around (refer to “Marketing Success Metrics: Response or Dollars?”). This time, let’s start thinking about how to act on data and intelligence that we’ve gathered. And that means we get to touch different kinds of advanced analytics.

But before we get into boring analytics talk, citing words like “predictive analytics” and “segmentation,” let’s talk about what kind of data are required to make predictions better and more accurate. After all, no data, no analytics.

I often get questions like what the “best” kind of data is. And my answer is, to the inquirer’s disappointment, “It depends.” It really depends on what you are trying to predict, or ultimately, do. If you would like an accurate forecast of future sales, such an effort calls for past sales history (though not necessarily on an individual or transactional level); past and current marcom spending by channel; web and other channel traffic data; and environmental data, such as economic indicators, just to start off.

Conversely, if you’d like to predict an individual’s product affinity, preferred offer types or likelihood to respond to certain promotion types, such predictive modeling requires data about the past behavior of the target. And that word “behavior” may evoke different responses, even among seasoned marketers. Yes, we are all reflections of our past behavior, but what does that mean? Every breath you take, every move you make?

Thanks to the Big Data hype a few years back, many now believe that we should just collect anything and everything about everybody. Surely, the cost of data collection, storage, and maintenance has decreased quite a bit over the years, but that doesn’t mean we should hoard data mindlessly. Doing so simply defers the inevitable data hygiene, standardization, categorization, and consolidation to future users (or machines) who must sort out unorganized and unrefined data to provide applicable insights.

So, going back to that question of what makes up data about human behavior, let’s define what that means in a categorical fashion. With proliferation of digital data collection and analytics, the definition of behavioral data has expanded considerably.

In short, what people casually refer to as “behavioral data” may include any of the following:

  • Online Behavior: Web data regarding click, view and other shopping behavior.
  • Purchase: Transactional data, made of who, what, when, how much and through what channel.
  • Response: Response history, in relation to specific promotions, covering open, click-through, opt-out, view, shopping basket, conversion/transaction. Offline response may be as simple as product purchase.
  • Channel: Channel usage data, not necessarily limited to shopping behavior.
  • Payment: Payment and related delinquent history — essential for credit purchases and continuity and subscription businesses.
  • Communication: Call, chat or other communication log data, positive or negative in nature.
  • Movement: Physical proximity or movement data, in store or store area, for example.
  • Survey: Responses to various surveys.
  • Opt-in/Opt-out: Sign-ups for specific two-way communications, and channel-specific opt-out requests.
  • Social Media: Product review, social media posting and product/service-related sentiment data.
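
To make those categories concrete, here is a minimal sketch (all names are hypothetical, not from any particular platform) of how such events might be tagged in a unified, person-level event log, so that downstream analytics can filter by behavior type:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical category tags mirroring the list above
BEHAVIOR_CATEGORIES = {
    "online", "purchase", "response", "channel", "payment",
    "communication", "movement", "survey", "optin_optout", "social",
}

@dataclass
class BehavioralEvent:
    customer_id: str
    category: str       # one of BEHAVIOR_CATEGORIES
    event_type: str     # e.g. "click", "conversion", "opt_out"
    timestamp: datetime
    channel: str        # e.g. "email", "web", "store"

    def __post_init__(self):
        # Reject events that don't map to a known behavior category,
        # rather than hoarding unclassifiable records.
        if self.category not in BEHAVIOR_CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# Example: a web purchase event for one customer
evt = BehavioralEvent("cust_001", "purchase", "conversion",
                      datetime(2024, 1, 15), "web")
```

The point of a schema like this is the one made above: categorization and standardization happen at collection time, not as a cleanup project dumped on future users.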

I am sure some will think of more categories. But before we create an exhaustive list of data types, let’s pause and think about what we are trying to do here.

First off, all of these data traceable to a person are being collected for one major reason (at least for marketers): To sell more things to them. If the goal is to predict the who, what, when and why of buying behavior, do we really need all of this?

The ‘Who’ of Buying Behavior

In the prediction business, predicting “who” (as in “who will buy this product?”) is the simplest kind of action. We’d need some PII (personally identifiable information) that can be linked to the buying behaviors of the target. After all, the whole modeling technique was invented to rank target individuals and set up contact priority — in that order. Think of sending expensive catalogs only to individuals with high “likely to respond” scores, or of B2B sales teams prioritizing high “likely to convert” targets.
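
The rank-then-contact idea can be sketched in a few lines. This is a toy RFM-style score (recency, frequency, monetary), with made-up weights and customer names, standing in for a fitted response model; the mechanics of scoring, sorting, and cutting off at a contact budget are the same either way:

```python
# Hypothetical: rank customers by a "likely to respond" score and
# mail expensive catalogs only to the top of the list.

def respond_score(recency_days, frequency, monetary):
    # Toy score, not a fitted model: recent, frequent, high-spend
    # customers rank higher. Weights here are illustrative only.
    return ((1.0 / (1 + recency_days)) * 0.5
            + min(frequency, 10) / 10 * 0.3
            + min(monetary, 1000) / 1000 * 0.2)

customers = [
    # (customer_id, days since last purchase, # purchases, total spend)
    ("cust_A", 10, 5, 400),
    ("cust_B", 200, 1, 50),
    ("cust_C", 3, 8, 900),
]

# Rank by score, highest first, and contact only the top 2
ranked = sorted(customers, key=lambda c: respond_score(*c[1:]), reverse=True)
top_targets = [cid for cid, *_ in ranked[:2]]
print(top_targets)  # cust_C and cust_A outrank the lapsed cust_B
```

A real project would replace `respond_score` with a model trained on past response history, but the contact-priority logic around it would look much the same.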

The ‘What’ of Buying Behavior

The next difficulty level lies with the prediction of “what” (as in “what is that target individual going to buy next?”). This type of prediction is generally hit or miss, so even mighty Amazon displays multiple product offers at the end of a successful transaction, saying “Customers who purchased this item are also interested in these products.” Such a gentle push, based on collaborative filtering, requires massive purchase history by many buyers to be effective. But, provided with ample amounts of data, it is not terribly difficult, and the risk of being wrong is relatively low. Pinpointing the very next product for 1:1 messaging can be challenging, but product basket analysis can easily lead to popular combinations of products, at the minimum.
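
To show why basket analysis is the easy entry point, here is a minimal co-occurrence sketch, a crude cousin of the collaborative filtering mentioned above. The baskets and product names are invented; the idea is simply to count which products appear together in transactions and surface the most frequent companions:

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini purchase history: one set of products per transaction.
baskets = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"camera", "bag"},
    {"laptop", "mouse"},
]

# Count how often each pair of products shows up in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def also_bought(product, top_n=2):
    """Products most often bought together with `product`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [p for p, _ in scores.most_common(top_n)]

print(also_bought("camera"))  # "sd_card" ranks first: 2 shared baskets
```

With a handful of baskets this is trivia; with millions of transactions, the same counting logic is what powers those “customers also bought” panels, before any fancier model enters the picture.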