Money Talks: It’s Past Time That Marketers Reconsider Reward-Based Promotions

Recently, Bill Warshauer provided Target Marketing readers with a timely reminder of the efficacy of reward-based promotions and their utility in replacing discounts in the promotional mix. Now let’s take it a step further with some modeling.

Extremely interested, I commented on it; a somewhat updated summary of that comment follows:

It is past time that marketers reconsider reward-based promotions.

Let me give you one simple case history.

Some years ago, a client, a leading bank, found that it was sending pre-approved but unrequested credit cards to customers and was getting unacceptably few people to “deblock” or “activate” their cards, despite a regular monthly conversion series urging them, with various incentives such as no annual charge for one year, to accept and use the new cards. The low conversion rate, plus the cost of up to six efforts per prospect with no revenue, was discouraging, to say the least.

When we were asked for help, we proposed turning this system on its head. Our objective was simple: incentivize card recipients to use their new cards immediately, instead of allowing the costly slow drip of promotions over months. Make them, as mafia lore says, an offer they couldn’t afford to refuse but we could afford to make.

To do this, we designed an elaborate mailing to be sent, without fail, the day after the card itself, rehearsing all the cardholder benefits but telling the prospect there was another “secret” benefit which could only be had by calling a special toll-free number.

Having determined the allowable cost per active card in advance, we knew that we could afford to offer $50 per active card user. The caller was not to be asked to “deblock” or “activate” their card (unnecessarily bureaucratic words, in our view) but rather, “May we credit your card account with a $50 gift you can spend right away when you use your card within the next five days?” (We also tested $20, $30, and $40 as alternatives to determine the most profitable offer.)

Want to know which was best? Ask me.

While the cost of the mailing package nearly got us fired, the results turned client fury into joy. We achieved the client’s objective of conversion, even after accounting for the cost of the reward, at a cost per conversion of less than half of what the client had been spending. And the conversions came right away, a significant cash advantage.

Not exactly the proverbial rocket science, is it?

Over the years, the iconic Dick Benson and others drilled into me that Rule No. 1 of what was then called direct marketing was:

It doesn’t matter a damn how much you spend in marketing money so long as in the end, the unit cost of achieving your order gives you the pre-determined profit you desire, or more!

Why so many marketers have trouble getting their minds around this is a total mystery to me. It should be hardwired into them.

However, taking advantage of the isolation of the pandemic, I thought it worthwhile to have another go at explaining it and to provide interested readers with the tools to build a simple model to help them calculate it for themselves.

Using the activation of distributed credit cards as an example, let’s assume that the bank has determined it can afford to spend a maximum of $87.50 to generate each active credit card. In arriving at that “allowable,” the bank has determined that it wishes to achieve a profit of a similar $87.50. And to make certain that their marketing geniuses won’t forget the need for that profit, the bank’s managers have treated the profit as a “cost” and reserved it so the marketing guys cannot get their mitts on it.

Now, if the actual cost of generating the activation is exactly the $87.50 allowable, the profit will still be there. And if, due to the marketing team’s genius (or luck, or a combination of the two), the activation costs less than the allowable, the difference will go right to the bottom line and the applause will be deafening.

Let’s look at this starting with the knowns:

Determining the Allowable, Figure 1
For this hypothetical example, let’s accept that the historic active card revenue is $350 and that operating the card system and covering general and administrative costs is 50% of this revenue. Now let’s reserve an additional 25% ($87.50) for profit before determining how much we can spend for marketing.

As you can see, that leaves us with $87.50 which we can afford to spend to obtain an activated card user and still make a 25% profit. Put another way, if we spend exactly $87.50 on marketing, including the cost of any incentive, then we will make the mandated 25% profit. Spend more and we will make less profit, or even take a loss; spend less and the saving against the $87.50 allowable will add to the profit.

Now the fun starts.

What we wish to project is how much our response needs to increase from a non-incentivized baseline to justify the incremental cost of each level of incentive. Let’s assume that including the variable cost of converting the incoming telephone calls, the per-thousand marketing cost is $1,500. Before adding the cost of any incentive, the baseline at different response levels looks like this.

Non-incentivized cost per activated card, figure 2
Note that the white outlined cells are inputs and the shaded cells are driven by formulae. In working with the model, this allows you to play “what ifs?” and input your own assumptions, for example, different percentage responses or cost per thousand.

As you can plainly see, only the cost of a 1% response exceeds the $87.50 allowable and eats into the reserved profit. At 2%, the cost is $75, which means that the bottom line is the $87.50 allowable plus the $87.50 reserved profit objective, less the $75 cost per response, for a total contribution of $100, or about 29% of revenue, a comfortable ROMI of 1.33.
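
If you care to check the arithmetic outside of Excel, here is a minimal sketch in Python of the same math, using only the figures quoted above ($350 revenue, 50% operating cost, a 25% profit reserve, and $1,500 per thousand in marketing cost). The spreadsheet’s own formulas may differ in layout, but the logic is the same.

```python
# Minimal sketch of the "allowable" and the non-incentivized cost per
# activation, using the figures quoted in the text (assumed inputs).

REVENUE_PER_CARD = 350.00  # historic revenue per active card
OPERATING_PCT = 0.50       # operations plus G&A, as a share of revenue
PROFIT_PCT = 0.25          # profit reserved before marketing
COST_PER_M = 1500.00       # marketing cost per thousand pieces mailed

# The allowable: what is left of revenue after operations and reserved profit.
allowable = REVENUE_PER_CARD * (1 - OPERATING_PCT - PROFIT_PCT)
print(f"Allowable per activation: ${allowable:.2f}")  # $87.50

# Cost per activated card at several response rates, before any incentive.
for response_pct in (1.0, 1.5, 2.0, 2.5, 3.0):
    activations_per_m = 10 * response_pct  # e.g., 2% of 1,000 pieces = 20
    cost_per_activation = COST_PER_M / activations_per_m
    flag = "over allowable" if cost_per_activation > allowable else "OK"
    print(f"{response_pct:.1f}% response: ${cost_per_activation:7.2f} ({flag})")
```

At 2%, the sketch reproduces the $75 cost per activation, and only the 1% row ($150) exceeds the $87.50 allowable.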

But as aggressive marketers, we want more responses and profit, and we are prepared to offer a cash incentive to achieve that. Our question must be: How much does the addition of an incentive have to increase the response rate to justify its additional cost, be it $20, $30, or any other number you choose?

To find the answer to this, and to provide a tool for answering other questions with different inputs, we built this model. It references the allowable cost-per-order (ACPO) model above and matrixes the calculated cost of a given response percentage with the addition of the incentive.

Actual Cost, Figure 3
To this we have added a table which permits us to input any combination of response rate and incentive and see how much additional response would be required to justify the specified incentive, in the case of this example a $30 cash promise.

Incentive Justification, Figure 4
To help you should you care to build your own model, the equations which drive it are explained below the numbers.

As you’ll see, to justify the $30 incentive, response must rise by 0.86 percentage points, from 2.0% to 2.86%. Any increase greater than that would make the use of the $30 incentive a winner.
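
The published numbers let us reconstruct one plausible version of that break-even equation; treat the sketch below as an assumption about the spreadsheet’s logic, not the spreadsheet itself. It asks how much extra response is needed so that the contribution from the incremental activations, valued at the baseline margin, covers the incentive paid to every responder, and it reproduces the 0.86-point answer.

```python
# One plausible reconstruction of the incentive break-even (assumed logic).

COST_PER_M = 1500.00   # marketing cost per thousand pieces
TOTAL_MARGIN = 175.00  # allowable ($87.50) plus reserved profit ($87.50)
BASE_RATE = 2.0        # baseline response, in percent
INCENTIVE = 30.00      # cash credit per activated card

base_cost = COST_PER_M / (10 * BASE_RATE)  # $75 per activation at 2%
base_margin = TOTAL_MARGIN - base_cost     # $100 contribution per activation

# Solve: uplift * base_margin == INCENTIVE * (BASE_RATE + uplift)
uplift = INCENTIVE * BASE_RATE / (base_margin - INCENTIVE)
print(f"Required uplift: {uplift:.2f} points "
      f"-> {BASE_RATE + uplift:.2f}% response")  # 0.86 -> 2.86%
```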

There is no question that money talks. We just need to understand the language.

Any interested reader who wishes to have a copy of this Excel model, email pjrosenwald@gmail.com with “Model Please” in the subject line.

Don’t Blame Personalization After Messing It Up

In late 2019, Gartner predicted that “80% of marketers who have invested in personalization efforts will abandon them by 2025 because of lack of ROI, the peril of customer data, or both.” Interesting that I started my last article by quoting a claim that only about 20% of analytics work is properly applied to businesses. What is this, some 80/20 hell for marketers?

Nonetheless, the stat that I shared here begs for further questioning, especially the ROI part. Why do so many marketers think that ROI isn’t there? Simply, ROI doesn’t look good when:

  1. You invested too much money (the denominator of the ROI equation), and
  2. The investment didn’t pay off (the numerator of the same).

Many companies must have spent large sums of money on teams of specialists and service providers, data platforms featuring customer 360, personalization software (on the delivery side), analytics work for developing segments and personas, third-party data, plus the maintenance cost of it all. To justify the cost, some marginal improvements here and there wouldn’t cut it.

Then, there are attribution challenges even when there are returns. Allocating credit among all the things that marketers do isn’t very simple, especially in multichannel environments. To knock CEOs and CFOs off their chairs – basically the bottom-line people, not math or data geeks – the “credited” results should look pretty darn good. Nothing succeeds like success.

After all, isn’t that why marketers jumped onto this personalization bandwagon in the first place? For some big payoff? Wasn’t it routinely quoted that, when done right, 1:1 personalization efforts could pay off 20 times over the investment?

Alas, the key phrase here was “when done right,” while most were fixated on the dollar signs. Furthermore, personalization is a team sport, and it’s a long-term game.  You will never see that 20x return just because you bought some personalization engine and turned the default setting on.

If history has taught us anything, it is that no game that pays off so well can be that simple. There are lots of in-between steps that could go wrong. Too bad that yet another buzzword is about to go down as a failure because marketers didn’t play the game right and the word was heavily abused.

But before giving it all up just because the first few rounds didn’t pay off so well, shouldn’t marketers stop and think about what could have gone so wrong with their personalization efforts?

Most Personalization Efforts Are Reactive

If you look at so-called “personalized” messages from the customer’s point of view, most of them are just annoying. You’d say, “Are they trying to annoy me personally?”

Unfortunately, most present-day personalization efforts are more about pushing products to customers, as in “If you bought this, you must want that too!” When you treat your customers as mere extensions of their last purchase, it doesn’t look very personal, does it?

OK, I know that I coveted some expensive electric guitars the last time I visited a site, but must I be reminded of that visit at every little turn I make on the web, even “outside” the site in question?

I am the sum of many other behaviors and interests – and you have all the clues in your database – not a hollow representation of the last click or the last purchase.  In my opinion, such one-dimensional personalization efforts ruined the term.

Personalization must be about the person, not product, brands, or channels.

Personalization Tactics Are Often Done Sporadically, Not Consistently

Reactive personalization can only be done when there is a trigger, such as someone visiting a site, browsing an item for a while, putting it in a basket without checking out, clicking some link, etc. Other than the annoyance factor I’ve already mentioned, such reactive personalization is quite limited in scale. Basically, you can’t do a damn thing if there is no trigger data coming in.

The result? You end up annoying the heck out of the poor souls who left any trail – not the vast majority for sure – and leave the rest outside the personalization universe.

Now, a 1:1 marketing effort is a numbers game. If you don’t have a large base to reach, you cannot make significant differences even with a great response rate.

So, how would you get out of that “known-data-only” trap? Venture into the worlds of “unknowns,” and convert them into “high-potential opportunities” using modeling techniques. We may not know for sure whether a particular target is interested in purchasing high-end home electronics, but we can certainly calculate the probability of it using all the data that we have on them.

This practice alone will increase the target base from a few percentage points to 100% coverage, as model scores can be put on every record. Now you can consistently personalize messages at a much larger scale. That will certainly help with your bottom-line, as more will see your personalized messages in the first place.
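
To make “model scores on every record” concrete, here is a minimal propensity-scoring sketch. Everything in it is assumed for illustration: the file, the column names, and the choice of logistic regression. The point is only that a model trained on the records with known behavior can put a score on the entire file, trigger data or not.

```python
# Illustrative propensity-scoring sketch; the data source, column names, and
# model choice are assumptions, not a prescribed stack.
import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.read_csv("customers.csv")  # hypothetical customer file
features = ["tenure_months", "orders_12m", "avg_order_value"]  # hypothetical

# Train only on records where the behavior is actually known...
known = customers.dropna(subset=["bought_electronics"])
model = LogisticRegression().fit(known[features], known["bought_electronics"])

# ...then score EVERY record: 100% coverage, no trigger required.
customers["electronics_propensity"] = (
    model.predict_proba(customers[features])[:, 1]
)
```

Now even a record with no browsing trail carries a usable number, and the personalization universe becomes the whole file.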

But It’s Too Creepy

Privacy concerns are for real. Many consumers are scared of know-it-all marketers, on top of being annoyed by incessant bombardments of impersonal messages; yet another undesirable side effect of heavy reliance on “known” data. Because to know for sure, you have to monitor every breath they take and every move they make.

Now, there is an added bonus to sharing data in the form of model scores. Even the most aggressive users (i.e., marketers) wouldn’t act like they actually “know” the target when all they have is a probability. When the information is given to them as “This target is 70% likely to be interested in children’s education products,” no one would come out and say, “I know you are interested in children’s education products. So, buy this!”

The key in modern day marketing is a gentle nudge, not a hard sell. Build many personas – because consumers are interested in many different things – and kindly usher them to categories that they are “highly likely” to be interested in.

Too Many Initiatives Are Set on Auto-Pilot

People can smell machines from miles away. I think humans will be able to smell the coldness of a machine even after most AIs have passed the famous Turing Test (a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human).

In the present day, detecting a machine pushing particular products is even easier than detecting a call-center operator sitting in a foreign country (not that there is anything wrong with that).

On top of that, machines are only as versatile as we set them up to be. So, don’t fall for some sales pitch that a machine can automatically personalize every message utilizing all available data. You may end up with rudimentary personalization barely superior to basic collaborative filtering, mindlessly listing all products related to what the target just clicked, viewed, or purchased.

Such efforts, of course, would be better than nothing.  For some time.  But remember that the goal is to “wow” your target customers and your bosses. Do not settle for some default settings of campaign or analytics toolsets.

Important Factors Are Ignored

When most of the investment is sunk into platforms, engines, and toolsets, only a little is left for tweaking, maintenance, and expansion. As all businesses are unique (even in similar industries), the last-mile effort of custom fitting often makes or breaks the project. At times, unfortunately, even big items such as analytics and content libraries for digital asset management get ignored.

Even through a state-of-the-art AI engine, refined data works better than raw data. And your personalization efforts will fail if there aren’t enough digital assets to rotate through, even with a long list of personas and segments for everyone in the database. Basically, can you show different content for different personas on different occasions through different media?

Data, analytics, content, and display technologies must work harmoniously for high-level personalization to work.

So What Now?

It would be a real shame if marketers hastily moved away from personalization efforts when the sophistication level is still elementary for most.

Maybe we need a new word to describe the effort to pamper customers with suitable products, services and offers. Regardless of what we would call it, staying relevant to your customer is not just an option anymore. Because if you don’t, your message will categorically be dismissed as yet another annoying marketing message.

 

Marketers Find the Least-Wrong Answers Via Modeling

Why do marketers still build models when we have ample amounts of data everywhere? Because we will never have every piece of data about everything. We just don’t know what we don’t know.

Okay, then — we don’t get to know about everything, but what are the data that we possess telling us?

We build models to answer that question. Even scientists who wonder about the mysteries of the universe and multiverses use models for their research.

I have been emphasizing the importance of modeling in marketing through this column for a long time. If I may briefly summarize a few benefits here:

  • Models Fill in the Gaps, covering those annoying “unknowns.” We may not know for sure if someone has an affinity for luxury gift items, but we can say that “Yes, with data that we have, she is very likely to have such an affinity.” With a little help from the models, the “unknowns” turn into “potentials.”
  • Models Summarize Complex Data into simple-to-use “scores.” No one has time to dissect hundreds of data variables every time we make a decision. Model scores provide simple answers, such as “Someone likely to be a bargain-seeker.” Such a model may include 10 to 20 variables, but the users don’t need to worry about those details at the time of decision-making. Just find suitable offers for the targets, based on affinities and personas (which are just forms of models).
  • Models are Far More Accurate Than Human Intuition. Even smart people can’t imagine interactions among just two or three variables in their heads. Complex multivariate interaction detection is a job for a computer.
  • Models Provide Consistent Results. Human decision-makers may get lucky once in a while, but it is hard to keep that up against machines. Mathematics does not fluctuate much in performance, provided it is fed consistent and accurate data.
  • Models Reveal Hidden Patterns in data. When faced with hundreds of data variables, humans often resort to what they are accustomed to (often fewer than four or five factors). Machines indiscriminately find new patterns, relentlessly looking for the most suitable answers.
  • Models Help Expand the Targeting Universe. If you want a broader target, just go after slightly lower score targets. You can even measure the risk factors while in such an expansion mode. That is not possible with some man-made rules.
  • When Done Right, Models Save Time and Effort. Marketing automation gets simpler, too, as even machines can tell high and low scores apart easily. But the keywords here are “when done right.”

There are many benefits of modeling, even in the age of abundant data. The goal of any data application is to help in the decision-making process, not aid in hoarding the data and bragging about it. Do you want to get to the accurate, consistent, and simple answers — fast? Don’t fight against modeling, embrace it. Try it. And if it doesn’t work, try it in another way, as the worst model often beats man-made rules, easily.

But this time, I’m not writing this article just to promote the benefits of modeling again. Assuming that you already embrace the idea, let’s now talk about its limitations. With any technique, users must be fully aware of the downsides.

It Mimics Existing Patterns

By definition, models identify and mimic the patterns in the existing data. That means, if the environment changes drastically, all models built in the old world will be rendered useless.

For example, if there are significant changes in the supply chain in a retail business, product affinity models built for old lines of products won’t work anymore (even if products may look similar). More globally, if there were major disruptions, such as a market crash or proliferation of new technologies, none of the old assumptions would continue to be applicable.

The famous economics phrase Ceteris paribus — all other things being equal — governs conventional modeling. If you want your models to be far more adaptive, then consider total automation of modeling through machine learning. But I still suggest trying a few test models in an old-fashioned way, before getting into a full automation mode.

If the Target Is Off, Everything Is Off

If the target mark is hung in the wrong spot, no sharpshooter will be able to hit the real target. A missile without a proper guidance system is worse than not having one at all. Setting the right target for a model is the most critical and difficult part of the whole process, requiring not only technical knowledge, but also a deep understanding of the business at stake, the nature of the available data, and the deployment mechanism at the application stage.

This is why modeling is often called “half science, half art.” A model is only as accurate as the target definition of the model. (For further details on this complex subject, refer to “Art of Targeting”).

The Model Is Only as Good as the Input Data

No model can be saved if there are serious errors or inconsistencies in the data. And it is not just about outright wrong data: if the nature of the data is not consistent between the model development sample and the practical pool of data (where the model will be applied and used), the model in question will be useless.

This is why the “Analytics Sandbox” is important. Such a sandbox environment is essential — not just for simplification of model development, but also for consistent application of models. Most mishaps happen before or after the model development stage, mostly due to data inconsistencies in terms of shapes and forms, and less due to sheer data errors (not that erroneous data is acceptable).

The consistency factor matters a lot: If some data variables are “consistently” off, they may still possess some predictive power. I would even go as far as stating that consistency matters more than sheer accuracy.

Accuracy Is a Relative Term

Users often forget this important fact, but model scores aren’t pinpoint accurate all of the time. Some models are sharper than others, too.

A model score is just the best estimate with the existing data. In other words, we should take model scores as the least-wrong answers in a given situation.

So, when I say it is accurate, I mean to say a model is more accurate than human intuition based on a few basic data points.

Therefore, the user must always consider the risk of being wrong. Now, being wrong about “Who is more likely to respond to this 15% discount offer?” is a lot less grave than being wrong about “Who is more likely to be diabetic?”

In fact, if I personally faced such a situation, I wouldn’t even recommend building the latter model, as the cost of being wrong is simply too high. (People are very sensitive about their medical information.) Some things simply should not be estimated.

Even with innocuous models, such as product affinities and user propensities, users should never treat them as facts. Don’t act like you “know” the target, simply because some model scores are available to you. Always approach your target with a gentle nudge; as in, “I don’t know for sure if you would be interested in our new line of skin care products, but would you want to hear more about it?” Such gentle approaches always sound friendlier than acting like you “know” something about them for sure. That seems just rude on the receiving end, and recipients of blunt messages may even think that you are indeed creepy.

Users sometimes make bold moves with an illusion that data and analytics always provide the right answers. Maybe the worst fallacy in the modern age is the belief that anything a computer spits out is always correct.

Users Abuse Models

Last month, I shared seven ways users abuse models and ruin the results (refer to “Don’t Ruin Good Models by Abusing Them”). As an evangelist of modeling techniques, I always try to prevent abuse cases, but they still happen in the application stages. All good intentions of models go out the window if they are used for the wrong reasons or in the wrong settings.

I am not at all saying that anyone should back out of using models in their marketing practices because of the shortfalls that I listed here. Nonetheless, to be consistently successful, users must be aware of the limitations of models as well, especially if you are about to go on full marketing automation. With improper application of models, you may end up automating bad or wrong practices really fast. For the sake of customers on the receiving end — not just for the safety of your position in the marketing industry — please be more careful with this sharp-edged tool called modeling.

Stop Expecting Data Scientists to Be Magical: Analytics Is a Team Sport

Many organizations put unreasonable expectations on data scientists. Their job descriptions and requirements are often at a super-human level. “They” say — and who are they? — that modern-day data scientists must be good at absolutely everything. Okay, then, what’s “everything,” in this case?

First, data scientists have to have a deep understanding of mathematics and statistics, covering regression models, machine learning, decision trees, clustering, forecasting, optimization, etc. Basically, if you don’t have a post-graduate degree in statistics, you will fail at “hello.” The really bad news is that even people with statistics degrees are not well-versed in every technique and subject matter. They all have their specialties, like medical doctors.

Then data scientists have to have advanced programming skills and deep knowledge in database technologies. They must be fluent in multiple computer languages in any setting, easily handling all types of structured and unstructured databases and files in any condition. This alone is a full-time job, requiring expert-level experience, as most databases are NOT in analytics-ready form. It is routinely quoted that most data scientists spend over 80% of their time fixing the data. I am certain that these folks didn’t get an advanced degree in statistics to do data plumbing and hygiene work all of the time. But that is how it is, as they won’t see what we call a “perfect” dataset outside schools.

Data scientists also have to have excellent communication and data visualization skills, being able to explain complex ideas in plain English. It is hard enough to derive useful insights out of mounds of data; now they have to construct interesting stories out of them, filled with exciting punchlines and actionable recommendations at the end. Because most mortals don’t understand technical texts and numbers very well — many don’t even try, and some openly say they don’t want to think — data scientists must develop eye-popping charts and graphs, as well, using the popular visualization tool du jour. (Whatever that tool is, they’d better learn it fast).

Finally, to construct the “right” data strategies and solutions for the business in question, the data scientist should have really deep domain and industry knowledge, at the level of a management and/or marketing consultant. On top of all of that, most job requirements also mention soft skills — as “they” don’t want some data geeks with nerdy attitudes. In other words, data scientists must come with kind and gentle bedside manners, while being passionate about the business and boring stuff like mathematics. Some even ask for child-like curiosity and the ability to learn things extremely fast. At the same time, they must carry authority like a professor, being able to influence non-believers and evangelize the mind-numbing subject of analytics. This last part about business acumen, by the way, is the single most important factor that separates excellent data scientists, who add value every time they touch data, from data plumbers, who just move data around all day long. It is all about being able to give the right type of homework to themselves.

Now, let me ask you: Do you know anyone like this, having all of these skills and qualities in “one” body? If you do, how many of them do you personally know? I am asking this question in the sincerest manner (though I am quite sarcastic, by nature), as I keep hearing that we need tens of thousands of such data scientists, right now.

There are musicians who can write music and lyrics, determine the musical direction as a producer, arrange the music, play all necessary instruments, sing the song, record, mix and master it, publish it, and promote the product, all by themselves. It is not impossible to find such talents. But if you insist that only such geniuses can enter the field of music, there won’t be much music to listen to. The data business is the same way.

So, how do we divide the task up? I have been using this three-way division of labor — as created by my predecessors — for a long time, as it has been working very well in any circumstance:

  • A Statistical Analyst will have deep knowledge in statistical modeling and machine learning. They would be at the core of what we casually call analytics, which goes way beyond some rule-based decision-making. But these smart people need help.
  • A Master Data Manipulator will have excellent coding skills. These folks will provide analytics-ready datasets on silver platters for the analysts. They will essentially take care of all of the “before” and “after” steps around statistical modeling and other advanced analytics. It is important to remember that most projects go wrong in data preparation and post-analytics application stages.
  • A Business Analyst will need to have a deep understanding of business challenges and the industry landscape, as well as functional knowledge in modeling and database technologies. These are the folks who will prescribe solutions to business challenges, create tangible projects out of vague requests, evaluate data sources and data quality, develop model specifications, apply the results to businesses, and present all of this in the form of stories, reports, and data visualization.

Now, achieving master-level expertise in even one of these areas is really difficult. People who are great in two of the three are indeed rare, and they will already have “chief” or “head” titles somewhere, or have their own analytics practices. If you insist on procuring only data scientists who are great at everything? Good luck to you.

Too many organizations that are trying to jump onto this data bandwagon hire just one or two data scientists, dump all kinds of unorganized and unstructured data on them, and ask them to produce something of value, all on their own. Figuring out what type of data or analytics activity will bring monetary value to the organization isn’t a simple task. Many math geeks won’t be able to jump that first hurdle by themselves. Most business goals are not in the form of logical expressions, and the majority of data they will encounter in that analytics journey won’t be ready for analytics, either.

Then again, strategic consultants who develop a data and analytics roadmap may not be well-versed in actual modeling, machine learning implementation, or database constructs. But such strategists should operate on a different plane, by design. Evaluating them based on coding or math skills would be like judging an architect based on his handling of building materials. Should they be aware of values and limitations of data-related technologies and toolsets? Absolutely. But that is not the same as being hands-on, at a professional level, in every area.

Analytics has always been a team sport. It was like that when the datasets were smaller and the computers were much slower, and it is like that when databases are indeed huge and computing speed is lightning fast. What remains constant is that, in data play, someone must see through the business goals and data assets around them to find the best way to create business value. In executing such plans, they will inevitably encounter many technical challenges and, of course, they will need expert-level technicians to plow through data firsthand.

Like any creative work, such as music producing or movie-making, data and analytics work must start with a vision, tangible business goals, and project specifications. If these elements are misaligned, no amount of mathematical genius will save the day. Even the best rifle is useless if the target is hung in the wrong place.

Technical aspects of the work matter only when all stakeholders share the idea of what the project is all about. Simple statements like “maximizing the customer value” need a translation by a person who knows both business and technology, as the value can be expressed in dollars, visits, transactions, dates, intervals, statuses, or any combination of these variables. These seemingly simple decisions must be made methodically, with a clear purpose, as a few wrong assumptions by the analyst at hand — who may have never met the end-user — can easily derail the project in the wrong direction.

Yes, there are people who can absolutely see through everything and singlehandedly take care of it all. But if your business plan requires such superheroes and nothing but such people, you must first examine your team development roadmap, org chart, and job descriptions. Keeping the pressure on those poor and unfortunate recruiters, who must find unicorns within your budget, won’t get you anywhere; that is not how you’re supposed to play this data game in the first place.

How to Find New Customers, Based on Current Customers, With a Targeted Mail List

Your mailing list is critical to your mailing results. When you need to acquire new customers, purchasing a targeted mail list is the way to reach them. However, some lists are better than others. We talked about five types of prospecting lists in the last post; now, we will discuss analytics for profiling and modeling lists.

The better your list is targeted, the better your response rate will be.

  • Descriptive Analytics is a profile that describes common attributes of your customers and helps you target demographic lookalikes. The market penetration of each attribute compares your customers against the overall population living in the same geographic area with the same attributes, examining each element separately. Basically, you will see who your best customers are and find prospects just like them (see the sketch after this list).
  • Predictive Analytics is a model that finds how two or more groups are similar or dissimilar (for example, buyers vs. non-buyers, or responders vs. non-responders). It then assigns a score that represents a probability to act, based on the interaction of several attributes. That way, you can get a better idea of who buys what in order to find more people like them.
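
Here is that sketch: a rough illustration of the penetration-index idea behind the descriptive profile, where an attribute’s rate among your customers is divided by its rate in the overall population. All attribute names and rates below are hypothetical.

```python
# Hypothetical penetration index: customer rate vs. population rate per
# attribute; an index of 100 means parity with the surrounding population.

population_rates = {"has_children": 0.35, "homeowner": 0.62, "golfer": 0.08}
customer_rates = {"has_children": 0.52, "homeowner": 0.71, "golfer": 0.21}

for attr, pop_rate in population_rates.items():
    index = 100 * customer_rates[attr] / pop_rate
    print(f"{attr:12s} index: {index:4.0f}")  # golfer indexes at ~263
```

An attribute that indexes far above 100 (golfers, in this made-up example) is a lookalike trait worth carrying into prospect selection.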

So why would you want to try one of these options? You can expect an improved response rate, more effective cross-sell and up-sell opportunities, and the ability to build better loyalty programs, because you understand people better. These processes help you identify prospects who “look like” your best customers.

Profiling lets you study your best customers (B2C or B2B) and find out what makes them different from others in your target market. You can target new prospects who are the most likely to respond, purchase, or renew, based on your customer data. You can gain precise information about your customers, based on the statistical analysis of key activities. Finally, you will understand the lifetime value of customers, including their probability to respond and purchase products, with a highly advanced model.

Predictive modeling is a process that analyzes the past and current activities or behaviors of two groups to improve future results. This is accomplished by comparing the two groups of data: the differences are assessed to identify whether a pattern exists and whether it is likely to repeat itself. Scores can be applied to purchased prospect data, or used to segment client data for marketing.

Both provide great opportunities for you to target and reach prospects who are more likely to be interested in what you are selling. This way, your offer resonates with them and compels action. This is another way to increase your ROI, as well as save money. You are mailing to only qualified people, so there are fewer pieces to print and mail. Keep in mind that your customer list is going to get the best response rates, but a highly targeted list like these will have higher response rates than an average purchased list. Are you ready to profile and model your list?

 

Machine Learning? I Don’t Think Those Words Mean What You Think They Mean

I find more and more people use the term “machine learning” when they really mean to say “modeling.” I guess that is like calling all types of data activities — with big and small data — “Big Data.” And that’s OK.

Languages are developed to communicate with other human beings more effectively. If most people use the term to include broader meanings than the myopic definition of the words in question, and if there is no trouble understanding each other that way, who cares? I’m not here to defend the purity of the meaning, but to monetize big or small data assets.

The term “Big Data” is not even a thing in most organizations with ample amounts of data anymore, but there are many exceptions, too. I visit other countries for data and analytics consulting, and those two words still work like “open sesame” to some boardrooms. Why would I blame words for having multiple meanings? The English dictionary is filled with such colloquial examples.

I recently learned that the famous magic words “Hocus Pocus” came from the Latin phrase “hoc est corpus,” which means “This is the body (of Christ),” as spoken during Holy Communion in Roman Catholic churches. So much for the olden-day priests speaking only in Latin to sound holier; ordinary people understood the process as magic — turning a piece of bread into the body of Christ — and started applying the phrase to all kinds of magic tricks.

However, if such transformations of words start causing confusion, we all need to be more specific. Especially when the words are about specific technical procedures (not magic). Going back to my opening statement, what does “machine learning” mean to you?

  • If spoken among data scientists, I guess that could mean a very specific way to describe modeling techniques that include Supervised Learning, Unsupervised Learning, Reinforcement Learning, Deep Learning, or any other type of neural net modeling, indicating specific methods to construct models that serve predetermined purposes.
  • If used by decision-makers, I think it could mean that the speaker wants minimal involvement of data scientists or modelers in the end, and automate the model development process as much as possible. As in “Let’s set up Machine Learning to classify all the inbound calls into manageable categories of inquiries,” for instance. In that case, the key point would be “automation.”
  • If used by marketing or sales; well, now we are talking about a really broad set of meanings. It could mean that the buyers of the service will require minimal human intervention to achieve goals. That the buyer doesn’t even have to think too much (as the toolset would just work). Or, it could mean that it will run faster than existing ways of modeling (or pattern recognition). Or, they meant to say “modeling,” but somehow thought that it sounded antiquated. Or, it could just mean that “I don’t even know why I said Machine Learning, but I said it because everyone else is saying it” (refer to “Why Buzzwords Suck”).

I recently interviewed a candidate fresh out of a PhD program for a data scientist position, whose resume was filled with “Machine Learning.” But when we dug a little deeper into the actual projects he had finished for school work or internship programs, I found out that most of his models were indeed good old regression models. So I asked why he substituted the words like that, and his answer was staggering: he said his graduate school guided him that way.

Why Marketers Need to Know What Words Mean

Now, I’m not even sure whom to blame in a situation like this, where even academia has fallen under the weight of buzzwords. After all, the schools are just trying to help their students get high-paying jobs before the summer is over. I guess, then, the blame is on the hiring managers who are trying to recruit candidates based on buzzwords, not necessarily knowing what they should look for in the candidates.

And that is a big problem. This is why even non-technical people must understand the basic meanings of the technical terms that they are using, especially when they are hiring employees or procuring outsourcing vendors to perform specific tasks. Otherwise, some poor souls will spend countless hours finishing things that mean nothing for the bottom-line. In a capitalistic economy, we play with data for only two reasons:

  1. to increase revenue, or
  2. to reduce cost.

If it’s all the same for the bottom line, why should a non-technician care about the “how the job is done” part?

Why It Sucks When Marketers Demand What They Don’t Understand

I’ve been saying that marketers and decision-makers should not be bad patients. Bad patients won’t listen to doctors; worse, they will actually command doctors to prescribe certain medications without testing or validation. I guess that is one way to kill themselves, but what about the poor, unfortunate doctor?

We see that in the data and analytics business all of the time. I met a client who just wanted our team to build neural net models for him. Why? Why not insist on a random forest method? I think he thought that “neural net” sounded cool. But when I heard his “business” problems out, he definitely needed something different as a solution. He didn’t have the data infrastructure to support any automated solutions; he wanted to know what went on inside the modeling process (neural net models are black boxes, by definition); he didn’t have enough data to implement such things at the beginning stage; and the projected gains (from employing models) wouldn’t cover the cost of such an implementation for the first couple of years.

What he needed was a short-term proof of concept, where data structure must be changed to be more “analytics-ready.” (It was far from it.) And the models should be built by human analysts, so that everyone would learn more about the data and methodology along the way.

Imagine a junior analyst fresh out of school, whose resume is filled with buzzwords, meeting with a client like that. He wouldn’t fight back, but would take the order verbatim and build neural net models, whether they helped in achieving the business goals or not. Then the procurer of the service would still be blaming the concept of machine learning itself. Because bad patients will never blame themselves.

Even advanced data scientists sometimes lose the battle with clients who insist on implementing Machine Learning when the solution is something else. And such clients are generally the ones who want to know every little detail, including how the models are constructed. I’ve seen data scientists who’d implemented machine learning algorithms (for practical reasons, such as automation and speed gain), and reverse-engineered the models, using traditional regression techniques, only to showcase what variables were driving the results.

One can say that such is the virtue of a senior-level data scientist. But then, what if the analyst is very green? Actually, some decision-makers may like that, as a more junior-level person won’t fight back too hard. Only after a project goes south will those “order takers” be blamed (as in, “those analysts didn’t know what they were doing”).

Conclusion

Data and analytics businesses will continually evolve, but the math and the human factors won’t change much. What will change, however, is that we will have fewer and fewer middlemen between the decision-makers (who are not necessarily well-versed in data and analytics) and human analysts or machines (who are not necessarily well-versed in sales or marketing). And it will all be in the name of automation, or more specifically, Machine Learning or AI.

In that future, the person who orders the machine around — ready or not — will be responsible for bad results and ineffective implementations. That means, everyone needs to be more logical. Maybe not as much as a Vulcan, but somewhere between a hardcore coder and a touchy-feely marketer. And they must be more aware of capabilities and limitations of technologies and techniques; and, more importantly, they should not blindly trust machine-based solutions.

The scary part is that those who say things like “Just automate the whole thing with AI, somehow” will be the first in line to be replaced by the machines. That future is not far away.

Use People-Oriented Marketing: Because Products Change, But People Rarely Do

In 1:1 marketing, product-level targeting is “almost” taken for granted. I say almost, because most so-called personalized messages are product-based, rarely people-oriented marketing. Even from mighty Amazon, we see rudimentary product recommendations as soon as we buy something. As in: “Oh, you just bought a yoga mat! We will send you absolutely everything that is related to yoga on a weekly basis until you opt out of email promotions completely. Because we won’t quit first.”

How nice of them. Taking care of my needs so thoroughly.

Annoying as they may be, both marketers and consumers tolerate such practices. For marketers, the money talks. Even rudimentary product recommendations — all in the name of personalization — work much better than no targeting at all. Ain’t the bar really low here, in the age of abundant data and technologies? Yes, such a product recommendation is a hit-or-miss, but who cares? Those “hits” will still generate revenue.

For consumers, aren’t we all well-trained to ignore annoying commercials when we want to? And who knows? I may end up buying a decent set of yoga mat cleaners with a touch of lavender scent because of such emails. Though we all know purchase of that item will start a whole new series of product offerings.

Now, marketers may want to call this type of collaborative filtering an active form of personalization, but it isn’t. It is still a very reactive form of marketing, at the tail end of another purchase. It may not be as passive as waiting for someone to type in keywords, but product recommendations are a mixture of reactive and active (because you may send out a series of emails) forms of marketing.

And I’m not devaluing such endeavors, either. After all, it works, and it generates revenue. All I am saying is that marketers should recognize that a reactive product recommendation is only a part of personalization efforts.

As I have been writing for five years now, 1:1 marketing is about effectively deciding:

  1. whom to contact, and
  2. what to offer.

Part One is good old targeting for outbound efforts, and there is a wide variety of techniques for it, ranging from rules that marketers made up, through basic segmentation, all the way to sophisticated modeling.

The second part is a little tricky; not because we don’t know how to list relevant products based on past purchases, but because it is not easy to support multiple versions of creatives when there is no immediate shopping basket to copy (like cases for recent purchases or abandoned carts).

In between unlimited product choices and relevant offers, we must walk the fine lines among:

  1. dynamic display technology,
  2. content and creative library,
  3. data (hopefully clean and refined), and
  4. analytics in forms of segments, models or personas (refer to “Key Elements of Complete Personalization”).

If specific product-category data is not available (i.e., a real indicator that a buyer is interested in certain items), we must get the category correct at a minimum, using modeling techniques. I call these personas; some may call them archetypes. (But they are NOT segments. Refer to “Segments vs. Personas.”)

Using the personas, it is not too difficult to map proper products to potential buyers. In fact, marketers are free to use their imaginations when they do such mapping. Plus, while inferred, these model scores are never missing, unlike those hard-to-get “real” data. No need to worry about targeting only a small part of potential buyers.

What should a marketer offer to fashionistas? To trendsetters? To bargain seekers? To active, on-the-go types? To seasonal buyers? To big spenders? Even for a niche brand, we can create 10 to 20 personas that represent key product categories and behavioral types, and the deployment of personalized messages becomes much simpler.
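
For what it’s worth, the mapping itself can be as plain as a lookup table. In the hypothetical sketch below, each customer carries a model score per persona, and the top-scoring persona picks the message; the persona names and offers are invented for illustration.

```python
# Hypothetical persona-to-offer mapping: the top-scoring persona per
# customer selects the message.

offer_map = {
    "fashionista": "new-arrivals lookbook",
    "bargain_seeker": "clearance event invitation",
    "big_spender": "early access to the premium line",
}

def pick_offer(persona_scores: dict) -> str:
    top_persona = max(persona_scores, key=persona_scores.get)
    return offer_map[top_persona]

# Model scores are inferred, so they are never missing for any customer.
print(pick_offer({"fashionista": 0.31, "bargain_seeker": 0.77, "big_spender": 0.12}))
```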

And it gets better. Imagine a situation where you have to launch a new product or product line. It gets tricky in the fashion industry, and even trickier for tech companies that are bold enough to launch something that didn’t exist before, such as a new line of really expensive smartphones. Who among the fans of cutting-edge technologies would actually shell out over a grand for a “phone”? This kind of question applies not just to manufacturers, but to every merchant who sells peripherals for such phones.

Let’s imagine that a marketer goes with an old marketing plan for “similar” products that were introduced in the past. They could be similar in terms of “newness” and some basic features, but what if they differ in specific functionality, look-and-feel, price point, and even the way users would use them? Trying to copy old targeting methods may lead to big misses, the kind even consumers hear about from time to time.

Such mishaps happen because marketers see consumers as simple extensions of products. Pulling out old tricks may work in some cases, but if even a small set of product attributes is different, it won’t work.

Luckily for geeks like us, an individual’s behavior does not change so fast. Sure, we all age a bit every year; but compared to the products in the market, humans do not transform so suddenly. Simply, early adopters will remain early adopters, and bargain seekers will continue to be bargain seekers. Spending levels on certain product categories won’t change drastically, either.

Our interests and hobbies do change; but again, not so fast. It took me about two to three years to turn from an avid golfer to a non-golfer. And all golf retailers caught up with my inactivity and stopped sending golf offers.

So, if marketers set up personas that “they” need to push their products, and update them periodically (say once a year), they can gain tremendous momentum in reaching out to customers and prospects more proactively. If they just rely on specific product purchases to trigger a series of product recommendations, outreach programs will remain at the level of general promotions.

Further, even inbound visits can be personalized better (granted that you identified the visitor) using the personas and set of rules in terms of what product goes well with what persona.

The reason why models work well — man-made or machine-built — is because human behavior is predictable with reasonable consistency. We are all extensions of our past behaviors to a greater degree than the evolution rate of products and technologies.

Years ago, we had a heated internal discussion about whether we should create a new series of product categories as the format moved from VHS to DVD. I argued that such new formats would not change human behavior that much. In fact, genres matter more than video formats for the prediction of future purchases. “Godfather” fans will buy the movie again on DVD, then again on Blu-ray, and now as some type of ultra-high-definition download from some cloud somewhere. Through all of this, movie collectors remain movie collectors for their favorite types of movies. In other words, products changed, but human attributes did not.

That was what I argued then, and I still stand by it. So, all the analytical efforts must be geared toward humans, not products. In coming days, that may be the shortest path to fake human friendliness using AI and machine-made models.

 

Replacing Unskilled Data Marketers With AI

People react to words like “machine learning” or “artificial intelligence” very differently, depending on their interests and levels of understanding of technology. Some get scared, and among them are smart people like Elon Musk or the late Stephen Hawking. Others, including data marketers who lack strategic skills, may react based on a vague fear of becoming irrelevant, thinking that a machine will replace them in the job market soon.

On the contrary, I find that most marketers welcome terms like machine learning. Many think that, in the near future, computers will automatically perform all the number-crunching and just tell them what to do. In marketing environments where “Do more with less” is the norm, the idea of machines making decisions for them may sound attractive to many marketers. How great it would be if some super-duper-computer would do all of the hard work for us? The trouble is that the folks who think like that will be the first ones to be replaced by the machines.

Modern marketing is closely tied into the world of data and analytics (the operative word being “modern,” as there are plenty of marketers still going with their gut feelings). There are countless types of data and analytics applications influencing operations management, R&D or even training programs for world-class athletes, but most of the funding for analytical activities is indeed related to marketing. I’d go even further and claim that most of data-related work is profit-driven; either to make more money for organizations or to cut costs in running businesses. In other words, without the bottom-line profit, why bother with any of this geeky stuff?

Yet, many marketers aren’t interested in analytics and some even have fears of lots of numbers being thrown at them. A set of numbers that would excite analytical minds would scare off many marketers. For the record, I blame such an attitude on school systems and jock cultures that have been devaluing the importance of mathematics. It is no accident that most “nerdy” analysts nowadays are from foreign places, where people who are really good at math are not ridiculed among other teenage students but praised or even worshiped.

The joke is that those geeky analysts will be replaced by machines first, as any semi-complex analytical work is delegated to them already. Or will they?

I find it ironic that marketers who have a strong aversion to words like “advanced analytics” or “modeling” will freely embrace machine learning or AI. That is like saying you don’t like music unless it is played by machines. What do they think machine learning is? Some “thinking slave” that will do all of the work without complaining or asking too many questions?

Machine learning is one of many ways of modeling, whether for prediction or pattern recognition. It became more attractive to the business community as computing power increased to accommodate heavy iterations of calculations, and as terms like “neural net models” gave way to the easier-sounding “machine learning.”

To wield such machines, nonetheless, one must possess “some” idea of how they work and what they require. Otherwise, it would be like a musically illiterate person trying to produce a piece of music entirely automatically. Yes, I’ve heard that there are now algorithms that can compose music or write novels on their own, but I would argue that such formulaic music will be filler in a hotel elevator, at best. If emotionally moving another human being is the goal, one can’t eliminate all human factors from the equation.

Machines are for automating things that humans already know how to do. And it takes ample “man-hours” to train a machine, even for the relatively simple task of telling dogs from cats in pictures. Some other human will have decided that such a task is meaningful for other humans in the first place. Of course, once the machines are set up to learn on their own, huge momentum kicks in and millions of pictures get sorted automatically.

And as such evolution goes on, a whole lot of people may lose their jobs. But not the ones who know how to set the machines up and give them purposes for such work.

Let’s Take a Breath Here

Dialing back to something much simpler: Operations. In automating reports and creating custom messages for target audiences, the goals must be set by stakeholders and machines must be tweaked for such purposes at the beginning. Someday soon, AI will reach the level where it can operate with very general guidelines; but at least for now, requesters must provide logical instructions.

Let’s say a set of reports comes out of the computer for use in marketing analysis. “Which reports to show” decisions are still made by humans, but producing useful intelligence in an automated fashion isn’t a difficult task these days. Then what? The users still have to make sense of all of those reports. Then they must decide what to do about the findings.

There are folks who hope that the machine will tell them exactly what to do with such intel. The first part may come close to their expectations sometime soon, if it hasn’t already for some: producing tidbits like, “Hi, human: It looks like over 80% of your customers who shopped last year never came back,” or, “The top 10% of your customers, in terms of lifetime spending, account for over 70% of your yearly revenue, but about half of them show days between transactions far longer than a year.” By the way, mimicking human speech isn’t easy; but if all of these numbers are sitting somewhere in the computer, yes, it is reasonable to expect something like this from machines.
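For illustration only, here is a minimal Python sketch of how such tidbits could be computed. The table, the column names (“customer_id,” “date,” “amount”), and the figures are all hypothetical assumptions, not anyone’s production system:

```python
# A minimal sketch of machine-generated "tidbits." All columns and
# numbers here are hypothetical, for illustration only.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3, 4],
    "date": pd.to_datetime([
        "2023-01-05", "2024-02-10", "2023-03-15",
        "2023-06-01", "2023-07-01", "2024-01-20", "2023-11-11",
    ]),
    "amount": [120.0, 80.0, 45.0, 300.0, 150.0, 220.0, 60.0],
})

# Tidbit 1: share of last year's shoppers who never came back this year.
last_year = set(tx.loc[tx["date"].dt.year == 2023, "customer_id"])
this_year = set(tx.loc[tx["date"].dt.year == 2024, "customer_id"])
lapsed = 1 - len(last_year & this_year) / len(last_year)
print(f"{lapsed:.0%} of last year's customers never came back.")

# Tidbit 2: revenue share of the top 10% of customers by lifetime spend.
spend = tx.groupby("customer_id")["amount"].sum().sort_values(ascending=False)
top_n = max(1, int(len(spend) * 0.10))
share = spend.head(top_n).sum() / spend.sum()
print(f"The top 10% of customers account for {share:.0%} of revenue.")
```

The point of the sketch: the counting is trivial once the data sit in one place. Deciding which of these counts matters is the human part.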

The hard part for the machines would be picking the five or six most important tidbits out of hundreds, if not thousands, of other “facts,” as that requires an understanding of business goals. But we can fake even that type of decision-making by assuming that most businesses are about “increasing revenue by acquiring valuable new customers, and retaining them for as long as possible.”

Then the really hard part would be deciding what to do about it. What should you do to make your valuable customers come back? Answering that type of question requires not only an analytical mindset, but also a deep understanding of human psychology and business acumen. Analytics consultants are generally multi-dimensional thinkers; the one-trick ponies who just spit out formulaic answers do not last long. The same rule would apply to machines, and we may call those one-dimensional machines “posers,” too (refer to “Don’t Hire Data Posers”).

But let’s say that by feeding thousands of business cases, complete with final solutions and results, into machines as a training set, we finally achieve such machine intelligence. Would we be free from having to “think” even a bit?

The short answer is that, as I said at the beginning, folks who don’t want to analyze anything will become irrelevant even sooner. Why would we need illogical people when the machines are much cheaper and smarter? Besides, even the future computers shown in science fiction movies require “logical” inquiries to function properly. “Asking the right question” will remain a human function, even in the faraway future. And a logical mindset is the result of mathematical training, plus some aptitude for it, much like musical ability.

The word “illiterate” used to mean someone who couldn’t read or write. In the age of machines, “logic” is the new language. So, dear humans, do not give up on math, if self-preservation is an instinct that you possess. I am not asking everyone to get a degree in mathematics, but I am insisting that we all learn scientific approaches to problem-solving and logical methods of framing inquiries. In the future, people who can wield machines will be in secure places — whether they are coders or not — while a new breed of logically illiterate people will be replaced by the machines, one by one.

So, before you freely invite advanced thinking machines into your marketing operations, think carefully about whether you are the one who gives purpose to such machines (by understanding what’s at stake, and what all those numbers mean), or the one who can train machines to solve those pre-defined (by humans) problems.

I am not talking about some doomsday scenario of machines killing people to take over the world; but like any historical event described as a “revolution,” this machine revolution will have a real impact on our lives. And like anything, it will be good for some and bad for others. I am saying that data illiterates who say things like, “I don’t understand what all those numbers mean,” may be ignored by machines — just as they are by smartass analysts. (But maybe without the annoying attitudes.)

Resistance Is Futile

Any serious Trekkie would immediately recognize this title. But I am not talking about the Borg, who are coming to assimilate us into their hive-minded collective. I am talking about a rather benign-sounding subject — and my profession — analytics.

When you look at job descriptions for analytics leads in various organizations, you will often find the word “evangelization.” If every stakeholder were a believer in analytics, we would not need such a word to describe the position at all. We use that word because an important part of an analytics lead’s job is to convert non-believers into believers. And that is often the hardest part of our profession.

I smile when I see memes (or T-shirts) like “Science doesn’t care about your beliefs.” I’m sure some geek, frustrated by people who treat scientific facts as just another opinion, came up with that phrase. From their point of view, it may be shocking to realize that scientifically proven facts can be disputed by people without any scientific training. But that is just human nature; most people really don’t want to change either their beliefs or their behaviors.

Now, without being too political about this whole subject, I must confess that I face resistance to change all of the time in business environments, too. Why is that? How did making decisions based on numbers and figures become something to resist?

My first guess is that people do not like even remotely complicated stuff. Maybe the word “analytics” or talk of “modeling” brings back childhood memories of scary math teachers. Maybe that kind of headache is so bad that some reject things that could actually be helpful to them.

If the users of information feel that way, analysts must aspire to make analytics easier to consume and digest. Customers are not always right, but without the consumers of information, all analytical activities become meaningless — at least in non-academic places.

An 80-page report filled with numbers and figures dumped on someone’s desk should not even be called analytics. Literally, that’s still an extension of an unfiltered data dump. Analysts should never leave the most important part of the job — deriving insights out of mounds of data — to the end-users of analytics. True, the answer may lie somewhere in that pile, but that is like a weather forecaster listing all of the input variables to the general public without providing any useful information. Hey, is it going to rain this morning, or what?

I frequently talk about this issue with fellow analytics professionals. Even in organizations with more advanced data and analytics infrastructure, heads of analytics often worry about low acceptance of data-based decision-making. In many instances, the size of the data and the smoothness of its flow, often measured in terabytes per second as a bragging point, do not really matter.

Information should come in nugget sizes for easy consumption (refer to “Big Data Must Get Smaller”). Mining the data down to fewer than five bullet points is the hardest part, and it should not be left to the users. That is the primary reason why fewer and fewer people are talking about “Big Data” nowadays: even non-data professionals are waking up to the fact that “big” is not the answer at all.

However, resistance to analytics doesn’t disappear even when data are packaged into beautifully summarized reports or model scores. That is because the results of analytics often uncover an inconvenient truth for many stakeholders — as in, “Dang, we’ve been doing it wrong all this time?”

If a person or a department is called out as ineffective by some analytical geeks, I can see how the involved parties might want to dispute the results any way they can. Who cares about the facts when their jobs or reputations are at stake? Even if their jobs are safe, who are these analytics guys to ask us to “change”? That is no different from cigarette companies once claiming that smoking was harmless, or even beneficial, or from oil and gas companies today having an allergic reaction whenever the words “climate” and “change” are uttered together.

I’ve seen cases where analytical departments were completely decimated because their analytics revealed other divisions’ shortcomings and caused a big political hoopla. Maybe the analysts should have had better bedside manners; but in some cases I’ve heard about, that didn’t even matter, as the big boss used the results of analytics to scold people who were just doing their jobs under an old set of rules.

You can guess the outcome of that kind of political struggle. The lesson is that newly discovered “facts” should never be used to blame the followers of existing paradigms. Such reactions from the top further alienate analytics from the rest of the company, as people become genuinely scared of it. Adoption of data-based decision-making? Not when people are afraid of the truth. Forget about the good of the company; that will never win against people’s desire for job security.

Now, at the opposite end of the spectrum, too much unfiltered information forcing decisions can also hurt the organization. Some may call that “Death by KPI.” When too many indicators are floating around, even seemingly sound decisions made on numbers and figures may lead to unintended consequences, very often negatively impacting the overall performance of the company. The question is always, “Which variable should get a higher weight than the others?” And that type of prioritization comes from clearly defined business goals. When all KPIs are treated as equally important, nothing really is. Not in this complex world.
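As a minimal sketch of that prioritization, assuming hypothetical KPIs already normalized to a 0-1 scale, goal-driven weights can roll competing indicators into one composite instead of treating them all as equals:

```python
# Hypothetical KPI readings, normalized to a 0-1 scale.
kpis = {"revenue_growth": 0.62, "retention_rate": 0.80, "email_open_rate": 0.45}

# The weights are a business decision, not a statistical one; treating
# every KPI as equally important means nothing is actually prioritized.
weights = {"revenue_growth": 0.5, "retention_rate": 0.4, "email_open_rate": 0.1}

composite = sum(kpis[k] * weights[k] for k in kpis)
print(f"Goal-weighted composite score: {composite:.2f}")
```

The numbers are invented; the design choice is that the weighting comes from business goals set by humans, not from the data itself.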

Misguided interpretation of numbers leads to distrust of analytics. Just because someone quoted an interesting figure, with or without proper context, doesn’t mean there is only one explanation behind it. Contextual understanding of data is the key to beneficial insights, and in the age of abundant information, even casual users of analytics must understand the difference. Running away from the numbers is not the answer, but blindly driving the business on a few indicators should be avoided as well. Both extremes turn out to be harmful.

Nevertheless, the No. 1 reason people do not adopt analytics is that many have been burned by “wrong” analytics in the past, often at the hands of posers (refer to “Don’t Hire Data Posers”). In some circles, the reputation of analytics got so bad that I even met a group of executives who boldly claimed that the whole practice of statistical modeling was totally bogus and just didn’t work. Jeez. In the age of machine learning, one doesn’t believe in modeling at all? What do you think that “learning” is based on?

No matter how much data we have in our custody, we use modeling techniques to predict the future, derive answers from seemingly disjointed data, and fill in the gaps in the data — as we will never have every piece of the puzzle nicely lined up all of the time.
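As a generic illustration (not any particular shop’s method), that is exactly what the “learning” rests on: a model is fit to cases where the outcome is known, then used to fill the gaps where it isn’t. The features and data below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [days since last purchase, lifetime spend in $100s]
X_known = np.array([[10, 8], [200, 1], [30, 5], [400, 2], [15, 9], [350, 1]])
y_known = np.array([1, 0, 1, 0, 1, 0])  # 1 = repurchased, 0 = lapsed

# "Learning" = fitting a model to the cases where the outcome is known.
model = LogisticRegression().fit(X_known, y_known)

# "Filling in the gap" = scoring customers whose outcome is not yet observed.
X_unknown = np.array([[25, 6], [300, 2]])
print(model.predict_proba(X_unknown)[:, 1])  # estimated repurchase probability
```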

In cases of such deep mistrust in basic activities like modeling, I definitely blame the analysts of the past. Maybe those posers overpromised what models could do. (No, nothing in analytics happens overnight.) Maybe they aimed at the wrong target. Maybe they didn’t clean the data enough before plugging it into some off-the-shelf modeling engine. Maybe they didn’t properly apply the model to real-life situations before leaving the building. No matter: it is their fault if the users didn’t receive a clear benefit from the analytical exercises.

I often tell analysts and data scientists that analytics is not about the data journey that they embarked on or the mathematical adventure that they dove into. In the business world, it is about the bottom line. Did the report in question or model in action lead to an increase in revenue or a reduction in cost? It is really that clear-cut.

So, dear data geeks, please spare the rest of the human collective the technical details, and get to the point fast. Talking about sample sizes or arguing the merits of neural net models — unless the users are as geeky as you — will only further alienate decision-makers from analytics.

And the folks who think they can still rely on their gut feelings over analytics? Resistance to analytics is indeed futile. You must embrace it — not for all of the buzzwords uttered by the posers out there, but for the survival of your business. When your competitors are embracing advanced analytics, what are you going into battle with? More unsolicited emails without targeting or personalization? Without knowing which elements of your promotions are the key drivers of responses? Without even basic behavioral profiles of your own customer base? Not in this century. Not when consumers are as informed as marketers.

One may think this whole analytics thing is overly hyped. Maybe. But it is definitely not as overrated as someone’s gut feelings or so-called business instincts. If analytics didn’t work for you in the past, find ways to make it work. Avoiding it certainly isn’t the answer.

Resistance is futile.

Smart Attribution Modeling

Depending on the size and scope of your advertising and marketing spend, you may have spent time and effort thinking about attribution modeling. Different organizations have very different approaches to attribution.

To this end, developing a valuable attribution model that serves your goals and your business can take many forms. Here, I’ve put together some criteria that a number of organizations we’ve worked with have used effectively to inform decision-making and the choice of attribution methods and models.

First Things First: Determine Your End

The most important questions senior marketers need to ask going into an attribution initiative, at any level of investment, include:

  • “What is the purpose of attributing (estimating) media value?” You may be surprised how often the answer is ill-defined. Make sure you can state, in simple business-outcome terms, what the purpose of your attribution is. All else fails if this step is missed.
  • “How logical, defensible and credible is a potential attribution methodology?” While attribution, by its nature, is rarely deterministic, a methodology must be credible and have a robust basis, or a raison d’être, if you will, if it is to add value. What practitioners often come to appreciate is that the assumptions underpinning any attribution strategy are tenets of the strategy itself.

The right answers for any brand depend on keeping the end in mind and knowing the expected outcome. So the logical starting point is defining your purpose for attributing media value in that context. For example: “to get the best ROI from our advertising investments.”

3 Strategic Attribution Model Levers

In the spirit of keeping it simple, we think in terms of three strategic attribution levers that an organization can benefit from. These strategic levers inform both the selection of the attribution model and the weighting of channels (a sketch of one way to combine them follows the list). They are as follows:

  • Engagement: Measures a customer’s depth of interaction and, potentially, the relationship with the brand.
  • Recency: The amount of time elapsed since the last touch. For example, all other things being equal, a touch yesterday is more valuable than a touch 45 days prior.
  • Intent: Identifies a need the user has, or information the user is seeking. Intent is especially valuable in search, and sometimes in social media; lead-generation programs demonstrate intent, as well. The point of considering “intent” is that it prequalifies traffic in a meaningful way. If the consumer exhibits intent-driven behavior, that behavior should be weighted more heavily in your attribution thought process.
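Here is a minimal sketch of one way the three levers might combine into a single touch weight. The multiplicative form, the 0-1 scales, and the 14-day half-life are illustrative assumptions, not a prescribed formula:

```python
def touch_weight(engagement: float, days_since_touch: float, intent: float,
                 half_life_days: float = 14.0) -> float:
    """Combine the three levers into one weight (lever scores on a 0-1 scale)."""
    recency = 0.5 ** (days_since_touch / half_life_days)  # halves every 14 days
    return engagement * recency * (1.0 + intent)          # intent boosts weight

# A search click yesterday with clear intent vs. a glancing display
# impression 45 days prior.
print(touch_weight(engagement=0.9, days_since_touch=1, intent=1.0))
print(touch_weight(engagement=0.2, days_since_touch=45, intent=0.0))
```

In practice, the lever scores themselves would come from a channel-by-channel evaluation like the one charted below.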

While the decision to “attribute” always incorporates judgment, the attribution is more credible when media touches are evaluated against the three strategic levers, based on the nature of the interaction — or lack thereof. If a user did not engage with an ad, the amount of interaction is low, or even zero.

The following chart breaks out major channels and how you might evaluate each of the strategic levers discussed above.

[Ferranti charts: major channels evaluated against each of the strategic levers]

The ‘Bonus’ Lever: Measurability

Measurability is the “fourth” strategic lever, and it can be considered optional for very large brands that use traditional non-digital channels extensively. A channel with evidence behind its performance can be weighted accordingly: when a channel is measurable, its weighting in the attribution model can be scaled to leverage that channel’s predictability, thereby improving the efficacy of the attribution. The reality is that some channels will have hard measures, while others require more assumptions and inferences.

Brands should take care not to inadvertently “reward” a channel simply because it is hard to measure — and, by the same token, not to punish it unnecessarily, either.

Over- or under-weighting channels that have weak evidence of conversion value can actually reduce the performance of the overall media mix.
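One way to guard against that, sketched below under assumed 0-1 “measurability” scores (both hypothetical), is to shrink a weakly evidenced channel’s weight toward a neutral default rather than trusting its raw value:

```python
def evidence_adjusted_weight(raw_weight: float, measurability: float,
                             neutral: float = 1.0) -> float:
    # Fully measurable channels keep their raw weight; weakly measured
    # channels are pulled toward the neutral default.
    return measurability * raw_weight + (1 - measurability) * neutral

print(evidence_adjusted_weight(raw_weight=2.5, measurability=0.9))  # e.g., paid search
print(evidence_adjusted_weight(raw_weight=2.5, measurability=0.3))  # e.g., out-of-home
```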

Viewability and Display-Weighting

While reach, frequency and targeting are hallmarks of display advertising, it comes with the widely known challenge of “viewability”: ads that are served (and paid for) but never actually seen by a consumer.

When the objective is to improve the ROI of the media mix, ads that are never seen (unviewable) should be accounted for in the attributed value of the channel.

One way marketers account for viewability concerns is to deduct the percentage of ads that can never be seen when weighting online display in the model. Bear in mind that “viewable” generally means only that part of the ad was in view for one second. Specific viewability metrics should be discussed and negotiated with the media outlets or networks you work with.
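The arithmetic behind that deduction is simple; in this sketch, the 50 percent viewability rate is a placeholder for whatever figure you negotiate with your networks:

```python
display_weight = 1.0      # display's weight before the viewability adjustment
viewability_rate = 0.50   # hypothetical share of served impressions viewable

adjusted_weight = display_weight * viewability_rate
print(f"Viewability-adjusted display weight: {adjusted_weight:.2f}")
```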

How Much Is Viewable or Unviewable?

A recent study by Google found that many display ads are never viewed; the weighting of display ads should therefore reflect this reality (opens as a PDF).

Here are some of the issues with viewability that should influence the weighting of display.

  • 56.1 percent of all impressions measured are not seen, though the average publisher viewability is 50.2 percent.
  • The most viewable ad sizes are vertical units, and above the fold is not always viewable; worth considering when weighting display.
  • Page position isn’t always the best indicator of viewability.
  • Viewability varies across content verticals and industries, but content that holds a user’s attention has the highest viewability.
  • The most important thing is to give viewability consideration and weight based on your own experience.

Frequently Used Attribution Models

Let’s summarize the most popular attribution models, in order of frequency of use and based on field experience. There are many more models you may consider, and this list is not intended to be exhaustive. (A short sketch comparing these models follows the list.)

  • Last Click: 100 percent of the sale is credited to the last click, given its immediacy in driving the sale.
  • Linear Attribution: Equal weighting is given to all touchpoints, regardless of when they occurred. Its strength and weakness is in its simplicity. Not every touch is equal and for good reasons that we’ll describe in some detail below.
  • Time-Decay Models: The media touchpoint closest to conversion gets most of the credit, and the touchpoint prior to that will get less credit. This is the best of the simple approaches. It does not, however, account for brand discovery.
  • Position Model: The position model uses intuition and assumption to spread the weights of touches over time, weighting the first and last touches most heavily and spreading the remaining credit evenly across the middle touches. To be clear, this model presupposes “zero” brand awareness — and, therefore, that every customer “discovered” the brand from a (display/banner) ad impression, for example. Blanketing an audience in advertisements can provide great reach and frequency. It also sets a lot of cookies, which can be used to establish the first “position.”
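To make the differences concrete, here is a minimal sketch of the four models applied to one hypothetical conversion path. It assumes each channel appears once in the path, and every number is invented:

```python
from collections import defaultdict

# One conversion path: (channel, days before conversion).
path = [("display", 30), ("email", 12), ("search", 1)]

def last_click(path):
    return {path[-1][0]: 1.0}

def linear(path):
    return {ch: 1.0 / len(path) for ch, _ in path}

def time_decay(path, half_life_days=7.0):
    raw = {ch: 0.5 ** (days / half_life_days) for ch, days in path}
    total = sum(raw.values())
    return {ch: round(w / total, 3) for ch, w in raw.items()}

def position(path, first=0.4, last=0.4):
    credit = defaultdict(float)
    credit[path[0][0]] += first
    credit[path[-1][0]] += last
    for ch, _ in path[1:-1]:                      # middle touches share the rest
        credit[ch] += (1 - first - last) / len(path[1:-1])
    return dict(credit)

for model in (last_click, linear, time_decay, position):
    print(model.__name__, model(path))
```

Tuning the half-life, or the first/last shares in the position model, is where the strategic levers described earlier get expressed in practice.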

Pointers for Getting Started

The closer you can get to individualized attribution vs. broadcast attribution, the stronger the returns. For example, attribution by segment can provide insights you would miss when measuring only the aggregate.

Channel measurability should be weighted accordingly; non-measurable channels should be evaluated by the depth of observable engagement.

The Time-Decay model is widely considered a good starting place for brands getting into media attribution. Brands can simply insert logical, evidence-based assumptions and customize the half-life of the decay based on the three strategic levers described above.

Follow-up discussion and analysis can refine your thinking and allow you to provide a rationale that helps achieve the most credible, logical and valuable attribution capability.