‘Too Much’ Is a Relative Term for Promotional Marketing

If a marketer sends you 20 promotional emails in a month, is that too much? You may say “yes” without even thinking about it. Then why did you not opt out of Amazon email programs when they send far more promotional stuff to you every month? Just because it’s a huge brand? I bet it’s because “some” of its promotions are indeed relevant to your needs.

Marketers are often obsessed with KPIs, such as email delivery, open, and clickthrough rates. Some companies reward their employees based on the sheer number of successful email campaign deployments and deliveries. Inevitably, such a practice leads to “over-promotions.” But does every recipient see it that way?

If a customer responds (opens, clicks, or converts, and conversion is king) multiple times to those 20 emails, maybe that particular customer is NOT over-promoted. Maybe it is okay for you to send more promotional material to that customer, granted that the offers are relevant and beneficial to her. But if she doesn’t open a single email for some time, that is the very definition of “over-promotion,” and it leads to an opt-out.

As you can see, the sheer number of emails (or promotions through any other channel) to a person should not be the sole barometer. Every customer is different, and recognizing those differences is the first step toward proper personalization. In other words, before worrying about customizing offers and products for a target individual, figure out her personal threshold for over-promotion. How much is too much for each person?

Figuring out the magic number for each customer is a daunting task, so start with three basic tiers:

  1. Over-promoted,
  2. Adequately promoted, and
  3. Under-promoted.

To get to that, you must merge promotional history data (not just for emails, but for every channel) and response history data (which includes open, clickthrough, browse, and conversion data) on an individual level.
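A minimal sketch of what that individual-level merge could look like, assuming the two histories live in flat tables keyed by a shared customer ID (all table and column names here are illustrative, not a prescribed schema):

```python
import pandas as pd

# Illustrative promotion history (one row per outbound touch).
promotions = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "channel": ["email", "email", "email", "direct_mail"],
})

# Illustrative response history (opens, clicks, conversions).
responses = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "event": ["open", "conversion", "click"],
})

# One row per individual: promotions received vs. responses observed.
promo_counts = promotions.groupby("customer_id").size().rename("promotions")
resp_counts = responses.groupby("customer_id").size().rename("responses")
merged = (pd.concat([promo_counts, resp_counts], axis=1)
          .fillna(0).astype(int))
print(merged)
```

Customer 3 received a promotion but never responded, so the `fillna(0)` step is what surfaces the silent (potentially over-promoted) individuals.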

Sounds simple? Yet marketing organizations rarely put this into practice. Most attributions are done on a channel level, and many do not even have all of the required data in the same pool. Worse, many don’t have proper match keys or the rules that govern the necessary matching steps (i.e., individual-level attribution).

The issue is further compounded by inconsistent rules and data availability among channels (e.g., totally different practices for online and offline channels). So much for the coveted “360-Degree Customer View.” Most organizations fail at “hello” when it comes to marrying promotion and response history data, even for the most recent month.

But is it really that difficult of an operation? After all, any respectable direct marketer is accustomed to good old “match-back” routines, complete with resolutions for fractional allocations. For instance, if the target received multiple promotions in the given study period, which one should be attributed to the conversion? The last one? The first one? Or some credit distribution, based on allocation rules? This is where the rule book comes in.
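A hypothetical sketch of such a rule book: given the promotions a customer received before a conversion (in chronological order), attribute the credit by last-touch, first-touch, or an equal fractional split. The function and rule names are my own illustrations, not the author’s system.

```python
def allocate_credit(promotions, rule="last"):
    """Return {promotion_id: credit}, with credits summing to 1.0.

    promotions: list of promotion IDs in chronological order.
    rule: "last" (last-touch), "first" (first-touch),
          or "fractional" (equal split across all touches).
    """
    if not promotions:
        return {}
    if rule == "last":
        return {promotions[-1]: 1.0}
    if rule == "first":
        return {promotions[0]: 1.0}
    if rule == "fractional":
        share = 1.0 / len(promotions)
        return {promo: share for promo in promotions}
    raise ValueError(f"unknown allocation rule: {rule}")

print(allocate_credit(["email_jan", "email_feb", "catalog_mar"], "fractional"))
```

The point of writing the rules down as code is that the same conversion will be credited the same way every month, instead of whichever way the analyst on duty remembers.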

Now, all online marketers are familiar with the reporting tools provided by reputable players, like Google or Adobe. Yes, it is relatively simple to navigate through them. But if the goal is to determine who is over-promoted or adequately promoted, how would you go about it? The best way, of course, is to do the match-back on an individual level, like the old days of direct marketing. But given the sheer volume of online activity data and the complexity of match-back (due to the frequent nature of online promotions), you’d be lucky to get past basic “last-click” attribution on an individual level for merely the last quarter.

I sympathize with all of the dilemmas associated with individual-level attributions, so allow me to introduce a simpler way (i.e., a cheat) to get to the individual-level statistics of over- and under-promotion.

Step 1: Count the Basic Elements

Set up a study period of one or two years, and make sure to include full calendar years (such as a rolling 12 months, 24 months, etc.). You don’t want to skew the figures by introducing a seasonality factor. Then add up all of the conversions (or transactions) for each individual. While you’re at it, count the opens and clicks, if you have extracted such data from your toolsets. On the promotional side, count the number of emails and direct mail pieces sent to each individual. You only have to worry about the outbound channels, as the goal is to curb promotional frequency in the end.
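The counting in Step 1 can be sketched as follows, assuming the promotion and response events have already been stacked into one event table (the field names and event labels are assumptions for illustration):

```python
import pandas as pd

def count_basics(events, as_of, months=24):
    """Count each event type per individual over a full rolling window.

    events: DataFrame with customer_id, event_type (e.g., 'conversion',
    'email_promo', 'dm_promo'), and event_date columns.
    Using full rolling years avoids seasonality skew.
    """
    start = as_of - pd.DateOffset(months=months)
    window = events[(events["event_date"] > start) &
                    (events["event_date"] <= as_of)]
    return (window.groupby(["customer_id", "event_type"]).size()
            .unstack(fill_value=0))

events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2],
    "event_type": ["email_promo", "email_promo", "conversion", "email_promo"],
    "event_date": pd.to_datetime(
        ["2023-03-01", "2023-09-01", "2023-09-03", "2021-01-01"]),
})
counts = count_basics(events, as_of=pd.Timestamp("2024-01-01"))
print(counts)
```

Note that customer 2’s only promotion falls outside the rolling 24-month window, so it is excluded from the counts entirely rather than diluting the figures.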

Step 2: Once You Have These Basic Figures, Divide ‘Number of Conversions’ by ‘Number of Promotions’

Perform separate calculations for each channel. For now, don’t worry about the overlaps among channels (i.e., double credit of conversions among channels). We are only looking for directional guidelines for each individual, not comprehensive channel attribution, at this point. For example, email responsiveness would be expressed as “Number of Conversions” divided by “Number of Email Promotions” for each individual in the given study period.
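As a sketch, with a guard for individuals who were never promoted through the channel (the division-by-zero case the raw ratio would otherwise hit):

```python
def response_rate(conversions, promotions):
    """Conversions per promotion for one channel; 0.0 if never promoted."""
    return conversions / promotions if promotions else 0.0

# Example: 3 conversions against 20 email promotions in the study period.
print(response_rate(3, 20))   # 0.15
```

Run once per channel per individual, this produces the per-channel “response rates” that the next step segments.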

Step 3: Now That You Have Basic ‘Response Rates,’ Group Them Into Tiers

These response rates are calculated per channel, and you must group them into good, bad, and ugly categories. Examine the distribution curve of the response rates, and break it into three segments:

  1. Under-promoted (the top part, in terms of response rate),
  2. Adequately promoted (the middle part of the curve), and
  3. Over-promoted (the bottom part, in terms of response rate).

Consult with a statistician, but when in a hurry, start with one standard deviation (or a Z-score of one) from the top and the bottom. If the distribution follows a classic bell curve (in many cases, it may not), that will give roughly 16% each for the over- and under-promoted segments, and conservatively leave about two-thirds of the target population in the middle. But of course, you can be more aggressive with the cutoff lines, and one size will not fit all cases.
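A minimal sketch of that three-way split, using one standard deviation from the mean as the first-pass cutoffs (the thresholds, like everything else here, should be tuned to your own distribution):

```python
from statistics import mean, stdev

def tier(rate, all_rates):
    """Assign one individual's channel response rate to a promotion tier,
    using one standard deviation from the mean as the cutoffs."""
    mu, sigma = mean(all_rates), stdev(all_rates)
    if rate >= mu + sigma:
        return "under-promoted"    # strong responders; room for more contact
    if rate <= mu - sigma:
        return "over-promoted"     # weak responders; cut the volume
    return "adequately promoted"

rates = [0.00, 0.10, 0.10, 0.10, 0.20]
print([tier(r, rates) for r in rates])
```

Because the cutoffs are recomputed from the current distribution, rerunning this monthly gives the self-adjusting behavior described below: a customer who stops responding drifts toward the over-promoted tier on her own.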

In any case, if you keep updating these figures at least once a month, they will automatically be adjusted, based on new data. In other words, if a customer stops responding to your promotions, she will consequently move toward the lower segments (in terms of responsiveness) without any manual intervention.

Putting It All Together

Now you have at least three basic segments grouped by their responsiveness to channel promotions. So, how would you use it?

Start with the “Over-promoted” group, and please decrease the promotional volume for them immediately. You are basically training them to ignore your messages by pushing them too far.

For the “Adequately Promoted” segment, start doing some personalization, in terms of products and offers, to increase response and value. Status quo doesn’t mean that you just repeat what you have been doing all along.

For “Under-promoted” customers, show some care. That does NOT mean you just increase the mail volume to them. They look under-promoted because they are repeat customers. Treat them with special offers and exclusive invitations. Do not ever take them for granted just because they tolerated bombardments of promotions from you. Figure out what “they” are about, and constantly pamper them.

Find Your Strategy

Why do I bother to share this much detail? Because as a consumer, I am so sick of mindless over-promotions. I wouldn’t even ask for sophisticated personalization from every marketer. Let’s start with doing away with carpet bombing to all. That begins with figuring out who is being over-promoted.

And by the way, if you are sending two emails a day to everyone, don’t bother with any of this data work. “Everyone” in your database is pretty much over-promoted. So please curb your enthusiasm, and give them a break.

Sometimes less is more.

Why Many Marketing Automation Projects Go South

As a data and analytics consultant, I often get called in when things do not work out as planned or expected. I guess my professional existence is justified by someone else’s problems. If everyone follows the right path from the beginning and everything goes smoothly all of the time, I would not have much to clean up after.

In that sense, maybe my role model should be Mr. Wolf in the movie “Pulp Fiction.” Yeah, that guy who thinks fast and talks fast to help his clients get out of trouble pronto.

So, I get to see all kinds of data, digital, and analytical messes. The keyword in the title of this series “Big Data, Small Data, Clean Data, Messy Data” is definitely not “Big” (as you might have guessed already), but “Messy.” When I enter the scene, I often see lots of bullet holes created by blame games and traces of departed participants of the projects. Then I wonder how things could have gone so badly.

There are so many ways to mess up data or analytics projects, may they be CDP, Data Lake, Digital Transformation, Marketing Automation, or whatever sounds cool these days. First off, none of these items are simple to develop, or something that you just buy off the shelf. Even if you did, someone would have to tweak more than a few buttons to customize the toolset to meet your unique requirements.

What did I say about those merchants of buzzwords? I don’t remember the exact phrase, but I know I wouldn’t have used those words.

Like a veteran cop, I’ve developed some senses to help me figure out what went wrong. So, allow me to share some common traps that many marketing organizations fall into.

No Clear Goal or Blueprint

Surprisingly, a great many organizations get into complex data or analytics projects with only vague ideas or wish lists. Imagine constructing a building without a clear purpose or a blueprint. What is the building for? For whom, and for what purpose? Is it a residential building, an office building, or a commercial property?

Just like a building is not a simple sum of raw materials, a database isn’t a random pile of data, either. But do you know how many times I get to sit in on a meeting where “putting every data source together in one place” is the goal in itself? I admit that would be better than data scattered all over the place, but the goal should be defined much more precisely: how the data will be used, by whom, for what, through which channels, using what types of toolsets, etc. Otherwise, it just becomes a monster that no one wants to get near.

I’ve even seen so-called data-oriented companies go out of business thanks to monstrous data projects. Like any major development project, what you leave out is as important as what you put in. In other words, the sum of absolutely everyone’s wish list is no blueprint at all, but the first step toward the inevitable demise of the project. The technical person in charge must be business-oriented, and be able to say “no” to some requests, looking 10 steps down the line. Let’s just say that I’ve seen too many projects get hopelessly stuck, thanks to features that would barely matter in practice (as in “You want what in real-time?!”). You might as well design a car that flies.

No Predetermined Success Metrics

Sometimes, the project goes well, but executives and colleagues still define it as a failure. For instance, a predictive model, no matter how well it is constructed mathematically, cannot single-handedly overcome bad marketing. Even with effective marketing messages, it cannot just keep doubling the performance level indefinitely. Huge jumps in KPI (e.g., doubling the response rate) may be possible for the very first model ever (as it would be, compared to the previous campaigns without any precision targeting), but no one can expect such improvement year after year.

Before a single byte of data is manipulated, project champions must determine the success criteria for the project: coverage, accuracy, speed of execution, engagement level, revenue improvement (by channel), etc. Yes, it would be hard to sell the idea with lots of disclaimers attached to the proposal, but maybe not starting the project at all would be better than being called a failure after spending lots of precious time and money.

Some goals may conflict with each other, too. For instance, response rate is often inversely related to the value of the transaction. So, if the blame game starts, how are you going to defend a predictive model that is designed primarily to drive the response rate, not necessarily the revenue per transaction? Set clear goals in numeric form, and more importantly, share the disclaimers upfront. Otherwise, “something” will look wrong to someone.

But what if your scary boss wants to boost rate of acquisition, customer value, and loyalty all at the same time, no matter what? Maybe you should look for an exit.

Top-Down Culture

By nature, analytics-oriented companies are flatter and less hierarchical in structure. In such places, data and empirical evidence win the argument, not the organizational rank of the speaker. Top-down cultures are the opposite, and it gets worse when the highest-ranking officer has very little knowledge of data or analytics in general. In such a culture, no one would question that C-level executive in a nice suit. Foremost, the executive wouldn’t question his own gut feelings, as those gut feelings put him in that position in the first place. How can he possibly be wrong?

Trouble is that the world is rapidly changing around any organization. And monitoring the right data from the right place is the best way to keep informed and take actions preemptively. I haven’t encountered any gut-feeling — including my own — that stood the test of time better than data-based decision-making.

Now, sometimes the top-down culture is a good thing, though. If the organizational goals are clearly set, and if the top executive supports a big data project instead of launching blame games (no pun intended here), then countless inter-departmental conflicts will be mitigated upfront (as in, “Hey, everyone, we are doing this, alright?”).

Conflicts Among Teams — No Buy-in, No Use

But no amount of executive force can eliminate all infighting that easily. Some may say “Yeah, yeah, yeah” in front of the CEO or CMO, but sabotage the whole project behind the scenes. In fact, I’ve seen many IT departments get in the way of the noble idea of “Customer-360.”

Why? It could be a data ownership issue, security concerns, or a lack of understanding of 1:1 marketing or advanced analytics. Maybe they just want the status quo, or see any external influence on data-related matters as a threat. In any case, imagine a situation where the very people who hold the key to the source data are NOT cooperating with data or analytics projects for the benefit of other departments. Or worse, maybe you have “seen” such cases, as they are so common.

Another troublesome example would be on the user side. Imagine a situation where sales or marketing personnel do not buy into any new way of doing things, such as using model scores to understand the target better. Maybe they got burned by bad models in the past. Or maybe they just don’t want to change things around, like those old-school talent scouts in the movie “Moneyball.” Regardless, no buy-in, no use. So much for that shiny marketing automation project that sucked up a seven-figure budget to develop and deploy.

Every employee puts continued employment above any project, dumb or smart. Do not underestimate people’s desire to keep their jobs with minimal change.

Players Haven’t Seen Really Messy Situations Before

As you can see, data or analytics projects are not just about technologies or mathematics. Further, data themselves can be a hindrance. I’ve written many articles about “good” data, but they are indeed quite rare in real life. Data must be accurate, consistent, up-to-date, and applicable in most cases, without an excessive amount of missing values. And keeping them that way is a team sport, not something a lone tech genius can handle.

Unfortunately, most graduates with degrees in computer science or statistics don’t get to see a real bloody mess before they are thrown onto the battlefield. In school, problems are nicely defined by the professors, and the test data are always in pristine condition. But I don’t think I have seen such clean and error-free data since my school days, which were indeed a lifetime ago.

Dealing with organizational conflicts, vague instructions, and messy data is part of the job for any data professional. It requires quite a balancing act to consistently provide “the least wrong answers” to constituents who have vastly different interests. If the balance is even slightly off, you may end up with a technically sound solution that no one adopts into their practices. Forget about full automation of anything in that situation.

Already Spent Money on Wrong Things

This one is a heart-breaker for me, personally. I get onto the scene, examine the case, and provide step-by-step solutions to get to the goal, only to find out that the client company spent money on the wrong things already and has no budget left to remedy the situation. We play with data to make money, but playing with data and technology costs money, too.

There are so many snake oil salespeople out there, over-promising left and right with lots of sweet-to-the-ears buzzwords. Yeah, if you buy this marketing automation toolset armed with state-of-the-art machine-learning features, you will get actionable insights out of any kind of data in any form through any channel. Sounds too good to be true?

Marketing automation is really about the “combination” of data, analytics, digital content, and display technologies (for targeted messaging). It is not just one thing, and there is no silver bullet. Even if some other companies may have found one, will it be applicable to your unique situation, as is? I highly doubt it.

The Last Word on How to Do Marketing Automation Right

There are so many reasons why marketing automation projects go south (though I don’t understand why going “south” is a bad thing). But one thing is for sure. Marketing automation — or any data-related project — is not something that one or two zealots in an organization can achieve single-handedly with some magic toolset. It requires organizational commitment to get it done, get it utilized, and get improved over time. Without understanding what it should be about, you will end up automating the wrong things. And you definitely don’t want to get to the wrong answer any faster.

Marketing’s New Role in Product and Service Delivery

This is the first in a series of posts about the three greatest challenges facing marketing organizations in 2018:

  • Becoming more accountable
  • Undergoing a digital transformation
  • Evolving to put the customer experience front-and-center

The first few posts will center on accountability. It would be easy to dive into what KPIs we should have, how to measure them and so forth, but before we go there let’s define what marketers are accountable for in 2018. Analysts report a big gap between the expectations of accountability and the ability to measure it.

76% of marketing organizations are accountable for a P&L, but marketing accountability goes beyond just the profit and loss numbers, doesn’t it? (Chart credit: Kevin Joyce)

5 Basics of Marketing Accountability

During our lifetimes marketing has been responsible for the following:

  • Gathering customer requirements and defining the product and service set
  • Helping create and retain customers with demand generation programs, events, social, etc.
  • Increasing brand equity
  • Managing technology and channel partners
  • Empowering the sales channels with market data, prospect data, competitive data and sales tools and collateral

With the exception of demand generation, it is difficult to pin revenue contribution on the other responsibilities. However, something else is at play here. Marketing’s role is evolving, and there are areas where we are being held accountable that are not on the list above.

Marketing’s Evolving Role

Organizations have always looked to marketing for help with communications. Marcom was a standard block in every marketing organization chart, and indeed public relations, press releases, creation of collateral and event management are already included in the five basics listed above. However, the need for customer communication is growing.

The number of channels that customers and prospects use to communicate with us has grown from in person, telephone, fax and events in the 90s to include: email, chat, a variety of social channels, YouTube, podcast channels, websites, blogs, user forums, etc. And which function in the company is most familiar with and engaged in these channels and technologies? Marketing.

Now, before you dismiss this as just an expansion of channels used in the existing demand generation and brand equity responsibilities listed above, consider this. Companies are using marketing to communicate new customer welcome messages, customer feedback communications, license renewal messages, satisfaction surveys, availability of training videos, programs to increase customer adoption, etc.