‘Too Much’ Is a Relative Term for Promotional Marketing

If a marketer sends you 20 promotional emails in a month, is that too much? You may say “yes” without even thinking about it. Then why did you not opt out of Amazon email programs when they send far more promotional stuff to you every month? Just because it’s a huge brand? I bet it’s because “some” of its promotions are indeed relevant to your needs.

Marketers are often obsessed with KPIs, such as email delivery, open, and clickthrough rates. Some companies reward their employees based on the sheer number of successful email campaign deployments and deliveries. Inevitably, such a practice leads to “over-promotions.” But does every recipient see it that way?

If a customer responds (opens, clicks, or converts, and conversion is king) multiple times to those 20 emails, maybe that particular customer is NOT over-promoted. Maybe it is okay for you to send more promotional material to that customer, granted that the offers are relevant and beneficial to her. But if she doesn’t open a single email for some time, that is the very definition of “over-promotion,” and it leads to an opt-out.

As you can see, the sheer number of emails (or promotions in any other channel) sent to a person should not be the sole barometer. Every customer is different, and recognizing such differences is the first step toward proper personalization. In other words, before worrying about customizing offers and products for a target individual, figure out her personal threshold for over-promotion. How much is too much for each person?

Figuring out the magic number for each customer is a daunting task, so start with three basic tiers:

  1. Over-promoted,
  2. Adequately promoted, and
  3. Under-promoted.

To get to that, you must merge promotional history data (not just for emails, but for every channel) and response history data (which includes open, clickthrough, browse, and conversion data) on an individual level.

Sounds simple? But marketing organizations rarely get into such practices. Most attributions are done on a channel level, and many do not even have all required data in the same pool. Worse, many don’t have any proper match keys and rules that govern necessary matching steps (i.e., individual-level attribution).

The issue is further compounded by inconsistent rules and data availability among channels (e.g., totally different practices for online and offline channels). So much for the coveted “360-Degree Customer View.” Most organizations fail at “hello” when it comes to marrying promotion and response history data, even for the most recent month.

But is it really that difficult of an operation? After all, any self-respecting direct marketer is accustomed to good old “match-back” routines, complete with resolutions for fractional allocations. For instance, if the target received multiple promotions in the given study period, which one should be attributed to the conversion? The last one? The first one? Or some credit distribution, based on allocation rules? This is where the rule book comes in.
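To make the rule-book idea concrete, here is a minimal last-touch match-back sketch in Python. The record layout, field names, and the 30-day attribution window are illustrative assumptions, not a standard.

```python
from datetime import date, timedelta

def last_touch(promotions, conversion_date, window_days=30):
    """Credit the most recent promotion sent within the attribution
    window before the conversion; return None if nothing qualifies."""
    window_start = conversion_date - timedelta(days=window_days)
    eligible = [p for p in promotions
                if window_start <= p["sent"] <= conversion_date]
    if not eligible:
        return None
    # Last-touch rule: the latest eligible promotion gets full credit.
    return max(eligible, key=lambda p: p["sent"])

# Hypothetical promotion history for one customer.
promos = [
    {"channel": "email",       "sent": date(2024, 3, 1)},
    {"channel": "direct_mail", "sent": date(2024, 3, 10)},
    {"channel": "email",       "sent": date(2024, 3, 18)},
]
credited = last_touch(promos, conversion_date=date(2024, 3, 20))
print(credited["channel"])  # the March 18 email gets the credit
```

Swapping in a first-touch or fractional-allocation rule only changes the final line of the function; the hard part in practice is assembling clean, individually keyed promotion and conversion records in the first place.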

Now, all online marketers are familiar with reporting tools provided by reputable players, like Google or Adobe. Yes, it is relatively simple to navigate through them. But if the goal is to determine who is over-promoted or adequately promoted, how would you go about it? The best way, of course, is to do the match-back on an individual level, like the old days of direct marketing. But thanks to the sheer volume of online activity data and complexity of match-back, due to the frequent nature of online promotions, you’d be lucky if you could just get past basic “last-click” attribution on an individual level for merely the last quarter.

I sympathize with all of the dilemmas associated with individual-level attributions, so allow me to introduce a simpler way (i.e., a cheat) to get to the individual-level statistics of over- and under-promotion.

Step 1: Count the Basic Elements

Set up the study period of one or two years, and make sure to include full calendar years (such as rolling 12 months, 24 months, etc.). You don’t want to skew the figures by introducing the seasonality factor. Then add up all of the conversions (or transactions) for each individual. While at it, count the opens and clicks, if you have extracted data from toolsets. On the promotional side, count the number of emails and direct mails to each individual. You only have to worry about the outbound channels, as the goal is to curb promotional frequency in the end.
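As a rough sketch of Step 1, the counting might look like this in Python. The event-log layout (one record per promotion or conversion, keyed by a customer ID) and the tiny sample data are assumptions for illustration.

```python
from collections import Counter

# Hypothetical event logs for a rolling 12-month study period:
# one record per promotion sent and one per conversion.
promotions = [("c1", "email"), ("c1", "email"), ("c1", "email"),
              ("c2", "email"), ("c2", "direct_mail")]
conversions = [("c1",), ("c2",), ("c2",)]

# Tally promotions and conversions per individual.
promo_counts = Counter(cust for cust, _channel in promotions)
conv_counts = Counter(cust for (cust,) in conversions)

print(promo_counts)  # Counter({'c1': 3, 'c2': 2})
print(conv_counts)   # Counter({'c2': 2, 'c1': 1})
```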

Step 2: Once You Have These Basic Figures, Divide ‘Number of Conversions’ by ‘Number of Promotions’

Perform separate calculations for each channel. For now, don’t worry about the overlaps among channels (i.e., double credit of conversions among channels). We are only looking for directional guidelines for each individual, not comprehensive channel attribution, at this point. For example, email responsiveness would be expressed as “Number of Conversions” divided by “Number of Email Promotions” for each individual in the given study period.
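Continuing the sketch, Step 2 is a simple per-customer division for one channel at a time; the counts below are made-up numbers.

```python
# Promotion and conversion counts per customer for one channel (email),
# carried over from Step 1; the figures are illustrative.
email_promos = {"c1": 20, "c2": 5, "c3": 12}
conversions  = {"c1": 1,  "c2": 2, "c3": 0}

# Response rate = conversions / promotions, per individual.
response_rate = {
    cust: conversions.get(cust, 0) / sent
    for cust, sent in email_promos.items()
    if sent > 0  # skip anyone who received no email promotions
}
print(response_rate)  # {'c1': 0.05, 'c2': 0.4, 'c3': 0.0}
```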

Step 3: Now That You Have Basic ‘Response Rates’

These response rates are computed separately for each channel, and you must group them into good, bad, and ugly categories.

Examine the distribution curve of the response rates, and break it into three segments:

  1. Under-promoted (the top part, in terms of response rate),
  2. Adequately Promoted (middle part of the curve),
  3. Over-promoted (the bottom part, in terms of response rate).

Consult with a statistician, but when in a hurry, start with one standard deviation (i.e., a Z-score of 1) from the top and the bottom. If the distribution is in a classic bell-curve shape (in many cases, it may not be), that will give roughly 16% each for the over- and under-promoted segments, and conservatively leave about two-thirds of the target population in the middle. But of course, you can be more aggressive with the cutoff lines, and one size will not fit all cases.
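Here is a minimal sketch of that cutoff logic, assuming the per-customer response rates from Step 2 are already computed; the numbers and customer IDs are illustrative.

```python
import statistics

# Individual response rates from Step 2 (illustrative numbers).
rates = {"c1": 0.10, "c2": 0.12, "c3": 0.11, "c4": 0.13,
         "c5": 0.12, "c6": 0.30, "c7": 0.01, "c8": 0.11}

mean = statistics.mean(rates.values())
sd = statistics.stdev(rates.values())

def tier(rate):
    # One standard deviation from the mean on each side.
    if rate > mean + sd:
        return "Under-promoted"      # responds well; could take more
    if rate < mean - sd:
        return "Over-promoted"       # barely responds; pull back
    return "Adequately promoted"

segments = {cust: tier(r) for cust, r in rates.items()}
print(segments["c6"])  # the standout responder
print(segments["c7"])  # the near-silent recipient
```

Because real response-rate distributions are rarely bell-shaped, percentile-based cutoffs are a reasonable alternative to standard deviations; the refresh logic stays the same either way.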

In any case, if you keep updating these figures at least once a month, they will automatically be adjusted, based on new data. In other words, if a customer stops responding to your promotions, she will consequently move toward the lower segments (in terms of responsiveness) without any manual intervention.

Putting It All Together

Now you have at least three basic segments grouped by their responsiveness to channel promotions. So, how would you use them?

Start with the “Over-promoted” group, and please decrease the promotional volume for them immediately. You are basically training them to ignore your messages by pushing them too far.

For the “Adequately Promoted” segment, start doing some personalization, in terms of products and offers, to increase response and value. Status quo doesn’t mean that you just repeat what you have been doing all along.

For “Under-promoted” customers, show some care. That does NOT mean you just increase the mail volume to them. They look under-promoted because they are repeat customers. Treat them with special offers and exclusive invitations. Do not ever take them for granted just because they tolerated bombardments of promotions from you. Figure out what “they” are about, and constantly pamper them.

Find Your Strategy

Why do I bother to share this much detail? Because as a consumer, I am so sick of mindless over-promotion. I wouldn’t even ask for sophisticated personalization from every marketer. Let’s start by doing away with carpet-bombing everyone. That begins with figuring out who is being over-promoted.

And by the way, if you are sending two emails a day to everyone, don’t bother with any of this data work. “Everyone” in your database is pretty much over-promoted. So please curb your enthusiasm, and give them a break.

Sometimes less is more.

Have We Ruined 1:1 Marketing? How the Corner Grocer Became a Creepy Intruder

When Don Peppers and Martha Rogers wrote “The One to One Future: Building Relationships One Customer at a Time” in 1993, the Internet was a mere twinkle in Al Gore’s eye. But direct marketers felt excited, and even vindicated, about the promise of a future where data-driven personalization would deliver the right message to the right customer at the right time.

But now that it’s here, are consumers happy with it?

Recently, I had the students in my direct marketing course at Rutgers School of Business read the introduction to “The Complete Database Marketer” by Arthur Hughes, which was published in 1996 when only 22% of people in the U.S. had Internet access. In the intro entitled “The Corner Grocer,” Hughes explains how database marketing can connect marketers with their customers with the same personal touch that the corner grocer had by knowing all of his customers’ names, family members, and usual purchases.

The students then had to compare the 1996 version of database marketing, as described by Hughes, with the current state of online direct/database marketing, where data collection has been enabled by e-commerce, social media, and search engine marketing.

  • What marketing innovations has technology enabled that didn’t exist before?
  • How has online marketing enhanced the concept of database marketing?
  • How have new marketing techniques and technologies changed consumer behavior?
  • How has social media affected direct/data-driven marketing for the marketer and the consumer?
  • What are some of the fundamental differences between the challenges and opportunities that today’s online marketers face vs. those that the 1996 database marketer faced?

Most of these digital natives were born after Hughes’s book was published. The students experience digital marketing every day, and they’ve seen it evolve over their lifetimes. While they concede that the targeted ads they experience are usually relevant, several of them noted that they don’t feel they have been marketed to as individuals; but rather, as a member of a group that was assigned to receive a specific digital advertisement by an algorithm. They felt that the idealized world of database marketing that Hughes described in 1996 was actually more personal than the advanced algorithmic targeting that delivers ads to their social media feeds. Hughes told the tale of Sally Warner and her relationship with the St. Paul’s Luggage Company that started with returning a warranty card and progressed with a series of direct mail and telemarketing. For example, knowing that Sally Warner had a college-bound son, St. Paul’s sent a letter suggesting luggage as a graduation gift. Hughes describes the concept of database marketing:

“Every contact with the customer will be an opportunity to collect more data about the customer. This data will be used to build knowledge about the customer. The knowledge will be used to drive strategy leading to practical, directly personal, long-term relationships, which produce sales. The sales, in turn, will yield more data which will start the process all over again.”

But Arthur couldn’t foresee the data collection capabilities of Google, Facebook, Instagram, and Amazon. Instead of the friendly corner grocer, database marketers have become a creepy intruder. How else could an ad for a product my wife had searched for at Amazon on her laptop generate an ad for the same product in my Instagram feed? (Alright, I will concede that we use the same Amazon Prime membership, but really?) We don’t have a smart speaker in the house, and I dread to think about how much creepier it could become if we did.

Recently, while visiting someone who has a Google Home assistant, I asked about the level of spying they experienced in exchange for the convenience of having voice-activated control over their household lights and appliances. They responded by asking, “Google, are you spying on us?”

The smart speaker replied, “I don’t know how to answer that question.”

Have we ruined 1:1 marketing?

Do you know how to answer that question? Tell me.

Beware the Small Sample

Everyone knows that the outcome of flipping a fair coin will be 50 percent heads/50 percent tails. But that’s only when the number of tries is large enough to generate the 50-50 outcome. In fact, in a small sample of four coin flips, the probability of getting tails when the last toss was a head is almost 60 percent. More about that later.

I’ve seen all types of people fall prey to the lure of making decisions based on small samples: They include undergraduates in my Advertising Research classes, brand managers in Fortune 500 companies, even seasoned direct marketers. Some examples:

  • Undergrads planning a focus group with 10 college students predict that they will be able to report what percentage of college students are aware of or use a particular product. Of course, we can forgive the undergrads. They’re in class to learn that when you talk to 10 students at Temple, you’re going to find out what those 10 people at Temple think, not what 20 million undergraduates across the U.S. think.
  • Brand managers exposing three focus groups of 10 people (30 subjects, in total) to different creative concepts and concluding that, because 70 percent of the people interviewed like a particular concept, it must be a winner. This is a bit more egregious, because we would expect marketers with MBAs to understand that when 21 out of 30 people think something, there’s a 90 percent chance that it could have been as few as 17 – slightly more than half, rather than almost three quarters.
  • Direct marketers testing two creative executions against each other in cells of 25,000 and declaring one execution the winner when it generated 125 responses (0.5 percent) vs. 100 responses (0.4 percent). Here, there’s a 90 percent chance that the “winning” cell could have had as few as 108 responses and the loser could have had as many as 145.
    (See the “Statistical Variation Tables” in “Direct Marketing – Strategy, Planning and Execution” by Ed Nash for confirmation of these estimates).
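The intervals quoted above can be approximated with the normal approximation to the binomial; the sketch below assumes a 90 percent two-sided interval (z ≈ 1.645), which may differ slightly from Nash’s published tables.

```python
import math

def ninety_pct_interval(responders, n):
    """90% confidence interval for a binomial count, expressed
    in responders, via the normal approximation (z = 1.645)."""
    p = responders / n
    se = math.sqrt(p * (1 - p) / n)
    margin = 1.645 * se
    return (n * (p - margin), n * (p + margin))

# The focus-group example: 21 of 30 liked the concept.
low, high = ninety_pct_interval(21, 30)
print(round(low))   # roughly 17 people

# The direct mail test: 125 responses from a cell of 25,000.
low, high = ninety_pct_interval(125, 25000)
print(round(low))   # roughly 107 responses, near the 108 quoted
```

The normal approximation lands within a response or two of the table values quoted above; exact binomial intervals would shift the bounds slightly.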

These errors in judgement are the result of a form of sample bias where the sample being studied is not large enough to be representative of the entire population. The temptation to make false conclusions is particularly strong in qualitative research, which is useful for gaining insights into people’s attitudes, beliefs and motivations, but not intended to determine what percentage of the population has those particular traits. Qualitative researchers always caveat their focus group findings with an appropriate disclaimer, but that doesn’t always stop their clients from hearing what they want to hear.

So what about the potential outcomes of four coin tosses?

A set of four tosses produces 16 possible outcomes from HHHH to TTTT. Steven Landsburg of the University of Rochester created a table that shows the probability of a “head” being followed by another head for every possible combination of the four flips. In the small sample of four coin flips, there are 14 possible outcomes where a head can be followed by another head (i.e., excluding TTTH and TTTT).

If we add up the probabilities that a head will be followed by a head in the 14 sequences where that is possible (i.e., 100 percent for HHHH, 67 percent for HHHT, 50 percent for HHTH, 0 percent for HTHT, etc.), the result is 567 percent. Dividing that number by 14 (the number of possible outcomes where a head can be followed by a head), we get 40.5 percent, which is the average percentage of the time a head is followed by another head in four coin flips. (See the New York Times piece “Gamblers, Scientists and the Mysterious Hot Hand” for a table of the potential outcomes and a deeper explanation of the phenomenon).
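That averaging step is easy to verify by brute force; this sketch enumerates all 16 four-flip sequences:

```python
from itertools import product

rates = []
for seq in product("HT", repeat=4):
    # Only the first three flips can be followed by another flip;
    # collect what comes after each head among them.
    followed = [seq[i + 1] for i in range(3) if seq[i] == "H"]
    if followed:  # skips TTTH and TTTT, where no head is ever followed
        rates.append(followed.count("H") / len(followed))

print(len(rates))                         # 14 qualifying sequences
print(round(sum(rates) / len(rates), 3))  # 0.405, i.e., 40.5 percent
```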

What’s at play here is selection bias. Examining all 16 potential outcomes from HHHH to TTTT, the flips that follow a head split evenly: 12 heads and 12 tails. But averaging the head-after-head rate across only the 14 sequences where a head can be followed by another flip, we find that we get another head just 40.5 percent of the time.

So be careful about what conclusions you draw from samples that are too small to represent the entire population. And if you’re scoring a game of four coin flips sequence by sequence, the flip after a head will come up tails more often than not.