Measuring Customer Engagement: It’s Not Easy and It Takes Time

Here’s what’s easy: Measuring the effect of individual engagements like Web page views, email opens, paid and organic search clicks, call center interactions, Facebook likes, Twitter follows, tweets, retweets, referrals, etc.

Here’s what’s hard: Understanding the combined effect of your promotions across all those channels.

Many marketers turn to online attribution methods to assign credit for all or part of an individual order across multiple online channels. Digital marketing guru Avinash Kaushik points out the strengths and weaknesses of various methods in “Multichannel Attribution: Definitions, Models and a Reality Check” on his blog, Occam’s Razor, and concludes that none are perfect and many are far from it.

But online attribution models look to give credit to an individual tactic rather than measuring the combined effects of your entire promotion mix. Here’s a different approach to getting a holistic view of your entire promotion mix. It’s similar to the methodology I discussed in the post “Use Market Research to Tie Brand Awareness and Purchase Intent to Sales,” and like that methodology, it’s not something you’re going to be able to do overnight. It’s an iterative process that will take some time.

Start by assigning a point value to every consumer touch and every consumer action to create an engagement score for each customer. This process will be different for every marketer and will vary according to your customer base and your promotion mix. For illustration’s sake, you might weight a passive touch such as an email open with fewer points than an active step such as a referral.

Next, perform this preliminary analysis:

  1. Rank your customers on sales volume for different time periods
    —previous month, quarter, year, etc.
  2. Rank your customers on their engagement score for the same periods
  3. Examine the correlation between sales and engagement
    —How much is each point of engagement worth in sales $$$?

After you’ve done this preliminary scoring, do your best to isolate customers who were not exposed to specific elements of the promotion mix into control groups, i.e., they didn’t engage on Facebook or they didn’t receive email. Compare their revenue against the rest of the file to see how well you’ve weighted that particular element. With several iterations of this process over time, you will be able to place a dollar value on each point of engagement and plan your promotion mix accordingly.
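To make the scoring and correlation steps concrete, here is a minimal pandas sketch. The point values, file names and column names are assumptions for illustration only; substitute your own touch types and weights.

```python
import pandas as pd

# Hypothetical point values per touch/action -- yours will differ.
POINTS = {"email_open": 1, "page_view": 1, "search_click": 2,
          "facebook_like": 3, "retweet": 3, "referral": 5}

# Assumed files: one row per customer touch, and revenue per customer per period.
touches = pd.read_csv("touches.csv")   # columns: customer_id, touch_type, period
sales = pd.read_csv("sales.csv")       # columns: customer_id, period, revenue

touches["points"] = touches["touch_type"].map(POINTS).fillna(0)
engagement = (touches.groupby(["customer_id", "period"])["points"]
              .sum().rename("engagement_score").reset_index())

scored = engagement.merge(sales, on=["customer_id", "period"], how="outer").fillna(0)

# Rank customers on sales and on engagement for each period,
# then see how closely the two move together.
scored["sales_rank"] = scored.groupby("period")["revenue"].rank(ascending=False)
scored["engagement_rank"] = scored.groupby("period")["engagement_score"].rank(ascending=False)
print(scored.groupby("period").apply(
    lambda g: g["engagement_score"].corr(g["revenue"])))  # correlation per period
```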

How you assign your point values may seem arbitrary at first, but you will need to work through this iteratively, looking at control cells wherever you can isolate them. For a more scientific approach, run a regression analysis on the customer file with revenue as the dependent variable and the number and types of touches as the independent variables. The more complete your customer contact data is, the lower your p value and the more descriptive the regression will be in identifying the contribution of each element.
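A bare-bones version of that regression, assuming a customer file with one row per customer and hypothetical column names, might look like this:

```python
import pandas as pd
import statsmodels.api as sm

# Assumed layout: one row per customer, revenue plus counts of each touch type.
df = pd.read_csv("customer_file.csv")
X = df[["emails_opened", "pages_viewed", "search_clicks", "social_actions"]]
y = df["revenue"]

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # coefficients estimate the dollar contribution of each touch type
print(model.pvalues)     # more complete contact data should tighten these p-values
```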

As with any methodology, this one is only as good as the data you’re able to put into it, but don’t be discouraged if your data is not perfect or complete. Even in an imperfect world, this exercise will get you closer to a holistic view of customer engagement.

Sex and the Schoolboy: Predictive Modeling – Who’s Doing It? Who’s Doing It Right?

Forgive the borrowed interest, but predictive modeling is to marketers as sex is to schoolboys.

They’re all talking about it, but few are doing it. And among those who are, fewer are doing it right.

In customer relationship marketing (CRM), predictive modeling uses data to predict the likelihood of a customer taking a specific action. It’s a three-step process:

1. Examine the characteristics of the customers who took a desired action

2. Compare them against the characteristics of customers who didn’t take that action

3. Determine which characteristics are most predictive of the customer taking the action and the value or degree to which each variable is predictive

Predictive modeling is useful in allocating CRM resources efficiently. If a model predicts that certain customers are less likely to respond to a specific offer, then fewer resources can be allocated to those customers, allowing more resources to be allocated to those who are more likely to respond.

Data Inputs
A predictive model will only be as good as the input data that’s used in the modeling process. You need the data that define the dependent variable; that is, the outcome the model is trying to predict (such as response to a particular offer). You’ll also need the data that define the independent variables, or the characteristics that will be predictive of the desired outcome (such as age, income, purchase history, etc.). Attitudinal and behavioral data may also be predictive, such as an expressed interest in weight loss, fitness, healthy eating, etc.

The more variables that are fed into the model at the beginning, the more likely the modeling process will identify relevant predictors. Modeling is an iterative process, and those variables that are not at all predictive will fall out in the early iterations, leaving those that are most predictive for more precise analysis in later iterations. The danger in not having enough independent variables to model is that the resultant model will only explain a portion of the desired outcome.

For example, a predictive model created to determine the factors affecting physician prescribing of a particular brand was inconclusive because there weren’t enough independent variables to explain the outcome fully. In a standard regression analysis, the number of RXs written in a specific timeframe was set as the dependent variable. There were only three independent variables available: sales calls, physician samples and direct mail promotions to physicians. And while each of the three variables turned out to have a positive effect on prescriptions written, the R-squared of the regression equation was only 0.44, meaning that these variables explained just 44 percent of the variance in RXs. The other 56 percent of the variance came from factors that were not included in the model input.

Sample Size
Larger samples will produce more robust models than smaller ones. Some modelers recommend a minimum data set of 10,000 records, 500 of those with the desired outcome. Others report acceptable results with as few as 100 records with the desired outcome. But in general, size matters.

Regardless, it is important to hold out a validation sample from the modeling process. That allows the model to be applied to the hold-out sample to validate its ability to predict the desired outcome.
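As a sketch of what that looks like in practice, here is a minimal logistic-regression model with a holdout sample, using scikit-learn. The file layout and column names are assumptions, and it presumes the predictors are already numeric:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Assumed layout: one row per customer, 'responded' = 1/0 for the desired action,
# remaining columns are candidate predictors (age, income, purchase history, etc.).
df = pd.read_csv("modeling_file.csv")
y = df["responded"]
X = df.drop(columns=["responded"])

# Hold out a validation sample so the model can be checked on records it never saw.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_hold)[:, 1]            # likelihood of responding
print("Holdout AUC:", roc_auc_score(y_hold, scores))  # validates predictive power
```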

Important First Steps

1. Define Your Outcome. What do you want the model to do for your business? Predict likelihood to opt-in? Predict likelihood to respond to a particular offer? Your objective will drive the data set that you need to define the dependent variable. For example, if you’re looking to predict likelihood to respond to a particular offer, you’ll need to have prospects who responded and prospects who didn’t in order to discriminate between them.

2. Gather the Data to Model. This requires tapping into several data sources, including your CRM database, as well as external sources where you can get data appended (see below).

3. Set the Timeframe. Determine the time period for the data you will analyze. For example, if you’re looking to model likelihood to respond, the start and end points for the data should be far enough in the past that you have a sufficient sample of responders and non-responders.

4. Examine Variables Individually. Some variables will not be correlated with the outcome, and these can be eliminated prior to building the model.
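As an illustration of this last step, a simple univariate screen can correlate each candidate variable with the outcome and flag the weakest ones. The file name, column names and the 0.02 cutoff are all assumptions for illustration, and it presumes numeric predictors:

```python
import pandas as pd

df = pd.read_csv("modeling_file.csv")   # assumed: 'responded' plus numeric predictors
outcome = df["responded"]

# Correlation of each candidate variable with the outcome; variables near zero
# are candidates for elimination before the model is built.
screen = (df.drop(columns=["responded"])
            .apply(lambda col: col.corr(outcome))
            .abs()
            .sort_values(ascending=False))
print(screen[screen < 0.02])            # weakly related variables to consider dropping
```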

Data Sources
Independent variable data may include:

  • In-house database fields
  • Data overlays (demographics, HH income, lifestyle interests, presence of children,
    marital status, etc.) from a data provider such as Experian, Epsilon or Acxiom.

Don’t Try This at Home
While you can do regression analysis in Microsoft Excel, if you’re going to invest a lot of promotion budget in the outcome, you should definitely leave the number crunching to the professionals. Expert modelers know how to analyze modeling results and make adjustments where necessary.

How Big Is Your Halo? 3 Ways to Measure the Branding Effect of Your Direct Promotions

Direct marketers take pride in accountability. But as I’ve said before, they can be their own worst enemies when it comes to measurement. They’re good at measuring things that are easy to count—clicks, page views, response rates, cost per lead, etc.

But they struggle with measuring the long-term or cumulative effects that the branding in their promotions has on current and future sales—people who buy, but not as a result of a specific promotion, the so-called halo effect.

Consider big direct marketing brands like 1-800-Flowers.com or Omaha Steaks. These brand names have been built through direct marketing promotions over time and, as a result, people self-direct to their Web and phone sales channels.

But most direct marketers don’t know how to account for this halo effect. When they work with response rates only, at best they shortchange their results; at worst, they get fooled by failing to account for those who buy without responding.

Case in point: A few years ago, I analyzed a data set from a multivariate direct mail matrix test that had 12 cells: four list segments, four offers and four creative executions.

Working off of response rates alone, we identified the winning list segment, offer and creative. But digging deeper by matching the solicitation file to the sales file, we discovered that from a revenue-per-prospect standpoint, these response rate winners were not the best revenue producers. Further analysis showed that from an ROI standpoint, they were actually the worst. In fact, the offer with the highest response rate (a free trial) produced a negative ROI when compared with a control cell: People in the control group who did not receive this offer actually spent more than the ones who responded to the offer for a free trial.

Here are three ways you can account for the halo effect:

1. Compare customer sales data to your promotion history. This is a good starting point. See who was exposed to your promotions and purchased without responding (a quick sketch of this matching step follows the list).

2. Index brand awareness to sales over time. Take a look at this post for a methodology to measure this metric.

3. Create an engagement score that counts brand exposures and index it to sales over time. More on a methodology to measure this metric next time.
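Here is a minimal sketch of the first approach, matching a promotion history file against a sales file to isolate buyers who were exposed but never responded. The file and column names are assumptions:

```python
import pandas as pd

# Assumed files: promotion_history (who was mailed/emailed/served an ad),
# responses (who responded to a promotion), and sales (who bought, and for how much).
promos = pd.read_csv("promotion_history.csv")   # customer_id, campaign_id, channel
responses = pd.read_csv("responses.csv")        # customer_id, campaign_id
sales = pd.read_csv("sales.csv")                # customer_id, revenue

exposed = promos["customer_id"].unique()
responded = set(responses["customer_id"])

# Buyers who were exposed to promotions but never responded to one: the halo.
halo_buyers = sales[sales["customer_id"].isin(exposed)
                    & ~sales["customer_id"].isin(responded)]
print("Halo revenue:", halo_buyers["revenue"].sum())
```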

How Do You Spell ROI?

Return on Investment: Everybody’s talking about ROI, but not everyone agrees on what it is.

Given the various ways that I’ve heard marketers bandy about the term ROI, I wonder how many of them really understand the concept, and how many just use the term as a buzzword.

There’s certainly a disconnect between the way many marketers use the term and the traditional definition embraced by CEOs and CFOs.

A study by The Fournaise Group in 2012 revealed that:

  • 75 percent of CEOs think marketers misunderstand (and misuse) the “real business” definition of the words “Results,” “ROI” and “Performance” and, therefore, do not adequately speak the language of their top management.
  • 82 percent of B-to-C CEOs would like B-to-C ROI Marketers to focus on tracking, reporting and, very importantly, boosting four Key Marketing Performance Indicators: Sell-in, Sell-out, Market Share and Marketing ROI (defined as the correlation between marketing spending and the gross profit generated from it).

So CEOs clearly want marketers to get on board with the true definition of marketing ROI. You can calculate marketing ROI in two different ways:

1. Simple ROI:
Revenue attributed to Marketing Programs ÷ Marketing Costs

2. Incremental ROI:
(Revenue attributed to Marketing Programs – Marketing Costs) ÷ Marketing Costs

Either of these definitions is consistent with the classic direct marketing principles of Customer Lifetime Value (the “R”) and Allowable Acquisition Cost (the “I”).
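In code, the two calculations are trivial; the dollar figures below are hypothetical, for illustration only:

```python
def simple_roi(revenue, cost):
    """Revenue attributed to marketing programs divided by marketing costs."""
    return revenue / cost

def incremental_roi(revenue, cost):
    """(Revenue attributed to marketing programs - marketing costs) divided by marketing costs."""
    return (revenue - cost) / cost

# Hypothetical numbers for illustration only.
print(simple_roi(300_000, 100_000))       # 3.0  -> "3:1"
print(incremental_roi(300_000, 100_000))  # 2.0  -> $2 returned per $1 invested
```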

Back in 2004, the Association of National Advertisers, in conjunction with Forrester Research, did a survey on the definition of ROI in which respondents could select from a menu of meanings. The results showed that there was no definitive definition of ROI; rather, marketers attribute as many as five different definitions to the term, and many use it to refer to almost any marketing metric.

Member Survey of Association of National Advertisers on meaning of ROI
(multiple responses allowed)

  • 66 percent Incremental sales revenue generated by marketing activities
  • 57 percent Changes in brand awareness
  • 55 percent Total sales revenue generated by marketing activities
  • 55 percent Changes in purchase intention
  • 51 percent Changes in attitudes toward the brand
  • 49 percent Changes in market share
  • 40 percent Number of leads generated
  • 34 percent Ratio of advertising costs to sales revenue
  • 34 percent Cost per lead generated
  • 30 percent Reach and frequency achieved
  • 25 percent Gross rating points delivered
  • 23 percent Cost per sale generated
  • 21 percent Post-buy analysis comparing media plan to actual media delivery
  • 19 percent Changes in the financial value of brand equity
  • 17 percent Increase in customer lifetime value
  • 6 percent Other/none of the above

While I couldn’t find an update of this study, clarity around the definition doesn’t seem to have improved in the last 10 years, given the results of The Fournaise Group survey. And the increased emphasis on digital and social media marketing in the last 10 years has probably made it worse. The Fournaise Group found that 69 percent of B-to-C CEOs believe B-to-C marketers now live too much in their creative and social media bubbles and focus too much on parameters such as “likes,” “tweets,” “feeds” or “followers.”

It’s time for marketers to stop using ROI as a buzzword for any marketing metric. You can’t measure and improve something if you don’t clearly define it.

Don’t Get Lost in a Maze of Metrics

There’s a lot of data out there. More than any one marketer needs at any one time.

The new frontier in using big data in multichannel marketing is learning what data you need. And that starts with clearly defined marketing objectives.

The proliferation of data has caused many marketers to get caught up in minutiae that are not relevant to their objectives. With all the data that’s available, it takes discipline to focus only on the metrics that are relevant. Too often the most important metrics, like cost per acquisition and customer lifetime value, are overlooked while we’re looking at things like email bounce rates and time on site. Those metrics certainly have their place, but they should be viewed in the context of how they can be leveraged to improve lifetime value.

How Many Metrics Do You Need?
Every semester, more than one student in my “Advertising Research” class asks:

How many questions do we need to have in our quantitative questionnaire?

My answer is always the same, and always initially perplexing to them:

As many as you need.

The ensuing discussion is a lesson in the importance of setting clear objectives:

What are you trying to find out? Write down what you need to learn from your survey, and develop questions that will get you that information. Once you’ve done that, count the number of questions you have. That’s how many you need.

That lesson applies to marketing measurement, as well. With all the metrics that our marketing analytics platforms can provide, it’s easy to get buried in a landslide of statistics that don’t really relate to your business objectives. If your objective is lead generation at a landing page, why measure time on site? (Of course if you find that the abandonment rate on the data capture page is high, then look at time on site. You may be asking for too much information.)

Define What You Need to Know
If you’re looking to optimize your cost per lead or maximize lead volume, you’ll need to track cost per lead by individual tactic. You’ll find an interesting approach to maximizing lead volume in a previous “Here’s What Counts” post. But if you’re looking to enroll people in a CRM program and every one of your touchpoints is essential, then you may be able to skip that level of analysis. (If that idea seems foreign to you, check out this “Here’s What Counts” post that talks about a real world scenario where it wasn’t necessary to track cost per enrollment by vehicle.)

Every end has a beginning. Measurement always starts with the objectives you set at the start of a campaign. If they are clearly defined and you focus only on those metrics that are related to the objectives, you won’t find yourself buried in data that’s not relevant to measuring your success.

Planning ROI? Turn the Funnel Upside-Down

Many marketers use a funnel to illustrate the progression from prospect to buyer because the narrowing graphic neatly shows the narrowing segments of the sales progression. Most construct the funnel by starting at the top and working their way down chronologically through the sales cycle. They apply projected percentages to each stage, funnel down to a number of buyers, calculate revenue based on average sale, and determine ROI based on promotion costs.

A different approach to using the funnel starts at the bottom. It has its roots in the tried and true direct response principles of Customer Lifetime Value (LTV) and Allowable Acquisition Cost (AAC). Because these two principles are the components that make up ROI (with LTV as the “R” and AAC as the “I”), the upside-down funnel becomes a useful tool for planning and creating ROI scenarios.

Start with the value of a customer. Set a target ROI and calculate your AAC. For this illustration, let’s assume that a buyer is worth $300 and we set our revenue target ROI at 3:1. This results in an AAC of $100.

Equation No. 1: Allowable Acquisition Cost = LTV ÷ target ROI = $300 ÷ 3 = $100.

As you move to the lower portions of the upside down funnel, you apply assumptions about the conversion rates at each stage. For example, if you assume that 30 percent of all qualified leads will convert to buyers, then the Allowable Cost per Qualified Lead is $30.

Equation No. 2: Allowable Cost per Qualified Lead = AAC × qualified-lead-to-buyer conversion rate = $100 × 0.30 = $30.

Similarly, you can calculate the Allowable Cost Per Lead, Per Response, and Per Impression all the way to the top of the upside down funnel. So if you estimate that two-thirds of your leads will be qualified, your Allowable Cost per Lead is $20, and so on.

When you reach the bottom of the upside-down funnel, it becomes particularly useful for media planning. You can determine the required response rates from each medium under consideration by:

  1. Dividing the cost of the media by the Allowable Lead Cost to determine the number of leads required from each medium
  2. Dividing the number of leads required by the circulation or number of impressions associated with the medium

Equation No. 3: Required leads from a medium = media cost ÷ Allowable Cost per Lead. Equation No. 4: Required response rate = required leads ÷ impressions (or circulation).
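Here is the whole upside-down funnel as a short script, using the figures from the example above; the media cost and impression count at the end are hypothetical, for illustration only:

```python
# Working the upside-down funnel from the bottom up.
ltv = 300.0                 # value of a customer ("R")
target_roi = 3.0            # revenue target ROI of 3:1
aac = ltv / target_roi      # allowable acquisition cost = $100

qualified_to_buyer = 0.30   # 30% of qualified leads convert to buyers
lead_to_qualified = 2 / 3   # two-thirds of leads are qualified

allowable_per_qualified_lead = aac * qualified_to_buyer                 # $30
allowable_per_lead = allowable_per_qualified_lead * lead_to_qualified   # $20

# Required response rate for a hypothetical medium under consideration:
media_cost = 10_000.0
impressions = 1_000_000
required_leads = media_cost / allowable_per_lead        # 500 leads
required_response_rate = required_leads / impressions   # 0.0005, i.e. 0.05%
print(aac, allowable_per_lead, required_leads, required_response_rate)
```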

Then, do a gut check. Is that response rate attainable? Don’t know? Test it. A carefully controlled small test will quantify your assumptions at each point of the upside-down funnel.

Use Market Research to Tie Brand Awareness and Purchase Intent to Sales

For years, I’ve been saying direct marketers are their own worst enemy when it comes to measurement.

Direct marketers are good at measuring the things they’ve traditionally measured—response rates, cost per lead, cost per acquisition, etc. But they’re not good at measuring the effect that their communications have on the non-responders when, in fact, the effect of consistent branding in direct communications is what makes direct marketing powerhouses like Omaha Steaks and 1-800-flowers.com top of mind when consumers are ready to purchase (not to mention Amazon).

Even though consumers engage with brands on their own terms across multiple platforms, many marketers are stuck measuring the results of individual tactics rather than taking a holistic view of measurement. So when a single email or display ad fails to achieve the target level of attributable sales within a specific period of time, they consider it a failure. Even though the communication has made an impact on those who didn’t respond, they can’t measure it, so they don’t count it.

And while many direct marketing practitioners now embrace the idea that their advertising has a cumulative effect of building a brand over time, most fall short of being able to quantify that ROI with meaningful metrics.

That’s where market research can help.

Consider the following word equations in light of how awareness contributes to sales for the top direct marketing companies:

Top of mind awareness + brand reputation + need = purchase intent
Top of mind awareness + brand reputation + immediate need = purchase

So it follows that if we can monitor awareness and reputation over time and index it to sales, then we can quantify the effects of those elements on sales revenue.

Start by surveying your prospects blindly—either through mail, email or search ads using relevant keywords. Offer an incentive that’s consistent with your product offering, e.g., “Save $$$ on cell phone accessories.” Ask respondents the following questions to determine the levels of unaided and aided awareness:

  • Which brands first come to mind when thinking of “category X”? (unaided awareness)
  • Which of the following brands (list) have you ever heard of? (aided awareness)

Get a better picture of the respondents’ product usage by asking:

  • Which brand(s) within category X do you “regularly” purchase?
  • Which brand is your favorite?
  • Which brand did you last purchase?
  • How often do you purchase this type of product?
    (Light, medium, heavy user?)
  • What percentage of “category X” purchases that you’ve made (within a certain timeframe) were “brand A”? (your share of customer)

For those who have used your brand, quantify purchase intent with the following question:

  • The next time you need this product, how likely are you to purchase “brand A”?

Next, index awareness levels to sales revenue. Be sure to account for category sales within the same time period: your actual sales may have gone down, but the entire category may have gone down as well, and you may in fact have gained more than your previous share of the category’s sales.

As you track these metrics over time, you will be able to quantify what a point of unaided awareness is worth in sales revenue. It’s one tool that will help you understand the effect that your communications have on sales beyond the responses that you can count directly.
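A minimal sketch of that tracking step, with invented quarterly numbers purely for illustration, might look like this:

```python
import pandas as pd

# Assumed quarterly tracking data: unaided awareness from the survey, your sales,
# and total category sales for the same periods. All figures are hypothetical.
df = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "unaided_awareness": [0.18, 0.21, 0.24, 0.26],
    "brand_sales": [1.9, 2.0, 1.95, 2.1],     # $MM
    "category_sales": [10.0, 10.2, 9.5, 9.8], # $MM
})

# Index against category sales so a soft quarter for the category
# doesn't mask a gain in your share.
df["share_of_category"] = df["brand_sales"] / df["category_sales"]
print(df[["unaided_awareness", "brand_sales", "share_of_category"]].corr())
# Over enough periods, the relationship between awareness and sales (or share)
# approximates what a point of unaided awareness is worth in revenue.
```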

Mindset and Measurement

In her book, “Mindset: The New Psychology of Success,” Stanford University Professor Carol Dweck purports that people possess one of two mindsets: the fixed mindset or the growth mindset.

Fixed mindset people are “always trying to prove themselves and they’re supersensitive about being wrong or making mistakes.” They fear failure. They feel that they are always being judged. Fixed mindset people feel that they have fixed traits and talents, and that they’re never going to get any better. For them, success is about proving they’re smart or talented. Validating themselves.

Growth mindset people believe that “your basic qualities are things you can cultivate through your efforts.” They welcome failure as a learning experience, an opportunity to grow. For them, success is about stretching themselves to learn something new. Developing themselves.

Which mindset a marketer possesses affects the way they approach testing and results measurement. Beginning my career in a traditional direct marketing environment, I learned early on that failure is a good thing. It tells you what doesn’t work. I thought everyone developed tests that had limited downside risk to determine the best media, creative and offers. We roll out the winning campaign and test against it time and again. Success is always evolving.

It wasn’t until I started in the agency business that I learned there was another mindset—one where in-market testing might uncover flaws in a campaign that could open it up for judgment. In the fixed marketing mindset, the agency team and the client select what they believe is the best approach. If time and money permit, then perhaps they do some research to validate their choice. But as David Ogilvy pointed out so many years ago, “Research is often misused by agencies and their clients. They have a way of using it to prove they are right. They use research as a drunkard uses a lamppost—not for illumination but for support.”

The fixed mindset marketers measure to validate their campaigns. The growth mindset marketers measure to challenge their campaigns.

Agency people can be especially prone to the fixed mindset, particularly when it involves admitting that the agency’s initial work or recommendation was not perfect. Once, I was analyzing conversion from visit to lead at a website. I found a problem with the way leads were being directed to the landing page; it wasn’t an intuitive interface for the visitor and it was a spot where visitors were abandoning the site. When I informed the account person about the issue she said, “We can’t change it now. The client already approved it.” Classic fixed mindset. Being wrong equals failure, even if admitting it means better results, learning and growth.

Clients who have lengthy, multi-layered approval processes are also prone to the fixed mindset. They resist testing because it’s too difficult to get multiple creative/offer variations approved. But perhaps they’re reluctant to admit to people across several departments and levels of the organization that they don’t know prospectively what’s going to work best.

The good news is people can change their mindsets if they change their perceptions of what it means to succeed and what it means to fail. Dr. Dweck relates that “John Wooden, the legendary basketball coach, says you aren’t a failure until you start to blame. What he means is that you can still be in the process of learning from your mistakes until you deny them.”

Testing new approaches and learning what doesn’t work is a step along the path of continuous improvement. If we’re going to take our marketing results to the next level, we need to challenge the status quo, not preserve it.

How to Maximize Your Lead Volume Within Your Allowable Cost per Lead

Many times marketers running lead generation programs shortchange their lead volume in order to maintain tight controls on their cost per lead. Their fear is that if they roll out media that tested at a cost per lead (CPL) just equal to or slightly below their target CPL, a variation in response might put their overall CPL over the top. As a result, they roll out only those media properties that are performing below their target CPL.

This conservative strategy ends up cheating you out of volume that could significantly increase your program’s total revenue and positively impact your ROI. The fact is that every well-constructed media test has its big winners as well as its big losers. The trick is to leverage the big winners in a way that allows you to include the “little losers” in the mix and still meet your overall target cost per lead.

With a few simple spreadsheet tricks, you can maximize your lead volume and still hit your target CPL by including media that actually generate higher lead costs than your target CPL! Think about it this way. If your target cost per lead is $15, for every $10 lead you get from a “big winner” media, you can accept a $20 lead from a “little loser.”

Let’s walk through the simple spreadsheet manipulations you need to manage this process.

Start out with your basic results spreadsheet like Table A, which shows your media cost, responses and cost per response for each medium. For this example, we’ll look at a 500,000-impression test (10 properties, 50,000 impressions each) with a roll-out potential of 15 million. The target CPL is $15.

[Table A: media cost, responses and cost per lead for each of the 10 test properties]

As you can see, the test yielded 700 responses at a cost of $11,425 or a total CPL of $16.32. But there are 7 out of 10 properties that are performing worse than the target CPL of $15.

The first thing you need to do is rank the results in ascending order of CPL using the Data Sort function, and you end up with Table B below. (Make sure you don’t include the total line in your sort).

[Table B: the same results sorted in ascending order of CPL]

Here we see that properties H, B, and C are below the target of $15 per lead while all the others are higher. The combined roll-out quantity of these three properties is a disappointing 4,050,000 impressions out of the total potential roll-out quantity of 15 million. But let’s look at what the actual roll-out potential is when we leverage the “big winners” against the “little losers.”

To the spreadsheet that you sorted by ascending CPL, add columns for cumulative responses, cumulative cost and cumulative CPL. Table C shows the formulas for calculating those.

[Table C: formulas for the cumulative responses, cumulative cost and cumulative CPL columns]

Looking at the results of this calculation in Table D, we get a better picture of the potential roll-out universe.

[Table D: cumulative responses, cost and CPL by property, sorted by ascending CPL]

If you look at the cumulative cost per lead column, you can see that taken together, 8 out of 10 media properties produce an aggregate cost per lead under $15. That leaves only properties E and F with their high CPLs out of the mix, creating a potential rollout of 12,250,000 impressions. (Note: If you decide to re-sort this spreadsheet, do not include the cumulative results columns in the sort.)
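The same arithmetic is easy to script. In this sketch the per-property figures are invented for illustration (chosen so the totals match the $11,425 cost, 700 responses and $16.32 CPL cited above; the real Table A split will differ):

```python
import pandas as pd

# Hypothetical test results -- substitute your own media costs and response counts.
results = pd.DataFrame({
    "property":   list("ABCDEFGHIJ"),
    "media_cost": [1000, 1100, 1200, 1150, 1300, 1250, 1100, 1050, 1125, 1150],
    "responses":  [60, 95, 85, 70, 40, 45, 65, 104, 62, 74],
})
TARGET_CPL = 15.0

results["cpl"] = results["media_cost"] / results["responses"]
results = results.sort_values("cpl").reset_index(drop=True)   # Table B: ascending CPL

# Table C/D: cumulative responses, cost and CPL down the sorted list.
results["cum_responses"] = results["responses"].cumsum()
results["cum_cost"] = results["media_cost"].cumsum()
results["cum_cpl"] = results["cum_cost"] / results["cum_responses"]

# Every property down to the last row whose cumulative CPL is still under target
# can stay in the roll-out mix.
keep = results[results["cum_cpl"] <= TARGET_CPL]
print(keep[["property", "cpl", "cum_cpl"]])
```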

Now, some words of caution. Don’t roll all these marginal media out before retesting them in a larger quantity, say 250,000 impressions, to make sure that you’re going to repeat your results. A test quantity of 50,000 impressions generating less than 100 responses does not create a high level of statistical confidence. So be especially careful with properties like A and I that have higher CPLs. You’ll also want to retest your “big winner” properties with a greater number of impressions to make sure the test results are not an aberration.