Influencer Marketing Can Have Great ROI, and You Can Prove It

In my previous post, I discussed how influencer marketing will become a prominent marketing tactic in 2020. In this post, I would like to share what is working and what influencer marketing needs to do to become a trusted channel.

Designing an effective influencer-based campaign must take into account the objectives of the campaign, whether it is a product or service, and the length of the product purchase cycle. As a result, execution varies. However, a clear consensus is emerging that the most successful campaigns focus on co-developing content, where the influencers are given the flexibility to determine the right way to introduce their audience to the sponsor’s brand. In these instances, brands work with influencers to design content that interacts with their product or service in an entertaining or informative way. When done well, the influencer’s credibility transfers onto the sponsor’s brand. A great discussion on this can be found on Scott Guthrie’s podcast.

A Successful Influencer Marketing Campaign

One example of an influencer campaign that I really love is the Liquid-Plumr “Will It Clog” campaign. In this campaign, Liquid-Plumr worked with Vat19 to create funny and interesting clogs for Liquid-Plumr to tackle, like a pile of gummy bears. For Vat19’s audience, this was completely aligned with their theme of creating entertaining experiments. For Liquid-Plumr, not only was it great brand exposure, but it also built significant brand trust among viewers. As the challenges became more and more insane, viewers were impressed with how effective the product was at tackling tough clogs. I recently had the opportunity to hear Bryan Clurman, brand manager for Liquid-Plumr, share the team’s experience, and the lift in sales he showed was impressive.

I assume Liquid-Plumr detected the increase in sales because it was an impressive viral campaign lifting historically flat sales. In this aspect, this case is atypical. Many influencer campaigns are effective, but struggle to show it. Ask a typical marketer working on influencer campaigns and they will confess their most pressing challenge is measuring impact. Currently, most common attribution metrics rely on the same pixel/cookie-based tracking that has been used for digital ads over the last two decades. While this method has some clear benefits, we also know that there is usually a non-trivial gap between actual impact and that which can be directly attributed using cookies. (Let’s forget, for the moment, that the industry-wide death of cookies has already begun.) In my experience, this gap increases with longer sales cycles or when driving brand recognition is the primary goal, as opposed to immediate sales. The further the sale is from the ad exposure, the greater the chance that direct attribution will be lost.

The Magic of the Middle Funnel

An important part of the total ROI solution lies in the middle of the sales funnel. Activities here are closer to the initial ad/brand exposure. For example, assume you are looking for a washing machine for a new home, where your actual purchase may not happen for weeks. While conducting research, you come across a recommendation from a trusted influencer. You interact with the content and may click on a link to the brand website. There, you might look at reviews and product features, but you are still not ready to purchase. These engagement activities have economic value: as engagement with a brand increases, sales tend to follow. However, middle-of-the-funnel measurement is often neglected.

While paying more attention to middle funnel metrics is one step, the other is generating more compelling middle funnel activities. If an effective influencer campaign leads to a clickthrough, can the brand extend that co-branded experience on its own digital property? Not only will that co-branded experience keep the viewer engaged, it is also great for ROI tracking. Even if pixel tracking is lost at this stage, a statistical algorithm can now be employed to correlate the increase in co-branded engagement with eventual sales.

The truth of the matter is, influencer marketing does not have a measurement challenge. Influencer marketing ALSO has a measurement challenge.

What that means is there is nothing uniquely perplexing about influencer marketing ROI. However, influencer marketing is still very new and therefore, the burden of proof is higher. As with all successful marketing ROI plans, it requires a focused approach that clearly defines the objectives and actively seeks opportunities to encourage measurable engagement.

Where Do You Start? Teaching Direct Marketing to College Students

What’s the best approach to engage college kids in understanding direct marketing? Principles first, metrics second? Or metrics first, principles second?

I remember sitting in the parlor of a Catholic parish rectory in North Jersey while my wife was participating in a wedding rehearsal. The Mets game was on TV. The brother of a parish priest who was visiting from Ireland asked me to explain baseball. Explain baseball?!?! Where do you start?

Despite all of the professional speaking and training I’ve done in direct response marketing, the first time I taught a college course devoted entirely to it was last spring. I started with the fundamental concepts of media, offer, and creative. I had them write about each of these concepts from their own experience. We went over the various targeting opportunities marketers have online and offline. And at the end, we covered measurement and metrics.

At the end of the course, I asked the students to tell me what worked, what didn’t, and what should be changed. The most insightful comment was from a student who said:

“I wish you had covered all that measurement content at the beginning of the course. It made me realize why all that other stuff was important, and how it fit into the big picture.”

HELP!

Now, as I embark on teaching a course dedicated to Direct Response Marketing at Rutgers School of Business Camden, I’m looking for advice about how to sequence things.

Last year, when I bemoaned the lack of an appropriate up-to-date textbook for this discipline in this column, Dave Marold and Harvey Markowitz stepped up and recommended the Fourth Edition of “Direct, Digital, and Data-Driven Marketing,” by Lisa Spiller. (Thanks for that, Dave and Harvey; I’m using that book in the fall.)

What Do You Think?

Now I see the benefit of stressing measurement early. Even though I told the students every class that the coolest thing about direct marketing is that you can measure it, apparently the mechanical reality of measuring something like search engine keywords was not real for them. So:

  • Do I incorporate some form of measurement into every lesson?
  • Do I introduce a comprehensive measurement unit early in the course? (Spiller’s book does that early on, in Chapter 4).
  • Or, do I go full-on “math course” at the beginning, and thin a 40-student class down to 20 students after two weeks? (Just kidding).

Opinions welcome. (Actually, encouraged.)

Why KPIs Lack Insight and What Marketers Can Do About It

I have a love-hate relationship with KPIs (key performance indicators). When done right, they are mission-critical to defining success and can focus the organization on the right priorities. When chosen poorly, KPIs can be disconnected from ground realities and become a constant source of frustration for team members trying to move them.

However, poorly designed KPIs are not my primary gripe, at least not in this post. My main concern is that even well-designed KPIs are simply not deeply insightful, yet they are often used as if they were.

Well-designed KPIs are full of contradictions. On the one hand, they are expected to be simple, easy to communicate, and intuitive. On the other hand, they’re expected to provide actionable information and be a reliable measure of important success criteria.

Anyone who has worked on developing KPIs knows that it is a game of balance and compromise, based on business objectives. The need for actionable information battles the desire for simple metrics. The desire for intuitive metrics battles metrics that challenge status-quo thinking or properly reflect the diversity of business interactions.

After many years working with and helping clients identify KPIs, I have found ways to manage their dichotomous nature, but never overcome it. If there is a brilliant mind out there who has solved for this, I would love to hear about it. For now, I will assume that this dichotomous struggle is a law of nature.

This leads me back to my main point. Marketers need to stop viewing KPIs as a major source of insights. They are, as the name indicates, only “indicators.” While this seems like an obvious statement, it is surprising how often KPIs end up serving as a company’s primary source of insights.

Take digital analytics, for example. Most companies using a web analytics platform rely on default metrics, such as clickthroughs and page views, as their primary measures of web activity. While these metrics may indicate increased interest in content, they rarely tell you how satisfied the visitor was with the content or how valuable it was in decision-making. It is rare for companies to set up custom metrics and reporting, which might provide better insights. It is even rarer for companies to download raw web data into a data management tool and truly analyze visitor interaction with content, even though these solutions exist. Instead, most companies lean on the default web KPIs to derive insights into behavior on their website.

Another example is how companies use social channel data. There are some great social analytics tools out there, but most implementations I come across are set up to track high-level sentiment and rarely deliver deep insights. Yet the underlying data is often volumes of highly informative, unprompted, free-form feedback, with the added benefit of being free of interviewer bias or agenda-setting.

Recently, I was working on a project for a client that viewed their products as very innovative. Yet, when mining nearly 1,000 instances of social data, we found only one unprompted mention of innovation. Upon further investigation, we found that innovation was meaningless to the consumer. Instead, it was performance, excitement, and fun that consumers talked about most often. The customer was conveying what innovation really meant to them, while the company was still thinking in terms of engineering sophistication. This insight was un-minable from the standard social KPIs. Even traditional survey-based market research may not have captured this insight, as it would have relied on coming up with the right questions to uncover this disconnect between the company and its customers.

These examples demonstrate the need to dig deeper for better insights, and I risk the label of “Captain Obvious” by making this assertion.

So, let me add to this. Well-designed KPIs, because of their simplicity and action orientation, often lull us into overestimating their insightfulness. This link is unconscious and habitual.

When I have asked marketers, “What is your (social, web, or customer) data telling you?” a common response is: “The (relevant) tracker is telling us [fill in the blank].”

In reality, the answer to the question is rarely found in the tracker or KPIs. Even if they can point to a KPI that is helpful, the underlying explanation is still often conjecture or a hypothesis. In fact, the better aligned the KPI story is with commonly accepted wisdom, the more likely it is to be seen as data-driven thinking.

In other words, we find an interesting KPI trend, create a believable story around it, and that story passes for data-driven thinking when it is still just conjecture. It takes great discipline to put on the brakes and look for deeper, corroborating evidence, and that is what working with KPIs really calls for.

I want to make clear that this post is not advocating for the elimination of KPIs. They are very helpful tools for aligning the organization, and most of us understand that they are only indicators. When done well, however, they are insidiously brilliant at creating the illusion of deep insight, especially if the resulting story is a good one. Truly data-driven marketers should be aware of this and be ready to dig deeper before letting a KPI drive strategic decisions.

Understanding Your Google Ads Metrics With the Latest Interface

How do you know what the metrics in Google Ads mean and which ones matter the most? The latest version of Google Ads’ interface has a particularly large number of metrics, so it’s easy to get overwhelmed when you first log on.

Each page has a table full of data, including a graph of metrics and various reports. It’s a little like looking at an airplane cockpit for the first time, with all its lights, switches and gauges. However, experienced advertisers know that all the information in Google Ads allows you to dig into your campaign performance and find ways to improve it.

Which Metrics Really Matter?

The most important Google Ads metrics include the following:

  • Cost-per-click (CPC)
  • Clickthrough rate (CTR)
  • Conversion rate
  • Cost-per-acquisition (CPA)

CPC

CPC is an advertising model in which an advertiser pays the platform or publisher each time a user clicks on an ad. Search advertising platforms like Google Ads typically use a CPC model, because advertisers can bid on key phrases that are relevant to their target market. In comparison, content sites typically charge per 1,000 impressions of the ad.

CTR

CTR, or clickthrough rate, is the ratio of users who click a link to the total number of users who view the ad. CTR generally indicates a marketing campaign’s effectiveness in attracting visitors to a website.

Conversion Rate

Conversion rate is the ratio of goal achievements to the number of visitors. It’s essentially the proportion of visitors who take a desired action as a result of your marketing activity. The specific action that a conversion rate monitors depends on the type of business you’re promoting. For example, online retailers often define a conversion as a sale, while services businesses consider other actions, such as a request for a quote, a demo sign up or a report download, when measuring conversion rate.

CPA

CPA, or cost-per-acquisition (sometimes cost per action), is the total cost of your ads divided by the number of conversions. Again, the specific action depends on the type of business you’re promoting. For example, CPA for online retailers is typically the cost per e-commerce sale. Services businesses typically measure CPA as a cost per lead. This number is critical, because it tells you if your campaigns are profitable or not.
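Taken together, these four metrics are simple arithmetic on the same handful of campaign totals. Here is a minimal sketch of how they relate; the campaign numbers are invented for illustration:

```python
# Hypothetical campaign totals -- all numbers are made up for illustration.
impressions = 50_000
clicks = 1_250
conversions = 50
cost = 2_500.00  # total ad spend in dollars

ctr = clicks / impressions              # clickthrough rate
cpc = cost / clicks                     # average cost-per-click
conversion_rate = conversions / clicks  # share of clicks that convert
cpa = cost / conversions                # cost-per-acquisition

print(f"CTR: {ctr:.2%}")                          # 2.50%
print(f"CPC: ${cpc:.2f}")                         # $2.00
print(f"Conversion rate: {conversion_rate:.2%}")  # 4.00%
print(f"CPA: ${cpa:.2f}")                         # $50.00
```

Notice that CPA is fully determined by the other numbers: if either CPC rises or the conversion rate falls, CPA goes up.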

How Can Metrics Help You Improve Performance?

Weak numbers on these metrics can point to courses of action that will help you improve your Google Ads campaign performance.

CPC

A high CPC could mean that you need to raise the Quality Scores for your ads, which can reduce the cost of each click. You can also reduce wasted spend by using ad scheduling and geotargeting to ensure your ads don’t show during times or in locations where you don’t do business. Additional strategies for reducing CPC include using demographic targeting, in-market audiences and remarketing to narrow your audience to just the people who are interested in your business.

CTR

A low CTR can indicate that you need to review the keywords and ad copy in your Google Ads account. For example, you should ensure that you’re only bidding on keywords that relate to your offers. You should also perform A/B testing on your ads to determine the factors that interest your prospects the most, whether it’s features, benefits or some emotional trigger. You can also improve CTR by ensuring that your ad takes up as much room as possible by implementing ad extensions.

Conversion Rate

A low conversion rate can indicate that you need to take a closer look at your landing pages, where visitors go when they click on an ad. These pages should be very clean and quick to load to ensure visitors don’t lose interest after they click. Your ads should always send visitors directly to a dedicated landing page, rather than just your home page or even a general landing page.

CPA

A high CPA means that you aren’t getting a good return on investment (ROI) from your ad spend. Possible causes of a high CPA include a high CPC or low conversion rate, which often means a poor choice of keywords and ad copy. Concentrate your budget on high-converting keywords with a high intent to buy.

Conclusion

Google Ads provides many metrics that can tell you how to improve website performance. However, this information can also be daunting to interpret if you don’t know what it means. Follow the tips above to monitor your key metrics and make adjustments to improve your Google Ads performance.

Want more tips to improve your Google advertising? Get your free copy of our “Ultimate Google AdWords Checklist.”

Lead Generation Metrics — The Basics and Beyond

Lead generation metrics should help you understand not only what parts of your digital marketing are working, but what parts are generating the highest quality leads.

There are basic lead generation metrics that you must track in order to evaluate the success of your lead gen efforts. You’ll likely have to go beyond the basics to mine truly valuable insights about your efforts.

Here’s a list (by no means comprehensive) of my favorite basic and more advanced metrics.

First, the basics.

Impressions

How many people are seeing your ad, your content or whatever it is you’re using to attract that audience? This is, to use another term, your reach. Your tracking and evaluation here should be on a per-channel basis, with an eye toward finding the channels that you are able to grow most cost-effectively.

Clickthrough Rate

CTR is the percentage of people who see your content and interact with it. Typically, that means they click the ad or the link in your social media post, etc. (You might also want to track other types of engagement, like subscriptions.) The critical element of this metric is breaking it down to individual ads or content, including individual issues of your newsletter campaign. You want to know what is resonating with your audience and what is driving them to take action.

Conversions

A conversion can be many different things, depending on the goal you have for your lead generation campaign (e.g., marketing-qualified leads (MQLs), sales-qualified leads (SQLs), etc.). Whatever action you deem to be a conversion, it’s generally a “state change” along the buyer’s journey. That can be a move from a member of the target audience who’s never heard of you to a website visitor to a prospect to an MQL to an SQL and finally to becoming a client. Each of those state changes is a conversion that should be tracked separately.

Conversion Rate

This calculated metric is conversions divided by impressions. It’s worth tracking on its own, of course, but should also be evaluated with some latitude. That is, as you expand your reach and your impressions rise, you may have a less tightly targeted audience. Of course, you’d like your conversion rate to always rise. But if it falls while the total number of conversions rises, that’s not necessarily a bad trade-off.
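To make that trade-off concrete, here is a tiny sketch with invented numbers, where a wider audience lowers the rate but raises total conversions:

```python
# Invented numbers: expanding reach lowers the conversion rate
# but raises the total number of conversions.
before = {"impressions": 10_000, "conversions": 200}
after = {"impressions": 40_000, "conversions": 520}

rate_before = before["conversions"] / before["impressions"]  # 2.0%
rate_after = after["conversions"] / after["impressions"]     # 1.3%

print(f"{rate_before:.1%} -> {rate_after:.1%}")       # the rate fell...
print(after["conversions"] - before["conversions"])   # ...but 320 more conversions
```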

With these data points solidly represented in our dashboard, we can move on to additional (and increasingly useful) measurements.

Cost per Lead

What does it take to move a prospect through a stage in the funnel? How does the cost compare with other methods? (Direct mail, trade shows, etc.) How do costs compare across the various digital channels you’re using? These are the metrics that will guide your spend going forward.

Leads per Channel

Another calculated metric worth adding to your dashboard. Here, you compare how many leads a channel is generating against all other channels. It’s an analog to conversion rate in that a channel with more leads generated from a smaller audience (impressions) might be a channel worth exploring more deeply.

Time to Conversion

This metric typically takes some aggregating of data across platforms, as you’ll want to note when each state change occurs. It’s valuable to know how long it takes a typical prospect to proceed through each stage. It’s even more valuable to know this on a per-channel basis. And more valuable still to know average time-per-conversion for those prospects that become clients. You can then tailor your programs to pay more attention to those prospects who appear to be on that “golden path.”
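As a sketch of that aggregation, assuming you can export a per-prospect date for each state change (the field names and dates below are invented), the per-stage averages are straightforward to compute:

```python
from datetime import date

# Hypothetical per-prospect state-change dates, aggregated from
# different platforms; field names and dates are invented.
journeys = [
    {"visit": date(2024, 3, 1), "mql": date(2024, 3, 8), "sql": date(2024, 3, 20)},
    {"visit": date(2024, 3, 2), "mql": date(2024, 3, 5), "sql": date(2024, 3, 25)},
]

def avg_days(stage_from, stage_to):
    """Average days between two funnel stages across all journeys."""
    deltas = [(j[stage_to] - j[stage_from]).days for j in journeys]
    return sum(deltas) / len(deltas)

print(avg_days("visit", "mql"))  # 5.0 days on average
print(avg_days("mql", "sql"))    # 16.0 days on average
```

Tagging each journey with its originating channel would let you compute the same averages on a per-channel basis.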

Customer Lifetime Value (CLV)

CLV should be calculated across the board and broken down by channel. A channel with a slightly higher cost per lead but 10 times the CLV is a great channel!
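A quick sketch of that comparison, with invented per-channel figures, shows why cost per lead alone can mislead:

```python
# Invented figures: channel B's leads cost more, but its customers
# are worth ten times as much over their lifetime.
channels = {
    "channel_a": {"cost_per_lead": 40.0, "clv": 500.0},
    "channel_b": {"cost_per_lead": 55.0, "clv": 5_000.0},
}

for name, c in channels.items():
    # CLV earned per dollar spent acquiring a lead
    ratio = c["clv"] / c["cost_per_lead"]
    print(f"{name}: {ratio:.1f}x CLV per lead dollar")
```

Ranked by cost per lead, channel A wins; ranked by lifetime value per lead dollar, channel B is clearly the better investment.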

Conclusion

You may find the above list of metrics daunting, especially if you’re not gathering and reviewing any of them now. If so, start small. As you become more comfortable with the data, you can expand your dashboard to include a broader range of data points and a broader set of action points.

Marketing Metrics Aren’t Baseball Scores

Lester Wunderman is called “the Father of Direct Marketing” — not because he was the first one to put marketing offers in the mail, but because he is the one who started measuring results of direct channel efforts in more methodical ways. His marketing metrics are the predecessors of today’s measurements.

Now, we use terms like 1:1 marketing or digital marketing. But, in essence, data-based marketing is supposed to be a closed loop, fed by learnings from the results of live or test campaigns. In other words, playing with data is an endless series of learning and relearning. Otherwise, why bother with all this data? Just do what your gut tells you to do.

Even at the very beginning of the marketer’s journey, there needs to be a step for learning. Maybe not from the results of past campaigns, but from something about customer profiles and their behaviors. With that knowledge, smart marketers would target better, by segmenting the universe or building look-alike or affinity models with multiple variables. Then a targeted campaign with the “right” message and offers would follow. Then what? Data players must figure out “what worked” (or what didn’t work). And the data journey continues.

So, this much is clear; if you do not measure your results, you are really not a data player.

But that doesn’t mean that you’re supposed to get lost in an endless series of metrics, either. I sometimes see what is commonly called “Death by KPI” in analytically driven organizations. That is a case where marketers are too busy chasing down a few of their favorite metrics and actually miss the big boat. Analytics is a game of balance, as well. It should not be too granular or tactical all of the time, and not too high in the sky in the name of strategy, either.

For one, in digital marketing, open and clickthrough rates are definitely “must-have” metrics. But those shouldn’t be the most important ones for all, just because all of the digital analytics toolsets prominently feature them. I am not at all disputing the value of those metrics, by the way. I’m just pointing out that they are just directional guidance toward success, where the real success is expressed in dollars, pounds and shillings. Clicks lead to conversions, but they are still a few steps away from generating cash.

Indeed, picking the right success metrics isn’t easy; not because of the math, but because of the political aspects, too. Surely, aggressive organizations would put more weight on metrics related to the size of their footprint and the rate of expansion. More established and stable companies would put more weight on profitability and various efficiency measures. Folks on the supply side would have different ways to measure their success in comparison to sales and marketing teams that must move merchandise in the most efficient ways. If someone is dedicated to a media channel, she would care for “her” channel first, without a doubt. In fact, she might even be in direct conflict with fellow marketers who are in charge of “other” channels. Who gets the credit for “a” sale in a multi-channel environment? That is not an analytical decision, but a business decision.

Even after an organization settles on the key metrics that they would collectively follow, there lies another challenge. How would you declare winners and losers in this numbers game?

As the title of this article indicates, you are not supposed to conclude that one version of creative beat the other in an A/B test just because the open rate was higher for one by less than 1%. This is not some ballgame where a team becomes a winner with a walk-off home run in the bottom of the 11th inning.

Differences in metrics should have some statistical significance to bear any meaning. When we compare the heights of a classroom full of boys, do we care about differences of a tenth of a millimeter? If you are building a spaceship, such differences would matter, but not when we measure the height of human beings. Conversion rates, often expressed with two decimal places, are like that, too.

I won’t get too technical about it here, but even casual decision-makers without any mathematical training should be aware of factors that determine statistical significance when it comes to marketing-related metrics.

  • Expected and Observed Measurements: If it is about open, clickthrough and conversion rates, for example, what are “typical” figures that you have observed in the past? Are they in the 10% to 20% range, or something that is measured in fractions? And of course, for the final measure, what are the actual figures of opens, clicks and conversions for A and B segments in test campaigns? And what kind of differences are we measuring here? Differences expressed in fractions or whole numbers? (Think about the height example above.)
  • Sample Size: Too often, sample sizes are too small to provide any meaningful conclusions. Marketers often hesitate to put a large number of target names in the no-contact control group, for instance, as they think that those would be missed revenue-generating opportunities (and they are, if the campaign is supposed to work). Even after committing to such tests, if the size of the control group is too small, it may not be enough to measure “small” differences in results. Size definitely matters in testing.
  • Confidence Level: How confident would you want to be: 95% or 90%? Or would an 80% confidence level be good enough for the test? Just remember that the higher the confidence level that you want, the bigger the test size must be.

If you know these basic factors, there are many online tools where you can enter some numbers and see if the result is statistically significant or not (just Google “Statistical Significance Calculator”). Most tools will ask for test and control cell sizes, conversion counts for both and minimum confidence level. The answer comes out as bluntly as: “The result is not significant and cannot be trusted.”
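Those calculators are typically running some form of a two-proportion test under the hood. Here is a minimal sketch of a two-sided two-proportion z-test; the exact method varies by tool, and the A/B numbers below are invented, so treat this as illustrative:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, size_a, conv_b, size_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / size_a, conv_b / size_b
    p_pool = (conv_a + conv_b) / (size_a + size_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / size_a + 1 / size_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented A/B results: 120/4,000 vs. 145/4,000 conversions.
z, p = two_proportion_z_test(120, 4_000, 145, 4_000)
if p > 0.05:  # i.e., we cannot reach a 95% confidence level
    print("The result is not significant and cannot be trusted.")
```

With these inputs, a lift from 3.00% to 3.63% fails the test at the 95% confidence level; that is exactly the kind of thin margin where declaring a winner is premature.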

If you get an answer like that, please do not commit to a decision with any long-term effects. If you want to just declare a winner and finish up a campaign as soon as possible, sure, treat the result like a baseball score of a pitchers’ duel. But at least be aware that the test margin was very thin. (Tell others, too.)

Here’s some advice related to marketing success metrics:

  • Always Consider Statistical Significance and do not make any quick conclusions with insufficient test quantities, as they may not mean much. The key message here is that you should not skip the significance test step.
  • Do Not Make Tests Too Complicated. Even with just 2-dimensional tests (e.g., test of multiple segments and various creatives and subject lines), the combination of these factors may result in very small control cell sizes, in the end. You may end up making a decision based on less than five conversions in any given cell. Add other factors, such as offer or region, to the mix? You may be dealing with insignificant test sizes, even before the game starts.
  • Examine One Factor at a Time in Real-Life Situations. There are many things that may have strong influences on results, and such is life. Instead of looking at all possible combinations of segments and creatives, for example, evaluate segments and creatives separately. Ceteris paribus (“all other factors held constant,” which would never happen in reality, by the way), which segment would be the winner, when examined from one angle?
  • Test, Learn and Repeat. Like any scientific experiments, one should not jump to conclusions after one or two tests. Again, data-based marketing is a continuous loop. It should be treated as a long-term commitment, not some one-night stand.

Today’s marketers are much more fortunate in comparison to marketers of the past. We now have blazingly fast computers, data for every move that customers and prospects make, ample storage space for data, affordable analytical toolsets (often for free), and in general, more opportunities for marketers to learn about new technologies.

But even in the machine-driven world, where almost everything can be automated, please remember that it will be humans who make the final decisions. And if you repeatedly make decisions based on statistically insignificant figures, I must say that good or bad consequences are all on you.

Improved Marketing ROI Shouldn’t Be Your Metric, This Should

My team often engages in client projects designed to improve marketing outcomes. Many times, clients describe their primary objective as an increased return on marketing dollars or return on investment (ROI). However, this is often the wrong objective; their real goal should be improved marketing effectiveness.

“That sounds like semantics,” you say? Yes, this is an argument over semantics, and in this case, semantics matter.

When stating the primary objective as improved marketing ROI, the aperture is usually focused on an optimization exercise, which pits financial resources on one side of the equation and levers — such as channel spend, targeting algorithms and A/B testing — on the other side.

A couple of decades ago, marketing analytics recognized that specific activities were easier to link with outcomes based on data that was readily available. Over time, this became the marketing ROI playbook and was popularized by consultants, academics and practitioners. This led to improved targeting, ad buys and ad content. These improvements are very important, and I would argue that they are still a must-do for most marketing departments today. However, even after resources are optimally allocated across channels, winning ads are identified and targeting algorithms are improved, marketing is still not as effective as it can be. Now is when the hard part of building a more effective marketing function actually begins.

For a moment, let’s imagine a typical marketing ROI project from the customer’s perspective. Imagine you are actively shopping for a refrigerator. A retailer uses data to appropriately target you at the right time, across multiple channels, with the right banner ad and a purchase naturally follows, right? Of course not.

  • What about helping you understand the variety of features, prices and brands available?
  • What about helping you understand the value of selecting this retailer over others?
  • What about the brand affinity and trust this process is developing in the consumer’s mind?

Because this purchase journey can play out over weeks or months, these marketing activities are more difficult (but not impossible) to measure and are often left out of the standard ROI project. However, these activities are as impactful as the finely tuned targeting algorithm that brought you to the retailer’s website in the first place.

Back to why semantics over ROI and marketing effectiveness matter. Today, the term “marketing ROI” is calcified within a relatively narrow set of analytical exercises. I have found that using marketing effectiveness as the alternative objective gives license to a broader conversation about how to improve marketing and customer interaction. It also lessens the imperative to link all activities directly to sales. Campaigns designed to inform, develop relationships or assist in eventual purchase decisions are then able to be measured against more appropriate intermediate metrics, such as online activity, repeat visits, downloads, sign-ups, etc.

What makes this work more challenging is that it requires marketers to develop a purposeful and measurable purchase journey. In addition, it requires a clear analytics plan, which drives and captures specific customer behavior, identifies an immediate need and provides a solution so the customer can move further down the purchase journey.

Finally, it requires developing an understanding of how these intermediate interactions and metrics eventually build up to a holistic view of marketing effectiveness. Until marketers can develop an analytical framework which provides a comprehensive perspective of all marketing activity, marketing ROI is merely a game of finding more customers, at the right time and place who will overlook a poorly measured (and, by extension, poorly managed) purchase journey.

Marketing Success Metrics: Response or Dollars?

It’s tempting to ask about whether marketing success metrics should be response rates or money. But you don’t need to ask marketers what they want. Basically, they want everything.

They want big spenders who also visit frequently, purchasing flagship products repeatedly. For a long time (some say “lifetime”). Without any complaint. Paying full price, without redeeming too many discount offers. And while at it, minimal product returns, too.

Unfortunately, such customers are as rare as a knight in shining armor. Because, just to start off, responsiveness to promotions is often inversely related to purchase value. In other words, for many retailers, big spenders do not shop often, and frequent shoppers are often small item buyers, or worse, bargain-seekers. They may just stop coming if you cut off fat discount deals. Such a dichotomy is quite common for many types of retailers.

That is why seasoned consultants and analysts ask what brand leaders “really” want the most in marketing success metrics. If you have a choice, what is more important to you? Expanding the customer base or increasing the customer value? Of course, both are very important goals — and marketing success metrics. But what is the first priority for “you,” for now?

Asking that question upfront is a good defensive tactic for the consultant, because marketers tend to complain about the response rate when the value target is met, and complain about the revenue size when goals for click and response rates are achieved. Like I said earlier, they want “everything, all the time.”

So, what does a conscientious analyst do in a situation like this? Simple. Set up multiple targets and follow multiple marketing success metrics. Never bet everything on just one thing. In fact, marketers must follow this tactic as well, because even CMOs must answer to CEOs eventually. If we “know” that such key marketing success metrics are often inversely correlated, why not cover all bases?

Case in point: I’ve seen many not-so-great campaign results where marketers and analysts just targeted the “best of the best” segment — i.e., the white rhinoceros that I described in the beginning — in modeled or rule-based targeting. If you do that, the value may be realized, but the response rate will go down, leading to disappointing overall revenue volume. So what if the average customer value went up by 20%, when only a small group of people responded to the promotion?

A while back, I was involved in a case where “a” targeting model for a luxury car accessory retailer tanked badly. Actually, I shouldn’t even say that the model didn’t work, because it performed exactly the way the user intended. Basically, the reason why the campaign based on that model didn’t work was that the account manager at the time followed the client’s instructions too literally.

The luxury car accessory retailer carried various lines of products — from a luxury car cover costing over $1,000 to small accessories priced under $200. The client ordered the account manager to go after the high-value target, saying things like “who cares about those small-timers?” The resultant model worked exactly that way, achieving great dollar-per-transaction value, but failing at generating meaningful responses. During the back-end analysis, we found that the marketer indeed had very different segments within the customer base, and going only after the big spenders should not have been the strategy at all. The brand needed a few more targets and models to generate meaningful results on all fronts.

When you go after any type of “look-alikes,” do not just go after the ideal targets in your head. Always look at the customer profile reports to see if you have dual or multiple universes in your base. A dead giveaway? Look at the disparity among the customer values. If your flagship product is much more expensive than an “average” transaction or customer value in your own database, well, that means most of your customers are NOT going for the most expensive option.

If you just target the biggest spenders, you will be ignoring the majority of small buyers whose profile may be vastly different from the whales. Worse yet, if you target the “average” of those two dichotomous targets, then you will be shooting at phantom targets. Unfortunately, in the world of data and analytics, there is no such thing as an “average customer,” and going after phantom targets is not much different from shooting blanks.

On the reporting front — when chasing after often elusive targets — one must be careful not to get locked into a few popular measurements in the organization. Again, I recommend looking at the results in every possible way to construct the story of “what really happened.”

For instance:

  • Response Rate/Conversion Rate: Total conversions over total contacted. Much like open and click-through rate, but I’d keep the original denominator — not just those who opened and clicked — to provide a reality check for everyone. Often, the “real” response rate (or conversion rate) would be far below 1% when divided by the total mail volume (or contact volume). Nonetheless, very basic and important metrics. Always try to go there, and do not stop at opens and clicks.
  • Average Transaction Value: If someone converted, what is the value of the transaction? If you collect these figures over time on an individual level, you will also obtain Average Value per Customer, which in turn is the backbone of the Lifetime Value calculation. You will also be able to see the effect of subsequent purchases down the line, in this competitive world where most responders are one-time buyers (refer to “Wrestling the One-Time Buyer Syndrome”).
  • Revenue Per 1,000 Contacts: Revenue divided by total contacts, multiplied by 1,000. This is my favorite, as this figure captures both responsiveness and the transaction value at the same time. From here, one can calculate the net margin of a campaign on an individual level, if the acquisition or promotion cost is available at that level (though in real life, I would settle for campaign-level ROI any time).
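The three metrics above reduce to simple arithmetic. A minimal Python sketch, using hypothetical campaign totals (all figures invented for illustration):

```python
def campaign_metrics(contacts, conversions, revenue):
    """Return response rate, average transaction value, and revenue per 1,000 contacts."""
    response_rate = conversions / contacts          # keep the full contact volume as the denominator
    avg_transaction_value = revenue / conversions   # value of an average converted order
    revenue_per_1000 = revenue / contacts * 1000    # captures responsiveness and value together
    return response_rate, avg_transaction_value, revenue_per_1000

# Example: 250,000 contacts, 1,200 conversions, $96,000 in revenue
rate, avg_value, rpm = campaign_metrics(250_000, 1_200, 96_000)
print(f"Response rate: {rate:.2%}")                # 0.48% -- well below 1%, as noted above
print(f"Avg transaction value: ${avg_value:.2f}")  # $80.00
print(f"Revenue per 1,000 contacts: ${rpm:.2f}")   # $384.00
```

Note how the response rate looks humbling once the full contact volume is the denominator, which is exactly the reality check intended.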

These are just three basic figures covering responsiveness and value, and marketers may gain important intelligence if they look at these figures by, but not limited to, the following elements:

  • Channel/Media
  • Campaign
  • Source of the contact list
  • Segment/Selection Rule/Model Score Group (i.e., How is the target selected)
  • Offer and Creative (hopefully someone categorized an endless series of these)
  • Wave (if there are multiple waves or drops within a campaign)
  • Other campaign details such as seasonality, day of the week, daypart, etc.
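Slicing the key figures by one of these elements at a time can be sketched as follows; this hypothetical Python example rolls up revenue per 1,000 contacts by channel (toy numbers, invented for illustration), and the same pattern applies to campaign, list source, segment, offer and so on:

```python
from collections import defaultdict

# Toy result rows: (channel, contacts, conversions, revenue) -- made-up numbers
rows = [
    ("email",  100_000, 600, 42_000),
    ("email",   50_000, 250, 20_000),
    ("direct",  80_000, 900, 27_000),
    ("direct",  20_000, 150,  6_000),
]

def revenue_per_1000_by_channel(rows):
    """Aggregate contacts and revenue by channel, then compute
    revenue per 1,000 contacts for each slice."""
    totals = defaultdict(lambda: [0, 0])
    for channel, contacts, conversions, revenue in rows:
        totals[channel][0] += contacts
        totals[channel][1] += revenue
    return {ch: round(rev / con * 1000, 2) for ch, (con, rev) in totals.items()}

print(revenue_per_1000_by_channel(rows))  # {'email': 413.33, 'direct': 330.0}
```

The per-slice comparison is what surfaces the “metrics behave differently in different channels” finding mentioned below, before any modeling is involved.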

In the ultimate quest to find “what really works,” it is prudent to look at these metrics on multiple levels. For instance, you may find that these key metrics behave differently in different channels, and combinations of offers and other factors may trigger responsiveness and value in previously unforeseen manners.

No one would know all of the answers before tests, but after a few iterations, marketers will learn what the key segments within the target are, and how they should treat each of them differently going forward. That is what we commonly refer to as a scientific approach, and the first step is to recognize that:

  • There may be multiple pockets of distinct buyers,
  • No one type of metric will tell us the whole story, and
  • We are not supposed to batch and blast to a one-dimensional target with a uniform message.

I am not at all saying that all of the popular metrics for digital marketing are irrelevant, but remember that opens and clicks are just directional indicators toward conversion. And the value of the customers must be examined in multiple ways, even after the conversion. Because there are so many ways to define success — and failure — and each should be a lesson for future improvements on targeting and messaging.

It may be out of fashion to say this old term in this century, but that is what “closed-loop” marketing is all about, regardless of the popular promotion channels of the day.

The names of metrics may have changed over time, but the measurement of success has always been about engagement level and the money that it brings.

Survey: Marketing Leaders Responsible for More Than Ever

The early results from our Marketing Leadership Survey are in — there’s still a week to go, so don’t forget to take it yourself and enter to win a $100 AMEX gift card! — and a shocking percentage of marketers say the job has changed dramatically in just five years. Marketing leaders have acquired (or been loaded with) more responsibility in almost every area.

Have the roles and responsibilities of marketing leaders changed in your organization over the past 5 years?
Credit: Target Marketing and NAPCO Media Research

More than half of respondents say marketing responsibilities have changed greatly or completely over the past five years. And five years really isn’t that much time, essentially since 2013. All this change has happened in less time than Snapchat’s been around.

Looking specifically at how responsibilities have changed, the movement has only been in one direction: up. In fact, after our first survey email, the “much less responsibility” answer column has yet to be touched, and even “less responsibility” has only been selected a handful of times.

Notice that orange does not appear on this chart, and there are only slivers of light blue. | Credit: Target Marketing and NAPCO Media Research

The area of marketing responsibility expanding the most is technology, where over 80 percent report increased responsibilities. That comes as no surprise, as marketing technology is expanding and marketers are controlling, or at least demanding, more and more enterprise technology spending.

Metrics/reporting and data responsibilities aren’t far behind, with over 70 percent of respondents saying those responsibilities have increased.

Surprisingly, none of those lead the “much more responsibility” column. That went to Innovation, where over 40 percent of our respondents say they now have much more responsibility.

How do those answers stack up to your own evolving roles and responsibilities? Take the survey and let us know, and you’ll be entered to win $100!

And keep an eye out for the final report we’ll be putting together from this research. These are only two of the insights we’re developing; the final report will include how marketing leaders are spending their time, how they’re spending their money, what they base their KPIs on, whether or not they feel respected in the corporate hierarchy, and more!

Content Marketing: Better Metrics Mean Better Measurement

It isn’t always easy to measure the impact of your content marketing or attribute sales, leads or lead quality accurately. One way to improve the quality of your measurement is to improve the quality of your metrics.

Analytics data is easy to come by. In most cases it’s even available free or is included in services you’re already using, like an email service provider. That doesn’t mean all that data is useful.

In fact, having all of that data is a problem of its own — the firehose problem. As in, if a firehose is your only source of water, you’re either going to have to be creative in how you use it, or quenching your thirst is going to be a painful proposition.

Understanding Different Types of Metrics

Assuming you’ve solved that problem, and can isolate the data you want, it’s important to focus on business metrics rather than process metrics. We define these as follows:

  • Business metrics directly impact your business’s profitability (sales, for example)
  • Process metrics are proxies that tell us about our marketing but not necessarily our business (Twitter followers are a good example)

As you’d imagine, business metrics are far more important, but many marketers focus more on process metrics because, well, they’re much easier to come by. Just log in to Twitter, Google Analytics or just about any other digital marketing platform and you’ve got your data.

What’s missing from that data for many B2B businesses is what we might call the last mile: the connection between content consumed and a sale consummated. That means measurement and attribution take much more effort. Marketing automation tools can help with this — and we certainly recommend that you implement the sort of tracking that these tools make possible — but even if you aren’t quite ready to make that commitment, there are adjustments you can make to the tracking you are doing.

Differentiating Between Initial Contact and Greater Engagement

Perhaps the best adjustment you can make is to move away from first-level metrics and toward second-level. So, rather than focusing on the number of Twitter followers you have, focus on the number of clicks you get to lead magnets you tweet about, the number of shares your posts get, and the number of likes your content garners.

That’s not to say that tracking your followers isn’t worthwhile. It is, particularly if you track the data over time. But it’s much further removed from usable business metrics than the process metrics that measure the next step: the folks who have not only followed you, but who have engaged with you via Twitter.

This is true of other social media channels and other content types. Rather than measuring just blog post page views (which I would argue are more valuable than Twitter followers), measure how frequently a blog post is shared, leads to a newsletter signup, a lead magnet click, or even clicks to other pages on your site.

While nothing is a replacement for the direct connection between content consumption and sales, tracking the engagement-based process metrics mentioned above is a much more accurate way to assess your content marketing’s health and effectiveness than relying on those metrics that measure just an initial interaction.