Media Outlook 2019: Spell Marketing with a ‘D’

The January marketing calendar in New York has included for the past decade or so a certain can’t-miss event of the Direct Marketing Club of New York. In 60 fly-by minutes, 100-plus advertising and marketing professionals hear a review of the previous year in marketing spend, a media outlook for the current year and macro-economic trends driving both.

Bruce Biegel, senior managing director at Winterberry Group, keeps everyone engaged, taking notes and thinking about their own experiences in the mix of statistics regarding digital, mobile, direct mail, TV and programmatic advertising.

“We will be OK if we can manage the Shutdown, Trump, China, Mueller, Congress and Brexit,” he noted, all of which weigh on business confidence.

Suffice it to say, marketing organizations and business in general must navigate an interesting journey. Biegel reports estimated U.S. Gross Domestic Product (GDP) growth of 2.3 percent in 2019, down from 3 percent in 2018, while total marketing spending growth in 2018 dipped below its historic level of more than two times GDP growth.

In 2019, we are poised for 5.3 percent growth in advertising and marketing spending, a slight gain over the 5.2 percent growth posted in 2018.

Watch the Super Bowl, By All Means, But Offline Dominance Is Diminishing

Look under the hood, and you see what the big drivers are. Offline spending, including sponsorships, linear TV, print, radio, outdoor and direct mail, will post anemic combined growth of 0.1 percent in 2019. (Of these, direct mail and sponsorships will each post growth of more than 3 percent, Winterberry Group predicts.)

But online spending (display, digital video, social, email, digital radio, digital out-of-home and search) will grow by 15.5 percent. Has offline media across all categories finally reached its zenith? Perhaps. (See Figure 1.)

Figure 1.

Credit: Winterberry Group, 2019

Digital media spend achieved 50 percent of offline media spend for the first time in 2018. In 2019, it may reach 60 percent! So who should care?

We do! We are the livers and breathers of data, and data is in the driver’s seat. Biegel sees data spending growing by nearly 6 percent this year, totaling $21.27 billion. Of this, $9.66 billion will be offline data spending, primarily direct mail. TV data spending (addressable, OTT) will reach $1.8 billion, digital data $7.85 billion and email data spend $1.96 billion (see Figure 2).

Figure 2.

Credit: Winterberry Group, 2019

Tortured CMOs: Unless She’s a Data Believer

Marketing today and tomorrow is not marketing yesterday. If marketing leadership does not recognize and understand data’s contribution to ad measurement, attribution and business objective ROI, then it’s time for a new generation to lead and succeed. Marketing today is spelled with a D: Data-Driven.

Unfortunately, we don’t have all the data we need to manage the Shutdown, Trump, China, Mueller, Congress and Brexit. That’s where sheer luck and gut instincts may still have a valid role. Sigh.

Election Polls and the Price of Being Wrong 

The thing about predictive analytics is that the quality of a prediction is eventually exposed — clearly cut as right or wrong. There are casually incorrect outcomes, like a weather report failing to accurately declare at what time the rain will start, and then there are total shockers, like the outcome of the 2016 presidential election.

In my opinion, the biggest losers in this election cycle are pollsters, analysts, statisticians and, most of all, so-called pundits.

I am saying this from a concerned analyst’s point of view. We are talking about colossal and utter failure of prediction on every level here. Except for one or two publications, practically every source missed the mark by more than a mile — not just a couple points off here and there. Even the ones who achieved “guru” status by predicting the 2012 election outcome perfectly called for the wrong winner this time, boldly posting a confidence level of more than 70 percent just a few days before the election.

What Went Wrong? 

The losing party, pollsters and analysts must be in the middle of some deep soul-searching now. In all fairness, let’s keep in mind that no prediction can overcome serious sampling errors and data collection problems. Especially when we deal with sparsely populated areas, where the winner was decisively determined in the end, we must be really careful with the raw numbers of respondents, as errors easily get magnified by incomplete data.

Some of us saw that type of over- or under-projection when the Census Bureau cut the sampling size for budgetary reasons during the last survey cycle. For example, in a sparsely populated area, a few migrants from Asia may affect a simple projection like “percent Asian” rather drastically. In large cities, conversely, the size of such errors is generally within a more manageable range, thanks to large sample sizes.

Then there are the human inconsistency elements that many pundits are talking about. Basically, everyone got so sick of all of the survey calls about the election that many started to ignore them completely. I think pollsters must learn that, at times, less is more. I don’t even live in a swing state, and I started to hang up on unknown callers long before Election Day. Can you imagine what the folks in swing states must have gone through?

Many are also claiming that respondents were not honest about how they were going to vote. But if that were the case, there were other techniques that surveyors and analysts could have used to project the answer from “indirect” questions. Instead of simply asking “Whom are you voting for?”, how about asking what their major concerns were? Combined with modeling techniques, a few innocuous probing questions regarding specific issues — such as environment, gun control, immigration, foreign policy, entitlement programs, etc. — could have led us to much more accurate predictions, reducing the shock factor.
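For the analytically inclined, here is a minimal sketch of what that indirect-question approach could look like. Everything in it is hypothetical: the issue list, the small panel of respondents who did reveal their intent, and the synthetic scores all stand in for real survey records.

```python
# Hypothetical sketch: infer vote intent from issue-priority answers
# instead of asking the voting question directly. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row holds a respondent's 1-5 concern scores for five issues:
# environment, gun control, immigration, foreign policy, entitlements.
# The labels stand in for a small panel that did reveal its vote intent.
X_panel = rng.integers(1, 6, size=(500, 5))
y_panel = (X_panel[:, 2] + X_panel[:, 3] > X_panel[:, 0] + X_panel[:, 1]).astype(int)

model = LogisticRegression().fit(X_panel, y_panel)

# Score the wider sample that answered only the innocuous issue questions.
X_sample = rng.integers(1, 6, size=(10_000, 5))
projected_share = model.predict_proba(X_sample)[:, 1].mean()
print(f"Projected share for candidate A: {projected_share:.1%}")
```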

In the middle of all this, I’ve read that artificial intelligence without any human intervention predicted the election outcome correctly, by using abundant data coming out of social media. That means machines are already outperforming human analysts. It helps that machines have no opinions or feelings about the outcome one way or another.

Dystopian Future?

Maybe machine learning will start replacing human analysts and other decision-making professions sooner than expected. That means a disenfranchised population will grow even further, dipping into highly educated demographics. The future, regardless of politics, doesn’t look all that bright for the human collective, if that trend continues.

In the predictive business, there is a price to pay for being wrong. Maybe that is why in some countries, there are complete bans on posting poll numbers and result projections days — sometimes weeks — before the election. Sometimes observation and prediction change behaviors of human subjects, as anthropologists have been documenting for years.

Top 3 Questions I Hear About Direct Marketing

Clients and friends who are traditional marketers often seek my advice on direct response. Here are the answers to the three questions I hear most frequently:

Question No. 1: What Kind of Response Rate Should I Expect?

There are response rate benchmark studies published by the DMA and others, usually organized by industry and type of offer (lead generation, free information, cash with order, etc.). These reports can provide you with some guidance in setting your expectations, but they can just as easily lead you astray. How? If you’ve seen one campaign, you’ve seen just that: one. But some marketers fall into the trap of applying previous results to various campaigns.

Your response rate is driven by three factors, listed here in order of importance:

  • Media: If you don’t get your message in front of the right people, your response will suffer. It is the single most important driver of response, so choose wisely.
  • Offer: What’s your value proposition to the prospect? Simply stated, your offer says, “Here’s what I want you to do, and here’s what you’re going to get when you do it.” If your offer is not appealing or relevant to the prospect, the response — or lack thereof — will reflect that. Also, keep in mind that soft offers, which require little commitment on the part of the prospect (e.g., get free information, download a whitepaper, etc.), will generate a higher response than hard offers, which require a greater commitment (request a demo, make an appointment with a sales rep, payment with order, etc.).
  • Creative: It’s hard for traditional advertisers to believe that this element is lower in importance than the first two, but it is. And the biggest driver of response from a creative standpoint is a clearly stated prominent call to action.

Question No. 2: We Have a Strong Campaign Coming Out of Market Research. My Client/Management Wants to Get This Out As Quickly As Possible. Why Do I Have to Test?

Three reasons:

  • You may have a well-researched creative position, but it can be executed in a variety of ways (see the third bullet under Question No. 1, above). Furthermore, your market research couldn’t predict the response rates from different media. But knowing whether email lists, websites or social media fare best for your audience and offer will be crucial to generating the highest response rate.
  • You want to be able to optimize the three factors above to determine which combination gives you the most qualified leads at the lowest cost per lead.
  • Most importantly, you want to avoid a potentially catastrophic result if you’ve gotten one of the three key elements wrong. It’s better to do that with a small quantity rather than a full-scale effort. It’s always disconcerting to hear people say, “We tried direct. It didn’t work.” Keep in mind that if you’ve seen one, you’ve seen one. Previous successes and shortcomings won’t apply when you tweak the context.

Question No. 3. How Big Should My Test Be?

Your test should be large enough to produce statistically significant results. There are two parts to this: the confidence level of your results and the variation you’re willing to accept.

There are statistical formulas for calculating sample size, but a good rule of thumb to follow is that with 250 responses, you can be 90 percent confident that your results will vary no more than plus or minus 10 percent.

For example, if you test 25,000 emails and get a 1 percent response rate, that’s 250 responses. That means you can be 90 percent confident that (all things held equal) you will get between 0.9 percent and 1.1 percent in a rollout.

A smaller number of responses will result in a reduced confidence level or an increased variation. For example, with a test size of 10,000 emails and a 1 percent response rate at a 90 percent confidence level, your variation would be 16 percent rather than 10 percent. That means you can be 90 percent confident that you’ll get between a 0.84 percent and a 1.16 percent response rate, all things held equal.
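For readers who want to check that arithmetic, here is a small sketch using the standard normal-approximation formula for a proportion; it is a generic calculation, not a DMA-published one.

```python
# Margin of error for a test response rate, using the normal approximation.
import math

def relative_margin(test_size: int, response_rate: float, z: float = 1.645) -> float:
    """Return the 90 percent margin of error as a fraction of the response rate."""
    standard_error = math.sqrt(response_rate * (1 - response_rate) / test_size)
    return z * standard_error / response_rate

# 25,000 emails at a 1 percent response (250 responses): about +/- 10 percent
print(f"{relative_margin(25_000, 0.01):.0%}")

# 10,000 emails at a 1 percent response (100 responses): about +/- 16 percent
print(f"{relative_margin(10_000, 0.01):.0%}")
```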

Machine Learning and AI — What’s ‘Real,’ What’s Required

Big data has come full circle. Quite a while ago, big data had its beginning in academic research. Recognizing its usefulness, niche businesses then began implementing it. Massive companies, such as Google, began commercializing big data, and those large companies then spawned smaller companies. Those companies are now getting big. This all makes for a lot of noise in the marketplace.

Today, we hear folks without applied mathematics or computer science backgrounds talking big data, algorithms and artificial intelligence (AI) at cocktail parties. The fluency has grown rather quickly: A CMO I’ve known for years used to wince when we talked analytics, but now she enthusiastically discusses her firm’s AI initiatives. She’s not running marketing at Google or IBM Watson, either — she sells clothing online.

While we’re likely in one of the most amazing periods in history to be in business, it does not come without its challenges. These days, you have to sift through all of the clutter when it comes to innovations in the marketing space.

Let’s see if we can simplify what the data pundits are tweeting and discern where the value really is.

Machine Learning

Machine learning (ML) occurs through networks of algorithms.

First, the good news: ML really works.

As we’ve discussed in “Marketing Machines — Possible or Pipedream?” ML is used to ingest large amounts of data and identify patterns in that data. The machine “learns” by ingesting, transforming and then conditioning a learning algorithm with your dataset.

ML will find the statistical relationships (models) between your various data points to articulate how efficiently your business is running. By calculating the best potential models, it can also show you what improvements you can make. ML can deduce your most profitable business targets. It can tell you who is likely to buy shoes priced over $800, or which production line is most likely to break down in the wintertime.
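As a concrete, if hypothetical, illustration of that workflow, the sketch below conditions an off-the-shelf learning algorithm on synthetic purchase history to score which customers are likely to buy $800-plus shoes. The fields and data are invented for the example; this is not any particular vendor’s product.

```python
# Toy propensity model: which customers are likely to buy shoes priced over $800?
# Fields and data are hypothetical; the point is the conditioning workflow.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Synthetic customer features: past annual spend, orders last year, average item price.
X = np.column_stack([
    rng.gamma(2.0, 400.0, n),   # past annual spend ($)
    rng.poisson(4, n),          # orders in the last year
    rng.gamma(2.0, 60.0, n),    # average item price ($)
])

# Synthetic label: bought an $800+ item, skewed toward heavy spenders.
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] / 1500 + X[:, 2] / 200 - 3)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")
```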

But ML Isn’t Foolproof

Machine Learning can surely help us find structure and patterns in data through statistics and the power of cloud computing. Amazon’s ML cloud computing capability, for example, isn’t specific to any domain and arguably works with any inputs. It will consistently output a result or target. Yet that very flexibility is where ML can prove risky:

“If you can dump anything into an ML process, and have it come up with an answer, you’d be wise to be wary of that answer.”

ML techniques all require that you provide them with a “universe”: all the likely permutations representative of your purpose. If your conditioning data is skewed heavily toward sneakers under $75, it will prove very hard to predict which customers are likely to buy $800 shoes.

This may sound like an unfair example, but consider the marketers who are out to break into higher-end sales yet only have data on their existing customers. If skewed interpretations were applied to new-customer marketing (and they can be), your returns could be even worse than without any ML interference. The fact is, there are far more experiments where ML doesn’t produce a valuable outcome than ones where it does. But as technology and big data are refined over time, better results will be achieved across the board.

Analytics and model-building are highly iterative processes. If an ML process is focused on only a particular niche, the likelihood of getting better results sooner is higher — but still iterative. Despite its current limits, AI offers a deeper and more layered method of applying iterative math to break down large data questions than raw manpower.

Google’s AlphaGo AI beat champion Lee Sedol in a tournament of Go by breaking the game into an exponential number of component questions and covering as many bases as it could. While AlphaGo works in many ways like the human mind, it also had the advantage of iteratively playing against itself thousands of times.

Humans can’t do that.

The Bottom Line: Good Data In, Good Comes Out

Whether you use Google’s AlphaGo, Amazon’s ML tools or your own home-grown mashup, the quality of the data that goes into ML is the largest factor you can control in creating value with systems-driven optimization.

In an age where many organizations have siloed data or cumbersome messes, along with marketing organizations that don’t even have a reliable marketing operations database, this is no small challenge. Getting your data centralized, organized and accessible is a requisite first step. Get that right, and there may be opportunities ahead to drive value up.

Direct-Mail Testing Upended With Bayesian Analytics 

Direct-mail marketers have relied on either A/B testing or multivariate testing to evaluate winning campaigns for generations. Those evaluations, unfortunately, weren’t always based on statistics, but often on educated guesses or office surveys. But a confluence of technology and something called Bayesian Analytics now enables direct mailers to pre-test and predict responses accurately before mailing.

Bayesian Analytics may well upend how we test to identify the highest profit-producing control more quickly and at a fraction of the cost of traditional testing methods. Bayesian Analytics is already being used in astrophysics, weather forecasting, insurance risk management and health care policy. And now, a few cutting-edge mailers have successfully used this analytics approach, too.

Usually, direct-mail marketers test four categories of variables, such as price, headlines, imagery and formats.

Within each of those variables, direct marketers often want to test even more options. For example, you might want to test the relative effectiveness of discounts of $5 off, $10 off, 10 percent off or 15 percent off. And you want to test multiple headlines, images and formats.

The following matrix illustrates the complexity of testing multiple variables. Let’s say you want to test four different pricing offers, four headlines, four imagery graphics and four direct mail formats. Multiplying 4 x 4 x 4 x 4, you find there are a possible 256 test combinations.

It’s impractical and costly to test 256 combinations. Even if your response rate dictated that you only needed to mail 5,000 pieces per test cell for statistical reliability, you’d still have to mail roughly 1.28 million pieces. At $0.50 per piece, that’s about $640,000 in testing costs.
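A quick sanity check on that arithmetic, using the cell size and piece cost assumed above:

```python
# Full-factorial test matrix: 4 prices x 4 headlines x 4 images x 4 formats.
from itertools import product

cells = list(product(range(4), range(4), range(4), range(4)))
pieces_per_cell = 5_000
cost_per_piece = 0.50

total_pieces = len(cells) * pieces_per_cell
print(len(cells), total_pieces, total_pieces * cost_per_piece)
# Output: 256 1280000 640000.0
```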

Bayesian Analysis works with a fraction of the data required to power today’s machine learning and predictive analytics approaches. It delivers the same or better results in a fraction of the time. By applying Bayesian Analysis methodologies, direct mailers can make significant and statistically reliable conclusions from less data.
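To make that claim concrete, here is a minimal beta-binomial sketch of Bayesian updating on two small test cells. It illustrates the general principle of drawing conclusions from modest counts; it is not the proprietary pre-testing methodology described in this post, and the cell sizes and response counts are invented.

```python
# Minimal Bayesian comparison of two small direct-mail test cells.
# Illustrative beta-binomial model only; counts are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

mailed_a, responses_a = 800, 14   # 1.75 percent observed
mailed_b, responses_b = 800, 9    # 1.13 percent observed

# Start from a uniform Beta(1, 1) prior and update with the observed counts.
posterior_a = rng.beta(1 + responses_a, 1 + mailed_a - responses_a, size=100_000)
posterior_b = rng.beta(1 + responses_b, 1 + mailed_b - responses_b, size=100_000)

prob_a_beats_b = (posterior_a > posterior_b).mean()
print(f"P(cell A outperforms cell B) = {prob_a_beats_b:.2f}")
```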

The International Society for Bayesian Analysis says:

“Bayesian inference remained extremely difficult to implement until the late 1980s and early 1990s when powerful computers became widely accessible and new computational methods were developed. The subsequent explosion of interest in Bayesian statistics has led not only to extensive research in Bayesian methodology but also to the use of Bayesian methods to address pressing questions in diverse application areas such as astrophysics, weather forecasting, health care policy, and criminal justice.”

Bayesian Analysis frequently produces results that are in stark contrast to our intuitive assumptions. How many times have you used your intuition to test a specific combination of variables thinking it would result in a successful direct-mail test, only to be disappointed in the results?

Bayesian Analytics methodology takes the guesswork out of what to test in a live-mailing scenario. Instead of testing and guessing (as the late Herschell Gordon Lewis wrote in his recent column, “Rather Test or Guess?”), you can now pre-test those 256 combinations of variables before incurring the expense of a live mail test. The pre-test reveals which combination of variables will produce the highest response rate in the live test, resulting in substantial test savings.

But wait, there’s another benefit: You can learn what mix of variables will produce the best results for any tested demographic or psychographic group. It’s possible to learn that a certain set of variables work more successfully for people who are, for example, aged 60+, versus those aged 40-59. This means you may be able to open up new prospecting list selections that previously didn’t work for you.

Again, a handful of mailers have already pre-tested this new Bayesian Analysis methodology — it has accurately predicted the results in live testing at a 95 percent level of confidence. Now that beta testing has been completed and the methodology is proven to be reliable, look to hear more about it in the future.

There’s more about this methodology than can be shared in a single blog post. To learn more, download my report.

My new book, “Crack the Customer Mind Code” is available at the DirectMarketingIQ bookstore. Or download my free seven-step guide to help you align your messaging with how the primitive mind thinks. It’s titled “When You Need More Customers, This Is What You Do.” 

Direct Mail Benchmarks From DMA

In my years following the direct marketing field, one of the resources I’ve most appreciated is the Direct Marketing Association’s annual roundup of direct and interactive marketing statistics, the DMA Statistical Fact Book. Each year, this compilation of research studies—this year, 40 prominent sources—offers benchmarks and other metrics related to nearly a dozen categories: Internet, mobile marketing, social media, catalog, consumer demographics, direct mail, direct marketing overview, email, nonprofit and USPS information.

Examining direct mail-related data, here are a few stats from this year’s edition that jump out at me. Did you know:

  1. The mean cost per order or lead for a letter-sized direct mail piece sent to a house file is $19.35, and the same sent to a prospect or total file is $51.30. —”DMA Response Rate Report,” 2012.
  2. More than 12.5 million consumers purchased prescription drugs via a mail or phone order. —Experian Simmons “National Consumer Study,” 2012.
  3. In the food category, 16.8 percent of coupons redeemed originated from the Internet, home-printed; another 6.6 percent originated from direct mail. —Valassis/NCH Marketing Services, “Coupon Facts Reports,” 2013.
  4. The salary range of marketing analytics directors with 7+ years’ experience was $119,300 to $131,500. —Crandall Associates, 2012.
  5. 54.5 percent of U.S. households read, looked at, or set aside for later reading their letter-sized enveloped direct mail pieces in 2011. For mail in larger-than-letter-size envelopes, 67.2 percent did the same. —USPS “Household Diary Study,” 2012.
  6. Mail order companies have the highest percentage of pieces addressed to specific household members (97.1 percent of their direct mail), while restaurants have the least (16.2 percent). —USPS “Household Diary Study,” 2012.
  7. The response rate for credit card mailings in 2012 was 0.6 percent—down from 2.2 percent in 1993, but up from 0.3 percent in 2005. —Ipsos/Synovate Mail Monitor, 2012.
  8. In 2012, 54.2 percent of the total value of U.S. Mail was attributable to direct mail advertising across all classes. —DMA/USPS “Revenue, Pieces and Weight by Class of Mail and Special Services,” 1990-2012.
  9. In the U.S., direct mail marketing spend held steady at $45.2 billion between 2011 and 2012. It stood at $43.8 billion in 2009. —Winterberry Group, 2013.
  10. After peaking at 19.6 billion catalogs mailed in the U.S. in 2007, catalog volume fell to 11.8 billion in 2012. —DMA/USPS “Revenue, Pieces and Weight by Class of Mail and Special Services,” 2012.
  11. Of the 11,743 catalogs in the U.S., 94.1 percent have an online version. —MediaFinder.com, “National Directory of Catalogs,” 2012.

No wonder the 200-page DMA Statistical Fact Book is, year to year, among the best sellers in DMA’s online bookstore. It’s available for purchase at $249 for DMA members and $499 for non-members: https://imis.the-dma.org/bookstore/ProductSingle.cfm?p=0D45047B|4DA56D9737FF45DF90CA1DA713E16B80

Happy reading!

Forget Real Friends, Just Fake It

If someone “likes” your brand on Facebook, or gives your website or blog posting a “thumbs-up,” is that a meaningful metric as a marketer? If your Twitter followers keep increasing, does that mean you’re publishing valuable content and helping position yourself as an industry thought leader? I used to think so, but I was disappointed to learn how disingenuous the entire process has become.

In pre-Facebook days, if we liked a brand/product/service, we would talk positively about our experiences. We’d gladly refer colleagues when asked, or write an email or letter praising the organization. If we were truly brand ambassadors, we’d proudly pontificate and evangelize at the drop of a hat.

With the creation of the digital “thumbs-up,” a click of the mouse records your endorsement or disagreement in a split second. And, as marketers, we greedily record and compare those statistics as a justification for the impact that our brand might be having on the target market.

Every time I post a tweet, I can’t help but glance at my increasing “follower” statistics and wonder what 140-character pithy remark prompted them to start following me. It also adds a bit of pressure to make sure I keep my followers interested and engaged with my marketing insights.

But on several occasions, when scanning a discussion group on LinkedIn, someone has created the challenge: “Like me/our company on Facebook and I’ll like yours!” My reaction is swift and from the gut… “Like you? I don’t even KNOW you.”

Perhaps I’m naive, but I was under the impression that if your brand provided quality products and services that were deemed useful to your target audience, or you posted information that was helpful/funny/smart, then your reader/user gave you the good old “thumbs up” as a reflection of their approval. So imagine my surprise when I found a site where you can buy Facebook “likes” or Twitter followers!

For a few bucks you can add hundreds or thousands of “likes” to your page, or increase your Twitter followers instantly … all with the goal of seemingly increasing your brand popularity and, in turn, helping your site move up in rankings and search results.

Who thinks up this stuff?

Clearly an entrepreneur who has figured out that anything worth having is a business just waiting to happen—even if it means that what you’re selling is a tool to help companies scam potential customers.

And what about those companies that purchase “likes” or Twitter followers? Perhaps if they spent more time and money on running honest and helpful businesses that customers truly liked and felt good about, they wouldn’t have a need to purchase fake “friends” to boost their fake popularity.

I know how hard it is to build and sustain a business in a world filled with ruthless competitors. But I can promise that your business won’t get ahead by faking friends.