Marketers, How Valid Is Your Test? Hint: It’s Not About the Sample Size

The validity of a test is not tied to the size of the sample itself, but rather to the number of responses that you get from that sample. Choose sample sizes based on your expected response rate, not on tradition, your gut or convenience.

A client I worked with years ago kept fastidious records of test results that involved offers, lists, and creative executions. At the outset of our relationship, the client shared the results of these direct mail campaigns and the corresponding decisions that were made based on those results. The usual response rates were in the 0.3 to 0.5% range, and the test sample sizes were always 25,000. If a particular test cell got 130 responses (0.52%), it was deemed to have beaten a test cell that received 110 responses (0.44%). Make sense? Intuitively, yes. Statistically, no.

In fact, those two cells are statistically equal. With a sample size of 25,000 and a 0.5% response rate, your results can vary by as much as 14.7% at a 90% confidence level. That means that, at a 90% confidence level, the results from that test could have been as high as 0.57% or as low as 0.43%, making our test cell results of 110 responses (0.44%) and 130 responses (0.52%) statistically equal. I had to gently encourage the client to consider retesting at larger sample sizes.
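
For anyone who wants to reproduce that arithmetic, here is a minimal sketch (my own illustration, not the client's worksheet) using the normal approximation to the binomial:

```python
import math

n = 25_000   # mailed pieces in the test cell
p = 0.005    # observed response rate (0.5%)
z = 1.645    # z-score for a 90% confidence level

std_err = math.sqrt(p * (1 - p) / n)   # standard error of a proportion
margin = z * std_err                   # half-width of the 90% interval

print(f"Margin: +/-{margin:.4%} points, or +/-{margin / p:.1%} of the 0.5% rate")
print(f"90% interval: {p - margin:.2%} to {p + margin:.2%}")
# Margin: +/-0.0734% points, or +/-14.7% of the 0.5% rate
# 90% interval: 0.43% to 0.57%
```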

There are statistical formulas for calculating sample size, but a good rule of thumb to follow is that with 250 responses, you can be 90% confident that your results will vary no more than ±10%. This rule of thumb is valid in any medium, online or offline. For example, if you test 25,000 emails and you get a 1% response rate, that’s 250 responses. Similarly, if you buy 250,000 impressions for an online ad and you get a 0.1% response rate, you get 250 responses. That means you can be 90% confident that (all things held equal) you will get between 0.9% and 1.1% in the email rollout, and between 0.09% and 0.11% with a continuation of the same ad in the same media. (Older editions of Ed Nash’s Direct Marketing — Strategy, Planning, Execution contain charts that you can reference at different sample sizes and response rates.)

A smaller number of responses will result in a reduced confidence level or increased variance. For example, with a test size of 10,000 emails and a 1% response rate (100 responses), your variance at a 90% confidence level would be 16%, rather than 10%. That means you can be 90% confident that you’ll get a response rate between 0.84% and 1.16%, all things being held equal. Any response within that range could have been the result of variation within the sample.
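
The same arithmetic, wrapped in a small helper function (again my own illustration, taking "variance" to mean the relative half-width of a 90% confidence interval), reproduces both the 250-response and 100-response rules of thumb:

```python
import math

Z_90 = 1.645  # z-score for a 90% confidence level

def relative_margin(sample_size: int, response_rate: float, z: float = Z_90) -> float:
    """Half-width of the confidence interval, as a fraction of the response rate."""
    std_err = math.sqrt(response_rate * (1 - response_rate) / sample_size)
    return z * std_err / response_rate

# The examples from the text: 25,000 emails at 1%, 250,000 impressions at 0.1%,
# and 10,000 emails at 1%.
for sample, rate in [(25_000, 0.01), (250_000, 0.001), (10_000, 0.01)]:
    responses = round(sample * rate)
    print(f"{responses} responses -> about +/-{relative_margin(sample, rate):.0%}")
# 250 responses -> about +/-10%
# 250 responses -> about +/-10%
# 100 responses -> about +/-16%
```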

Marketers are not alone in using their gut rather than statistics to determine sample sizes. Nobel Laureate Daniel Kahneman confesses in his book “Thinking, Fast and Slow”:

“Like many research psychologists, I had routinely chosen samples that were too small and had often obtained results that made no sense … the odd results were actually artifacts of my research method. My mistake was particularly embarrassing because I taught statistics and knew how to compute the sample size that would reduce the risk to an acceptable level. But I had never chosen a sample size by computation. Like my colleagues, I had trusted tradition and my intuition in planning my experiments and had never thought seriously about the issue.”

The most important takeaway here is that the validity of a test is not tied to the size of the sample itself, but rather to the number of responses that you get from that sample. Choose sample sizes based on your expected response rate, not on tradition, your gut or convenience.

Third-Party Data: A Quest for Quality

As marketing depends increasingly on data, a data quality regimen is an absolute necessity. While that’s been known to most “traditional” direct and database marketers for decades, I sometimes think the world of digital data is dragging along kicking and screaming.

Quality data is a quest, and seeking it out requires a discipline to test sources before appending and using the data.

The only mistake is not to test.

Coming from the Data & Marketing Association’s &Then17, where a panel of brand chiefs discussed perspectives on using first-, second- and third-party data in marketing, it seemed clear to me that the C-suite — to the extent that it is aware at all — lacks confidence in most third-party data sources and in how they could or should be deployed. Obviously, the variety, volume and velocity of data can be overwhelming — particularly as digital, social and mobile channels churn out a constant flow of data to evaluate and onboard — but appending and enhancing first-party data with observed third-party data is absolutely the right way to go, once an enterprise is ready to do so.

Still, third-party data has a confidence hurdle to overcome. But overcome we must.

If brands rely on first-party data alone, or on second-party data from select marketing partners, and ignore third-party data sources, they are potentially shutting themselves off from cross-device customer identity recognition and resolution, better marketing attribution models, more refined lookalike, persona and acquisition models, customer journey mapping, omni-channel consumer discovery — and even a more complete customer view.

With all this on the line, it’s obvious (to me) that there must be executive buy-in to investigate third-party data, build it in and integrate it with first- and second-party sources. (Of course, first- and second-party data may have data quality issues, too.)

But let’s be clear — such a must is not just a grab-and-go data play. Maybe some brands have been burned on third-party data use. Hence, third-party data suppliers have a must of their own: either prove your quality now, or change your business processes so you can.

“If 2017 is the year of data, 2018 will be the year of data quality,” said Maureen Noonan, a sales executive for the retail channel at LiveRamp, last week at the company’s Ramp Up on the Road event in Philadelphia.

On a Direct Marketing Club of New York webinar two days later, Michelle Said, senior manager of the New Marketing Institute at MediaMath, spoke of the TLC MediaMath applies when evaluating and onboarding third-party data sources. Indeed, much of the Q&A on that webinar homed in on evaluating digital data prior to deployment. She said Data Management Platforms (DMPs) — where data are integrated — and Demand-Side Platforms (DSPs) — where audiences (media) are purchased — might best be merged to increase customer data match rates and improve data quality.

Unfortunately, there’s no industry report card on third-party data sources, nor on the handful of onboarding players. Thus, it is imperative that digital data users adopt a discipline of testing before the buy. If you’re not making time to test, you’re leaving yourself vulnerable to garbage in, garbage out. Right now, the pursuit of quality is driving the data marketplace.

4 Ways to Triple Your Digital Marketing Results

Digital marketing is direct marketing. If you follow these four principles, you’ll triple your digital communications results — and it doesn’t cost you a penny more.

What metrics do you use to define digital marketing success today? Clicks? Traffic? Followers? Leads? Sales? ROI? Notice what these metrics have in common. They all require some action on the part of the target, whether it’s a prospect or a customer. And how do you motivate an action? You use direct response communications. It’s as simple as that. Digital marketing is direct marketing. So why are we still seeing suboptimal digital communications in display, email, SEM, wherever? It’s a tragedy. If you follow these four principles, you’ll triple your digital communications results — and it doesn’t cost you a penny more.

Direct response communications are structured specifically to motivate an immediate response, which is why they are perfect for digital marketing communications. The structure relies on four elements.

1. Add an Offer

The offer is the key motivator that overcomes inertia and stimulates response. A strong offer can improve response rates by 300 percent. It doesn’t necessarily have to do with discounts or deals. In fact, in B2B, the most powerful offer is authoritative, educational information, packaged up in a report, a case study, a chart, a video — something that answers a question or solves a business problem. Make the offer the center of your messaging. Explain why your prospects can’t live without it.

2. Make a Strong Call to Action

The CTA is, in sales terminology, the “close,” where a rep asks for the order. “Click here.” “Download now.” Make it prominent, and make it persuasive. No more bland “More information” buttons. Here’s a handy checklist of 75 CTA options to inspire you.

3. Prepare a Dedicated Landing Page

This is where the real close takes place. Use the landing page to resell the offer, and capture the prospect’s information. Design the form to be filled out easily, asking for as little data as possible. If you already know some of the target’s data elements, as is likely with email communications, then prepopulate the webform. Whatever you do, don’t drive the respondent to your home page.

4. Test and Improve

Continuous split testing is so easy in digital channels, you have no excuse not to take advantage of it. Test your audience segments, your offers, headlines, calls to action, design — everything. And keep testing, for continuous improvement. As Jan Brandt, the digital marketing pioneer who launched AOL practically single-handedly, used to say: “Do more of what works, and less of what doesn’t.”

After these four, there are plenty of other effective direct response principles you can apply. Improve your audience targeting. Use a friendly, personal tone. Add a sense of urgency. Focus on benefits over features. I could go on. But you’ll get 90 percent of the way there with the Big Four principles above. Then sit back and watch your digital marketing response rates soar.

A version of this article appeared in Biznology, the digital marketing blog.

Embrace Failure to Achieve Success

Too many marketers fear failure instead of embracing it. They fear that reporting poor results will be viewed as poor management. Instead, they should be positioning their results as learnings. Knowing what doesn’t work is just as important as knowing what does; yet the fear of failure permeates many corporate cultures, discouraging risk-taking and encouraging the status quo.


There have been many times when I proposed a limited test plan with a small downside only to have it rejected by the client in favor of “the way we’ve always done it.” Following the course that nobody ever got fired for may be the politically safe option, but breakthrough results are never achieved from the status quo. As Theodore Roosevelt said, “The only man who never makes a mistake is the man who never does anything.”

Reporting on the acquisition of Whole Foods by Amazon, The New York Times noted, “While other companies dread making colossal mistakes, Mr. Bezos seems just not to care … That breeds a fiercely experimental culture that is disrupting entertainment, technology and especially retail.” (June 18, 2017) Commenting on Bezos’s style, Farhad Manjoo said in his column State of the Art, “The other thing to know about Mr. Bezos is that he is a committed experimentalist. His main way of deciding what Amazon should do next is to try stuff out, see what works, and do more of that.” (NYTimes, June 19, 2017) Something direct marketers have done for decades.

Learning to embrace failure is an acquired skill. Smith College has instituted a new program called “Failing Well” to destigmatize failure for the high achievers who are admitted to the prestigious school on the basis of their perfect resumes. Smith’s Rachel Simmons says, “What we’re trying to teach is that failure is not a bug of learning, it’s a feature.” (NYTimes, June 25, 2017)

David Ogilvy, a strong proponent of testing and measurement, addresses the importance of embracing failures in the Ogilvy on Advertising chapter entitled “The 18 Miracles of Research.” He relates a story about a client who had invested $600,000 (a large sum in Ogilvy’s day) to develop a new product line. Ogilvy says, “ … our research showed a notable lack of enthusiasm … When I reported this discouraging news to the client I was afraid that, like most executives faced with inconvenient research, he would argue the methodology. I underestimated him. ‘Dry hole,’ said he, and left the meeting.”

Testing and experimentation are easy in the digital marketing environment. Even the best-conceived test plans will produce more failures than successes. Embrace those failures as valuable learnings.

Small Blog, Big Strategy

It’s incredibly tough for even the biggest brands to master content marketing. So what about small blogs? How are they staying relevant today? Microtargeting and interest-based awareness have changed digital strategy and these tactics are now home to small bloggers.


Let’s call “small” any blog with more than five active content contributors and at least a few published posts. Sound like you? Keep reading for more of my take on how to amplify your blog’s online presence. If your site has yet to be born, refer to this easy-to-digest explanation on the first steps of getting a website — securing a domain name.

Develop Reasonable KPIs

No matter how big or small the budget, there are plenty of ways to get your content out there. For example:

  • Be at the top of results when users search for you on Google
  • Maximize reach and awareness of new posts immediately after release
  • Drive and sustain website traffic via Twitter and referrals
  • Focus on what is most important to your business, such as user acquisition, overall awareness and user engagement.

This allows you to divide and conquer with paid search, native advertising, social media and affiliate marketing. Consider this perspective when developing your own KPIs.

Aggregate Your Audience Data

What does your audience like on each channel? What do they care about?

Ask your audience data a lot of questions to help you dive further into who your readers are, how they use the chosen platform and what type of content they respond to most. Now see if you can match your blog’s content to the trends found within your audience data. This can help you understand if you’re offering the right content for your audience.

Think of your analysis as instant market research. Your audience data allows you to truly map out your customer’s journey. Some marketers are taking this concept even further, creating content paths to match their content marketing goals.

Identify a Content Strategy

Once you’ve solidified your goals and target audience, examine your strategy. Nix any initiatives that don’t contribute to your ultimate mission. What is it that you ultimately want your audience to do? The answer to this question should drive your content marketing strategy.

Experiment With Social Tactics

Experiment with targeted content that is engaging and personalized. Be transparent and interesting to your users. Here are a few simple ideas to make this happen:

  • Host a live Q&A panel on Periscope featuring your editorial staff;
  • Let the audience choose the topic of your next blog post via Twitter polling;
  • Find, attend and capture industry events with Instagram Stories.

Depending on your audience and the theme of your blog, there are many ways of standing out to both followers and non-followers, alike. Play with and test different tactics for best results!

Learn, Try, Repeat

The best piece of advice for any small blogger is to learn, try, repeat. Here are three principles for riding the trend waves of your industry:

  • There are tons of sources that can provide you with the training you need to be successful in content marketing. Use them!
  • You can never go wrong with experimentation, but you can definitely go wrong without it. Don’t be hesitant toward failure.
  • Digital changes by the second — and so do the needs of your audience. Remember to periodically optimize content to fit the needs of your users.

Learn, try, repeat: It’s the most effective way for small blogs to sustain authority and relevancy in 2017 (and beyond!).

The Problem With A/B Testing

This week we set up an elaborate A/B test on subject lines. I liked “How 1.75 Billion Mobile Users See Your Website” and my client manager liked “Business Cards are No Longer the First Impression.” We learned long ago not to be a focus group of two, but our testing also proves something else I’ve been saying for years—A/B tests do not stand alone.


For our Mobile Users campaign, we dropped in an actual screenshot of every recipient’s website as viewed on an iPhone 6, because we knew this level of personalization could add a sizeable bump to engagement. It’s one thing to tell a recipient their website looks awful on a mobile device; it’s another thing to show them.

At the end of the campaign, we will have sent under 10,000 emails, but before we get to the balance, we felt it was important to know which of the two subject lines would perform better. All of us want to have the very best chance of success, so this was a necessary step to ensure our subject line would foster the higher open rate.

For our initial test, we sent 600 emails, half with each subject line. One subject line performed better on opens; the other performed better on clicks to the form. That means we now have a new question: Is it better for us to get more people to open and see the message, or is it better to get fewer people to open, but to have accurately set their expectations about what was inside so they would click?

The open rate differed by more than 10 percent, and the CTR by about 2 percent.

Should I stop my analysis here and answer the only question I started with (which subject line should we use), or would it be better to take a look at other factors and try to improve the overall success in any way we can? For me, the problem with many marketers’ A/B tests is that they ask one question, answer it, and then move on. In fact, many email automation systems are set up in precisely this manner: send an A/B test of two subject lines, and whichever performs better, use it to send the balance. What about the open rate and the CTR combined? Isn’t that far more important in this case (and many others)? Let’s take it one step further: what about the open rate, CTR and form completion rate combined? Now we’re on to something.
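
To make that concrete, here is a hypothetical sketch of scoring each subject line on the whole funnel rather than on opens alone; the counts below are placeholders for illustration, not our campaign's actual numbers:

```python
# Hypothetical results of a 600-email split test (300 sends per subject line).
variants = {
    "A": {"sent": 300, "opens": 90, "clicks": 18, "forms": 6},
    "B": {"sent": 300, "opens": 75, "clicks": 21, "forms": 9},
}

for name, counts in variants.items():
    open_rate = counts["opens"] / counts["sent"]
    click_rate = counts["clicks"] / counts["sent"]   # clicks per email sent
    form_rate = counts["forms"] / counts["sent"]     # completed forms per email sent
    print(f"Subject {name}: open {open_rate:.1%}, click {click_rate:.1%}, form {form_rate:.1%}")

# Subject A: open 30.0%, click 6.0%, form 2.0%
# Subject B: open 25.0%, click 7.0%, form 3.0%
# Judged on opens alone, A wins; judged on clicks and completed forms, B wins.
```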

There are many factors at work here: time of day, past engagement, lifecycle and more. The subject line is a good place to start, but I can’t afford to ignore what we’ve gleaned from other campaigns.

This then becomes the hardest part of testing—be that A/B or multivariate—isolating what we’ve actually learned, and that usually means I cannot analyze just this one campaign. It must be an aggregate.

For our campaign, I took our test results and put them into a spreadsheet of 2014 campaign results and started to look for patterns. We’ve all read that Thursday mornings are good (as an example), but does that hold true for my list? Were my open rates affected by time of day, by date, by day, by business type, by B-to-C vs. B-to-B? These are all analytics we track because we’ve found each of these does, in fact, influence open rate.
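
For readers who keep their history in a file rather than a spreadsheet, here is a sketch of that kind of pattern-hunting; the file name and column names are my own hypothetical stand-ins for whatever your records actually contain:

```python
import pandas as pd

# Hypothetical export of past campaign results, one row per campaign.
campaigns = pd.read_csv("campaigns.csv")
campaigns["open_rate"] = campaigns["opens"] / campaigns["delivered"]

# Average open rate sliced by each factor we track.
for factor in ["send_day", "send_hour", "business_type", "b2b_or_b2c"]:
    print(campaigns.groupby(factor)["open_rate"].mean().sort_values(ascending=False))
```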

So, yes, we did learn which of the two subject lines performed better for opens, but what we also learned is that a repeat of the test to another 600 recipients on Tuesday morning instead of Thursday morning resulted in almost exactly opposite performance.

A/B tests can be hard. If they were easy, everyone would do them. Our simple one-time test was not enough information to make decisions about our campaign. It took more testing to either prove or disprove our theories, and it took aggregating the data with other results to paint the full picture.

We did find a winner: an email with a good subject line to get it opened and a good presentation of supporting information inside, leading recipients to a form they actually completed, all sent on the right day, at the right time, from the right sender.

While you’re not privy to all of the data we have, based on the subject lines alone, which do you prefer?

11 Best Practices for Email Acquisition and Engagement

The income generated by your email marketing is directly related to the quality of your email address list. A list filled with highly targeted prospects and customers delivers solid response rates, clickthrough and revenue. Acquiring addresses for the people most likely to respond to your email marketing and sending relevant content should be top priorities for every company.


The best strategies capture email addresses at a variety of locations and use customized messaging to motivate participation in the marketing program. Moving people past the resistance to share their email address is only the first step in a multi-faceted strategy. Every email from the initial “Welcome to our program” to routine promotional messages must speak directly to the recipients or risk triggering opt-out activity.

Overcoming the inertia created by using a tool that consistently generates responses is one of the biggest challenges faced by email marketers. The “if it isn’t broken, why fix it?” thought process prevents email programs from generating even more revenue and building better relationships. The only way to move past this is to implement a continuous improvement policy and test everything.

Continuous improvement begins with best practices. Using the results from tests by others is a good way to ensure that you will not repeat their mistakes. Once the best practices are in place, testing different ways to engage customers and prospects is easier and more effective. Here are some tips to get you started:

  1. Measure Everything: Capturing every piece of data is important because it creates benchmarks for improvement. If the data isn’t immediately convertible to usable information, save it. Hard drives are cheap and trying to regenerate lost data is hard.
  2. Customize Welcome Emails: Subscribers from different sources have different expectations. Create customized emails that recognize the difference and speak directly to the recipients. If your email marketing service provider doesn’t have this capability and changing isn’t an option right now, speak to the persona most likely to become a long-term profitable customer.
  3. Capture Email Addresses at Point of Sale: Offering to email receipts reduces customer resistance to sharing information and provides a second opportunity to encourage program participation when people don’t automatically opt in.
  4. Give People a Tangible Reason for Signing Up to Receive Your Emails: Offering a discount on the next purchase encourages the sign-up and future purchases. If people don’t respond to the discount, test sending a reminder just before the coupon expires. (Note: if you don’t have the ability to identify the people who responded, don’t send the reminder. Doing so tells them that they weren’t recognized when they returned and undermines the relationship.)
  5. Offer People a Sign-up Choice Between Email and Text Messages: When given a choice, people are more likely to choose one than none. It simultaneously grows your email and mobile marketing programs.
  6. Use Pop-ups to Encourage Sign-ups: Pop-ups are the acquisition method that people love to hate. Forget the hate talk and go with the test results because it is also the method that delivers high response rates.
  7. Personalize Everything: Relationships are personal. Sending generic emails will not create loyal customers. Create an email marketing program that is personal and customized for individuals and you’ll get lifelong, highly profitable customers.
  8. Keep Your Data Clean: Email hygiene services verify your addresses and reduce spam risk. A good send reputation keeps the spaminators at bay, improves deliverability, and connects you to people interested in your products and services.
  9. Create Second Chance Offers for People Who Don’t Opt In: Automatically opting people in when they provide their email address for other reasons can reduce deliverability and your send reputation. Use a second chance offer to encourage people who didn’t opt in to change their mind.
  10. Segment Well: Sending the same email to everyone generates results. Creating specialized emails based on people’s behavior and preferences generates much better results. In addition to the immediate response, customized emails make people more likely to open and respond to future messages.
  11. Test Everything: General best practices are simply rules of thumb that provide a starting point for successful email marketing programs. The best way to optimize your program is to test different tactics and use the information to fine-tune future mailings.