How to Find New Customers, Based on Current Customers, With a Targeted Mail List

When you need to acquire new customers, purchasing a targeted mail list is the way to reach them. However, some lists are better than others. We talked about five types of prospecting lists in the last post; now, we will discuss analytics for profiling and modeling lists.

Your mailing list is critical to your mailing results.

The better your list is targeted, the better your response rate will be.

  • Descriptive Analytics: A profile that describes the common attributes of your customers and helps you target demographic lookalikes. The market penetration of each attribute compares your customers to the overall population living in the same geo area with the same attributes, examining each element separately. Basically, you will see who your best customers are and find prospects just like them.
  • Predictive Analytics: A model that finds how two or more groups are similar or dissimilar (for example, buyers vs. non-buyers, or responders vs. non-responders). It then assigns a score that represents a probability to act, based on the interaction of several attributes. That way, you can get a better idea of who buys what, in order to find more people like them.
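
The penetration comparison behind descriptive profiling is simple arithmetic. Here is a minimal sketch in Python (the counts are hypothetical, purely to illustrate the calculation):

```python
def penetration_index(customers_with_attr, total_customers,
                      population_with_attr, total_population):
    """Index > 1.0 means the attribute is over-represented
    among customers relative to the local population."""
    customer_share = customers_with_attr / total_customers
    population_share = population_with_attr / total_population
    return customer_share / population_share

# Hypothetical example: 30% of customers are homeowners,
# vs. 20% of the surrounding population.
index = penetration_index(300, 1000, 20000, 100000)
print(round(index, 2))  # 1.5 -> homeowners are 1.5x over-represented
```

An attribute with a high index is a candidate trait for selecting lookalike prospects.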

So why would you want to try one of these options? You can expect an improved response rate, more effective cross-sell and up-sell opportunities, and the ability to build better loyalty programs, because you understand people better. These processes help you identify prospects who “look like” your best customers.

Profiling allows you to analyze your best customers (B2C or B2B) and find out what makes them different from others in your target market. You can target new prospects who are the most likely to respond, purchase, or renew, based on your customer data. You can gain precise information about your customers, based on the statistical analysis of key activities. Finally, you will understand the lifetime value of your customers, including their probability to respond and purchase, with a highly advanced model.

Predictive modeling is a process that analyzes the past and current activities or behaviors of two groups to improve future results. This is accomplished by comparing the two groups of data: the differences are assessed to identify whether a pattern exists and whether it is likely to repeat itself. Scores can be applied to purchased prospect data, or used to segment client data for marketing.
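
As a toy illustration of that two-group comparison (the attribute rates below are hypothetical, and real models weigh attribute interactions far more carefully), a prospect can be scored by summing the log-odds of each attribute's rate among responders versus non-responders:

```python
import math

# Hypothetical attribute rates observed in each group.
responder_rates = {"owns_home": 0.60, "has_pets": 0.40}
nonresponder_rates = {"owns_home": 0.30, "has_pets": 0.35}

def propensity_score(prospect_attrs):
    """Sum of log-likelihood ratios for the attributes a prospect has.
    Higher score = looks more like a responder."""
    score = 0.0
    for attr in prospect_attrs:
        score += math.log(responder_rates[attr] / nonresponder_rates[attr])
    return score

print(round(propensity_score({"owns_home", "has_pets"}), 3))  # 0.827
```

Sorting a prospect file by such a score is what lets you mail only the top deciles.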

Both provide great opportunities for you to target and reach prospects who are more likely to be interested in what you are selling. This way, your offer resonates with them and compels action. This is another way to increase your ROI, as well as save money. You are mailing only to qualified people, so there are fewer pieces to print and mail. Keep in mind that your customer list is going to get the best response rates, but a highly targeted list like these will have higher response rates than an average purchased list. Are you ready to profile and model your list?


The Value of ‘Old’ Marketing Content Is Finding What’s Evergreen

There is such a thing as having too much marketing content. Here’s your guide to evaluating the content you have to determine what you should keep, what you should update, and what you should delete.

It doesn’t take too terribly long to build up a library of some size if you’re generating articles and other content consistently as part of your content marketing. If you’ve been at it for a while, then you may have more content than you realize, more content than you need, and more content than is good for you.

You’re probably asking yourself, “Is that even possible? Having too much content?” The answer is “yes.” But the real questions you should be asking are, “How much is too much?” and “How do I know what to keep and what to delete?”

Age Is Just a Number In Content Marketing

Your first thought might be the old rule, “first in, first out,” which is an excellent rule of thumb for milk, meat, and other perishables. However, there’s no reason that your oldest content needs to be tossed first. In fact, if it’s still performing, it’s worth keeping.

A quick review of your analytics data will tell you what content is working, old or new. Just be sure you’re looking at your traffic data intelligently. A piece that’s been live for six years is likely to have far more cumulative page views than a 6-month-old piece, simply because it has had more time to accumulate them. Be sure to compare like data periods.
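
A minimal way to compare like periods is to normalize each piece to its own lifetime, e.g., average monthly views (numbers hypothetical):

```python
def monthly_average(total_views, months_live):
    """Lifetime views normalized to a per-month figure."""
    return total_views / months_live

# A 6-year-old piece with 72,000 lifetime views vs.
# a 6-month-old piece with 9,000.
old_piece = monthly_average(72_000, 72)  # 1,000 views/month
new_piece = monthly_average(9_000, 6)    # 1,500 views/month
print(new_piece > old_piece)  # True: the newer piece is outperforming
```

Better still, pull the same recent window (say, the last 90 days) for both pieces from your analytics tool.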

Be sure, as well, to adjust for other factors, like any promotion you may be doing for one piece and not the other. (In this regard, the younger piece may look like the better performer, if you’ve recently featured it in social media posts, email marketing, etc.)

Once you’re sure you’re comparing apples to apples, it’s time to look for context. Is the traffic flow holding steady? Is it seasonal? Can you identify traffic sources for each piece? (Different sources will be of greater or lesser value.)

All of those data points should factor into your decision about keeping or deleting a piece of content, and on how to treat that content, if you do keep it. More on this below.

Engagement Matters, Part I

Knowing how many people are consuming a piece of content is great, but knowing how they’re consuming it and what the content is encouraging them to do is far more valuable.

Gather data points that tell you how frequently visitors take the action you desire. You may need to customize your calls to action (CTAs) to differentiate between actions taken in various places on your site. With that tracking in place, you can identify the pages that create more conversions. Digital marketing lives and dies by conversions, so developing content that converts reliably is critical.
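
A sketch of what that per-page comparison looks like once CTA tracking is in place (page paths and counts are hypothetical):

```python
# Hypothetical per-page event counts from customized CTA tracking.
pages = {
    "/pricing":   {"visits": 4000, "cta_clicks": 220},
    "/blog/tips": {"visits": 9000, "cta_clicks": 180},
}

def conversion_rate(stats):
    """Fraction of visits that resulted in a CTA click."""
    return stats["cta_clicks"] / stats["visits"]

# The raw-traffic winner and the conversion winner can differ.
best = max(pages, key=lambda p: conversion_rate(pages[p]))
print(best, round(conversion_rate(pages[best]) * 100, 2), "%")
```

Note that the page with half the traffic converts at more than double the rate, which is exactly the kind of signal raw page views hide.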

Engagement Matters, Part II

Other engagement metrics matter, as well. Time on page, bounce rate, number of pages visited in the same session, and other metrics can all tell you how deeply your audience is connecting with a particular topic. These are no substitute for CTA engagement, but they are still worth examining as additional evidence for or against an article’s value.

Options Beyond ‘Kill or Keep’

There’s going to be some content that clearly should be kept and some that clearly should be killed. In the middle, you’re likely to find some that could go either way. A few options you have are:

Combining Content

If articles aren’t quite connecting with your audience as you’d like, perhaps combining two or more of them would help?

Refocusing Content

If a mid-pack piece is being outperformed by similar articles, change its focus. You don’t want to keep writing minor variations on the same content targeting the same keywords. Doing so almost always pits you against yourself in competition for top listings on a search engine results page. But a new take on a similar topic is worth exploring for SEO and conversion improvements.

Don’t Fear the Purge

Finally, resist the urge to keep everything. It can be tempting to keep the lid on your desktop trash icon firmly sealed. You worked hard, or invested resources, to generate the content you have. But your business changes, the market changes, and your content library has to change along with them. Purge anything that isn’t relevant to your business goals and is not helping you answer your audience’s most pressing questions.

Machine Learning? I Don’t Think Those Words Mean What You Think They Mean

I find more and more people use the term “machine learning” when they really mean to say “modeling.” I guess that is like calling all types of data activities — with big and small data — “Big Data.” And that’s OK.


Languages are developed to communicate with other human beings more effectively. If most people use the term to include broader meanings than the myopic definition of the words in question, and if there is no trouble understanding each other that way, who cares? I’m not here to defend the purity of the meaning, but to monetize big or small data assets.

The term “Big Data” is not even a thing anymore in most organizations with ample amounts of data, though there are many exceptions. I visit other countries for data and analytics consulting, and those two words still work like “open sesame” in some boardrooms. Why would I blame words for having multiple meanings? The English dictionary is filled with such colloquial examples.

I recently learned that famous magic words “Hocus Pocus” came from the Latin phrase “hoc est corpus,” which means “This is the body (of Christ)” as spoken during Holy Communion in Roman Catholic Churches. So much for the olden-day priests only speaking in Latin to sound holier; ordinary people understood the process as magic — turning a piece of bread into the body of Christ — and started applying the phrase to all kinds of magic tricks.

However, if such transformations of words start causing confusion, we all need to be more specific. Especially when the words are about specific technical procedures (not magic). Going back to my opening statement, what does “machine learning” mean to you?

  • If spoken among data scientists, I guess that could mean a very specific way to describe modeling techniques that include Supervised Learning, Unsupervised Learning, Reinforcement Learning, Deep Learning, or any other types of Neural Net modeling, indicating specific methods to construct models that serve predetermined purposes.
  • If used by decision-makers, I think it could mean that the speaker wants minimal involvement of data scientists or modelers in the end, and automate the model development process as much as possible. As in “Let’s set up Machine Learning to classify all the inbound calls into manageable categories of inquiries,” for instance. In that case, the key point would be “automation.”
  • If used by marketing or sales; well, now, we are talking about a really broad set of meanings. It could mean that the buyers of the service will require minimal human intervention to achieve goals. That the buyer doesn’t even have to think too much (as the toolset would just work). Or, it could mean that it will run faster than existing ways of modeling (or pattern recognition). Or, they meant to say “modeling,” but they somehow thought that it sounded antiquated. Or, it could just mean that “I don’t even know why I said Machine Learning, but I said it because everyone else is saying it” (refer to “Why Buzzwords Suck”).

I recently interviewed a candidate, fresh out of a PhD program, for a data scientist position; his resume was filled with “Machine Learning.” But when we dug a little deeper into the actual projects he had finished for schoolwork or internships, I found that most of his models were, indeed, good old regression models. So I asked why he substituted words like that, and his answer was staggering: he said his graduate school guided him that way.

Why Marketers Need to Know What Words Mean

Now, I’m not even sure whom to blame in a situation like this, where even academia has fallen under the weight of buzzwords. After all, the schools are just trying to help their students get high-paying jobs before the summer is over. I guess, then, the blame is on the hiring managers who recruit candidates based on buzzwords, without necessarily knowing what they should look for in those candidates.

And that is a big problem. This is why even non-technical people must understand the basic meanings of the technical terms they are using; especially when they are hiring employees or procuring outsourcing vendors to perform specific tasks. Otherwise, some poor souls will spend countless hours finishing things that don’t mean anything for the bottom line. In a capitalistic economy, we play with data for only two reasons:

  1. to increase revenue, or
  2. to reduce cost.

If it’s all the same for the bottom line, why should a non-technician care about the “how the job is done” part?

Why It Sucks When Marketers Demand What They Don’t Understand

I’ve been saying that marketers or decision-makers should not be bad patients. Bad patients won’t listen to doctors; and further, they will actually command doctors to prescribe certain medications without testing or validation. I guess that is one way to kill themselves, but what about the poor, unfortunate doctor?

We see that in the data and analytics business all of the time. I met a client who just wanted to have our team build neural net models for him. Why? Why not insist on a random forest method? I think he thought that “neural net” sounded cool. But when I heard out his “business” problems, he definitely needed something different as a solution. He didn’t have the data infrastructure to support any automated solutions; he wanted to know what went on in the modeling process (neural net models are black boxes, by definition); he didn’t have enough data to implement such things at the beginning stage; and the projected gains (from employing models) wouldn’t cover the cost of such an implementation for the first couple of years.

What he needed was a short-term proof of concept, where data structure must be changed to be more “analytics-ready.” (It was far from it.) And the models should be built by human analysts, so that everyone would learn more about the data and methodology along the way.

Imagine a junior analyst fresh out of school, whose resume is filled with buzzwords, meeting with a client like that. He wouldn’t fight back, but would take the order verbatim and build neural net models, whether they helped in achieving the business goals or not. Then the procurer of the service would still be blaming the concept of machine learning itself. Because bad patients will never blame themselves.

Even advanced data scientists sometimes lose the battle with clients who insist on implementing Machine Learning when the solution is something else. And such clients are generally the ones who want to know every little detail, including how the models are constructed. I’ve seen data scientists who’d implemented machine learning algorithms (for practical reasons, such as automation and speed gain), and reverse-engineered the models, using traditional regression techniques, only to showcase what variables were driving the results.

One can say that such is the virtue of a senior-level data scientist. But what if the analyst is very green? Actually, some decision-makers may like that, as a more junior-level person won’t fight back too hard. Only after a project goes south will those “order takers” be blamed (as in “those analysts didn’t know what they were doing”).


Data and analytics businesses will continually evolve, but the math and the human factors won’t change much. What will change, however, is that we will have fewer and fewer middlemen between the decision-makers (who are not necessarily well-versed in data and analytics) and human analysts or machines (who are not necessarily well-versed in sales or marketing). And it will all be in the name of automation, or more specifically, Machine Learning or AI.

In that future, the person who orders the machine around — ready or not — will be responsible for bad results and ineffective implementations. That means everyone needs to be more logical. Maybe not as much as a Vulcan, but somewhere between a hardcore coder and a touchy-feely marketer. And they must be more aware of the capabilities and limitations of technologies and techniques; and, more importantly, they should not blindly trust machine-based solutions.

The scary part is that those who say things like “Just automate the whole thing with AI, somehow” will be the first in line to be replaced by the machines. That future is not far away.

How Marketers Can Throw Away Data, Without Regrets

Yes, data is an asset. But not if the data doesn’t generate any value. (There is no sentimental value to data, unless we are talking about building a museum of old data.) So here’s how to throw away data.

Last month, I talked about data hoarders (refer to “Don’t Be a Data Hoarder”). This time, let me share some ideas about how to throw away data.

I heard about people who specialize in cleaning other people’s closets and storage spaces. Looking at the result — turning a hoarder’s house into a presentable living quarters — I am certain that they have their own set of rules and methodologies in deciding what to throw out, what goes together, and how to organize items that are to be kept.

I recently had a relatable experience, as I sold a house and moved to a smaller place, all in the name of age-appropriate downsizing. We lived in the old home for 22 years, raising two children. We thought that our kids took much of their stuff when they moved out, but as you may have guessed already, no, we still had so much to sort through. After all, we are talking about the accumulation of possessions by four individuals over 22 long years. Enough to invoke a philosophical question: “Why do humans gather so much stuff during their short lifespans?” Maybe we all carry a bit of the hoarder gene after all. Or we’re just too lazy to sort things out on a regular basis.

My rule was rather simple: If I haven’t touched an item for more than three years (two years for apparel), give it away or throw it out. One exception was for the things with high sentimental value; which, unfortunately, could lead into hoarding behavior all over again (as in “Oh, I can’t possibly throw out this ‘Best Daddy in the World’ mug, though it looks totally hideous.”). So, when I was in doubt, I chucked it.

But after all of this, I may have to move to an even smaller place to be able to claim a minimalist lifestyle. Or should I just hire a cleanup specialist? One thing is for sure though; the cleanup job should be done in phases.

Useless junk — i.e., things that generate no monetary or sentimental value — is a liability. Yes, data is an asset. But not if the data doesn’t generate any value. (There is no sentimental value to data, unless we are talking about building a museum of old data.)

So, how do we really clean the house? I’ve seen some harsh methods like “If the data is more than three years old, just dump it.” Unless the business model has gone through some drastic changes rendering the past data completely useless, I strongly recommend against such a crude tactic. If trend analysis or a churn prediction model is in the plan, you will definitely regret throwing away data just because they are old. Then again, as I wrote last month, no one should keep every piece of data since the beginning of time, either.

Like any other data-related activities, the cleanup job starts with goal-setting, too. How will you know what to keep, if you don’t even know what you are about to do? If you “do” know what is on the horizon, then follow your own plan. If you don’t, the No. 1 step would be a companywide Need-Analysis, as different types of data are required for different tasks.

The Process of Ridding Yourself of Data

First, ask the users and analysts:

  • What is in the marketing plan?
  • What type of predictions would be required for such marketing goals? Be as specific as possible:
    • Forecasting and Time-Series Analysis — You will need to keep some “old” data for sure for these.
    • Product Affinity Models for Cross-sell/Upsell — You must keep “who bought what, for how much, when, and through what channel” types of data.
    • Attribution Analysis and Response Models — This type of analytics requires past promotion and response history data for at least a few calendar years.
    • Product Development and Planning — You would need SKU-level transaction data, but not from the beginning of time.
    • Etc.
  • What do you have? Do the full inventory and categorize them by data types, as you may have much more than you thought. Some examples are:
    • PII (Personally Identifiable Information): Name, Address, Email, Phone Number, Various IDs, etc. These are valuable connectors to other data sources, such as Geo/Demographic Data.
    • Order/Transaction Data: Transaction Date, Amount, Payment Methods
    • Item/SKU-Level Data: Products, Price, Units
    • Promotion/Response History: Source, Channel, Offer, Creative, Drop/Wave, etc.
    • Life-to-Date/Past ‘X’ Months Summary Data: Not as good as detailed, event-level data, but summary data may be enough for trend analysis or forecasting.
    • Customer Status Flags: Active, Dormant, Delinquent, Canceled
    • Surveys/Product Registration: Attitudinal and Lifestyle Data
    • Customer Communication History Data: Call-center and web interaction data
    • Online Behavior: Open, Click-through, Page views, etc.
    • Social Media: Sentiment/Intentions
    • Etc.
  • What kind of data did you buy? Surprisingly large amounts of data are acquired from third-party data sources, and kept around indefinitely.
  • Where are they? On what platform, and how are they stored?
  • Who is accessing them? Through what channels and platforms? Via what methods or software? Search for them, as you may uncover data users in unexpected places. You do not want to throw things out without asking them.
  • Who is updating them? Data that are not regularly updated are most likely to be junk.
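
The need-analysis above can be sketched as a simple mapping from planned analyses to required data categories; the union of the requirements tells you what must survive the cleanup (the category names here are illustrative, not an exhaustive taxonomy):

```python
# Hypothetical mapping of planned analyses to the data they require.
needs = {
    "forecasting":       {"order_history", "summary_data"},
    "cross_sell_models": {"order_history", "sku_detail", "pii"},
    "response_models":   {"promotion_history", "response_history", "pii"},
}

# Whatever the marketing plan calls for this year.
planned = ["forecasting", "response_models"]

# Union of all required categories: this is the "keep" list.
keep = set().union(*(needs[task] for task in planned))
print(sorted(keep))
```

Anything not in the `keep` set is a candidate for tucking away rather than keeping in the front-line data flow.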

Taking Stock

Now, I’m not suggesting actually “deleting” data on a source level in the age of cheap storage. All I am saying is that not all data points are equally important, and some data can be easily tucked away. In short, if data don’t fit your goals, don’t bring them out to the front.

Essentially, this is the first step of the data refinement process. The emergence of the Data Lake concept is rooted here. Big Data was too big, so users wanted to put the more useful data in more easily accessible places. Now, the trouble with the Data Lake is that the lake water is still not drinkable, requiring further refinement. However, just as I admitted that I may have to move again to clean out my stuff further, the cleaning process should be done in phases, and the Data Lake may well be the first station.

In contrast, the Analytics Sandbox that I often discussed in this series would be more of a data haven for analysts, where every variable is cleaned, standardized, categorized, consolidated, and summarized for advanced analytics and targeting (refer to “Chicken or the Egg? Data or Analytics?” and “It’s All about Ranking”). Basically, it’s data on silver platters for professional analysts — humans or machines.

At the end of such data refinement processes, the end-users will see data in the form of “answers to questions.” As in, scores that describe targets in a concise manner, like “Likelihood of being an early adopter,” or “Likelihood of being a bargain-seeker.” To get to that stage, useful data must flow through the pipeline constantly and smoothly. But not all data are required to do that (refer to “Data Must Flow, But Not All of Them”).

For the folks who just want to cut to the chase, allow me to share a cheat sheet.

Disclaimer: You should really plan to do some serious need analysis to select and purge data from your value chain. Nonetheless, you may be able to kick-start a majority of customer-related analytics, if you start with this basic list.

Because different business models call for a different data menu, I divided the list by major industry types. If your industry is not listed here, use your imagination along with a need-analysis.

Cheat Sheet

Merchandising: Most retailers would fall into this category. Basically, you provide products and services upon payment.

  • Who: Customer ID / PII
  • What: Product SKU / Category
  • When: Purchase Date
  • How Much: Total Paid, Net Price, Discount/Coupon, Tax, Shipping, Return
  • Channel/Device: Store, Web, App, etc.
  • Payment Method

Subscription: This business model is coming back with full force, as a new generation of shoppers prefer subscription over ownership. It gets a little more complicated, as shipment/delivery and payment may follow different cycles.

  • Who: Subscriber ID/PII
  • Brand/Title/Property
  • Dates: First Subscription, Renewal, Payment, Delinquent, Cancelation, Reactivation, etc.
  • Paid Amounts by Pay Period
  • Number of Payments/Turns
  • Payment Method
  • Auto Payment Status
  • Subscription Status
  • Number of Renewals
  • Subscription Terms
  • Acquisition Channel/Device
  • Acquisition Source

Hospitality: Most hotels and travel services fall under this category. This is even more complicated than the subscription model, as the booking date, the travel date, and the gap between them all play important parts in prediction and personalization.

  • Who: Guest ID / PII
  • Brand/Property
  • Region
  • Booking Site/Source
  • Transaction Channel/Device
  • Booking Date/Time/Day of Week
  • Travel (Arrival) Date/Time
  • Travel Duration
  • Transaction Amount: Total Paid, Net Price, Discount, Coupon, Fees, Taxes
  • Number of Rooms/Parties
  • Room Class/Price Band
  • Payment Method
  • Corporate Discount Code
  • Special Requests

Promotion Data: On top of these basic lists of behavioral data, you would need promotion history to get into the “what worked” part of analytics, leading to real response models.

  • Promotion Channel
  • Source of Data/List
  • Offer Type
  • Creative Details
  • Segment/Model (for selection/targeting)
  • Drop/Contact Date

Summing It All Up

I am certain that you have much more data, and would need more data categories than the ones on this list. For one, promotion data would be much more complicated if you gathered all types of touch data from Google tags, plus your own mail and email promotion history from multiple vendors. Like I said, this is a cheat sheet, and at some point you’d have to go deeper.

Plus, you will still have to agonize over how far back in time you would have to go for a proper data inventory. That really depends on your business, as the data cycle for big-ticket items like home furniture or automobiles is far longer than for consumables and budget-priced items.

When in doubt, start asking your analysts. If they are not sure — i.e., insisting that they must have “everything, all the time”— then call for outside help. Knowing what to keep, based on business objectives, is the first step of building an analytics roadmap, anyway.

No matter how overwhelming this cleanup job may seem, it is something that most organizations must go through — at some point. Otherwise, your own IT department may decide to throw away “old” data, unilaterally. That is more like a foreclosure situation, and you won’t even be able to finish necessary data summary work before some critical data are gone. So, plan for streamlining the data flow like you just sold a house and must move out by a certain date. Happy cleaning, and don’t forget to whistle while you work.

4 Steps to Improve Conversion Using an Analytical Approach

Improving on-site conversions and increasing sales has been and always will be a top priority for smart businesses. However, WordStream found from first-hand analysis that the average conversion rate for a business website is a measly 2.35%. Obviously, having more sales is the key to long-term success, but finding effective ways to optimize the factors that impact the conversion rate is often very tricky, for several reasons.


First of all, there are many factors that influence conversions and purchase decisions. One little bump in the road can bring the buyer’s journey to a screeching halt. Secondly, it is difficult to determine which factors exactly are hurting or helping, making conversion rate optimization (CRO) a seemingly impossible task for many businesses.

This is why an analytical approach is necessary for true conversion rate optimization. Data is a crucial and necessary ingredient for any smart business decision. And thankfully, accurate data is more accessible now than ever with evolving technology and tools.

Here’s how to improve conversions with an analytical approach that consists of four simple steps…

1. Identify the Gaps

You cannot fix what you do not know is broken. On the other hand, it is extremely wasteful and counterproductive to start from scratch when some elements of your website are actually working just fine. Therefore, you need to find the weakest links and address them first.

Google Analytics is actually a great tool in this regard. There are plenty of insights that it offers, which can shed light on the details of your website that are affecting conversion rates. For example, you may want to start off with the obvious comparisons, like desktop vs. mobile conversions.

It is important to note that the global average conversion rate for desktop devices is nearly 4%, while the rate for mobile is just under 2%, so some divergence between the two is expected. However, if your mobile conversion rates are significantly lower than that, it could be a sign that the UX is not optimized properly, or is even interfering with the customer experience.
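
Using that roughly two-to-one desktop-to-mobile baseline, a quick check can flag whether your own mobile gap is wider than expected (the site numbers below are hypothetical):

```python
def rate(conversions, sessions):
    """Simple conversion rate: conversions per session."""
    return conversions / sessions

desktop = rate(380, 10_000)  # 3.8%
mobile = rate(90, 12_000)    # 0.75%

# Global baseline: mobile converts at roughly half the desktop rate.
expected_mobile = desktop * 0.5
gap_flag = mobile < expected_mobile
print(gap_flag)  # True -> mobile UX likely deserves a closer look
```

When the flag trips, segment further (by landing page, OS, connection speed) before redesigning anything.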


2. Start Simple and Work Your Way Up

Boosting your micro, mini and macro conversions doesn’t require a complex strategy or an overwhelming overhaul of your existing marketing campaigns. Instead, look for the simplest changes that will have an impact on lead qualification, sales complexity, or purchase timelines.

At its core, your conversion rate depends greatly on the customer experience. Providing a remarkable CX starts early on in the targeting process. With the right brand messaging and martech implementation, you can improve CX and influence your conversion rates.

For example, your promotions and advertising should be contextual and timely. This, in turn, depends on how well you’ve carried out keyword research and whether your content matches trends in your niche.

Ask your marketing team key questions about the basics of your strategies. For example, how long has it been since you defined your core audience and analyzed data to determine the demographics of your customers? Things change quickly over time and you need to adapt to changes in consumer behavior on a regular basis to ensure effective targeting of each segment.


Often, instituting online conversions as an organizational process or operational function requires a top-down approach. In order to know and meet industry standards and benchmarks, and find meaningful correlations in your business, you may want to restructure your C-suite to handle Big Data and its implications for marketing and customer service. Many modern companies are now hiring CDOs (chief data officers) and CCOs (chief customer experience officers) to gather insights from analytics and deliver better customer service. Bringing on experts in these fields can do wonders for your CRO strategy.

3. Optimize Hot Points

In order to truly optimize your conversion rates, you need to understand how customers are interacting with your website. Using heat maps is a great way to understand the general path that visitors follow on your website. Website heat maps track mouse movements, click rates, and scrolling speeds, and use color-coded overlays to identify the parts or elements where the most action is occurring on your web pages.


Using insights from heat maps, you can influence the user’s course of action by better positioning key elements (and tweaking their copy) that help boost conversions.

By placing CTA buttons in the areas where their eyes are naturally drawn, you can increase on-page engagement. You can also use this information to guide product displays, optimize content placement, and just create a more appealing layout that is designed to move customers through the sales funnel.
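As a rough illustration of what heat-map tooling does under the hood, raw click logs can be binned into grid cells, and the hottest cells suggest where attention concentrates. This is a minimal Python sketch with made-up coordinates and cell size, not a commercial heat-mapping product:

```python
from collections import Counter

def click_heatmap(clicks, cell_size=50):
    """Aggregate raw (x, y) click coordinates into a grid of cell
    counts; the hottest cells suggest where to place key elements."""
    grid = Counter()
    for x, y in clicks:
        grid[(x // cell_size, y // cell_size)] += 1
    return grid

def hottest_cells(grid, n=3):
    """Return the n most-clicked grid cells, hottest first."""
    return [cell for cell, _ in grid.most_common(n)]

# Illustrative data: clicks clustered around a button near (120, 480)
clicks = [(118, 475), (125, 482), (122, 490), (300, 60), (119, 478)]
grid = click_heatmap(clicks)
print(hottest_cells(grid, 1))  # → [(2, 9)]
```

A real tool layers these counts as a color overlay on the page; the aggregation logic is the same idea.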

A healthcare publishing media site used heat map testing to optimize the content placement on its website and increase signups for its online courses. Testing revealed that the existing layout had very low engagement and that the placement of its CTAs was less than ideal. After testing new designs and placing important CTA buttons along the natural reading flow, the site increased its conversions by nearly 16%.

4. Monitor, Adjust, and Improve

An analytical approach to CRO doesn’t just stop with identifying weaknesses and providing solutions. As you make changes, it is imperative that you continue to measure the impact that these changes are having on your conversion rates for an extended period of time and continue to test out alternative tactics. This is the only way to make strategic changes that deliver long-lasting results.

When it comes to testing various strategies, the traditional approach has been to use A/B testing for layouts and copy. While this system has certainly been a staple for marketing teams in the past, it definitely has its faults. However, machine-learning and AI-enabled technology now allow businesses to conduct multivariate testing and also implement the results with utmost accuracy.

For example, money transfer startup Monito used AI-based testing to optimize conversions on a landing page that showed the best currency conversion rates. First, they used machine learning analysis to test out the efficacy of hypothetical designs of their lead box and estimate sign-up behavior through predictive heat maps. They then ran 12 design variants simultaneously, while AI measured and rated visitor interactions. The best design ultimately led to a 50% rise in signups.


You need to continually test and monitor your changes to be sure that your site is truly optimized at all times. Your target audiences and their preferences are both constantly changing, so what may have worked well a few months ago may no longer be the best option.

In Conclusion

Using an analytical approach to CRO is the most reliable way to sustain success. By making the most of modern tools to collect behavioral data, making changes to your website’s design, and constantly monitoring the outcomes, you can expect significant, ongoing increases in conversions. Good luck!

Don’t Be a Data Hoarder — Why Data Governance Matters in Marketing


They say data is an asset. I say it, too. If collected data are wielded properly, they can definitely lead to financial gains, either through a revenue increase or cost reduction. But that doesn’t mean that possessing large amounts of data guarantees large dollar figures for the collector. Data governance matters, because the operative words in my statement are “wielded properly,” as I have been emphasizing for years through this column.

Plus, collecting data also comes with risks. When sensitive data go into the wrong hands, the data collector often bears a direct financial burden. In some countries, an assumed guardian of sensitive data may face legal charges for mishandling it. Even in the United States, known as the “freest” country for businesses when it comes to data usage, a data breach or clear abuse of data can lead to a publicity nightmare for the organization; or worse, large legal settlements after long and costly litigation. Even in the most innocuous cases, mistreatment of sensitive data may seriously damage the brand image.

The phrase is not even cool in the business community anymore, but “Big Data” worked like a magic word only a few years ago. In my opinion, that word “big” in Big Data misled many organizations and decision-makers. It basically gave a wrong notion that “big” is indeed “good” in the data business.

What is “good,” in a pure business sense? Simply, more money. What was the popular definition of Big Data back then? Three Vs, as in volume, velocity and variety. So, if varieties of data in large volumes move around really fast, it will automatically be good for businesses? We know the answer by now, that a large amount of unstructured, unorganized and unrefined data could just be a burden to the holder, not to mention the security concerns listed earlier.

Unfortunately, with the popularity of Big Data and emergence of cloud computing, many organizations started to hoard data with a hope that collected data would turn into gold one day. Here, I am saying “hoarding” with all of the negative connotations that come with the word.

Hoarders are the people who are not able to throw away anything, even garbage. Data hoarders are the same way. Most datasets are huge because the collector does not know what to throw out. If you ask any hoarder why he keeps so many items in the house, the most common answer would be “because you never know when you need them.” Data hoarders keep every piece of data indefinitely for the same reason.

Only Keep Useful Data

But if you are playing with data for business purposes, you should know what pieces of data are useful for decision-making. The sponsor of any data activity must have clear objectives to begin with. Analysts would then find out what kind of data are necessary to meet those goals, through various statistical analyses and cumulative knowledge.

Actually, good analysts do know that not all data are created equal, and some are more useful than others. Why do you think that the notion of a Data Lake became popular following the Big Data hype? Further, I have been emphasizing the importance of an even more concise data environment. (I call it an “Analytics Sandbox.”) Because the lake water in the Data Lake is still not drinkable. Data must get smaller through data refinement and analytics to be beneficial for decision-makers (refer to “Big Data Must Get Smaller”).

Nonetheless, organizations continue to hoard data, because no one wants to be responsible for purging data that may be useful someday. Government agencies may have some good reasons to maintain large amounts of data, because the cost of losing or misplacing data about some terrorist activities is too high. Even in that case, however, we should collectively be concerned if the most sensitive data about us — such as our biometrics data — reside in some government agency’s server somewhere, without clear and immediate purposes. In cities like London or Paris, cameras are on every street corner, linked to facial recognition algorithms. But we tolerate that because the benefit outweighs the risk (so we think). But that doesn’t mean that we don’t need to be concerned with data breach or abuse.

Hoarding Data Gives Brands the Temptation to Be Creepy

If the data are collected by businesses for their financial gains, then the subjects of such data collection (i.e., consumers) should question who gave them the right to collect data about every breath we take, every move we make and every claim we stake. It is one thing to retain data about mutual transactions, but it is quite another to collect data on our movement or whereabouts, unilaterally. In other words, it is one thing to be remembered (for better service and recommendation in the future), but it is another to be stalked (remember “Every Breath You Take” is a song about a stalker).

Have you heard a story about a stalker who successfully courted the subject as result of stalking? Why do marketers think that they will sell more of their products by stalking their customers and prospects? Since when did being totally creepy – as in “I know where you are and what you’re doing right now” – become an acceptable marketing tactic? (Refer to “Don’t Do It Just Because You Can.”)

In fact, even if you do possess such data, in the interest of “not” being creepy, you must make your message more innocuous. For example, don’t act like you are offering an item because you “know” that the target looked around similar items recently. That kind of creepy approach may work once in a while, but let’s not call that a good sales tactic.

Instead, sellers should make gentle nudges. Don’t say “I know you are looking for this particular skin care item.” The response to that would be “Who the hell are you, and how do you know that?” Instead, do say “Would you be interested in our new product for people with sensitive skin?” The desirable response would be “Hey, I was just looking for something like that!”

The difference between a creepy stalking and a gentle nudging is huge, from the receiving end.

Through many articles about personalization, I have been emphasizing the use of model-based personas, as they pack so much information in the form of answers to questions and cover the gaps of missing data (as we’d never know everything about everyone). If I may add one more benefit of modeling, it converts data into probabilities. Raw data is about “I know she is looking for a particular high-end skin care item,” where coverage of such data is seriously limited, anyway. Conversely, model scores are about “Her score for high-end beauty products is 8 on a 10-point scale,” even if we may not have concrete data about that specific interest.

Now, users who only have access to the model score — which is “dull” information, in comparison to “sharp” data about some verified behavior — would be less tempted to say “Oh, I know you did this.” Even for non-geeky types, the difference between “is” and “likely to be” is vast.
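The idea of converting sharp data into a dull probability score can be sketched with a simple logistic model. Everything below — the feature list, the weights, and the bias — is purely hypothetical for illustration, not a real fitted model:

```python
import math

def propensity_score(features, weights, bias=0.0):
    """Logistic model: turn raw attributes into a 0-10 'likely to be'
    score, so end users never see the sharp behavioral data directly.
    The weights are illustrative, not from a real fitted model."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    prob = 1.0 / (1.0 + math.exp(-z))   # probability-to-act
    return round(prob * 10)             # dull 0-10 score for end users

# Hypothetical features: [recent beauty purchases, scaled order value, email opens]
weights = [0.8, 0.5, 0.3]
score = propensity_score([3, 1.2, 4], weights, bias=-2.0)
print(score)  # → 9, i.e., "high-end beauty products: 9 out of 10"
```

The marketer sees only the “likely to be” score, which is exactly what makes the resulting message a nudge rather than a stalk.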

If converting sharp data into innocuous probability scores through modeling is too much for you to start with, then at least categorize the data, and expose data points to users that way. Yes, we are living in the world of SKU-level product suggestions (like Amazon’s), but as a consumer, have you ever “liked” such blunt suggestions, anyway? Marketers do it because such personalization does better than doing nothing at all, but the practice is hardly ideal for many reasons (being creepy is one; refer to “Personalization Is About the Person”).

The saddest part in all this is that most marketers don’t even know how to fully utilize what they collected. I’ve seen too many organizations that are still stuck with using a few popular data variables repeatedly, while hoarding data indiscriminately. Why risk all of those privacy and security concerns, not to mention the data maintenance cost, if that is the case?

Have a Goal for All of That Data

If analytics is part of the process, then the analysts will tell you with conviction, that you don’t need all those data points for certain types of prediction. For instance, why risk losing a bunch of credit card numbers, when the credit card type or payment method is all you need to predict responses and propensities on a customer level?

Of course, the organization must first decide what types of models and predictions are necessary to meet their goals. But that is the beginning of the whole analytics game, anyway. Analytics is not about answering the wishful thinking of data hoarders; it should be a goal-oriented activity, with carefully selected and refined data for clear purposes.

A goal-oriented mindset is even more important in the age of machine learning and automation. Because we should never automate bad behaviors. Imagine a powerful marketing automation engine in the hands of data hoarders. Forget about organizational inefficiency. As a consumer, don’t you get a chill down your spine just imagining how creepy the outcome would be? Well, maybe we don’t really have to imagine it, as we all get bombarded with ineffective and not-so-personal offers every day.


So, marketers, have clear purposes in data activities, and do not become mindless data hoarders. If you do possess data, wield them properly with analytics. And while at it, purge pieces of data that do not fit your goals. That “you never know” attitude really doesn’t help anyone. And you are supposed to know your own goals and what data and methodologies will get you there.

Automation — With a Little Help From Good Machines


We should be mindful when dropping buzzwords (refer to “Why Buzzwords Suck”). As more and more people jump on the bandwagon of a buzzword, it tends to gain magical power. Eventually, some may even believe that buying into a “word” will solve all their problems.

But does it ever work out that way? Did anyone make a fortune buying into the Big Data hype yet? I know some companies did; but, ironically, the winners do not even utter such words. I’ve never seen a news release from Google or Amazon saying they are investing in “Big Data.” For them, playing with large amounts of data has been just part of their businesses all along.

Now the new buzz is about AI, machine learning and automation, in general; and it will be a little different from buzzwords from the past. Whether we like it or not, that is the direction that we are already headed in the world where each decision will be increasingly more dependent on deterministic algorithms.

Some even claim that human behaviors are just algorithmic responses developed over the past 70,000 years or so. Now, armed with data that we are casually scattering around, machine-based algorithms already outperform human brains in most areas, and such evolution will continue until most humans become largely irrelevant in terms of economic value, they say. Not that it would happen overnight, but the next generation may look at our archaic way of doing things the way we look at our ancestors who were without computers.

First, the Marketing Case for AI

If such is our fate, why are contemporary humans so willingly jumping onto this automation bandwagon where machines will make decisions for us? Because they are smarter than average humans? What does “smart” even mean when we are talking about machines? I think people generally mean to say that machines remember details better than us, and calculate a complex series of algorithms faster and more accurately than us.

Some may say that humans with experiences are wiser with visions to see through things that are not seemingly related. But I dare to say that I’ve seen machines from decades ago finding patterns that humans would never find on their own. When machines start learning without our coaching or supervision — the very definition of AI — at a continuously increasing rate, no, we won’t be able to claim that we are wiser than machines, either. In the near future, if not already.

So, before we casually say that AI-based automation is the future of marketing, let’s ask ourselves why we are so eager to give more power to machines. For what purpose?

The answer to that philosophical question in the business world is rather simple; decision-makers are jumping onto the automation bandwagon to save money. Period.

Specifically, by reducing the number of people who perform tasks that machines can do. As a bonus, AI saves time by performing the tasks faster than ever. In some cases — mostly, for small operations — machines will perform duties that have been neglected due to high labor costs, but even in such situations, automation will not be considered a job-creating force.

Making the Marketing Case for Humans Using Data

Some may ask why I am stating the obvious here. My intention is to emphasize that automation, all by itself, doesn’t have the magic power to reveal new secrets, as the technology is primarily a replacement for human labor. If the results of machine-based analytics look new to you, it’s because humans in your organization never looked at the data the same way before, not because it was an impossible task to begin with. And that is a good thing, as in that case we may be talking about using machine power to do the things that you never had human resources for. But in most cases, automation is about automating things that people already know how to do, in the name of time and cost savings.

Like any other data or analytics endeavors, we must embark on marketing automation projects with clear purposes. What would be the expected outcome? What are you trying to achieve? For what types of tasks? What parts of the process are we automating? In what sequence?

Just remember that anyone who would say “just automate everything” is the type of person who would be replaced by machines first.

At the end of that automation rainbow, there lie far fewer people employed for given tasks, and only the logical ones who see through the machines would remain relevant in the new world.

Nonetheless, providing purposes for machines is still a uniquely human function, for now. And project goals would look like those of any other tasks, if we come back to the world of marketing here. Examples are:

  • Consolidate unorganized free-form data into intelligent information — for further analyses, or for “more” automation of related tasks. For instance, there are thousands of reasons why consumers call customer service lines. Machines can categorically sort those inquiries, so that finding proper answers to them — the very next logical step — can also be automated. Or, at least, make the job easier for the operator on the call (for now). Deciphering image files would be another example, as there has been no serious effort to classify them with sheer manpower at a large scale. But then again, is it really impossible for humans to classify large numbers of images? How about crowdsourcing? Or letting an authoritarian government force a stadium full of North Koreans to do it manually? We’d use machines because it would be just cheaper and faster to do it with machine learning. But who do you think corrected the wrong categorizations done by machines to make them better?
  • Find the next, best product for the buyer. This one is quite a popular task for machines, but even a simple “If you bought this, you would like that, too” type of product recommendation would work far better if input data (i.e., product descriptions and product categories) were well-organized — by machines. Machines work better in steps, too.
  • Predict responsiveness to channel promotions and the future value of a customer. These are age-old tasks for analytics teams, but with sets of usable data, machines can update algorithms and apply scores in real time, as new information enters the system. Call that AI, if the algorithms are updated automatically, all on their own. Actually, this would be easier for a machine to pick up than fixing messy data. Not that machines will know the difference between easy and difficult, but I’m talking in terms of ease of delegation, from our point of view.
  • Then ultimately, personalize every interaction with every customer through every touch channel. I guess that would be the new frontier for marketers, as approaching personalization on such massive scale can’t be done without some help from good machines. But I still stand by my argument that each component of personalization efforts is something that we know how to do (refer to “Key Elements of Complete Personalization”). By performing each step much faster with machines, though, we can soon reach that ultimate level of personalization through consolidation of services and tasks. And the grand design of such a process will be set up by humans — at least initially.
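The “if you bought this, you would like that” recommendation in the second bullet can be approximated with simple co-purchase counting — which is exactly why it works better when the input product data is well-organized first. A minimal sketch, with a made-up order history:

```python
from collections import Counter
from itertools import combinations

def co_purchase_counts(orders):
    """Count how often each pair of products appears in the same order."""
    pairs = Counter()
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(product, pairs, n=2):
    """'If you bought this, you may like that': top co-purchased items."""
    scores = Counter()
    for (a, b), count in pairs.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(n)]

# Hypothetical order history
orders = [
    ["cleanser", "toner", "serum"],
    ["cleanser", "serum"],
    ["toner", "sunscreen"],
]
pairs = co_purchase_counts(orders)
print(recommend("cleanser", pairs))  # → ['serum', 'toner']
```

If the product descriptions and categories feeding a system like this are messy, the pair counts are messy too — hence the point above about machines working better in steps.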

This Human’s Final Thoughts on AI

These are just some examples in marketing.

If we dive into the operational side, there will be an even richer list of candidates for automation.

In any case, how do marketers stay a step ahead of machines, and remain commanders of them?

Ironically, we must be as logical as a Vulcan to control them effectively. Machines do not understand illogical commands, and will ignore them without any prejudice (but it would “feel” like disrespect to us).

Teaching Humans to Automate

I heard that some overzealous parents started teaching computer programming to 4- or 5-year-old children, in addition to foreign language and piano lessons. That sounds all Kool & the Gang to me, but I wondered how they would teach such young kids how to code.

Obviously, they wouldn’t teach them JavaScript or Python from Day 1. Instead, they first teach the kids how to break down simple tasks into smaller steps. For example, if I ask you to make a grilled cheese sandwich, you — as a human being — will go at it with minimal instruction. Try to order an imaginary machine to do the same. For the machine’s sake, it won’t even know what a grilled cheese sandwich is, or understand why carbon-based lifeforms (especially gluttonous humans) must consume such large quantities of organic materials on a regular basis.

Teaching Machines to Human

If you try it, you will find that the task of writing a spec for a machine is surprisingly tedious.

Just for a little grilled cheese sandwich, you have to:

  • instruct it on how to get to the breadbox,
  • how to open it,
  • how many slices of bread should be taken out,
  • how to take them out without flattening them (applying the right amount of pressure),
  • how to open the refrigerator,
  • how to locate butter and cheese in the mix of many food items,
  • how to peel off two slices of cheese without tearing them,
  • how to ignite a stove burner,
  • how to find a suitable pan (try to explain “suitable” in terms of measurements and shape),
  • how to preheat the pan to a designated temperature (who’d design and develop the heat sensor?),
  • how to melt butter on the pan without burning it,
  • how to constantly measure and monitor the temperature,
  • how to judge the right degree of “brown” color of grilled cheese,
  • etc., etc.

If you feel sick reading all of this, well, I didn’t even get to the part about serving the damn sandwich on a nice plate yet.
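The list above can be caricatured in code. A toy sketch of the decomposition idea, where every step name is an assumption standing in for yet more decomposition, not a real robotics or automation API:

```python
# Toy sketch: one "known" task broken into explicit, machine-sized
# steps. The step names and their granularity are illustrative.
def grilled_cheese_plan():
    """Return the ordered steps a machine would need spelled out."""
    return [
        "walk_to_breadbox", "open_breadbox", "take_bread_slices:2",
        "open_refrigerator", "locate:butter", "locate:cheese",
        "peel_cheese_slices:2", "ignite_burner", "select_pan",
        "preheat_pan:medium", "melt_butter:no_burning",
        "assemble_sandwich", "grill_until:golden_brown",
        "plate_sandwich",
    ]

def run_plan(plan):
    # Stand-in executor; a real system would dispatch each step to
    # sensors and actuators, each step itself decomposed further.
    for step in plan:
        print(f"doing: {step}")

run_plan(grilled_cheese_plan())
```

Each of those fourteen steps would, in practice, break down into dozens more; the human's job is choosing where the decomposition stops.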

Anyway, Human Marketers, Here’s the Conclusion

I am not at all saying that all decision-makers must be coders. What I am trying to emphasize is the importance of breaking down a large task into smaller “logical” steps. Smart machines will not need all of these details to perform “known” tasks (i.e., someone else taught it already). And that is how they get smarter. But they would still work better in clear logical steps.

For humans to command machines effectively, we must think like machines — at least a little bit. Yes, automation is mostly about automating things we already know how to do. We use machines to perform those tasks much faster than humans. To achieve overall organizational effectiveness, break down the processes into smaller bits, where each step becomes the stepping stones for the next. Then prioritize which part would be the best candidate for automation, and which part would still be best served by human brains and hands.

For now, that would be the fastest route to full automation. As a result of it, many humans may be demoted to jobs like reading machine-made scripts to other humans on the phone, or delivering items that machines picked for human consumers in the name of personalization. If that is the direction where human collectives are headed, let’s try to be the ones who provide purposes for machines. Until they don’t even need such instructions from us anymore.

5 Trends in Customer Experience Software for 2019


I asked marketers using customer experience (CX) software to share their thoughts on how the technology, and its use, will evolve in 2019. Based on this research, I expect CX technologies to evolve in 2019 to support greater system and data connectivity, improve customer insights, and increase message relevance and process automation.

Here are five trends to look for this year.


M&A activity reflects a trend toward unified customer data platforms and all-in-one solutions for marketing, sales, support, analytics and CX. That’s great for businesses, because soon they won’t need to spend millions on integrations and IT for a single, holistic view of the customer.

We’ll see the introduction of tools, and improvements in design, that help vertical markets perform integrations more easily, making it simpler to transfer customer information from one application to another.


Improved integration will help collate disparate customer data to provide a holistic view of customer activity across all departments — sales, marketing, customer support, etc. As CX software improves, organizations will start valuing CX data more than actual goods or services sold, because CX data will have a stronger correlation with long-term revenue generation and profitability.

Customer experience software users say 2018 was the year of data for CX software — from GDPR-mandated data cleanups to a wave of new data from relational and transactional customer interactions, CX software companies focused on data collection. From real-time data collection facilitated by chatbots and AI, 2018 saw a new way for the CX world to gather, store and leverage customer data for more customized engagement. This focus on data will be the foundation for what’s to come in 2019 — a greater focus on data collection, data analysis, and acting on data to increase customer retention and better connect with customers.

CX software will get a chance to show what it can do. We can expect the use of AI to grow and enable companies to sort and evaluate the data collected faster than in previous years. As CX software continues to gather more information, its processes will keep improving and its analysis will get better.


We see the embedding of more “marketing-like” approaches — more analytics of customer behavior and more automation of responses and customer outreach. CX pros are starting to do this as well, moving away from working only with survey responses from a small number of customers. This is parallel to the development that started some 20 years ago in marketing automation when businesses moved on from small-scale surveys in market research to using all their business data to better understand customers. CX solutions are looking at all data on customers, using it to understand their needs and wants, and building automatic processes to meet those needs.

Businesses are realizing that customers today rate their experience based on the sum of all their interactions with a business — not just on call wait time, or Internet ease of use. All of these things come together as customers move seamlessly from one channel to another — they see this as one overall experience. As such, successful CX solutions are embedding tools that can work with the entire customer journey — from its “discovery,” based on journey analytics, to the orchestration of a better overall CX through customer journey management.


We’ll see organizations leveraging technology to make customer journeys frictionless, personalized — and ultimately, more profitable. Companies will be able to better target customers with more in-depth information, personalized messaging and tailored recommendations that better align with needs. At the end of the day, it will all come down to how well companies use CX software to learn about their customers and better serve them.

Customers will become more skeptical of companies that fail to personalize emails and content. Consumers respond better when they feel like people, not just another number on a list. The most successful CX software implementations will be those that take the time to deliver personalized, relevant information of value that makes customers’ lives simpler and easier.

Artificial Intelligence/Machine Learning

The shiny toys of Augmented Reality (AR), Artificial Intelligence (AI), and data analytics only account for the aesthetic aspects of CX. Many companies are looking at the sexy components of a well-built website while overlooking why customers fell in love with brands like Netflix and Amazon. These top sites and companies gave the people exactly what they wanted from the onset. The next step for companies in 2019 is to find the right balance to effectively blend UX and CX to suit the customers’ needs like never before.

With that being said, the automation side of CX is incredibly powerful. We’ve seen improvements in AI software within the past year and it will be fun to see how much it develops this year. The next couple years will revolutionize CX. Now that companies have built the technology, the only thing left is to fine-tune it. Build on the useful technology put in place.

Machine learning (ML) and AI are already being used to identify data points with the most impact. We will see this translate into providing meaningful action points which leverage the data points.

We will see CX software using AI to predict CX for new products, based on past data with greater accuracy.

Lastly, we will see marketers leveraging AI to learn about their target customers and prompt them to take action to meet their needs.

Nostalgic for the Future: Data That is ‘Close to You’


Last week, I had a dream, and in it, Karen Carpenter and I were friends. The following night, I had a similar dream, and this time it was Carly Simon. I literally went to bed the next night hoping for a Roberta Flack visitation. As a result of these slumbering vocalists and songwriters, I’ve spent a good part of my leisure time over the New Year holiday listening to all their songs on my iPod. It’s yesterday, once more.

Who knows why we dream what we dream?

Sometimes, it just happens that when we’ve experienced enough in life, in play, and in work, some situations are bound to come around again, next week or decades later. I mean, I owned all that vinyl way back then, and now I can stream it all again.

Greatest Hits: Lifecycles of Data-Inspired Marketing

So when Marc Pritchard of Procter & Gamble last week at the Consumer Electronics Show talked about “a world without ads,” I said to myself “oh, I’ve heard this song before.” And he’s right to say it.

In the world of data and direct marketing, a quest for wholly efficient advertising and a mythical 100-percent response rate actually is a 100-year science. Thank you, visionaries, such as Claude Hopkins.

• The 19th Century shopkeeper knew each customer, and conversed regularly. Ideally, each customer’s wants and desires were noted and needs anticipated to the extent that the customer was fulfilled accordingly. (Aaron Montgomery Ward and Richard Warren Sears.)
• Direct marketing, originally through print, catalogs and mail, and then broadcast, sought to replicate this model remotely. Measurement, attribution and response were put to science. Creativity served the science, or science served the creativity, in either direction. Segmentation, analytics and differentiated communication flowed. (David Ogilvy, Stan Rapp and Alvin Eicoff, among others.)
• In digital, social and mobile, direct marketing is rejuvenated this time “data-driven marketing.” Some have described this as data-inspired storytelling, or direct marketing on steroids. How responsible data collection can be used to identify prospect needs and wants, and funnel tailored communication through to sale, service and repeat purchase. (Jeff Bezos, among others.)
• And now the product itself can be designed to communicate to the customer smart appliances, smart cars, and the parts and products inside, with sensors and Internet connections and mobile app interfaces all being able to let the user know, it’s time for consideration or some other product lifecycle action.

Post-Advertising: A Reverence for Data

In all these examples, the constant is the customer (“I want to know you, so I can serve you”) and the facilitator is data. We exist to create and serve a customer. Period. Anything less is not sustainable. Data, in these models, is sought, analyzed and revered. It is also transparent, and its use and application have consumer buy-in. That premise is as true now in the Internet age as it was in the direct response era before it. We all need to excel in data reverence first, and then in data analysis and application.

Advertising does have a role here, of course. Not every product sells itself and not every product meets customer satisfaction fully. The best advertising, and even the best data behind it, cannot save a bad product. There is always a need for advertising and marketing to inform the consumer, and a brand promise that serves to attract and retain beyond the product.

Every generation has its pop heroes. Tonight, I may just dream of Adele.

Marketing Metrics Aren’t Baseball Scores

Lester Wunderman is called “the Father of Direct Marketing” — not because he was the first one to put marketing offers in the mail, but because he is the one who started measuring results of direct channel efforts in more methodical ways. His marketing metrics are the predecessors of today’s measurements.

Now, we use terms like 1:1 marketing or digital marketing. But, in essence, data-based marketing is supposed to be a closed loop, fed by learnings from the results of live or test campaigns. In other words, playing with data is an endless series of learning and relearning. Otherwise, why bother with all this data? Just do what your gut tells you to do.

Even in the very beginning of the marketer’s journey, there needs to be a step for learning. Maybe not learning from the results of past campaigns, but learning something about customer profiles and their behaviors. With that knowledge, smart marketers can target better, by segmenting the universe or building look-alike or affinity models with multiple variables. Then a targeted campaign with the “right” message and offers would follow. Then what? Data players must figure out what worked (or what didn’t). And the data journey continues.

So, this much is clear: if you do not measure your results, you are not really a data player.

But that doesn’t mean that you’re supposed to get lost in an endless series of metrics, either. I sometimes see what is commonly called “Death by KPI” in analytically driven organizations: marketers get so busy chasing down a few of their favorite metrics that they miss the big boat. Analytics is a game of balance, as well. It should not be too granular or tactical all of the time, nor too high in the sky in the name of strategy.

In digital marketing, for one, open and clickthrough rates are definitely “must-have” metrics. But they shouldn’t be the most important ones for everybody, just because every digital analytics toolset prominently features them. I am not at all disputing the value of those metrics, by the way; I’m just pointing out that they are directional guidance toward success, where the real success is expressed in dollars, pounds and shillings. Clicks lead to conversions, but they are still a few steps away from generating cash.

Indeed, picking the right success metrics isn’t easy, not because of the math, but because of the politics around them. Aggressive organizations put more weight on metrics related to the size of their footprint and the rate of expansion. More established and stable companies put more weight on profitability and various efficiency measures. Folks on the supply side measure their success differently than sales and marketing teams that must move merchandise in the most efficient ways. If someone is dedicated to a media channel, she will care for “her” channel first, without a doubt. In fact, she might even be in direct conflict with fellow marketers who are in charge of “other” channels. Who gets the credit for “a” sale in a multichannel environment? That is not an analytical decision, but a business decision.

Even after an organization settles on the key metrics that they would collectively follow, there lies another challenge. How would you declare winners and losers in this numbers game?

As the title of this article indicates, you are not supposed to conclude that one version of creative beat the other in an A/B test just because its open rate was higher by less than 1%. This is not some ballgame where a team wins with a walk-off home run in the bottom of the 11th inning.

Differences in metrics should have some statistical significance to bear any meaning. When we compare the heights of a classroom full of boys, do we care about differences measured in tenths of a millimeter? If you were building a spaceship, such differences would matter, but not when measuring the height of human beings. Conversion rates, often expressed with two decimal places, are like that, too.

I won’t get too technical about it here, but even casual decision-makers without any mathematical training should be aware of factors that determine statistical significance when it comes to marketing-related metrics.

  • Expected and Observed Measurements: If it is about open, clickthrough and conversion rates, for example, what are “typical” figures that you have observed in the past? Are they in the 10% to 20% range, or something measured in fractions of a percent? And of course, for the final measure, what are the actual figures of opens, clicks and conversions for the A and B segments in test campaigns? And what kind of differences are we measuring here: differences expressed in fractions or in whole numbers? (Think about the height example above.)
  • Sample Size: Too often, sample sizes are too small to provide any meaningful conclusions. Marketers often hesitate to put a large number of target names in the no-contact control group, for instance, as they think that those would be missed revenue-generating opportunities (and they are, if the campaign is supposed to work). Even after committing to such tests, if the size of the control group is too small, it may not be enough to measure “small” differences in results. Size definitely matters in testing.
  • Confidence Level: How confident would you want to be: 95% or 90%? Or would an 80% confidence level be good enough for the test? Just remember that the higher the confidence level that you want, the bigger the test size must be.
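The three factors above can be combined into a quick significance check, which is essentially what those online calculators do. The sketch below is a minimal two-proportion z-test; the function names and the sample figures are illustrative, not taken from any real campaign.

```python
import math

# Critical z-values for common two-sided confidence levels
CRITICAL_Z = {0.80: 1.282, 0.90: 1.645, 0.95: 1.960}

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def is_significant(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """True if the observed difference clears the chosen confidence level."""
    return abs(two_proportion_z(conv_a, n_a, conv_b, n_b)) >= CRITICAL_Z[confidence]

# 10,000 names per cell, 2.10% vs. 2.05% conversion: too close to call
print(is_significant(210, 10_000, 205, 10_000))  # False
```

Note how a 0.05-point difference in conversion rate, the kind that looks like a “win” on a dashboard, is nowhere near significant even with 10,000 names per cell; demanding a higher confidence level only raises the bar further.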

If you know these basic factors, there are many online tools where you can enter some numbers and see if the result is statistically significant or not (just Google “Statistical Significance Calculator”). Most tools will ask for test and control cell sizes, conversion counts for both and minimum confidence level. The answer comes out as bluntly as: “The result is not significant and cannot be trusted.”

If you get an answer like that, please do not commit to a decision with any long-term effects. If you want to just declare a winner and finish up a campaign as soon as possible, sure, treat the result like a baseball score of a pitchers’ duel. But at least be aware that the test margin was very thin. (Tell others, too.)

Here’s some advice related to marketing success metrics:

  • Always Consider Statistical Significance. Do not draw quick conclusions from insufficient test quantities, as they may not mean much. The key message here is that you should not skip the significance test step.
  • Do Not Make Tests Too Complicated. Even with just 2-dimensional tests (e.g., test of multiple segments and various creatives and subject lines), the combination of these factors may result in very small control cell sizes, in the end. You may end up making a decision based on less than five conversions in any given cell. Add other factors, such as offer or region, to the mix? You may be dealing with insignificant test sizes, even before the game starts.
  • Examine One Factor at a Time in Real-Life Situations. There are many things that may have strong influences on results, and such is life. Instead of looking at all possible combinations of segments and creatives, for example, evaluate segments and creatives separately. Ceteris paribus (“all other factors held constant,” which would never happen in reality, by the way), which segment would be the winner, when examined from one angle?
  • Test, Learn and Repeat. Like any scientific experiments, one should not jump to conclusions after one or two tests. Again, data-based marketing is a continuous loop. It should be treated as a long-term commitment, not some one-night stand.
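Back-of-the-envelope arithmetic (with hypothetical numbers) shows how quickly test cells shrink once factors multiply, which is the second point above:

```python
# Hypothetical mailing: a healthy-looking file thins out fast under a factorial test.
total_names = 20_000      # names available for the test (hypothetical figure)
segments = 4              # customer segments being tested
creatives = 5             # creative versions being tested

cells = segments * creatives            # 4 x 5 = 20 combinations
names_per_cell = total_names // cells   # only 1,000 names per cell

conversion_rate = 0.005                 # a typical fraction-of-a-percent response
expected_conversions = names_per_cell * conversion_rate

print(names_per_cell, expected_conversions)  # 1000 names, ~5 conversions per cell
```

Five expected conversions per cell is exactly the kind of count that fails any significance test; add a third factor, such as offer or region, and the cells become meaningless before the campaign even drops.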

Today’s marketers are much more fortunate in comparison to marketers of the past. We now have blazingly fast computers, data for every move that customers and prospects make, ample storage space for data, affordable analytical toolsets (often for free), and in general, more opportunities for marketers to learn about new technologies.

But even in the machine-driven world, where almost everything can be automated, please remember that it will be humans who make the final decisions. And if you repeatedly make decisions based on statistically insignificant figures, I must say that good or bad consequences are all on you.