Beware of One-Size-Fits-All Customer Data Solutions

In the data business, the ability to fine-tune database structure and toolsets to meet unique business requirements is key to success, not just flashy features and functionalities. Beware of technology providers who insist on a “one-size-fits-all” customer data solution, unless the price of entry is extremely low. Always check the tech provider’s exception management skills and their determination to finish the last mile. Too often, many just freeze at the thought of any customization.

The goal of any data project is to create monetary value out of available data. Whether it is about increasing revenue or reducing cost, data activities through various types of basic and advanced analytics must yield tangible results. Marketers are not doing all this data-related work to entertain geeks and nerds (no offense); no one is paying for data infrastructure, analytics toolsets, and most importantly, human cost to support some intellectual curiosity of a bunch of specialists.

Therefore, when it comes to evaluating any data play, the criteria that CEOs and CFOs bring to the table matter the most. Yes, I shared a long list of CDP evaluation criteria from the users’ and technical points of view last month, but let me emphasize that, like any business activity, data work is ultimately about the bottom line.

That means we have to maintain a balance between the cost of doing business and the usability of data assets. Unfortunately, these two important factors are inversely related. In other words, to make customer data more useful, one must put more time and money into it. Most datasets are unstructured, unrefined, uncategorized, and plain dirty. And the messiness level is not uniform.

Start With the Basics

Now, there are many commoditized toolsets out in the market to clean the data and weave them together to create a coveted Customer-360 view. In fact, if a service provider or a toolset isn’t even equipped to do the basic part, I suggest working with someone who can.

For example, a service provider must know the definition of dirty data. They may have to ask the client to gauge the tolerance level (for messy data), but basic parameters must be in place already.

What is a good email address, for instance? It should have all the proper components, like an @ sign and a valid ending such as .com, .net, or .org. Permission flags must be attached properly. Primary and secondary emails must be set by predetermined rules. Addresses must be tagged properly if delivery fails, even once. The list goes on. I can think of similar sets of rules when it comes to name, address, company name, phone number, and other basic data fields.
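To make that concrete, here is a minimal Python sketch of the kind of predetermined email hygiene rules a provider should already have on hand. The field names (such as opt_in_flag and bounce_count) and the primary/secondary rule are illustrative assumptions, not anyone’s production logic.

```python
import re

# Illustrative rules only; field names such as "opt_in_flag" and "bounce_count" are assumptions.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def clean_email_record(record):
    """Apply basic, predetermined email hygiene rules to one customer record."""
    email = (record.get("email") or "").strip().lower()

    # Rule 1: structural validity -- an @ sign and a plausible domain ending.
    record["email_valid"] = bool(EMAIL_PATTERN.match(email))

    # Rule 2: the permission flag must be explicitly attached, never assumed.
    record["email_permission"] = record.get("opt_in_flag") is True

    # Rule 3: primary vs. secondary set by a predetermined rule --
    # here, the most recently confirmed address wins (an illustrative choice).
    confirmed = sorted(record.get("all_emails", []),
                       key=lambda e: e.get("confirmed_date", ""), reverse=True)
    record["primary_email"] = confirmed[0]["address"] if confirmed else email
    record["secondary_email"] = confirmed[1]["address"] if len(confirmed) > 1 else None

    # Rule 4: tag the address if delivery ever failed, even once.
    record["email_undeliverable"] = record.get("bounce_count", 0) > 0
    return record

example = {
    "email": " Jane.Doe@Example.COM ",
    "opt_in_flag": True,
    "all_emails": [{"address": "jane.doe@example.com", "confirmed_date": "2016-11-02"}],
    "bounce_count": 0,
}
print(clean_email_record(example))
```

The point is not the particular rules, but that they exist in writing before the first file ever arrives.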

Why are these important? Because it is not possible to create that Customer-360 view without properly cleaned and standardized Personally Identifiable Information (PII). And anyone who is in this game must be a master of that. The ability to clean basic information and to match seemingly unmatchable entities is just a prerequisite in this game.

Even Basic Data Hygiene and Matching Routines Must Be Tweaked

Even with basic match routines, users must be able to dictate the tightness or looseness of the matching logic. If the goal of customer communication involves legal notifications (as in the banking and investment industries), one should not merge any two entities just because they look similar. If the goal is mainly to maximize campaign effectiveness, one may merge similar-looking entities using various “fuzzy” matching techniques, employing Soundex, nickname tables, and abbreviated or hashed match keys. If the database is filled with business entities for B2B marketing, then so-called commoditized merge rules become more complicated.
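As a rough illustration of configurable tightness, here is a small Python sketch with a “strict” mode for legal-grade matching and a “fuzzy” mode that leans on Soundex, a nickname table, and an abbreviated match key. The simplified Soundex, the tiny nickname table, and the choice of key fields are placeholders, not a recommended rule set.

```python
# A minimal sketch of configurable match tightness. The Soundex routine,
# nickname table, and key fields below are illustrative, not a production rule set.
NICKNAMES = {"bill": "william", "bob": "robert", "liz": "elizabeth", "jim": "james"}

def soundex(name):
    """Classic four-character Soundex code (simplified)."""
    name = name.lower()
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3", "l": "4", "mn": "5", "r": "6"}
    def code(c):
        return next((v for k, v in codes.items() if c in k), "")
    out, prev = name[0].upper(), code(name[0])
    for c in name[1:]:
        d = code(c)
        if d and d != prev:
            out += d
        if c not in "hw":
            prev = d
    return (out + "000")[:4]

def match(rec_a, rec_b, mode="strict"):
    """Compare two records; 'strict' for legal notices, 'fuzzy' for campaign effectiveness."""
    first_a = NICKNAMES.get(rec_a["first"].lower(), rec_a["first"].lower())
    first_b = NICKNAMES.get(rec_b["first"].lower(), rec_b["first"].lower())
    if mode == "strict":
        # Exact, normalized equality on every key field -- never merge on similarity.
        return (first_a, rec_a["last"].lower(), rec_a["zip"]) == \
               (first_b, rec_b["last"].lower(), rec_b["zip"])
    # Fuzzy: Soundex on the surname plus an abbreviated key (first initial + ZIP).
    return (soundex(rec_a["last"]) == soundex(rec_b["last"])
            and first_a[0] == first_b[0]
            and rec_a["zip"] == rec_b["zip"])

a = {"first": "Bill", "last": "Smith", "zip": "10001"}
b = {"first": "William", "last": "Smyth", "zip": "10001"}
print(match(a, b, mode="strict"), match(a, b, mode="fuzzy"))  # False True
```

The same pair of records merges under one setting and stays apart under the other, which is exactly the dial the user must be allowed to turn.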

The first sign of trouble often becomes visible at this basic stage. Beware of providers that insist on “one-size-fits-all” rules in the name of some universal matching routine. There was no such thing even in the age of direct marketing (i.e., the really old days). How are we going to navigate a complex omnichannel marketing environment with just a few hard-set rules that can’t be modified?

Simple matching logic based only on name, address, and email becomes much more complex when you add new online and offline channels, as they all come with different types of match keys. Just in the offline world, the quality of customer names collected in physical stores vastly differs from that of self-entered information from a website, which comes with shipping addresses. For example, I have seen countless invalid names like “Mickey Mouse,” “Asian Tourist,” or “No Name Provided.” Conversely, no one who wants to receive the merchandise at their address would create an entry with “First Name: Asian” and “Last Name: Tourist.”

Sure, I’m providing simple examples to illustrate the fallacy of “one-size-fits-all” rules. But by definition, a CDP is an amalgamation of vastly different data sources, online and offline. Exceptions are the rules.

Dissecting Transaction Elements

Up to this point, we are still in the realm of “basic” stuff, which is mostly commoditized in the technology market. Now, let’s get into more challenging parts.

Once data weaving is done through PII fields and various proxies of individuals across networks and platforms, then behavioral, demographic, geo-location, and movement data must be consolidated around each individual. Now, demographic data from commercial data compilers are already standardized (one would hope), regardless of their data sources. Every other customer data type varies depending on your business.

The simplest form of transaction records would be from retail businesses, where you sell widgets at set prices through certain channels. And what is a transaction record in that sense? “Who” bought “what,” “when,” for “how much,” through “what channel.” Even from such a simplified viewpoint, things are not so uniform.

Let’s start with an easy one, such as a common date/time stamp. Is it in the form of a UTC time code? That would be simple. Do we need to know the day-part of the transaction? Eventually, but by what standard? Do we need to convert them into the local time of the transaction? Yes, because we need to tell evening buyers and daytime buyers apart, and we can’t use Coordinated Universal Time for that (unless you only operate in the U.K.).
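Here is a minimal sketch of that conversion in Python, assuming each transaction carries a UTC timestamp and a store time zone; the day-part boundaries are placeholders that each business would set for itself.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Assumption: each transaction carries a UTC timestamp and the store's time zone.
def local_daypart(utc_ts, store_tz):
    """Convert a UTC timestamp to the store's local time and label its day-part."""
    local = utc_ts.astimezone(ZoneInfo(store_tz))
    hour = local.hour
    # Day-part boundaries are a business decision; these are placeholders.
    if 6 <= hour < 12:
        part = "morning"
    elif 12 <= hour < 17:
        part = "afternoon"
    elif 17 <= hour < 22:
        part = "evening"
    else:
        part = "overnight"
    return local, part

utc_ts = datetime(2017, 3, 4, 23, 30, tzinfo=timezone.utc)
print(local_daypart(utc_ts, "America/Los_Angeles"))  # 3:30 p.m. local -> "afternoon"
```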

“How much” isn’t so bad. It is made of net price, tax, shipping, discount, coupon redemption, and finally, total paid amount (for completed transactions). Sounds easy? Let’s just say that out of thousands of transaction files that I’ve encountered in my lifetime, I couldn’t find any “one rule” that governs how merchants would handle returns, refunds, or coupon redemptions.

Some create multiple entries for each action, with or without a common transaction ID (crazy, right?). Many customer data sources contain mathematical errors all over. Inevitable file cutoff dates create orphan records, where only the return transaction is found without any linkage to the original transaction record. Yes, we are not building an accounting system out of a marketing database, but no one should count canceled and returned transactions as valid transactions for any analytics. “One-size-fits-all?” I laugh at that notion.
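The sketch below shows one possible convention for netting out returns before analytics: compute total paid from its components, subtract linked returns from the original sale, and set orphan returns aside. The field names and the rule itself are illustrative; as noted above, there is no universal standard.

```python
# A sketch of netting out returns before analytics. Field names are hypothetical,
# and the convention (net linked returns, set orphans aside) is one of many.
transactions = [
    {"txn_id": "T1", "type": "sale",   "net": 100.0, "tax": 8.0, "ship": 5.0, "discount": 10.0},
    {"txn_id": "T2", "type": "return", "net": -40.0, "tax": -3.2, "ship": 0.0, "discount": 0.0,
     "original_txn_id": "T1"},
    {"txn_id": "T3", "type": "return", "net": -25.0, "tax": -2.0, "ship": 0.0, "discount": 0.0,
     "original_txn_id": None},  # orphan: the original sale fell outside the file cutoff
]

def total_paid(t):
    # Total paid = net price + tax + shipping - discount (for completed transactions).
    return t["net"] + t["tax"] + t["ship"] - t["discount"]

valid_sales, orphans = {}, []
for t in transactions:
    if t["type"] == "sale":
        valid_sales[t["txn_id"]] = total_paid(t)
for t in transactions:
    if t["type"] == "return":
        if t.get("original_txn_id") in valid_sales:
            valid_sales[t["original_txn_id"]] += total_paid(t)  # net the return against the sale
        else:
            orphans.append(t["txn_id"])  # keep aside; never count as a valid sale

print({k: round(v, 2) for k, v in valid_sales.items()})  # {'T1': 59.8}
print(orphans)                                           # ['T3']
```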

“Channel” may not be so bad. But at what level? What if the client has over 1,000 retail store locations all over the world? Should there be a subcategory under “Retail” as a channel? What about multiple websites with different brand names? How would we organize all that? If this type of basic – but essential – data isn’t organized properly, you won’t even be able to share store-level reports with the marketing and sales teams, who wouldn’t care for a minute about “why” such basic reports are so hard to obtain.

The “what” part can be really complicated. Or, very simple if product SKUs are well-organized with proper product descriptions, and more importantly, predetermined product categories. A good sign would be the presence of a multi-level product category table, where you see entries like an apparel category broken down into Men, Women, Children, etc., and Women’s Apparel is broken down further into Formalwear, Sportswear, Casualwear, Underwear, Lingerie, Beachwear, Fashion, Accessories, etc.

For merchants with vast arrays of products, three to five levels of subcategories may be necessary even for simple BI reports, or further, advanced modeling and segmentation. But I’ve seen too many cases of incongruous and inconsistent categories (totally useless), recycled category names (really?), and weird categories such as “Summer Sales” or “Gift” (which are clearly for promotional events, not products).
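A multi-level category lookup does not need to be fancy. The sketch below, with invented SKUs and categories, simply shows the shape of the thing: every SKU resolves to a predetermined hierarchy, and anything that does not resolve gets surfaced for review instead of being dumped on the end-user.

```python
# A minimal sketch of a multi-level product category lookup. SKUs and category
# levels are invented for illustration; a real table would live in the database.
CATEGORY_TABLE = {
    "SKU-10482": ("Apparel", "Women", "Sportswear"),
    "SKU-22310": ("Apparel", "Women", "Formalwear"),
    "SKU-70015": ("Apparel", "Children", "Casualwear"),
}

def categorize(sku):
    """Return (level_1, level_2, level_3), or flag the SKU for manual review."""
    cats = CATEGORY_TABLE.get(sku)
    if cats is None:
        # Better to surface the gap than to let "Summer Sales" sneak in as a category.
        return ("UNCATEGORIZED", None, None)
    return cats

for sku in ("SKU-10482", "SKU-99999"):
    print(sku, categorize(sku))
```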

All these items must be fixed and categorized properly, if they are not adequate for analytics. Otherwise, the gatekeepers of information are just dumping the hard work on poor end-users and analysts. Good luck creating any usable reports or models out of uncategorized product information. You might as well leave it as an unknown field, as product reports will have as many rows as the number of SKUs in the system. It will be a challenge finding any insights out of that kind of messy report.

Behavioral Data Are Complex and Unique to Your Business

Now, all this was about the relatively simple “transaction” part. Shall we get into the online behavior data? Oh, it gets much dirtier, as any “tag” data are only as good as the person or department that tagged the web pages in question. Let’s just say I’ve seen all kinds of variations of one channel (or “Source”) called “Facebook.” Not from one place, either, as they show up in “Medium” or “Device” fields. Who is going to clean up the mess?

I don’t mean to scare you, but these are just common examples in the retail industry. If you are in any subscription, continuity, travel, hospitality, or credit business, things get much more complicated.

For example, there isn’t any one “transaction date” in the travel industry. There would be Reservation Date, Booking Confirmation Date, Payment Date, Travel Date, Travel Duration, Cancellation Date, Modification Date, etc., and all these dates matter if you want to figure out what the traveler is about. If you get all these down properly and calculate distances from one another, you may be able to tell if the individual is traveling for business or for leisure. But only if all these data are in usable forms.
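As a toy example of what “calculating distances” between those dates can reveal, here is a rough Python heuristic for flagging a likely business trip versus a leisure trip. The field names and thresholds are invented; a real implementation would be tuned and validated against actual booking data.

```python
from datetime import date

# Illustrative heuristic only: short lead time + short weekday stay -> likely business.
def trip_profile(reservation_date, travel_date, return_date):
    lead_days = (travel_date - reservation_date).days   # booking lead time
    duration = (return_date - travel_date).days          # length of stay
    weekend_stay = travel_date.weekday() >= 4 or return_date.weekday() == 6
    if lead_days <= 7 and duration <= 3 and not weekend_stay:
        return "likely business"
    if duration >= 5 or weekend_stay:
        return "likely leisure"
    return "unclear"

print(trip_profile(date(2017, 3, 1), date(2017, 3, 6), date(2017, 3, 8)))   # likely business
print(trip_profile(date(2017, 1, 5), date(2017, 7, 1), date(2017, 7, 14)))  # likely leisure
```

None of this works, of course, if the underlying dates are missing, mislabeled, or stored in inconsistent formats.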

Always Consider Exception Management Skills

Some of you may be in businesses where turn-key solutions may be sufficient. And there are plenty of companies that provide automated, but simpler and cheaper options. The proper way to evaluate your situation would be to start with specific objectives and prioritize them. What are the functionalities you can’t live without, and what is the main goal of the data project? (Hopefully not hoarding the customer data.)

Once you set the organizational goals, try not to deviate from them so casually in the name of cost savings and automation. Your bosses and colleagues (i.e., mostly the “bottom line” folks) may not care much about the limitations of toolsets and technologies (i.e., geeky concerns).

Omnichannel marketing that requires a CDP is already complicated. So, beware of sales pitches like “All your dreams will come true with our CDP solution!” Ask some hard questions, and see if they balk at the word “customization.” Your success may depend more on their ability to handle exceptions than on the execution of commoditized functions that they acquired a long time ago. Unless you really believe that you will safely get to your destination on “autopilot.”

Company Data Opens the Door to Data Business Success

The hype around creating data businesses is at an all-time high in the B2B media sector. With display ad revenue falling off a cliff and lead-gen business models becoming more challenged, there is a clear focus to diversify digital revenue further, and data models are a clear player in that picture.

But, as I talk to colleagues in the sector, many are trying to find their way on the data front. One obstacle in becoming a data player lies in the fundamental way media brands have their databases set up.

Most B2B media players tout that they have in-depth insights on users through their database. The problem with this approach is that it does not align with the way many companies want to start their data journey. While many marketers will look for one-on-one connections with people, the process of gathering data and gaining market insights often starts at the company/organization level.

There are many reasons why this is the case. On one hand, most of the data searches by a marketer start with trying to understand general trends across a market sector or region. While this data can be obtained at the individual level, it’s much easier to see these trends at the company level.

On the other hand, marketers are getting highly targeted about the companies they want to reach – at Edgell, we had a large technology provider come to us at one point that wanted to reach only 300 companies. While this can be done at the individual level, it’s much easier and more accurate to achieve with a company table structure in place.

Making Company Data Come to Life

Clearly, the lack of a company structure in today’s databases is hurting media brands in their data business launch efforts. But, how do we overcome this obstacle?

There are several ways to start. First, you need to shift your team’s thinking and culture. While capturing info on an individual is important, you have to put a structure in place where you’re equally collecting data at the company level.

Beyond the cultural shift, here are some tips that will help make your company table structure come to life:

Tip 1: Define what you need to capture

By adding a company table, you have a chance to bring valuable data elements to your database. So, you should use this as a chance to bring in data elements that you may not be capturing today. For example, you can bring in things like industries the company covers, overall company revenue, divisions, and other elements that you may not have in your database today.

Tip 2: Align data

The addition of a company table also allows you to align data across individual records. For example, in an audience-driven database, you can have one employee at Company X who puts down divisional revenue while another puts down group revenue. Through the company table, you can use publicly available info from Company X’s financial reports or from tools like Hoovers to align all users from that company under a single revenue bracket.
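A minimal sketch of that alignment step might look like the following, where everyone from Company X inherits one revenue bracket from the company table regardless of what they self-reported. The bracket boundaries and the “authoritative” revenue figure are placeholders.

```python
# A sketch of aligning self-reported revenue under one company-level bracket.
# The bracket boundaries and the 'authoritative' figure are placeholders.
company_table = {
    "company_x": {"name": "Company X", "annual_revenue": 4_200_000_000},
}

individuals = [
    {"email": "a@companyx.com", "company_id": "company_x", "self_reported_revenue": "division: $300M"},
    {"email": "b@companyx.com", "company_id": "company_x", "self_reported_revenue": "group: $1.1B"},
]

def revenue_bracket(amount):
    if amount >= 1_000_000_000:
        return "$1B+"
    if amount >= 100_000_000:
        return "$100M-$1B"
    return "Under $100M"

for person in individuals:
    company = company_table.get(person["company_id"])
    # Everyone at the company inherits the same bracket, regardless of what they typed in.
    person["aligned_revenue_bracket"] = revenue_bracket(company["annual_revenue"]) if company else None

print([p["aligned_revenue_bracket"] for p in individuals])  # ['$1B+', '$1B+']
```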

Tip 3: Start with your top companies

Once you’ve built the foundation for the company table, you should develop a strategy for rolling it out to all of your audience members. But where do you start? The 80/20 rule is a good solution here. Take your Top 50 or Top 100 companies and roll the company table out to these organizations first. Likely, you’ll have the most contacts from these companies. That will allow you to create a good sample size for data collection/alignment and also a good way to build a plan for future migration.

Unlocking the Power of the Company Data

Once you’ve started supporting and collecting data, the next trick is putting the reporting tools in place to get value from the company data. Even more important is to have your company reports in place so your customers can leverage them for running market analysis reports, market trend reports, and more.

There are a number of options that can help on the reporting front. To me, one of the most powerful is the integration of a visualization tool like Tableau or GoodData. Visualization tools let you pull in different data sets, align them, and create reports that show combinations of data across companies. This is extremely powerful when dealing with company data, because you can see trends across companies based on revenue, business/industry, and more.

Start Now

There is no doubt that many media brands are struggling to find their niche in the data business. While not the end-all solution, leveraging a company table in your database architecture is one way that you can better position your organization for data success.

Direct Mail: Data Structure Really Matters

Many times, one key aspect of direct mail marketing is overlooked and that is the structure of your database. What information you put in each field can really make a difference when you process data through CASS, DPV and NCOA. The postal service has a set of expected fields for each type of processing and when your data does not match that structure, you will have issues with correct output from the software.

First and foremost, your data should be consistent within each field. If your data field is “company name,” then the only data found in that field should be the company name. Likewise for all other fields. If the information is not consistent, the coding programs cannot find the proper information, which can cause them to code the record as invalid. This means it will cost you more money to mail it, as it will not qualify for automation postage rates. You can, of course, choose not to mail bad addresses, but it could have been a good address if the information were where it should be in the record. There is no reason to lose out on contacting a prospect or customer just because of your data structure.

You are welcome to have as much information in your database as you need. We always recommend keeping purchase history and any other information you collect so it can be used to target people with the right offers. You will just add that information into additional fields. Never add extra information to fields required by the USPS; it will be stripped off in the CASS/DPV process, or it can cause the address to be found invalid. Create new fields to hold extra information.

So what does the post office require for processing with CASS and DPV?

  • Address
  • City
  • State
  • ZIP code

If you want the process to add information for full-service barcoding, your mail service provider will need to add the required blank fields to your data; they will be populated by the process. Once complete, your file will contain information on which addresses were found and which were coded as bad addresses. You may choose how you wish to deal with the bad ones.

What does the post office require for NCOA?

  • Name
  • Company
  • Address
  • City
  • State
  • ZIP code

You must have either a company name or a person’s name in order for NCOA to match. The verification process looks to see if that person or company is at that address. You can expect your processed data to come back to you with fields that code records as good; moved; or moved, no address on file. You are then able to use the new addresses for the people who moved, as well as remove the bad ones.

So what other pitfalls are there in data structure? Many times, it is unrecognized characters that get pulled in by exporting lists from CRM software. The most common ones are hard returns in fields; those need to be removed before the data can be processed. Others crop up too, such as foreign characters. Each field needs to consist of only letters or numbers. Commas and periods are okay as well, but keep in mind the post office prefers no punctuation. The cleaner your data file is, the better results you will get when processing it for mailing. If you are at all concerned about your data structure, contact your mail service provider; they can help guide you.
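As a rough illustration, here is the kind of field-level cleanup that could be run before handing a file over for CASS/DPV or NCOA processing: strip hard returns, transliterate or drop foreign characters, and keep only light punctuation. The exact character rules are assumptions; confirm them with your mail service provider.

```python
import re
import unicodedata

# A sketch of pre-mailing field cleanup. The exact rules (which punctuation to keep,
# how to handle foreign characters) should be confirmed with your mail service provider.
def clean_field(value):
    value = value.replace("\r", " ").replace("\n", " ")         # remove hard returns
    value = unicodedata.normalize("NFKD", value)                # split accented characters
    value = value.encode("ascii", "ignore").decode("ascii")     # drop what the software can't read
    value = re.sub(r"[^A-Za-z0-9 ,.\-#/]", "", value)           # letters, numbers, light punctuation
    return re.sub(r"\s+", " ", value).strip()                   # collapse leftover whitespace

record = {
    "name": "José\r\nGarcía",
    "company": "Acme Widgets, Inc.",
    "address": "123 Main St.\nSuite #4",
    "city": "Springfield", "state": "IL", "zip": "62701",
}
print({field: clean_field(value) for field, value in record.items()})
```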

The Keyword in ‘Customer Journey’ Is ‘Customer’

The keyword in “Customer Journey” is “customer,” not “journey.” In fact, in this omnichannel world, the word “journey” doesn’t even do much justice to what that journey study should be all about; there is no simple linear timeline about any of it anymore.

We often think about the customer journey in this fashion: awareness, research, engagement, transaction, feedback and, ever-important, repeat-purchase. This list is indeed a good start.

However, if you look at this list as a consumer, not as a marketer, do you personally go through all of these steps in this particular order? On a conceptual level, yes, but in the world where everyone is exposed to over five types of screens and interactive devices every day, old-fashioned frameworks based on linear timelines don’t always hold water.

I, as a consumer, often do research using my phone at the place of purchase. I may feel rewarded even before any actual purchase. I may provide feedback about my “experience” before, during or after a transaction. And being a human being with emotions, my negative feedback may not be directly correlated to my loyalty to the brand. (Actually, I am writing this piece while flying on an airline with which I have premier status, and to which I often provide extremely negative reviews.)

People are neither linear nor consistent. Especially when we are connected to devices with which we research, interact, transact and complain anytime, anywhere. The only part that is somewhat linear is when we put something in the shopping basket, make a purchase, and keep or return the item. So, this timeline view, in my opinion, is just a good guideline. We need to look at the customer journey from the customer’s angle, as well.

Understanding customer behavior is indeed a tricky business, as it requires multiple types of data. If we simplify it, we may put the key variables into three major categories. For a three-dimensional view (as I often do in a discussion), put your left hand out and assign each of the following dimensions to your thumb, index finger and middle finger:

  • Behavioral Data: What they showed interest in, browsed, researched, purchased, returned, subscribed to, etc. In short, what they actually did.
  • Demographic Data: What they look like, in terms of demographic and geo-demographic data, such as their age, gender, marital status, income, family composition, type of residence, lifestyle, etc.
  • Attitudinal Data: Their general outlook on life, religious or political beliefs, priorities in life, reasons why they like certain things, purchase habits, etc.

One may say these data types are highly correlated to each other, and more often than not, they are indeed highly correlated. But not exactly so, and not all the time. Just because someone keeps purchasing luxury items or spending time and money on expensive activities, and is enjoying a middle-aged lifestyle in a posh neighborhood, we can’t definitively claim that he is politically conservative. Sometimes we just have to stop and ask the person.

On top of that, what people say they do and what they actually do are often not the same. Hence, these three independent axes of data types to describe a person.

If we have all three types of data about a person, prediction of that person’s intention — or his journey for commercial purposes or otherwise — will become incredibly accurate. But, unfortunately for marketers, asking “everyone about everything” simply isn’t feasible.

Even the most thorough survey is based on a relatively small sample response. One great thing about traditional primary research is that we often get to know who the respondents are. On the other hand, if we rely on social media to “listen,” we get to have opinions from far more people. But the tricky part there is that we don’t get to know who is speaking, as PII (personally identifiable information) is heavily guarded by the social media handlers. Basically, it isn’t easy to connect the dots completely when it comes to attitudinal data. (Conversely, connecting the dots between behavioral data and demographic data is much simpler, provided there is a decent data collection mechanism.)

Now let’s go back to the timeline view of the customer journey for an initial framework. Let’s list the key items in a general order for a simpler breakdown (though things may not be totally linear nowadays), and examine types of data available in each stage. The goal here is to find the point of entry for this difficult task of understanding the “end-to-end” customer journey in the most comprehensive way.

Listing typical data types associated with these entries:

  1. Awareness: Source (where from), likes/followings, clicks, other digital trails, survey results, social media data, etc.
  2. Research: Browsing data, search words/search results, browsing length, page/item views, chats, etc.
  3. Engagement: Shopping basket data, clicks, chats, sales engagements, other physical trails at stores, etc.
  4. Transaction: Product/service (items purchased), transaction date, transaction amount, delivery date, transaction channel, payment method, region/store, discounts, renewals, cancelations, etc.
  5. Feedback: Returns, complaints and resolutions, surveys, social media data, Net Promoter Score, etc.
  6. Repeat-purchase: Transaction data summarized on a customer level. The best indicator of loyalty.

Now, looking back at the three major types of data, let’s examine these data related to journey stages in terms of the following criteria:

  • Quality: Are data useful for explaining customer behaviors and predicting their next moves and future values? To explain their motives?
  • Availability: Do you have access to the data? Are they readily available in usable forms?
  • Coverage: Do you have the data just for some customers, or for most of them?
  • Consistency: Do you get to access the data at all times, or just once in a while? Are they in consistent forms? Are they consistently accurate?
  • Connectivity: Can you connect available data on a customer level? Or are they locked in silos? Do you have the match-key that connects customer data regardless of the data sources?

With these criteria, the Ground Zero of the most useful source in terms of understanding customers is transaction data. They are usually in the most usable formats, as they are mostly numbers and figures around the product data of your business. Sometimes, you may not get to know “who” made the purchase, but in comparison to other data types, hands-down, transaction data will tell you the most compelling stories about your customers. You’ll have to tweak and twist them to derive useful insights, but the field of analytics has been evolving around such data all along.

If you want to dig deeper into the “why” part of the equation (or other qualitative elements), you would need to venture into non-transactional, more attitudinal data. For the study of online journey toward conversion, digital analytics is undoubtedly in its mature stage, though it only covers online behaviors. Nonetheless, if you really want to understand customers, start with what they actually purchased, and then expand the study from there.

We rarely get to have access to all of the behavioral, demographic, and attitudinal data. And under those categories, we can think of a long list of subcategories, as well. Cross all of that with the timeline of the journey — even a rudimentary one — and having readily usable data from all three angles at all stages is indeed a rare event.

But that has been true for all ages of database marketing. Yes, those three key elements may move independently, but what if we only get to have one or two out of the three elements? Even if we do not have attitudinal data for a customer’s true motivation of engagement, the other two types of data — behavioral, which is mostly transaction and digital data, and demographic data, which can be purchased in large markets like the U.S. — can provide at least directional guidance.

How do you think the political parties target donors during election cycles? They at times have empirical data about someone’s political allegiance, but many times they “guess” using behavioral and demographic data along with modeling techniques, without really “asking” everyone.

Conversely, if you get to have access to attitudinal data for “some” people with known identities, we can build models to project such valuable information to the general population, using only a “common” set of variables (mostly demographic data). For instance, we may only get a few thousand respondents revealing their sentiment toward a brand or specific stances (for example, being a “green”-conscious customer). We can use common demographic variables to project such a tendency to everyone. Would such a “bridging technique” be perfect? Like I mentioned in the beginning, no, not always. Will having such inferred information be much better than not knowing anything at all? Absolutely.
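Here is a toy sketch of that bridging idea: learn the attitude rate from survey respondents within each demographic cell, then project it onto customers for whom only demographics are known. The segments, the sample, and the simple cell-average method are all illustrative; a real projection would typically use a proper model.

```python
from collections import defaultdict

# A toy sketch of the "bridging technique": measure an attitude on a small survey
# sample, then project it onto everyone else through common demographic variables.
survey = [
    {"age_band": "35-44", "income_band": "high", "green_conscious": 1},
    {"age_band": "35-44", "income_band": "high", "green_conscious": 0},
    {"age_band": "18-34", "income_band": "mid",  "green_conscious": 1},
    {"age_band": "18-34", "income_band": "mid",  "green_conscious": 1},
]

# Step 1: learn the attitude rate per demographic cell from respondents.
totals, counts = defaultdict(float), defaultdict(int)
for r in survey:
    cell = (r["age_band"], r["income_band"])
    totals[cell] += r["green_conscious"]
    counts[cell] += 1
cell_rates = {cell: totals[cell] / counts[cell] for cell in totals}

# Step 2: score the wider database, where only demographics are known.
population = [
    {"customer_id": 101, "age_band": "35-44", "income_band": "high"},
    {"customer_id": 102, "age_band": "18-34", "income_band": "mid"},
]
overall_rate = sum(totals.values()) / sum(counts.values())
for person in population:
    cell = (person["age_band"], person["income_band"])
    person["inferred_green_score"] = cell_rates.get(cell, overall_rate)

print(population)  # inferred scores: 0.5 and 1.0
```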

Without a doubt, understanding the customer journey is an important part of marketing. How else would you keep them engaged at all stages of purchases, leading them to loyalty?

The key is not to lose focus on the customer-centric side of analytics. Customer journey isn’t even perfectly sequential anymore. It should be more about “customer experience” regardless of the timeline. And to get to that level of constant relevancy, start with the known customer behaviors, and explain away “what works” in all channel engagements for each stage.

Channel or stage-oriented studies have their merits, but they won’t lead marketers to a more holistic view of customers. After all, high levels of awareness and ample clicks are just good indicators of future conversions; they do not instantly guarantee loyalty or profitability. Transaction data tend to reveal more stable paths to longevity of customer relationship.

You may never get to have explicit measurements of loyalty consistently; but luckily for us, customers vote with their money. Unlock the transaction data first, and then steadily peel away to the “why” part.

I am not claiming that you will obtain the answer to the “causality” question with just behavioral data; but for marketing purposes, I’d settle for “highly correlated” elements anytime. Because marketing activities can happen successfully without pondering the “why” question, if actionable shortcuts to loyalty are revealed through solid transaction data.

Every Figure Must Be Good, Bad or Ugly

You get to hear “actionable insights” whenever analytics or roles of data scientists are discussed. It may reach the level of a buzzword, if it hasn’t gone there already. But what does it mean?

Certainly, stating the obvious doesn’t qualify as insightful reporting. If an analyst is compelled to add a few bullet points at the bottom of some gorgeous chart, it has to be more than “The conversion rate decreased by 13.4 percent compared to the same period last year.” Duh, isn’t that what that plot chart is saying, anyway? Tell me something we can’t readily see.

And the word “actionable” means, “So, fine, the numbers look bad. What are we supposed to do about it?” What should be the next action for the marketers? Should we just react to the situation as fast as we can, or should we consider the long-term effect of such an action at this point? Shouldn’t we check if we are deviating from the long-term marketing strategies?

Many organizations consider a knee-jerk reaction to some seemingly negative KPI “analytics-based,” just because they “looked” at some numbers before taking action. But that is not really analytics-based decision-making. Sometimes, the best next step is to identify where we should dig next, in order to get to the bottom of the situation.

Like in any investigation, analysts need to follow the lead like a policeman: where do all of these tidbits of information lead us? To figure that out, we need to label all of the figures in reports — good, bad and ugly. But unlike police work, where catching the bad guy is the goal (as in “Yes, that suspect committed a crime,” in absolute terms), numbers in analytics should be judged in a relative manner. In other words, if the conversion rate of 1.2 percent seems “bad” to you, how so? In comparison to what? Your competitors in a similar industry? Last year’s or last quarter’s performance? Other similar product lines? Answering these questions as an analyst requires a full understanding of business goals and challenges, not just analytical skillsets.

Last month, at the end of my article “Stop Blaming Marketing Problems on Software,” I listed nine high-level steps toward insight-driven analytics. Let’s dig a little further into the process.

Qualifying numbers into good, bad and ugly is really the first step toward creating solutions for the right problems. In many ways, it is a challenging job — as we are supposed to embark on an analytical journey with a clear problem statement. During the course of the investigation, however, we often find out that the original problem statement is not sufficient to cover all bases. It is like starting bathroom renovation in a house and encountering serious plumbing problems — while doing the job. In such cases, we would have no choice but to alter the course and fix a new set of problems.

In analytics, that type of course alteration is quite common. That is why analysts must be flexible and should let the numbers speak for themselves. Insisting on the original specification is an attitude of an inflexible data plumber. In fact, constantly “judging” every figure that we face, whether on a report or in the raw data, is one of the most important jobs of an analyst.

And the judgment must be within the business context. Figures that are acceptable in one situation may not be satisfactory in another situation, even within the same division of a company. Proper storytelling is another important aspect of analytics, and no one likes to hear lines out of context — even funny ones.

It may sound counterintuitive, but the best way to immerse oneself into a business context is to figure out why the consumer of information is asking certain questions and find ways to make her look good in front of her boss, in the end. Before numbers, figures, fancy graphics, statistical methodologies, there are business goals. And that is the key to determining the baselines for comparisons.

To list a few examples of typical baselines:

  • Industry norm
  • Competitors
  • Overall company norm
  • Other brands
  • Other products/product lines
  • Other marketing channels (if channel-driven)
  • Other regions and countries (if regional)
  • Previous years, seasons, quarters, months, weeks or year-to-date
  • Cost factors (for Return on Investment)

Then, involved parties should get into a healthy argument about key measurements, as different ones may paint a totally different picture. The overall sales figure in terms of dollars may have gone down, but the number of high-value deals may have gone up, revealing multiple challenges down the line. Analysts must create an environment where multi-dimensional pictures of the situation can emerge naturally.

Some of the obvious and not-so-obvious metrics are:

  • Counts of opens, clicks, visits, page views, shopping baskets, abandonments, etc. Typical digital metrics.
  • Number of conversions/transactions (in my opinion, the ultimate prize)
  • Units sold
  • # Unique visitors and/or customers (very important in the age of multichannel marketing)
  • Dollars — Total paid, discount/coupon amount, returns (If we are to figure out what type of offers are effective or harmful, follow the discounts, too.)
  • Days between transactions
  • Recency of transactions
  • Tenure of customers
  • Cost

If we conduct proper comparisons against proper baseline numbers, these raw figures may reveal interesting stories on their own (as in, “which ones are good and which ones are really ugly?”).

If we play with them a little more, more interesting stories will spring up. Simply, start dividing them with one another, again, considering what the users of information would care about the most. For instance:

  • Conversion rates — Compared to opens, visits, unique visitors (or customers), mailing counts, total contact counts, etc. Do them all while at it, starting with the Number of Customers, divided by the Number of Total Contacts.
  • Average dollar per transaction
  • Average dollar per customer
  • Dollar generated per 1,000 contacts
  • Discount ratio (Discount amount / Total dollar generated)
  • Average units per transaction
  • Revenue over Cost (good, old ROI)

Why go crazy here? Because, very often, one or two types of ratios don’t paint the whole picture. There are many instances where conversion rate and value of the transaction move in opposite directions (i.e., high conversion rate, but not many dollars generated per transaction). That is why we would even have “Dollar generated per every 1,000 contacts,” investigating yet another angle.
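As a simple illustration, the sketch below derives several of those ratios from the same set of raw campaign totals, so conversion rate, dollars per transaction, and dollars per 1,000 contacts can be read side by side. The figures and field names are invented.

```python
# A sketch of deriving multiple ratios from the same raw campaign totals,
# so no single ratio gets to tell the whole story alone. Values are invented.
campaign = {
    "total_contacts": 250_000, "unique_customers": 3_100, "transactions": 3_600,
    "units": 5_400, "gross_dollars": 212_000.0, "discount_dollars": 18_500.0, "cost": 65_000.0,
}

metrics = {
    "conversion_rate": campaign["unique_customers"] / campaign["total_contacts"],
    "avg_dollar_per_transaction": campaign["gross_dollars"] / campaign["transactions"],
    "avg_dollar_per_customer": campaign["gross_dollars"] / campaign["unique_customers"],
    "dollars_per_1000_contacts": campaign["gross_dollars"] / campaign["total_contacts"] * 1000,
    "discount_ratio": campaign["discount_dollars"] / campaign["gross_dollars"],
    "avg_units_per_transaction": campaign["units"] / campaign["transactions"],
    "roi": campaign["gross_dollars"] / campaign["cost"],
}

for name, value in metrics.items():
    print(f"{name}: {value:,.3f}")
```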

Then, analysts must check if these figures are moving in different directions for different segments. Look at these figures and ratios by:

  • Brand
  • Division/Country/Territory/Region
  • Store/Branch
  • Channel — separately for outbound (what marketers used) and inbound (what customers used)
  • Product line/Product category
  • Time Periods — Year, month, month regardless of the year, date, day of the week, daypart, etc.

Now here is the kicker. Remember how we started the journey with the idea of baseline comparisons? Start creating index values against them. If you want to compare some figures at a certain level (say, store level) against a company’s overall performance, create a set of index values by dividing corresponding sets of numbers (numbers in question, divided by those of the baseline).

When doing that, even consider psychological factors, and make sure that “good” numbers are represented with higher index values (by playing with the denominators). No one likes double negatives, and many people will have a hard time understanding that lower numbers are supposed to be better (unless the reader is a golfer).
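A minimal sketch of that indexing step: divide each store-level figure by the company baseline, and flip the denominator for metrics where lower is better (such as return rate), so that a good number always shows up as an index above 100. The metrics and values are invented.

```python
# A sketch of baseline indexing. For "higher is better" metrics the index is
# store / company; for "lower is better" metrics the ratio is flipped so a
# good number always reads as an index above 100. Values are invented.
HIGHER_IS_BETTER = {"conversion_rate": True, "avg_dollar_per_customer": True, "return_rate": False}

def index_value(store_value, baseline_value, higher_is_better):
    ratio = store_value / baseline_value if higher_is_better else baseline_value / store_value
    return round(ratio * 100)

company = {"conversion_rate": 0.012, "avg_dollar_per_customer": 68.0, "return_rate": 0.09}
store_42 = {"conversion_rate": 0.015, "avg_dollar_per_customer": 61.0, "return_rate": 0.06}

indexes = {m: index_value(store_42[m], company[m], HIGHER_IS_BETTER[m]) for m in company}
print(indexes)  # {'conversion_rate': 125, 'avg_dollar_per_customer': 90, 'return_rate': 150}
```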

Now the analyst is ready to mark these figures good, bad and ugly — using various index values. If you are compelled to show multiple degrees of goodness or badness, by any means, go right ahead and use five-color scales.

Only then, analysts should pick the most compelling stories out of all of this and put them in less than five bullet points for decision-makers. Anything goes, for as long as the points do matter for the business goals. We have to let the numbers speak for themselves and guide us to the logical path.

Analysts should not shy away from some ugly stories, as those are the best kind. If we do not diagnose the situation properly, all subsequent business and marketing efforts will be futile.

Besides, consumers of information are tired of the same old reports, and that is why everyone is demanding number geeks produce “actionable insights” out of mounds of data. Data professionals must answer that call by making the decision-making process simpler for non-analytical types. And that endeavor starts with labeling every figure good, bad or ugly.

Don’t worry; numbers won’t mind at all.

What Marketers Are Buying in 2017

Alongside the media usage survey we released in the January/February issue of Target Marketing, NAPCO Media Research head Nathan Safran recently surveyed marketers on one very simple question: Which technologies, solutions and services will your company be investing in for 2017?

The answers weren’t necessarily surprising, but they do give a simple, yes/no indication for where marketers are putting their money.

First of all, a majority of respondents said they plan to buy tech for email and social media marketing. So over 50 percent of marketers surveyed still feel a need to make investments to unlock the full potential of those channels.

Not far behind those, in the 40 to 50 percent range, marketers are investing in content marketing, video marketing, Web analytics and direct mail solutions.

We’re also seeing a lot of respondents who plan to invest in the bedrock marketing tools of CRM, marketing automation, databases and mobile tech.

So, if you’re looking at what investments to make this year, or trying to convince your colleagues to put the company’s money where your mouth is, take a look at the chart below to find out where other marketers are putting theirs.

TM Plan to Buy Chart

Data Must Flow, But Not All of Them

Three quarters of this planet’s surface is covered with water. Yet, human collectives have to work constantly to maintain a steady supply of fresh water. When one area is flooded, another region may be going through some serious drought. It is about distribution of resources, not about the sheer amount of them.

Data management is the same way. We are clearly living in the age of abundant data, but many decision-makers complain that there are not enough “useful” data or insights. Why is that?

Like water, or any other resource, data may be locked in the wrong places or in inadequate forms. We hear about all kinds of doomsday scenarios related to the water supply in Africa, and it is because of uneven distribution of water, thanks to drastic climate change and border disputes. Conversely, California is running out of its water sources, even as the state sits right next to a huge pond called the Pacific Ocean. Water, in that case, is in the wrong form for the end-users there.

Data must flow through organizations like water; and to be useful, they must be in consumable formats. I have been emphasizing the importance of the data refinement process throughout this series (refer to “Cheat Sheet: Is Your Database Marketing Ready?” and “It’s All about Ranking”). In the data business, too much emphasis has been put on data collection platforms and toolsets that enable user interfaces, but not enough on the middle part, where data are aligned, cleaned and reformatted through analytics. Most of the trouble, unfortunately, happens due to inadequate data, not because of storage platforms and reporting tools.

This month, nonetheless, let’s talk about the distribution of data. It doesn’t matter how clean and organized the data sources are, if they are locked in silos. Ironically, that is how this term “360-degree customer view” became popular, as most datasets are indeed channel- or division-centric, not customer-centric.

It is not so difficult to get to that consensus in any meeting. Yeah sure, let’s put all the data together in one place. Then, if we just open the flood gates and lead all of the data to a central location, will all the data issues go away? Can we just call that new data pond a “marketing database”? (Refer to “Marketing and IT; Cats and Dogs.”)

The short answer is “No way, no sir.” I have seen too many instances where IT and marketing try to move the river of data and fail miserably, thanks to the sheer size of such construction work. Maybe they should have thought about reducing the amount of data before constructing a monumental canal of data? Like in life, moving time is the best time to throw things away.

IT managers instinctively try to avoid any infrastructure failure, along with countless questions that would rise out of dumping “all” of the data on top of marketers’ laps. And for the sake of the users who can’t really plow through every bit of data anyway, we’ve got to be smarter about moving the data around.

The first thing that data players must consider is the purpose of the data project. Depending on the goal, the list of “must-haves” changes drastically.

So, let’s make an example out of the aforementioned “360-degree customer view” (or “single customer view”). What is the purpose of building such a thing? It is to stay relevant with the target customers. How do we go about doing that? Just collect anything and everything about them? If we are to “predict” their future behavior, or to estimate their propensities in order to pamper them through every channel that we get to use, one may think that we have to know absolutely everything about the customers.

Prescriptive Analytics at All Stages

I don’t know who or which analytics company started it, but we often hear that “prescriptive analytics” is at the top of the analytical food chain. I’m sure anyone who is even remotely involved in marketing analytics has seen that pyramid chart, where BI and dashboard reports are at the bottom, then the descriptive analytics next, and then predictive analytics, and the prescriptive analytics on top. I, too, often differentiate different types of analytics, as they all serve very different purposes at different stages of marketing (refer to “How to Outsource Analytics”). However, I wholeheartedly reject the notion that so-called prescriptive analytics should deserve the position at the top of the pyramid.

If the chart is meant to display the “difficulty” of conducting various types of analytics, it may make some sense. Unfortunately, the users do not see it that way; they see it as a roadmap. As if one must graduate from a lower-level status to move on to a more advanced level of analytics (I don’t really care for the term “advanced analytics,” either). That kind of hierarchical order surely is not, and should not be, the case in the analytics business.

Because that would be like saying a doctor should not prescribe any medicine until she exhausts all possible procedures and options, in a pre-set order, to treat the patient. That would be just stupid. Doctors should be able to provide a cure if there are obvious visible symptoms, without conducting the most complicated (and expensive) tests first. Likewise, analysts must be able to prescribe solutions even from a one-page report, or a note on a napkin, for that matter. That means if we really want to use the term “prescriptive analytics” somehow, it should be attached to all types of analytics, at all times.

Now let’s break down what “prescriptive” in analytics means, anyway. We often say getting the “insights” out of data or reports is the essence of modern-day data science. But being prescriptive means analysts must go one step further and suggest what the next steps should be. If one sees apparent or not-so-apparent troubles or issues brewing, what are we going to do about it? Answering that question would require multi-dimensional thinking for sure, and such activity is a lot more difficult than building statistical models. For one, it is not purely about statistics and data manipulation, but about business practices, as well.

That is why I said the pyramid chart may make some sense if it’s meant to represent difficulty levels. The main reason why finding a decent data scientist is so difficult is that deriving insights out of mounds of data and suggesting next steps is not just a mathematical challenge. Dealing with small and large data is a difficult task in itself, learning a programming language is a lot like learning a foreign language, and mastering statistical knowledge is definitely not for everyone. Knowing how many people are severely allergic to foreign languages “and” mathematics, we are talking about a fraction of the population in America who would pass those two hurdles in the first place. Now we are adding the requirement that analysts must possess the business acumen to advise senior-level executives without using any scientific jargon (refer to “How to Be a Good Data Scientist”).

5 Data-Driven Marketing Catalysts for 2016 Growth

The new year tends to bring renewal and the promise of doing something new, better and smarter. I get a lot of calls looking for ideas and strategies to help improve the focus and performance of marketers’ plans and businesses. What most organizations are looking for is one or more actionable marketing catalysts in their business.

To help you accelerate your thinking, here is a list of those catalysts that have something for everyone, some of which can be great food for thought as you tighten up plans. This year, you will do well if you resolve to do the following five things:

  • Build a Scalable Prospect Database Program. Achieving scale in your business is perhaps the greatest challenge we face as marketers. Those who achieve scale on their watch are the most sought-after marketing pros in their industries — because customer acquisition is far from cheap, and competition grows fiercer as the customer grows more demanding and promiscuous. A scientifically designed “Prospect Database Program” is one of the most effective ways great direct marketers can achieve scale — though not all prospecting databases and solutions are created equal.

A great prospecting database program requires creating a statistical advantage in targeting individuals who don’t already know your brand, or don’t already buy your brand. That advantage is critical if the program is to become cost-effective. Marketers who have engaged in structured prospecting know how challenging it is.

A prospect database program uses data about your very best existing customers: What they bought, when, how much and at what frequency. And it connects that transaction data to oceans of other data about those individuals. That data is then used to test which variables are, in fact, more predictive. They will come back in three categories: Those you might have “guessed” or “known,” those you guessed but proved less predictive than you might have thought, and those that are simply not predictive for your customer.

Repeated culling of that target is done through various statistical methods. What we’re left with is a target where we can begin to predict what the range of response looks like before we start. As the marketer, you can be more aggressive or conservative in the final target definition and have a good sense as to how well it will convert prospects in the target to new customers. This has a powerful effect on your ability to intelligently invest in customer acquisition, and is very effective — when done well — at achieving scale.

  • Methodically ID Your VIPs — and VVIPs to Distinguish Your ‘Gold’ Customers. It doesn’t matter what business you are in. Every business has “Gold” Customers — a surprisingly small percentage of customers that generate up to 80 percent of your revenue and profit.

With a smarter marketing database, you can easily identify these customers who are so crucial to your business. Once you have them, you can develop programs to retain and delight them. Here’s the “trick” though — don’t just personalize the website and emails to them. Don’t give them a nominally better offer. Instead, invest resources that you simply cannot afford to spend on all of your customers. When the level of investment in this special group begins to raise an eyebrow, you know for certain you are distinguishing that group, and wedding them to your brand.

Higher profits come from leveraging this target to retain the best customers, and motivating higher-potential customers who aren’t “Gold” Customers yet to move up to higher “status” levels. A smart marketing database can make this actionable. One strategy we use is IDing not only the VIPs, but the VVIPs (very, very important customers). Think about it: How would you feel being told you’re a “VVIP” by a brand that matters to you? You are now special to the brand — and customers who feel special tend not to shop with many other brands — a phenomenon also known as loyalty. So if you’d like more revenues from more loyal customers, resolve to use your data to ID which customers are worth investing in for a more loyal relationship.
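For illustration, here is a minimal sketch of flagging “Gold” customers by cumulative revenue share, with a small VVIP slice on top. The 80 percent cutoff and the top-1-percent VVIP rule are placeholder choices, not a prescription.

```python
# A minimal sketch of flagging "Gold" customers by cumulative revenue share.
# The 80 percent cutoff and the top-1-percent VVIP slice are illustrative choices.
customers = [
    {"id": 1, "revenue": 12_000}, {"id": 2, "revenue": 5_500}, {"id": 3, "revenue": 900},
    {"id": 4, "revenue": 450},    {"id": 5, "revenue": 150},
]

ranked = sorted(customers, key=lambda c: c["revenue"], reverse=True)
total = sum(c["revenue"] for c in ranked)
running = 0.0
for rank, c in enumerate(ranked, start=1):
    c["gold"] = running / total < 0.80      # still inside the group driving ~80% of revenue
    running += c["revenue"]
    c["vvip"] = c["gold"] and rank <= max(1, round(0.01 * len(ranked)))  # top 1% by revenue

print([(c["id"], c["gold"], c["vvip"]) for c in ranked])
# [(1, True, True), (2, True, False), (3, False, False), (4, False, False), (5, False, False)]
```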

  • Target Customers Based on Their Next Most Likely Purchase. What if you knew when your customer was most likely to buy again? To determine the next most likely purchase, an analytics-optimized database is used to determine when customers in each segment usually buy and how often.

Once we have that purchase pattern calculated, we can ID customers who are not buying when others who have acted (bought) similarly are buying. It is worth noting there is a more strategic opportunity here to focus on these customers: when they “miss” a purchase, it is usually because they are spending with a competitor. “Next Most Likely Purchase” models help you to target that spending before it’s “too late.”

The approach requires building a model that is statistically validated and then tested. Once that’s done, we have a capability that is consistently very powerful.
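As a toy version of the timing idea (not the validated model described above), the sketch below learns a typical purchase interval for a segment and flags customers who have gone quiet well past it. The dates, the median rule, and the 1.5x grace factor are all invented for illustration.

```python
from datetime import date
from statistics import median

# A toy sketch of the "next most likely purchase" timing idea: learn the typical
# purchase interval for a segment, then flag customers who have gone quiet past it.
# A production version would be a statistically validated and tested model.
purchase_dates = {
    "cust_a": [date(2016, 1, 10), date(2016, 3, 12), date(2016, 5, 9)],
    "cust_b": [date(2016, 2, 1),  date(2016, 4, 2),  date(2016, 6, 1)],
    "cust_c": [date(2016, 1, 5),  date(2016, 3, 1)],
}

def intervals(dates):
    return [(b - a).days for a, b in zip(dates, dates[1:])]

segment_interval = median(d for dates in purchase_dates.values() for d in intervals(dates))

today = date(2016, 8, 1)
overdue = {
    cust: (today - dates[-1]).days
    for cust, dates in purchase_dates.items()
    if (today - dates[-1]).days > 1.5 * segment_interval  # 1.5x is an arbitrary grace factor
}
print(f"typical interval: {segment_interval} days; overdue: {overdue}")
```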

  • Target Customers Based on Their Next Most Likely Product or Category. We can determine the product a customer is most likely to buy “next.” An analytics-ready marketing database (not the same as a CRM or IT warehouse/database) is used to zero-in on the customers who bought a specific product or, more often, in a specific category or subcategory, by segment.

Similar to the “Next Most Likely Purchase” models, these models are used to find “gaps” in what was bought, as like-consumers tend to behave similarly when viewed in large enough numbers. When there is one of these gaps, it’s often because they bought the product from a competitor, or found an acceptable substitute — trading either up or down. When you target based upon what they are likely to buy at the right time, you can materially increase conversion across all consumers in your database.
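Here is a toy sketch of that gap logic: within a segment, surface the categories most peers have bought that a given customer has not. The segment, the categories, and the 40 percent popularity threshold are invented; a production version would rely on proper modeling, as described above.

```python
from collections import Counter

# A toy sketch of category gap analysis: within a segment, surface the categories
# most peers buy that a given customer hasn't. The 40 percent threshold is arbitrary.
segment_purchases = {
    "cust_1": {"running_shoes", "apparel", "nutrition"},
    "cust_2": {"running_shoes", "apparel", "gps_watch", "nutrition"},
    "cust_3": {"running_shoes", "nutrition", "gps_watch"},
    "cust_4": {"running_shoes", "apparel"},
}

n = len(segment_purchases)
category_share = Counter(cat for cats in segment_purchases.values() for cat in cats)
popular = {cat for cat, count in category_share.items() if count / n >= 0.4}

def next_likely_categories(customer_id):
    """Popular categories in the segment that this customer has not bought yet."""
    bought = segment_purchases[customer_id]
    return sorted(popular - bought, key=lambda c: -category_share[c])

print(next_likely_categories("cust_4"))  # ['nutrition', 'gps_watch']
```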

  • Develop or Improve Your Customer Segmentation. Smart direct marketing database software is required to store all of the information and be able to support queries and actions that it will take to improve segmentation.

This is an important point, as databases tend to be purpose-specific. That is, a CRM database might be well-suited for individual communications and maintaining notes and histories about individual customers, but it’s probably not designed to perform the kind of queries required, or structure your data to do statistical target definition that is needed in effectively acquiring large numbers of new customers.

Successful segmentation must be done in a manner that helps you both understand your existing customers and their behaviors, lifestyles and most basic makeup — and be able to help you acquire net-new customers, at scale. Success, of course, comes from creating useful segments, and developing customer marketing strategies for each segment.

3 Database Marketing Strategy Takeaways

An old friend in business called me to share her “crisis of confidence” in using her pricey new database/CRM/analytics system. She had led the organization through a major investment in “cleaning, compiling and organizing” the data to make it more “usable.” It was a herculean task, and she was proud of her accomplishment – but she was struggling to produce a material outcome beyond project completion.

The Business Problem
After wrestling with their data and building reports for another six months, there was a sinking feeling, one you may have experienced yourself: For all the effort, where was this going? How is it driving the business? Are we making better decisions because of it, or are our decisions just different? How will we justify the investment and produce returns? The catch-all “infrastructure spending” was tossed around briefly.

After looking at a handful of reports and documents, I had many questions for her. Not surprisingly for an exceptionally successful executive, she gave me fair and honest answers. One response she used more than once was, “We don’t know.” That’s not trivial. In many organizations, it’s risky to think it, much less say it. Sometimes the best answer is “I don’t know” – but surrounded by data and smart people, it’s not unreasonable for some folks to feel uncomfortable with a candid “I don’t know.”

Get Comfortable with ‘Not Knowing’
Just saying “I don’t know” can be the first step in solving the problem. As long as you’re wedded to “just knowing,” you can’t even take the next step, which is problem definition, because you don’t have a problem to define if you “just know” something is good, important or working, data and evidence aside. Take the challenge: I guarantee this small act will spark more ideas, action and solutions for the de minimis time it takes than anything else you can do.

Take Away No. 1:
If you don’t know something, say so. Say it out loud, even. It will help in emotionally and logically moving on to defining the problem.

By now, I’m probably close to losing a few folks who are reading this. “I don’t know” is not something they’re comfortable with. If you are one of them, remember: “I don’t know” is not where the process ends – it’s very often where the solution we’re hungry for begins to reveal itself.

Problem Definition: It’s 90 Percent of the Problem
In discussing what the problem really was, we found another common issue: the problem wasn’t well-defined in the first place. The problem was essentially to “clean up the data” and to “have organization.” While that was a good thing to do, it didn’t solve any business problem. The result was a big (and expensive) bucket of data that was judged to be better, or more valuable, than it was before.

How was it better? The answer was that it was more organized and cleaner. Did it answer any specific questions? After looking closer, we saw that it did. But did it actually begin to solve any specific problem? That was less obvious.

Here is where problem definition is so important. The problems that were defined as “organization” and “cleaning” weren’t business problems. They were symptoms of a data capture process that didn’t work, and that process came from a lack of a clear strategy.

Boiling the Ocean: Solutions That Are All Things
The specific problems being experienced were many and diverse. Focus was low. This was, in large part, a solution that was intended to do all things for all people.

I’m a direct marketer. I started my career in software development. I have a great appreciation for large systems and for what is commonly known as the “data warehouse” – a large database system that often starts with financial system data. Warehouse solutions often contain every cost in the enterprise, every operations metric, inventory, logistics, marketing, human resources and more.

But these are not “function-specific” solutions. In the vast majority of cases they are starting points, not solutions to the problem the marketer has in selling one more widget. Those solutions need to be born of a very specific set of marketing problems, and they must use a specific set of data, in a specific format and data model, to actually solve them. A marketing-specific solution would likely need substantial transformation if derived from that “warehouse” solution – and when complete, it would be virtually a whole new dataset.

Take Away No. 2:
Ask yourself, are we taking a “Boil The Ocean” approach?

After some discussion, we agreed that she had surely accomplished a lot, and that the organization could now access and view data about many things, including marketing. But no specific capabilities were in place to speed the time-to-value, and it was hard to make progress. Also, the data my friend’s organization quite reasonably thought was its most important failed to highlight the huge differences in customer value over the longer term. That created a strategic problem: the organization was trying to fix its long-term, strategic business problems by looking at the wrong data and taking the wrong actions. And these are very bright people with a compelling rationale for their course of action.

In the end, “Boil The Ocean” approaches are short on strategy, or are built on a strategy so grandiose that they become difficult or impossible to execute.

The Root Causes of These Strategic Challenges
So how do such quality organizations go down an inefficient path like this? It ultimately comes down to a skills gap. What must change? The skill set in marketing.

In the digital age, there are two major skill sets that we must buy, hire or develop in our organizations. Neither is trivial in marketing, and neither is possible without patience and focus.

Skill Set No. 1: Technology, Logic, Data
Marketers have traditionally come from a promotional and creative background. The big idea was always the highest-valued commodity. Today, things are changing faster – and permanently.

Marketers today are consistently spending more of their time with technologists, developers and data designers. The logical problem-solving skills of these folks are very different from those of professionals with a creative or project management background. They need to solve problems that are not even being discussed on the way to solving the problems that are.

Because most organizations have some expertise with technology, and work with technology providers, the key takeaway here is that marketing-data-specific applications require a different kind of technical experience. Working with marketing data for marketing outcomes is unlike working with other types of data – the experience your IT department has with finance or logistics data isn’t as useful as “purpose-specific” marketing data and technology experience.

Skill Set No. 2: Math
Aside from some advanced direct marketers, marketers typically don’t come from a math background. Only now are VPs beginning to arrive with development, math and statistics experience. In an age of analytics, and with the advent of tools and technologies to leverage large data sets, a solid understanding of math and basic statistics is becoming increasingly important.

Here’s an example to help make the point about the comfort level of using math and basic statistics to think about data:

You’re looking at the incomes and affluence of a customer base. With 1,000 members in the group, we have an average income of $100,000. That’s pretty telling, you might say.

We get more data, and the 1,001st customer is added to the sample – it’s Warren Buffett (net worth $67 billion). How useful is that average now?

There are many expressions for this common scenario, where outliers in your data skew your numbers: “the average lies,” or “the tyranny of averages.” Surely, the average isn’t the best number here – though it’s a shortcut and a starting point. It’s best to compare it to the median before putting too much faith in it, and a distribution histogram might tell an even better story about the composition of your customers’ incomes.

Doing so will show whether the incomes follow a Gaussian distribution (AKA, a normal distribution) – and you can imagine what that histogram would look like with Warren Buffett added to the sample.
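
If you want to see the arithmetic, this short sketch simulates the scenario with made-up data: 1,000 incomes averaging about $100,000, then one extreme outlier added.

    import numpy as np

    # 1,000 hypothetical customers with incomes around $100,000.
    rng = np.random.default_rng(0)
    incomes = rng.normal(loc=100_000, scale=20_000, size=1_000)
    print(f"mean: {incomes.mean():,.0f}   median: {np.median(incomes):,.0f}")

    # Add one extreme outlier (the $67 billion figure from the example).
    with_outlier = np.append(incomes, 67_000_000_000)
    print(f"mean: {with_outlier.mean():,.0f}   median: {np.median(with_outlier):,.0f}")

A single record drags the mean to roughly $67 million, while the median stays near $100,000 – which is exactly why the median and the histogram are worth a look before trusting the average.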

The takeaway here is that the concepts required to evaluate and think about data call for experienced, trained analysts – and specifically, analysts trained and experienced in evaluating marketing data.

In the end, in the scenario I described with my friend and client at the beginning of this column, her organization built a very large system and finished it on time and roughly on budget. But what the company invested in and created had some fundamental shortcomings. It was not a “purpose-specific” marketing solution – it was conceived by a competent IT organization that was tactically adept and strategically adrift.

Takeaway No. 3: Marketing Must Drive Marketing Outcomes
Marketing must drive marketing outcomes. Due to the discomfort that marketing often has with technology, math and statistics, key strategic decisions are quietly left to IT, a vendor or to chance.

This, of course, jeopardizes the results early. Marketing leadership can always ask good business and marketing questions and hold IT and technical resources accountable to prescribe only solutions that have a clear and simple strategy for achieving those goals.

The Bottom Line
Begin your database marketing endeavor with the “end in mind” by describing what success would look like in business terms. The decisions you’ll need to make will be easier.

If the discussions turn technical early, and they often do, ask business- and outcome-oriented questions to steer the conversation back on track. Maintain a focused strategy from the start and don’t let inexperience with math, programming or database science derail you. Get the help you need – and that means individuals and teams who not only know their discipline, but can answer the questions that matter most to you.

When you follow these strategic guideposts, your next big data-driven initiative will pay dividends now and into the future.