Understanding What a Customer Data Platform Needs to Be

Modern-day marketers try to achieve holistic personalization through all conceivable channels in order to stand out among countless marketing messages hitting targeted individuals every day, if not every hour. If the message is not clearly about the target recipient, it will be quickly dismissed.

So, how can marketers achieve such an advanced level of personalization? First, we have to figure out who each target individual is, which requires data collection: What they clicked, rejected, browsed, purchased, returned, repeated, recommended, look like, complained about, etc.  Pretty much every breath they take, every move they make (without being creepy). Let’s say that you achieved that level of data collection. Will it be enough?

Enter “Customer-360,” or “360-degree View of a Customer,” or “Customer-Centric Portrait,” or “Single View of a Customer.” You get the idea. Collected data must be consolidated around each individual to get a glimpse — never the whole picture — of who the targeted individual is.

You may say, “That’s cool, we just procured technology (or a vendor) that does all that.” But considering there is no CRM database or CDP (Customer Data Platform) company that does not use one of the terms I listed above, buyers of technology often buy into the marketing pitch.

Unfortunately, the 360-degree view of a customer is just a good start in this game, and a prerequisite — not the end goal of any marketing effort. The goal of any data project should never be just putting all available data in one place. It must support a great many complex and laborious functions during the course of planning, analysis, modeling, targeting, messaging, campaigning, and attribution.
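To make "consolidating data around each individual" concrete, here is a minimal sketch in Python. All record contents and field names (`customer_id`, `channel`, `action`, `amount`) are invented for illustration; a real CDP does this at scale across far messier sources:

```python
from collections import defaultdict

# Hypothetical event-level records from different channels (illustrative only)
events = [
    {"customer_id": "C1", "channel": "web",   "action": "click",    "amount": 0.0},
    {"customer_id": "C1", "channel": "store", "action": "purchase", "amount": 120.0},
    {"customer_id": "C2", "channel": "email", "action": "open",     "amount": 0.0},
    {"customer_id": "C1", "channel": "web",   "action": "purchase", "amount": 45.0},
]

def build_customer_360(events):
    """Consolidate event-level records into one profile per customer."""
    profiles = defaultdict(lambda: {"channels": set(), "purchases": 0, "total_spend": 0.0})
    for e in events:
        p = profiles[e["customer_id"]]
        p["channels"].add(e["channel"])
        if e["action"] == "purchase":
            p["purchases"] += 1
            p["total_spend"] += e["amount"]
    return dict(profiles)

profiles = build_customer_360(events)
print(profiles["C1"])  # channels seen, purchase count, total spend for C1
```

Even this toy version shows the shift in grain: the inputs describe events, but the output describes a person.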

So, for the interest of marketers, allow me to share the essentials of what a CDP needs to be and do, and what the common elements of useful marketing databases are.

A CDP Must Cover Omnichannel Sources

By definition, a CDP must support all touchpoints in an omnichannel marketing environment. No modern consumer lingers around just in one channel. The holistic view cannot be achieved by just looking at their past transaction history, either (even though the past purchase behavior still remains the most powerful predictor of future behavior).

Nor do marketers have time to wait until someone buys something through a particular channel for them to take actions. All movements and indicators — as much as possible — through every conceivable channel should be included in a CDP.

Yes, some data evaporates faster than others — such as browsing history — but we are talking about a game of inches here.  Besides, data atrophy can be delayed with proper use of modeling techniques.

Beware of vendors who want to stay in their comfort zone in terms of channels. No buyer is just an online or an offline person.

Data Must Be Connected on an Individual Level

Since buyers go through all kinds of online and offline channels during the course of their journey, collected data must be stitched together to reveal their true nature. Unfortunately, in this channel-centric world, characteristics of collected data are vastly different depending on sources.

Privacy concerns and regulations regarding Personally Identifiable Information (PII) greatly vary among channels. Even if PII is allowed to be collected, there may not be any common match key, such as address, email, phone number, cookie ID, device ID, etc.

There are third-party vendors who specialize in such data weaving work. But remember that no vendor is good with all types of data. You may have to procure different techniques depending on available channel data. I’ve seen cases where great technology companies that specialized in online data were clueless about “soft-match” techniques used by direct marketers for ages.

Remember, without an accurate and consistent individual ID system, one cannot even start building a true Customer-360 view.
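As a toy illustration of what that stitching involves, the sketch below links records that share a normalized email or phone match key. All record contents are invented, and production identity resolution uses far more robust techniques (union-find clustering, fuzzy and "soft" matching):

```python
def normalize_email(email):
    return email.strip().lower() if email else None

def normalize_phone(phone):
    digits = "".join(ch for ch in (phone or "") if ch.isdigit())
    return digits[-10:] if len(digits) >= 10 else None

def stitch(records):
    """Assign a shared person ID to records that share any normalized match key."""
    key_to_id, next_id, resolved = {}, 0, []
    for rec in records:
        keys = [k for k in (normalize_email(rec.get("email")),
                            normalize_phone(rec.get("phone"))) if k]
        pid = next((key_to_id[k] for k in keys if k in key_to_id), None)
        if pid is None:
            pid, next_id = next_id, next_id + 1
        for k in keys:
            key_to_id[k] = pid
        resolved.append({**rec, "person_id": pid})
    return resolved

records = [
    {"email": "Jane@X.com",  "phone": None,             "source": "web"},
    {"email": "jane@x.com ", "phone": "212-555-0101",   "source": "store"},
    {"email": None,          "phone": "(212) 555-0101", "source": "call_center"},
]
linked = stitch(records)  # all three resolve to the same person_id
```

Note one limitation of this greedy single pass: two clusters first seen under different keys are never merged later, which is exactly why real vendors earn their keep here.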

Data Must Be Clean and Reliable

You may think that I am stating the obvious, but you must assume that most data sources are dirty. There is no pristine dataset without a serious amount of data refinement work. And when I say dirty, I mean that databases are filled with inaccurate, inconsistent, uncategorized, and unstructured data. To be useful, data must be properly corrected, purged, standardized, and categorized.

Even simple time-stamps could be immensely inconsistent. What are date-time formats, and what time zones are they in?  Dollars aren’t just dollars either. What are net price, tax, shipping, discount, coupon, and paid amounts? No, the breakdown doesn’t have to be as precise as for an accounting system, but how would you identify habitual discount seekers without dissecting the data up front?
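Here is a small sketch of the kind of up-front normalization this implies, using only Python's standard library. The formats, UTC offsets, and order fields are illustrative assumptions, not a prescription:

```python
from datetime import datetime, timezone, timedelta

def to_utc(ts, fmt, utc_offset_hours):
    """Parse a local-time string in a known format and convert it to UTC."""
    local = datetime.strptime(ts, fmt).replace(
        tzinfo=timezone(timedelta(hours=utc_offset_hours)))
    return local.astimezone(timezone.utc)

# Two source systems, two formats, two time zones -- the same moment
a = to_utc("2023-06-01 14:30:00", "%Y-%m-%d %H:%M:%S", -5)  # source logging at UTC-5
b = to_utc("06/01/2023 11:30", "%m/%d/%Y %H:%M", -8)        # source logging at UTC-8
assert a == b  # only true because both were normalized first

def paid_amount(order):
    """Reconcile the dollar breakdown into a single paid amount."""
    return (order["net_price"] + order["tax"] + order["shipping"]
            - order["discount"] - order["coupon"])

order = {"net_price": 100.0, "tax": 8.0, "shipping": 5.0,
         "discount": 10.0, "coupon": 5.0}
```

Keeping the discount and coupon fields separate, rather than folding them into one paid figure, is precisely what lets you spot those habitual discount seekers later.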

When it comes to free-form data, things get even more complicated. Let’s just say that most non-numeric data are not that useful without proper categorization, through strict rules along with text mining. And such work should all be done up front. If you don’t, you are simply deferring more tedious work to poor analysts, or worse, to the end-users.

Beware of vendors who think that loading the raw data onto some table is good enough. It never is, unless the goal is to hoard data.

Data Must Be Up-to-Date

“Real-time update” is one of the most abused terms in this business. And I don’t casually recommend it, unless decisions must be made in real-time. Why? Because, generally speaking, more frequent updates mean higher maintenance cost.

Nevertheless, real-time update is a must, if we are getting into fully automated real-time personalization. It is entirely possible to rely on trigger data for reactive personalization outside the realm of the CDP environment, but such patchwork will lead to regrets most of the time. For one, how would you figure out what elements really worked?

Even if a database is not updated in real-time, most source data must remain as fresh as they can be. For instance, it is generally not recommended to append third-party demographic data real-time (except for “hot-line” data, of course). But that doesn’t mean that you can just use old data indefinitely.

When it comes to behavioral data, time really is of the essence. Click data must be updated at least daily, if not in real-time. Transaction data may be updated weekly, but don’t go over a month without updating the base, as even simple measurements like “Days since last purchase” can be way off. You all know the importance of the good old recency factor in any metrics.
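That "Days since last purchase" measurement is trivial to compute, which is exactly why it is so embarrassing when a stale base gets it wrong. A minimal sketch, with invented dates:

```python
from datetime import date

def days_since_last_purchase(purchase_dates, as_of):
    """Recency: days between a customer's latest purchase and the as-of date."""
    return (as_of - max(purchase_dates)).days if purchase_dates else None

history = [date(2023, 11, 2), date(2024, 1, 15)]
recency = days_since_last_purchase(history, as_of=date(2024, 2, 14))
print(recency)  # -> 30
```

If the base hasn't been refreshed in a month, every one of these recency figures is off by up to 30 days, and any RFM-style metric built on top of it inherits the error.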

Data Must Be Analytics-Ready

Just because the data in question are clean and error-free, that doesn’t mean that they are ready for advanced analytics. Data must be carefully summarized onto an individual level, in order to convert “event level information” into “descriptors of individuals.” The presence of summary variables is a good indicator of a true Customer-360.

You may have all the click, view, and conversion data, but those are all descriptors of events, not people. For personalization, you need to know individual-level affinities (you may call them “personas”). For planning and messaging, you may need to group target individuals into segments or cohorts. All those analytics run much faster and more effectively with analytics-ready data.
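The conversion from event descriptors to people descriptors can be sketched in a few lines. The categories and click records below are made up; real affinity scoring would weight by recency and use modeled, not raw, shares:

```python
from collections import Counter

# Hypothetical event-level click stream: (customer_id, product_category)
clicks = [
    ("C1", "outdoor"), ("C1", "outdoor"), ("C1", "electronics"),
    ("C2", "beauty"),  ("C2", "beauty"),
]

def category_affinities(clicks):
    """Convert event-level clicks into individual-level affinity shares."""
    per_person = {}
    for cust, cat in clicks:
        per_person.setdefault(cust, Counter())[cat] += 1
    return {cust: {cat: n / sum(c.values()) for cat, n in c.items()}
            for cust, c in per_person.items()}

aff = category_affinities(clicks)
# C1 leans outdoor (2 of 3 clicks); C2 is all beauty
```

Once these shares live on the customer record as summary variables, a persona like "outdoor enthusiast" becomes a simple lookup instead of a nightly scan of the raw event log.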

If not, even simple modeling or clustering work may take a very long time, even with a decent data platform in place. It is routinely quoted that over 80% of analysts’ time goes into data preparation work — how about cutting that down to zero?

Most modern toolsets come with some analytics functions, such as KPI dashboards, basic queries, and even segmentation and modeling. However, for advanced level targeting and messaging, built-in tools may not be enough. You must ask how the system would support professional statisticians with data extraction, sampling, and scoring (on the backend). Don’t forget that most analytics work fails before or after the modeling steps. And when any meltdown happens, do not habitually blame the analysts, but dig deeper into the CDP ecosystem.

Also, remember that even automated modeling tools work much better with refined data on a proper level (i.e., Individual level data for individual level modeling).

CDP Must Be Campaign-Ready

For campaign execution, selected data may have to leave the CDP environment. Sometimes data may end up in a totally different system. A CDP must never be the bottleneck in data extraction and exchange. But in many cases, it is.

Beware of technology providers that only allow built-in campaign toolsets for campaign execution. You never know what new channels or technologies will spring up in the future. While at it, check how many different data exchange protocols are supported. Data going out is as important as data coming in.

CDP Must Support Omnichannel Attribution

Speaking of data coming in and out, CDPs must be able to collect campaign result data seamlessly, from all employed channels.  The very definition of “closed-loop” marketing is that we must continuously learn from past endeavors and improve effectiveness of targeting, messaging, and channel usage.

Omnichannel attribution is simply not possible without data coming from all marketing channels. And if you do not finish the backend analyses and attribution, how would you know what really worked?

The sad reality is that a great majority of marketers fly blind, even with a so-called CDP of their own. If I may be harsh here, you are not a database marketer if you are not measuring the results properly. A CDP must make complex backend reporting and attribution easier, not harder.

Final Thoughts

For a database system to be called a CDP, it must satisfy most — if not all — of these requirements. It may be daunting for some to read through this, but doing your homework in advance will make it easier for you in the long run.

And one last thing: Do not work with any technology providers that are stingy about custom modifications. Your business is unique, and you will have to tweak some features to satisfy your unique needs. I call that the “last-mile” service. Most data projects that are labeled as failures ended up there due to a lack of custom fitting.

Conversely, what we call “good” service providers are the ones who are really good at that last-mile service. Unless you are comfortable with one-size-fits-all pre-made — but cheaper — toolset, always insist on customizable solutions.

You didn’t think that this whole omnichannel marketing was that simple, did you?

How Well Do You Know Your Customer Data?

Some marketers seem to keep their distance from customer data. When I ask what kind of customer information they are working with, I hear things like, “Oh, Mary is in charge of our data. I leave it to her.” This is unfortunate. I realize that the marketing profession may attract people who prefer to focus on “softer” functions like research, competitive strategy, and value propositions. But these days, it’s a real disadvantage, professionally and personally, to shun data. So, let me offer some painless steps to up your comfort level.

In this context, I am thinking about customer data at its most basic level: the customer or prospect record, which is usually found in a marketing database or a CRM system. This record contains the contact information, and descriptive and behavioral data elements we know about the customer. For B2B marketers, it will describe the account as well as the individual contacts. This subject arose in my mind recently as I read Steven Hayes’s interesting article called “Do Marketers Really Want to be Data Scientists?” in Oracle’s Modern Marketing blog. Hayes correctly concluded that marketers don’t need to do the science — build the models, run the experiments — but they do need to be familiar with the variables that drive customer behavior, in order to apply the science to marketing decision-making.

So, it behooves marketers to be deeply familiar with the customer record, which is where these variables are housed. This information serves as what I like to call “the recorded memory of the customer relationship,” and it reveals all sorts of insights into the nature of the customers and competitors, what they value, and how to communicate with them effectively.

So, how do you get familiar with your customer records, and make them your friends? Here are three steps to consider.

1. Take Mary — or whoever manages your customer data — out to lunch. Demonstrate your interest in understanding her world, her challenges and her interests. This puts a personal face to the data, and also makes Mary an ally and mentor in your quest.

2. Examine a handful of customer records. You’ll find all kinds of interesting things: What do we know about this person? Any ideas on how better to communicate and sell to him/her? How complete and accurate is the record? What additional data would help you develop even better ideas on how to treat the customer? Set an hour on your calendar every quarter or so, to repeat the process, becoming familiar with records from various types of customers and prospects.

3. Launch an initiative to develop a data strategy for your department or your company as a whole. This means a written policy that identifies the data elements you should collect on each customer, where each element will come from, and how you will use it to drive business value.

I guarantee, if you dive into the data records, your comfort level will rise dramatically. And so will your insight, and your skill as a marketer.

A version of this article appeared in Biznology, the digital marketing blog.

5 Data-Driven Marketing Catalysts for 2016 Growth

The new year tends to bring renewal and the promise of doing something new, better and smarter. I get a lot of calls looking for ideas and strategies to help improve the focus and performance of marketers’ plans and businesses. What most organizations are looking for is one or more actionable marketing catalysts in their business.

To help you accelerate your thinking, here is a list of those catalysts that have something for everyone, some of which can be great food for thought as you tighten up plans. This year, you will do well if you resolve to do the following five things:

  • Build a Scalable Prospect Database Program. Achieving scale in your business is perhaps the greatest challenge we face as marketers. Those who achieve scale on their watch are the most sought-after marketing pros in their industries — because customer acquisition is far from cheap and competition grows fiercer as customers grow more demanding and promiscuous. A scientifically designed “Prospect Database Program” is one of the most effective ways great direct marketers can achieve scale — though not all prospecting databases and solutions are created equally.

A great prospecting database program requires creating a statistical advantage in targeting individuals who don’t already know your brand, or don’t already buy your brand. That advantage is critical if the program is to become cost-effective. Marketers who have engaged in structured prospecting know how challenging it is.

A prospect database program uses data about your very best existing customers: What they bought, when, how much and at what frequency. And it connects that transaction data to oceans of other data about those individuals. That data is then used to test which variables are, in fact, more predictive. They will come back in three categories: Those you might have “guessed” or “known,” those you guessed but proved less predictive than you might have thought, and those that are simply not predictive for your customer.

Repeated culling of that target is done through various statistical methods. What we’re left with is a target where we can begin to predict what the range of response looks like before we start. As the marketer, you can be more aggressive or conservative in the final target definition and have a good sense as to how well it will convert prospects in the target to new customers. This has a powerful effect on your ability to intelligently invest in customer acquisition, and is very effective — when done well — at achieving scale.

  • Methodically ID Your VIPs — and VVIPs to Distinguish Your ‘Gold’ Customers. It doesn’t matter what business you are in. Every business has “Gold” Customers — a surprisingly small percentage of customers that generate up to 80 percent of your revenue and profit.

With a smarter marketing database, you can easily identify these customers who are so crucial to your business. Once you have them, you can develop programs to retain and delight them. Here’s the “trick” though — don’t just personalize the website and emails to them. Don’t give them a nominally better offer. Instead, invest resources that you simply cannot afford to spend on all of your customers. When the level of investment in this special group begins to raise an eyebrow, you know for certain you are distinguishing that group, and wedding them to your brand.

Higher profits come from leveraging this target to retain the best customers, and motivating higher potential customers who aren’t “Gold” Customers yet to move up to higher “status” levels. A smart marketing database can make this actionable. One strategy we use is IDing not only the VIPs, but also the VVIPs (very, very important customers). Think about it: how would you feel being told you’re a “VVIP” by a brand that matters to you? You are now special to the brand — and customers who feel special tend not to shop with many other brands — a phenomenon also known as loyalty. So if you’d like more revenues from more loyal customers, resolve to use your data to ID which customers are worth investing in a more loyal relationship.

  • Target Customers Based on Their Next Most Likely Purchase. What if you knew when your customer was most likely to buy again? To determine the next most likely purchase, an analytics-optimized database is used to determine when customers in each segment usually buy and how often.

Once we have that purchase pattern calculated, we can ID customers who are not buying when others who have acted (bought) similarly are buying. It is worth noting that there is a more strategic opportunity here to focus on these customers: when they “miss” a purchase, it is usually because they are spending with a competitor. “Next Most Likely Purchase” models help you to target that spending before it’s “too late.”

The approach requires building a model that is statistically validated and then tested. Once that’s done, we have a capability that is consistently very powerful.
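The intuition behind such a model can be sketched very simply: estimate each customer's typical purchase cadence and flag anyone who has gone quiet for much longer than that. This is only the intuition — the article's point stands that a production model must be statistically built, validated, and tested; the dates and the 1.5x slack factor below are illustrative assumptions:

```python
from datetime import date
from statistics import median

def is_overdue(purchase_dates, as_of, slack=1.5):
    """Flag a customer whose gap since last purchase exceeds slack x their typical cadence."""
    ds = sorted(purchase_dates)
    if len(ds) < 3:
        return False  # not enough history to estimate a cadence
    gaps = [(b - a).days for a, b in zip(ds, ds[1:])]
    typical = median(gaps)
    return (as_of - ds[-1]).days > slack * typical

monthly_buyer = [date(2024, 1, 1), date(2024, 2, 1), date(2024, 3, 1)]
print(is_overdue(monthly_buyer, as_of=date(2024, 5, 1)))   # two months silent
print(is_overdue(monthly_buyer, as_of=date(2024, 3, 20)))  # still within cadence
```

A customer who buys monthly and has been silent for two months is flagged; the same silence from a quarterly buyer would not be.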

  • Target Customers Based on Their Next Most Likely Product or Category. We can determine the product a customer is most likely to buy “next.” An analytics-ready marketing database (not the same as a CRM or IT warehouse/database) is used to zero-in on the customers who bought a specific product or, more often, in a specific category or subcategory, by segment.

Similar to the “Next Most Likely Purchase” models, these models are used to find “gaps” in what was bought, as like-consumers tend to behave similarly when viewed in large enough numbers. When there is one of these gaps, it’s often because they bought the product from a competitor, or found an acceptable substitute — trading either up or down. When you target based upon what they are likely to buy at the right time, you can materially increase conversion across all consumers in your database.

  • Develop or Improve Your Customer Segmentation. Smart direct marketing database software is required to store all of the information and be able to support queries and actions that it will take to improve segmentation.

This is an important point, as databases tend to be purpose-specific. That is, a CRM database might be well-suited for individual communications and maintaining notes and histories about individual customers, but it’s probably not designed to perform the kind of queries required, or structure your data to do statistical target definition that is needed in effectively acquiring large numbers of new customers.

Successful segmentation must be done in a manner that helps you both understand your existing customers and their behaviors, lifestyles and most basic makeup — and be able to help you acquire net-new customers, at scale. Success, of course, comes from creating useful segments, and developing customer marketing strategies for each segment.

5 Steps to Customer Data Hygiene: It’s Not Sexy, But It’s Essential

Are you happy with the quality of the information in your marketing database? Probably not. A new report from NetProspex confirms: 64 percent of company records in the database of a typical B-to-B marketer have no phone number attached.

Pretty much eliminates phone as a reliable communications medium, doesn’t it?

And 88 percent are missing basic firmographic data, like industry, revenue or employee size—so profiling and segmentation are pretty tough. In fact, the NetProspex report concluded that 84 percent of B-to-B marketing databases are “barely functional.” Yipes. So, what can you do about it?

This is not a new problem. Dun & Bradstreet reports regularly on how quickly B-to-B data degrades. Get this: Every year, in the U.S., business postal addresses change at a rate of 20.7 percent. If your customer is a new business, the rate is 27.3 percent. Phone numbers change at the rate of 18 percent, and 22.7 percent among new businesses. Even company names fluctuate: 12.4 percent overall, and a staggering 36.4 percent among new businesses.
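Those annual rates compound, which is what makes them so alarming. A two-line illustration of the arithmetic, taking the D&B address-change figure at face value:

```python
def share_still_accurate(annual_change_rate, years):
    """Compound decay: fraction of records still accurate after `years`."""
    return (1 - annual_change_rate) ** years

# At a 20.7% annual address-change rate, roughly half of an
# untouched file's postal addresses are stale within three years.
print(round(share_still_accurate(0.207, 3), 3))  # -> 0.499
```

In other words, a file you leave alone for three years is a coin flip at the mailbox.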

No wonder your sales force is always complaining that your data is no good (although they probably use more colorful words).

Here are five steps you can take to maintain data accuracy, a process known as “data hygiene.”

1. Key enter the data correctly in the first place.
Sounds obvious, but it’s often overlooked. This means following address guidelines from the Postal Service (for example, USPS Publication 28), and standardizing such complex things as job functions and company names. But it also means training for your key-entry personnel. These folks are often at the bottom of the status heap, but they are handling one of your most important corporate assets. So give them the respect they deserve.

2. Harness customer-facing personnel to update the data.
Leverage the access of customer-facing personnel to refresh contact information. Train and motivate call center personnel, customer service, salespeople and distributors—anyone with direct customer contact—to request updated information at each meeting. When it comes to sales people, this is an entirely debatable matter. You want sales people selling, not entering data. But it’s worth at least a conversation to see if you can come up with a painless way to extract fresh contact updates as sales people interact with their accounts.

3. Use data-cleansing software, internally or from a service provider, and delete obsolete records.
Use the software tools that are available, which will de-duplicate, standardize and sometimes append missing fields. These won’t correct much—it’s mostly email and postal address standardization—but they will save you time, and they are much cheaper than other methods.
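For a feel of what "de-duplicate and standardize" means in practice, here is a minimal sketch. The field names and rules are illustrative assumptions; commercial cleansing tools apply far richer rule sets (USPS address standards, company-name normalization, and so on):

```python
def standardize(rec):
    """Basic standardization: collapse whitespace, case-fold email, uppercase state."""
    return {
        "name":  " ".join(rec.get("name", "").split()).title(),
        "email": rec.get("email", "").strip().lower(),
        "state": rec.get("state", "").strip().upper(),
    }

def dedupe(records):
    """Keep the first record per standardized email."""
    seen, out = set(), []
    for rec in map(standardize, records):
        if rec["email"] and rec["email"] in seen:
            continue  # duplicate of a record we already kept
        seen.add(rec["email"])
        out.append(rec)
    return out

raw = [
    {"name": "  jane  doe ", "email": "JDoe@Example.com",  "state": "ny"},
    {"name": "Jane Doe",     "email": "jdoe@example.com ", "state": "NY"},
]
clean = dedupe(raw)  # two messy entries collapse into one standardized record
```

Note that standardization must come before de-duplication — the two records above only match once both emails are case-folded and trimmed.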

4. Allow customers access to their records online, so they can make changes.
Consider setting up a customer preference center, where customers can manage the data you have on them, and indicate how they want to hear from you. Offer a premium or incentive, or even a discount, to obtain higher levels of compliance.

5. Outbound phone or email to verify, especially to top customers.
Segment your file, and conduct outbound confirmation campaigns for the highest value accounts. This can be by mail, email or telephone, and done annually. When you have some results, decide whether to put your less valuable accounts through the same process.

Do you have any favorite hygiene techniques to add to my list?

A version of this article appeared in Biznology, the digital marketing blog.

Chicken or the Egg? Data or Analytics?

I just saw an online discussion about the role of a chief data officer, whether it should be more about data or analytics. My initial response to that question is “neither.” A chief data officer must represent the business first. And I had the same answer when such a title didn’t even exist and CTOs or other types of executives covered that role in data-rich environments. As soon as an executive with a seemingly technical title starts representing the technology, that business is doomed. (Unless, of course, the business itself is about having fun with the technology. How nice!)

Nonetheless, if I really have to pick just one out of the two choices, I would definitely pick the analytics over data, as that is the key to providing answers to business questions. Data and databases must be supporting that critical role of analytics, not the other way around. Unfortunately, many organizations are completely backward about it, where analysts are confined within the limitations of database structures and affiliated technologies, and the business owners and decision-makers are dictated to by the analysts and analytical tool sets. It should be the business first, then the analytics. And all databases—especially marketing databases—should be optimized for analytical activities.

In my previous columns, I talked about the importance of marketing databases and statistical modeling in the age of Big Data; not all depositories of information are necessarily marketing databases, and statistical modeling is the best way to harness marketing answers out of mounds of accumulated data. That begs for the next question: Is your marketing database model-ready?

When I talk about the benefits of statistical modeling in data-rich environments (refer to my previous column titled “Why Model?”), I often encounter folks who list reasons why they do not employ modeling as part of their normal marketing activities. If I may share a few examples here:

  • Target universe is too small: Depending on the industry, the prospect universe and customer base are sometimes very small in size, so one may decide to engage everyone in the target group. But do you know what to offer to each of your prospects? Customized offers should be based on some serious analytics.
  • Predictive data not available: This may have been true years back, but not in this day and age. Either there is a major failure in data collection, or collected data are too unstructured to yield any meaningful answers. Aren’t we living in the age of Big Data? Surely we should all dig deeper.
  • 1-to-1 marketing channels not in plan: As I repeatedly said in my previous columns, “every” channel is, or soon will be, a 1-to-1 channel. Every audience is secretly screaming, “Entertain us!” And customized customer engagement efforts should be based on modeling, segmentation and profiling.
  • Budget doesn’t allow modeling: If the budget is too tight, a marketer may opt for a software solution instead of hiring a team of statisticians. Remember that cookie-cutter models out of software packages are still better than someone’s intuitive selection rules (i.e., someone’s “gut” feeling).
  • The whole modeling process is just too painful: Hmm, I hear you. The whole process could be long and difficult. Now, why do you think it is so painful?

Like a good doctor, a consultant should be able to identify root causes based on pain points. So let’s hear some complaints:

  • It is not easy to find “best” customers for targeting
  • Modelers are fixing data all the time
  • Models end up relying on a few popular variables, anyway
  • Analysts are asking for more data all the time
  • It takes too long to develop and implement models
  • There are serious inconsistencies when models are applied to the database
  • Results are disappointing
  • Etc., etc…

I often get called in when model-based marketing efforts yield disappointing results. More often than not, the opening statement in such meetings is that “The model did not work.” Really? What is interesting is that in more than nine times out of 10 cases like that, the models are the only elements that seem to have been done properly. Everything else—from pre-modeling steps, such as data hygiene, conversion, categorization, and summarization; to post-modeling steps, such as score application and validation—often turns out to be the root cause of all the troubles, resulting in pain points listed here.

When I speak at marketing conferences about this subject of a “model-ready” environment, I always ask if there are statisticians and analysts in the audience. Then I ask what percentage of their time goes into non-statistical activities, such as data preparation and remedying data errors. The absolute majority of them say they spend 80 to 90 percent of their time fixing the data, devoting the rest to the model development work. You don’t need me to tell you that something is terribly wrong with this picture. And I am pretty sure that none of those analysts got their PhDs and master’s degrees in statistics to spend most of their waking hours fixing the data. Yeah, I know from experience that, in this data business, the last guy who happens to touch the dataset always ends up being responsible for all errors made to the file thus far, but still. No wonder it is often quoted that one of the key elements of being a successful data scientist is programming skill.

When you provide datasets filled with unstructured, incomplete and/or missing data, diligent analysts will devote their time to remedying the situation and making the best out of what they have received. I myself often tell newcomers that analytics is really about making the best of what you’ve got. The trouble is that such data preparation work calls for a different set of skills that have nothing to do with statistics or analytics, and most analysts are not that great at programming, nor are they trained for it.

Even if they were able to create a set of sensible variables to play with, here comes the bigger trouble; what they have just fixed is just a “sample” of the database, when the models must be applied to the whole thing later. Modern databases often contain hundreds of millions of records, and no analyst in his or her right mind uses the whole base to develop any models. Even if the sample is as large as a few million records (an overkill, for sure) that would hardly be the entire picture. The real trouble is that no model is useful unless the resultant model scores are available on every record in the database. It is one thing to fix a sample of a few hundred thousand records. Now try to apply that model algorithm to 200 million entries. You see all those interesting variables that analysts created and fixed in the sample universe? All that should be redone in the real database with hundreds of millions of lines.

Sure, it is not impossible to include all the instructions of variable conversion, reformat, edit and summarization in the model-scoring program. But such a practice is the No. 1 cause of errors, inconsistencies and serious delays. Yes, it is not impossible to steer a car with your knees while texting with your hands, but I wouldn’t call that the best practice.

That is why marketing databases must be model-ready, where sampling and scoring become routine with minimal data transformation. When I design a marketing database, I always put the analysts at the top of the user list. Sure, non-statistical types will still be able to run queries and reports out of it, but those activities should be secondary, as they are lower-level functions (i.e., simpler and easier) compared to being “model-ready.”

Here is a list of prerequisites for being model-ready (which will be explained in detail in my future columns):

  • All tables linked or merged properly and consistently
  • Data summarized to consistent levels such as individuals, households, email entries or products (depending on the ranking priority by the users)
  • All numeric fields standardized, where missing data and zero values are separated
  • All categorical data edited and categorized according to preset business rules
  • Missing data imputed by standardized set of rules
  • All external data variables appended properly
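
As an illustration of the standardization and imputation items above, here is a minimal Python sketch (the field and record names are hypothetical) that keeps missing values separate from true zeros and imputes the missing ones with a preset rule (the median of observed values, in this case):

```python
from statistics import median

def standardize_field(records, field):
    """Separate missing values from true zeros, then impute and flag.

    Each record gains two derived columns: <field>_imputed (numeric,
    with missing values replaced by the median of observed values)
    and <field>_missing (1 if the original value was absent, else 0).
    """
    observed = [r[field] for r in records if r.get(field) is not None]
    fill = median(observed) if observed else 0
    for r in records:
        missing = r.get(field) is None
        r[field + "_missing"] = 1 if missing else 0
        r[field + "_imputed"] = fill if missing else r[field]
    return records

# A zero purchase amount is real data; a missing one is not.
customers = [
    {"id": 1, "spend_12mo": 120.0},
    {"id": 2, "spend_12mo": 0.0},   # true zero: active, but spent nothing
    {"id": 3, "spend_12mo": None},  # missing: we simply do not know
]
standardize_field(customers, "spend_12mo")
```

The point of the separate flag is that a model can learn different things from “spent nothing” and “unknown,” instead of lumping both into zero.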

Basically, the whole database should be as pristine as the sample datasets that analysts play with. That way, sampling should take only a few seconds, and applying the resultant model algorithms to the whole base would simply be the computer’s job, not some nerve-racking, nail-biting, all-night babysitting suspense for every update cycle.

In my co-op database days, we designed and implemented the core database with this model-ready philosophy, where all samples were presented to the analysts on silver platters, with absolutely no need for fixing the data any further. Analysts devoted their time to pondering target definitions and statistical methodologies. This way, each analyst was able to build about eight to 10 “custom” models—not cookie-cutter models—per “day,” and all models were applied to the entire database with more than 200 million individuals at the end of each day (I hear that they are even more efficient these days). Now, for the folks who are accustomed to a 30-day model implementation cycle (I’ve seen cycles as long as six months), this may sound like total science fiction. And I am not even saying that all companies need to build and implement that many models every day, as that would hardly be a core business for them, anyway.

In any case, this type of practice was in use way before the words “Big Data” were even uttered by anyone, and I would say that such discipline is required even more desperately now. Everyone is screaming for immediate answers to their questions, and the questions should be answered in the form of model scores, which are the most effective and concise summations of all available data. This so-called “in-database” modeling and scoring practice starts with a “model-ready” database structure. In the upcoming issues, I will share the detailed ways to get there.

So, here is the answer to the chicken-or-the-egg question. It is the business posing the questions first and foremost, then the analytics providing answers to those questions, where databases are optimized to support such analytical activities, including predictive modeling. As for the chicken example, with the ultimate goal of all living creatures being procreation of their species, I’d say eggs are just a means to that end. Therefore, for a business-minded chicken, yeah, definitely the chicken before the egg. Not that I’ve seen too many logical chickens.

Cheat Sheet: Is Your Database Marketing Ready?

Many data-related projects end up as big disappointments. And, in many cases, it is because they did not have any design philosophy behind them. Because many folks are more familiar with buildings and cars than geeky databases, allow me to use them as examples here.

Imagine someone started constructing a building without a clear purpose. What is it going to be? An office building or a residence? If residential, for how many people? For a family, or for 200 college kids? Are they going to just eat and sleep in there, or are they going to engage in other activities in it? What is the budget for development and ongoing maintenance?

If someone starts building a house without answering these basic questions, well, it is safe to say that the guy who commissioned such a project is not in the right state of mind. Then again, he may be a filthy rich rock star with some crazy ideas. But let us just say that is an exceptional case. Nonetheless, surprisingly, a great many database projects start out exactly this way.

Just like a house is not just a sum of bricks, mortar and metal, a database is not just a sum of data, and there has to be design philosophy behind it. And yet, many companies think that putting all available data in one place is just good enough. Call it a movie without a director or a building without an architect; you know and I know that such a project cannot end well.

Even when a professional database designer gets involved, too often the project goes out of control—as the business requirement document ends up being a summary of everyone’s wish lists, without any prioritization or filtering. It is a case of a movie without a director. The goal becomes something like “a database that stores all conceivable marketing, accounting and payment activities, handling both prospecting and customer relationship management through all conceivable channels, including face-to-face sales and lead management for big accounts. And it should include both domestic and international activities, and the update has to be done in real time.”

Really. Someone in that organization must have attended a database marketing conference recently to get all that listed. It might be simpler and cheaper to build a 2-ton truck that flies. But before we commission something like this from the get-go, shall we discuss why the truck has to fly, too? For one, if you want real-time updates, do you have a business case for it? (As in, someone in the field must make real-time decisions with real-time data.) Or do you just fancy a large object moving really fast?

Companies that primarily sell database tools often do not help the matter, either. Some promise that the tool sets will categorize all kinds of input data, based on some auto-generated meta-tables. (Really?) The tool will clean the data automatically. (Is it a self-cleaning oven?) The tool will establish key links (by what?), build models on its own (with what target data?), deploy campaigns (every Monday?), and conduct result analysis (with responses from all channels?).

All these capabilities sound really wonderful, but does that system set long- and short-term marketing goals for you, too? Does it understand the subtle nuances in human behaviors and intentions?

Sorry for being a skeptic here. But in such cases, I think someone watched “Star Trek” too much. I have never seen a company that does not regret spending seven figures on a tool set that was supposed to do everything. Do you wonder why? It is not because such activities cannot be automated, but because:

  1. Machines do not think for us (not quite yet); and
  2. Such a system is often very expensive, as it needs to cover all contingencies (the opposite of “goal-oriented” cheaper options).

So it becomes nearly impossible to justify the cost with incremental improvements in marketing efficiency. Even if the response rates double, all related marketing costs go down by a quarter, and revenue jumps up by 200 percent, there are not many companies that can easily justify that kind of spending.

Worse yet, imagine that you just paid 10 times more for some factory-made suit than you would have paid for a custom-made Italian suit. Since when is an automated, cookie-cutter answer more desirable than custom-tailored ones? Ever since computing and storage costs started to go down significantly, and more so in this age of Big Data that has an “everything, all the time” mentality.

But let me ask you again: Do you really have a marketing database?

Let us just say that I am a car designer. A potential customer who has been doing a lot of research on the technology front presents me with a spec for a vehicle that is as big as a tractor-trailer and as quick as a passenger car. I guess that someone really needs to move lots of stuff, really fast. Now, let us assume that it will cost about $8 million or more to build a car like that, and that estimate is without the rocket booster (ah, my heart breaks). If my business model is to take a percentage out of that budget, I would say, “Yeah sure, we can build a car like that for you. When can we start?”

But let us stop for a moment and ask why the client would “need” (not “want”) a car like that in the first place. After some user interviews and prioritization, we may collectively conclude that a fleet of full-size vans can satisfy 98 percent of the business needs, saving about $7 million. If that client absolutely and positively has to get to that extra 2 percent to satisfy every possible contingency in his business and spend that money, well, that is his prerogative, is it not? But I have to ask the business questions first before initiating that inevitable long and winding journey without a roadmap.

Knowing exactly what the database is supposed to be doing must be the starting point. Not “let’s just gather everything in one place and hope to God that some user will figure something out eventually.” Also, let’s not forget that constantly adding new goals in any phase of the project will inevitably complicate the matter and increase the cost.

Conversely, repurposing a database designed for some other goal will cause lots of troubles down the line. Yeah, sure. Is it not possible to move 100 people from A to B with a 2-seater sports car, if you are willing to make lots of quick trips and get some speeding tickets along the way? Yes, but that would not be my first recommendation. Instead, here are some real possibilities.

Databases support many different types of activities. So let us name a few:

  • Order fulfillment
  • Inventory management and accounting
  • Contact management for sales
  • Dashboard and report generation
  • Queries and selections
  • Campaign management
  • Response analysis
  • Trend analysis
  • Predictive modeling and scoring
  • Etc., etc.

The list goes on, and some of the databases may be doing fine jobs in many areas already. But can we safely call them “marketing” databases? Or are marketers simply tapping into the central data depository somehow, just making do with lots of blood, sweat and tears?

As an exercise, let me ask a few questions to see if your organization has a functioning marketing database for CRM purposes:

  • What is the average order size per year for customers with tenure of more than one year? —You may have all the transaction data, but maybe not on an individual level in order to know the average.
  • What is the number of active and dormant customers based on the last transaction date? —You will be surprised to find out that many companies do not know exactly how many customers they really have. Beep! 1 million-“ish” is not a good answer.
  • What is the average number of days between activities for each channel for each customer? —With basic transaction data summarized “properly,” this is not a difficult question to answer. But it’s very difficult if there are divisional “channel-centric” databases scattered all over.
  • What is the average number of touches through all channels that you employ before your customer reaches the projected value potential? —This is a hard one. Without all the transaction and contact history by all channels in a “closed-loop” structure, one cannot even begin to formulate an answer for this one. And the “value potential” is a result of statistical modeling, is it not?
  • What are typical gateway products, and how are they correlated to other product purchases? —This may sound like a product question, but without knowing each customer’s purchase history lined up properly with fully standardized product categories, it may take a while to figure this one out.
  • Are basic RFM data—such as dollars, transactions, dates and intervals—routinely being used in predictive models? —The answer is a firm “no,” if the statisticians are spending the majority of their time fixing the data; and “not even close,” if you are still just using RFM data for rudimentary filtering.

Now, if your answer is “Well, with some data summarization and inner/outer joins here and there—though we don’t have all transaction records from last year, and if we can get all the campaign histories from all seven vendors who managed our marketing campaigns, except for emails—maybe?”, then I am sorry to inform you that you do not have a marketing database. Even if you can eventually get to the answer if some programmer takes two weeks to draw a 7-page flow chart.

Often, I get extra comments like “But we have a relational database!” Or, “We stored every transaction for the past 10 years in Hadoop and we can retrieve any one of them in less than a second!” To these comments, I would say “Congratulations, your car has four wheels, right?”

To answer the important marketing questions, the database should be organized in a “buyer-centric” format. Going back to the database philosophy question, the fundamental design of the database changes based on its main purpose, much like the way a sports sedan and an SUV that share the same wheel base and engine end up shaped differently.

Marketing is about people. And, at the center of the marketing database, there have to be people. Every data element in the base should be “describing” those people.

Unfortunately, most relational databases are transaction-, channel- or product-centric, describing events and transactions—but not the people. Unstructured databases that are tuned primarily for massive storage and rapid retrieval may just have pieces of data all over the place, necessitating serious rearrangement to answer some of the most basic business questions.
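
The difference can be sketched in a few lines of Python. Starting from transaction-centric rows (all names here are hypothetical), we rearrange the data so that each person becomes one record, with every element describing that person:

```python
from collections import defaultdict

# Transaction-centric rows, as a typical relational system stores them:
# each row describes an event, not a person.
events = [
    {"person": "ann", "channel": "web",   "amount": 40.0},
    {"person": "ann", "channel": "store", "amount": 25.0},
    {"person": "bob", "channel": "web",   "amount": 10.0},
]

# Buyer-centric: one record per person, built up from the events.
people = defaultdict(lambda: {"orders": 0, "total": 0.0, "channels": set()})
for e in events:
    p = people[e["person"]]
    p["orders"] += 1
    p["total"] += e["amount"]
    p["channels"].add(e["channel"])
```

After the rollup, “how much has Ann spent, across how many orders and channels?” is a lookup, not a join.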

So, the question still stands. Is your database marketing ready? Because if it is, you would have taken no time to answer my questions listed above and say: “Yeah, I got this. Anything else?”

Now, imagine the difference between marketers who get to the answers with a few clicks vs. the ones who have no clue where to begin, even when sitting on mounds of data. The difference between the two is not the size of the investment, but the design philosophy.

I just hope that you did not buy a sports car when you needed a truck.

Building Your B-to-B Marketing Database

The single most important tool in B-to-B is, arguably, the marketing database. Without a robust collection of contact information, firmographic and transactional data about customers and prospects, you are at sea when it comes to customer segmentation, analytics and marketing communications of all sorts, whether for acquiring new customers or to expand the value of existing customers. In fact, you might call the database the “recorded history of the customer relationship.” So what goes into a marketing database? Plenty.

First, let’s look at the special characteristics of B-to-B databases, which differ from consumer in several important ways:

  1. In consumer purchasing, the decision-maker and the buyer are usually the same person—a one-man (or, more likely, woman) show. In business buying, there’s an entire cast of characters. In the mix are employees charged with product specification, users of the product and purchasing agents, not to mention the decision-makers who hold final approval over the sale.
  2. B-to-B databases carry data at three levels: the enterprise or parent company; the site, or location, of offices, plants and warehouses; and the multitude of individual contacts within the company.
  3. B-to-B data tends to degrade at the rate of 4 percent to 6 percent per month, so keeping up with changing titles, email addresses, company moves and company name changes requires dedicated attention, spadework and resources.
  4. Companies that sell through channel partners will have a mix of customers, from distributors, agents and other business partners, through end-buyers.

Here are the elements you are likely to want to capture and maintain in a B-to-B marketing database.

  • Account name, address
    • Phone, fax, website
  • Contact(s) information
    • Title, function, buying role, email, direct phone
  • Parent company/enterprise link
  • SIC or NAICS
  • Year the company was started
  • Public vs. private
  • Revenue/sales
  • Employee size
  • Credit score
  • Fiscal year
  • Purchase history
  • Purchase preferences
  • Budgets, purchase plans
  • Survey questions (e.g., from market research)
  • Qualification questions (from lead qualification processes)
  • Promotion history (record of outbound and inbound communications)
  • Customer service history
  • Source (where the data came from, and when)
  • Unique identifier (to match and de-duplicate records)

To assemble the data, the place to begin is inside your company. With some sleuthing, you’ll find useful information about customers all over the place. Start with contact records, whether they sit in a CRM system, in Outlook files or even in Rolodexes. But don’t stop there. You also want to pull in transactional history from your operating systems—billing, shipping, credit—and your customer service systems.

Here’s a checklist of internal data sources that you should explore. Gather up every crumb.

  • Sales and marketing contacts
  • Billing systems
  • Credit files
  • Fulfillment systems
  • Customer service systems
  • Web data, from cookies, registrations and social media
  • Inquiry files and referrals

Once these elements are pulled in, matched and de-duplicated, it’s time to consider external data sources. Database marketing companies will sell you data elements that may be missing, the most important among these being industry (in the form of SIC or NAICS codes), company size (revenue or number of employees, or both) and title or job function of contacts. Such elements can be appended to your database for pennies apiece.
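
The match-and-append step can be illustrated with a minimal Python sketch. The match key, sample records and external elements below are all hypothetical; real-world company matching uses far fuzzier logic than exact string comparison:

```python
def match_key(name, domain):
    """Build a crude match key. Real matching is far fuzzier than this."""
    return (name.lower().strip(), domain.lower().strip())

# Internal records gathered from several source systems (hypothetical).
internal = [
    {"name": "Acme Corp ",  "domain": "acme.com",        "source": "billing"},
    {"name": "acme corp",   "domain": "acme.com",        "source": "crm"},
    {"name": "Widgets Inc", "domain": "widgets.example", "source": "crm"},
]

# Purchased external elements, keyed by the same match key (hypothetical).
external = {("acme corp", "acme.com"): {"sic": "3714", "employees": 500}}

deduped = {}
for rec in internal:
    key = match_key(rec["name"], rec["domain"])
    deduped.setdefault(key, rec)                 # de-dupe: keep first occurrence
    deduped[key].update(external.get(key, {}))   # append external elements
```

The unique identifier from the element list above is exactly what makes this step cheap; without it, every append becomes a fuzzy-matching project.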

In some situations, it makes sense to license and import prospect lists, as well. If you are targeting relatively narrow industry verticals, or certain job titles, and especially if you experience long sales cycles, it may be wise to buy prospecting names for multiple use and import them into your database, rather than renting them serially for each prospecting campaign.

After filling in the gaps with data append, the next step is the process of “data discovery.” Essentially this means gathering essential data by hand—or, more accurately, by outbound phone or email contact. This costs a considerable sum, so only perform discovery on the most important accounts, and only collect the data elements that are essential to your marketing success, like title, direct phone number and level of purchasing authority. Some data discovery can be done via LinkedIn and scouring corporate websites, which are likely to provide contact names, titles and email addresses you can use to populate your company records.

Be thorough, be brave, and have fun. And let me know your experiences.

A version of this article appeared in Biznology, the digital marketing blog.

Left Hand? I’d Like to Introduce Right Hand

What happened to good, old fashioned, “please” and “thank you”? As a customer, it’s nice to be thanked for my business, or appreciated for my subscription to a service. It makes me feel part of the brand and valued for my investment. But as a cold prospect, it’s even more important since making a good impression should always be part of the process. So why is it missing from so many marketing communications programs?

After attending a B-to-B webinar recently, I fully expected to receive a follow-up email thanking me for my attendance, and a continued nurturing of me along their sales cycle: A request for a meeting, an invitation to participate in a live demo, or even a link to a case study or two that were geared to my industry. Instead, I got an email that sounded as if they were talking to a cold prospect.

Perhaps the marketing manager failed to merge/purge the webinar registration/attendee list against their cold prospecting list (tsk, tsk, tsk). But I suspect this business didn’t even think to conduct a merge/purge. Why?

Because, like most mid-to-large B-to-B organizations, one marketing manager is responsible for acquisition and someone else is responsible for sales support—and it seems that neither of them talk to each other … EVER.

If this company maintains a database, I should be flagged as “responded” AND “attended an event” so the sales team can take over the management of this “lead.” I’ve met with many, many organizations that don’t have a lead database (or, even worse, they have multiple databases because no one is happy with the company solution, or the solution is too hard to manage/maintain). Worse still, they may have a customer database, but it’s not well maintained, or is too difficult to access/use. So when it comes time to upsell or cross-sell a product, they don’t even know who their customers are, or how to talk to them in a meaningful way.

Thus we circle back to my dilemma. How can you thank me for attending an event and start to sell me on your product/solution, if you don’t know that I attended in the first place?

As marketers, we’re all busy with our heads down, trying to get work out the door. I get it. But at some point, you have to stop all the day-to-day madness and realize that you’re just putting off the inevitable. Insist on investing in a proper marketing database and a database manager to help your company communicate with more intelligence and insight. In turn, that will lead to your ability to target any particular audience and craft smarter, more relevant marketing messages, which will, in turn, lead to better results. I guarantee it.

Oh, and you’re welcome.

Updating Your Marketing Database

It’s amazing how quickly things go obsolete these days. For those of us in the business of customer data, times and technologies have changed. Some of it has to do with the advent of new technologies; some of it has to do with changing expectations. Let’s take a look at how the landscape has changed and what it means for marketers.

For marketing departments, maintaining and updating customer data has always been a major headache. One way to update data is to rely on sales team members to make the updates themselves as they go about their jobs. For lack of a better term, let’s call this method internal crowd-sourcing, and there are two reasons why it has its limitations.

The first reason is technology. Typically, customer data is stored in a data hub or data warehouse, which is usually a home-grown and oftentimes proprietary database built using one of many popular database architectures. Customer databases tend to be proprietary because each organization sells different products and services, to different types of firms, and consequently collects different data points. Additionally, customer databases are usually grown organically over many years, and as a result tend to contain disparate information, often collected from different sources during different timeframes, of varying degrees of accuracy.

It’s one thing having data stored in a data warehouse somewhere. It’s quite another altogether to give salespeople access to a portal where the edits can be made—that’s been the real challenge. The database essentially needs to be integrated with or housed in some kind of tool, such as an enterprise resource planning (ERP) software or customer relationship management (CRM) software that gives sales teams some capability to update customer records on the fly with front-end read/write/edit capabilities.

Cloud-based CRM technology (such as SalesForce.com) has grown by leaps and bounds in recent years to fill this gap. Unlike purpose-built customer databases, however, out-of-the-box cloud-based CRM tools are developed for a mass market, and without customizations contain only a limited set of standard data fields plus a finite set of “custom fields.” Without heavy customizations, in other words, data stored in a cloud-based CRM solution only contains a subset of a company’s customer data file, and is typically only used by salespeople and customer service reps. Moreover, data in the CRM is usually not connected to that of other business units like marketing or finance divisions who require a more complete data set to do their job.

The second challenge to internal crowd-sourcing has more to do with the very nature of salespeople themselves. Anyone who has worked in marketing knows firsthand that it’s a monumental challenge to get salespeople to update contact records on a regular basis—or do anything else, for that matter, that doesn’t involve generating revenue or commissions.

Not surprisingly, this gives marketers fits. Good luck sending out effective (and hopefully highly personalized) CRM campaigns if customer records are either out of date or flat-out wrong. Anyone who has used Salesforce.com has seen the “Stay in Touch” function, which gives salespeople an easy and relatively painless method for scrubbing contact data by sending an email inviting contacts in the database to “update” their contact details. The main problem with this tool is that it requires a correct email address in the first place.

Assuming your salespeople are diligently updating data in the CRM, another issue with this approach is it essentially limits your data updates to whatever the sales team happens to know or glean from each customer. It assumes, in other words, that your people are asking the right questions in the first place. If your salesperson does not ask a customer how many employees they have globally or at a particular location, it won’t get entered into the CRM. Nor, for that matter, will data on recent mergers and acquisitions or financial statements—unless your sales team is extremely inquisitive and is speaking with the right people in your customers’ organizations.

The other way to update customer data is to rely on a third-party data provider to do it for you—to cleanse, correct, append and replace the data on a regular basis. This process usually involves taking the entire database and uploading it to an FTP site somewhere. The database is then grabbed by the third party, who works their magic on the file—comparing it against a central database that is presumably updated quite regularly—and then returns the file so it can be resubmitted and merged back into the database on the data hub or residing in the CRM.

Because this process involves technology, has a lot of moving parts and involves several steps, it’s generally set up as an automated process and allowed to run on a schedule. Moreover, because the process involves overwriting an entire database (even though it is automated) it requires having IT staff around to supervise the process in a best-case scenario, or jump in if something goes wrong and it blows up completely. Not surprisingly, because we’re dealing with large files, multiple stakeholders and room for technology meltdowns, most marketers tend to shy away from running a batch update more than once per month. Some even run them quarterly. Needless to say, given the current pace of change many feel that’s not frequent enough.

It’s interesting to note that not very long ago, sending database updates quarterly via FTP file dump was seen as state-of-the-art. Not any longer, you see, FTP is soooo 2005. What’s replaced FTP is what we call a “transactional” database update system. Unlike an FTP set-up, which requires physically transferring a file from one server and onto another, transactional data updates rely on an Application Programming Interface, or API, to get the data from one system to another.

For those of you unfamiliar with the term, an API is a pre-established set of rules that different software programs can use to communicate with each other. An apt analogy might be the way a User Interface (UI) facilitates interaction between humans and computers. Using an API, data can be updated in real time, either on a record-by-record basis or in bulk. If Company A wants to update a record in its CRM with fresh data from Company B, for instance, all it needs to do is transmit a unique identifier for the record in question to Company B, which then returns the updated information to Company A via the API.
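
That request-and-response cycle can be sketched in Python. Everything here is hypothetical: a real transactional update would be an authenticated API call to the data provider, not a local dictionary lookup, but the flow is the same:

```python
# Toy stand-in for Company B's side of the API: given a unique identifier,
# return the freshest version of the record. In practice this would be an
# HTTPS endpoint, and the identifier a real company key.
PROVIDER_DB = {
    "cust-0001": {"company": "Acme Corp", "employees": 550, "sic": "3714"},
}

def fetch_update(unique_id):
    """Company B's lookup: identifier in, fresh data elements out."""
    return PROVIDER_DB.get(unique_id, {})

def transactional_update(crm_record):
    """Company A's side: refresh one CRM record in place, field by field."""
    fresh = fetch_update(crm_record["unique_id"])
    crm_record.update(fresh)
    return crm_record

# A stale CRM record gets refreshed the moment it is touched.
record = {"unique_id": "cust-0001", "company": "Acme", "employees": 500}
transactional_update(record)
```

Because each update moves a single record rather than the whole file, there is no monthly batch window to supervise and no full-database overwrite to go wrong.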

Perhaps the best part of the transactional update architecture is that it can be set up to connect with the data pretty much anywhere it resides—in a cloud-based CRM solution or in a purpose-built data warehouse sitting in your data center. For those using a cloud-based solution, a huge advantage of this architecture is that once a data provider builds hooks into popular CRM solutions, there are usually no additional costs for integration, and transactional updates can be initiated in bulk by the CRM administrator, or on a transaction-by-transaction basis by salespeople themselves. It’s quite literally plug and play.

For those with an on-site data hub, integrating with the transactional data provider is usually pretty straightforward as well, because most APIs not only rely on standard Web technology, but also come equipped with easy-to-follow API keys and instructions. The integration, in other words, can usually be implemented by a small team in a short timeframe and on a surprisingly small budget. And once it’s set up, it will pretty much run on its own. Problem solved.