Don’t Settle for Last-Touch Attribution in Marketing

Last month, I talked about factors marketers should consider for attribution rules. Here, I would like to get a little deeper and discuss last-touch attribution, as just talking about contributing factors won’t get you anywhere. As in all data-related subjects, the devil hides in the details. How to collect the data, what to consider, how to manipulate clean and dirty data, and in what order one must execute different steps.

I sometimes wonder why last-touch attribution remains such a popular industry practice, with all of the flaws embedded in that methodology. Without even getting into geeky programming details, let’s think about the limitations of last-touch attribution in a logical sense.

First off, by giving all of the credit for a conversion to “one” medium of the last touch, you would be ignoring all of the previous efforts done by your own company. If you are the lucky channel manager of the last touch, you wouldn’t mind that at all. But is that fair? C-level executives should not accept such flaws in the name of efficiency or programming convenience.

Why You Shouldn’t Settle for Last-Touch Attribution

Let’s use my own experience as a buyer to illustrate a typical customer journey in a multichannel marketing environment. Like any man who shaves daily, I’ve always felt that most quality brand blades were way overpriced. And I found it quite inconvenient that I had to visit a physical store to buy them, when I knew that I would need new blades at a regular interval. All of that changed when a few blade delivery services popped up in that lucrative men’s grooming market a few years ago.

I was one of the early adopters who signed on with one of the programs. But after cutting my face a few times with defective blades, I just canceled the delivery service, and went back to my favorite brand of my adult life, knowing that it would cost more. I considered that to be an affordable luxury.

Then one day, I saw an ad on Facebook announcing that my favorite brand now offers a home delivery service, at a significantly lower price point in comparison to store purchases. Call that my first touch before conversion (to the newly offered service). But I didn’t sign up for it at that time, even though I clicked through to the landing page of its website. I was probably on my mobile phone, and I also wanted to examine options regarding types of blades and delivery intervals further when I had more time.

That means, I visited the site multiple times before I committed to the subscription model. I remember using Google to get to the site at least once; and later, I hit on a bookmark with its URL a few more times. Let’s say that Touch No. 2 would be labeled as “Organic Search,” and touches No. 3 and No. 4 would be considered “Direct-to-Site.”

If you employ last-touch attribution, then Facebook and organic search would get zero credit for the transaction here. That type of false intel may lead to a budget cut for the wrong channel, for sure. But as a consumer, I “know” that it was Facebook where I first learned about the new service from the brand.

Imagine if you, as a marketer, had a toggle switch between Last Touch and First Touch rules. When thousands, if not millions, of touch data points are aggregated in an attribution report, even a simple concept, such as “the most important acquisition channel,” will have a different look depending on the attribution rules. In one, Social Media may look like the most effective channel. In another, Organic Search may take the cake. The important lesson is that one should never settle for last-touch attribution, just because that is how it’s been done within the organization (or by the analytics vendors).

There Are a Few More Attribution Methods

The Last and First Touch rules are the easy ones — if you have access to all touch data on an individual level (because you’d have to line touchpoints up for each buyer, in sequence). As I briefly introduced last month, there are a few more methods to consider. Let’s dig a little deeper this time:

  • Last Touch: Although there could have been many touchpoints before conversion, this method would just give all of the credit to the last one. As flawed as it may be, there are some merits. For one, last touch would be the one that is directly attributable (i.e., connected) to the transaction (and the session that led to it) without any assumptions or rules. I suspect that the simplicity of it all is the main reason for its popularity.
  • First Touch: This would be useful for the study of acquisition sources. Timeline is an important consideration here, as effectiveness of channel or offer may decay at different rates, depending on product and channels in question. A consumer may have researched for a washing machine four months ago. And saw a newspaper insert about it three weeks ago. And then got an email about it a week ago. How far back can we go with this? A catalog that was mailed six months ago? Maybe, as we are dealing with a big-ticket item here. And are we sure that we have any digital touch data that go back that far? Let’s not forget that the word “Big Data” was coined to describe click-level data to begin with.
  • Double Credit: If a person was exposed to and engaged in multiple channels before the purchase, why not credit all involved channels? Overkill? Maybe. But we use this type of reporting method when dealing with store-level reports. There is no law that one customer can visit only one store. If one visits multiple stores, why not count that person multiple times for store-level reports? So, with the same reasoning, if a transaction is attributable to multiple channels, then count the transaction multiple times for the channel report. Each channel manager would be able to examine the effectiveness of her channel in an isolation mode (well, sort of).
  • Equal Credit: This would be the opposite of Double Credit. If there are multiple channels that are attributable to a transaction, create a discount factor for each channel. If one is exposed to four channels (identified via various tags and tracking mechanisms), each would get ¼ of the transaction credit. When such discounted numbers are aggregated (instead of transactions, as a whole number), there will be no double-counting in the end (i.e., the total would add up to a known number of transactions).
  • Proportional Credit: Some channel managers may think that even Equal Split is not a fair methodology. What if there were eight emails, two organic searches, three paid searches and a link on a Facebook page that was clicked once? Shouldn’t we give more weight to the email channel for multiple exposures? One simple way to compromise (I chose this word carefully) in a situation like this would be to create a factor based on the number of total touches for each channel, divided by the total number of touches before conversion (a rough sketch of these credit rules follows this list).
  • Weighted Value: An organization may have time-tested — or politically prevailing — attribution percentages for each employed channel. I would not even argue why one would boldly put down 50 percent for direct marketing, or 35 percent for organic search. Like I said last month, it is best for analysts to stay away from politics. Or should we?
  • Modeled Weighted Value: Modeling is, of course, a mathematical way to derive factors and scores, considering multiple variables at once. It would assign a weighted factor to each channel based on empirical data, so one might argue that it is the most unbiased and neutral method. The only downside of the modeling is that it would require statistically trained analysts, and that spells extra cost for many organizations. In any case, if an organization is committed to it, there are multiple modeling methods (such as the Shapley Value Method, based on cooperative game theory — to name one) to assign proper weight to each channel.
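
To make these rules concrete, here is a minimal sketch in Python of how several of them could be computed from an ordered list of touches for a single conversion. The channel labels and the credit of 1.0 per conversion are illustrative assumptions, not a prescription.

    from collections import Counter

    def attribute(touches, rule="last"):
        """Distribute credit for one conversion across an ordered list of
        channel touches, according to a simple attribution rule."""
        if not touches:
            return {}
        if rule == "last":
            return {touches[-1]: 1.0}
        if rule == "first":
            return {touches[0]: 1.0}
        if rule == "double":
            return {c: 1.0 for c in set(touches)}    # every involved channel gets full credit
        if rule == "equal":
            channels = set(touches)                  # each involved channel gets 1/n
            return {c: 1.0 / len(channels) for c in channels}
        if rule == "proportional":
            counts = Counter(touches)                # weight by share of all touches
            return {c: n / len(touches) for c, n in counts.items()}
        raise ValueError(f"unknown rule: {rule}")

    # The journey from the story above: Facebook ad, organic search, two direct visits.
    journey = ["Paid Social", "Organic Search", "Direct", "Direct"]
    for rule in ("last", "first", "double", "equal", "proportional"):
        print(rule, attribute(journey, rule))

Summed over thousands of journeys, the Equal and Proportional credits still add up to the known number of conversions, which is exactly why those discount factors exist.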

I must point out that no one method would paint the whole picture. Choosing a “right” attribution method in an organization with vastly different interests among teams is more about “finding the least wrong answer” for all involved parties. And that may be more like Tony Soprano mediating turf disputes among his Capos than sheer mathematics spitting out answers. That means the logically sound answer may not put an end to all of the arguments. When it comes to protecting one’s job, there won’t be enough “logical” answers as to why one must give credit for the sale to someone else.

While all of this has much to do with executive decisions, people who sit between an ample amount of data and decision-makers must consider all possible options. So, having multiple methods of attribution will help the situation. For one, it is definitely better than just following the Last Touch.

Start With Proper Data Collection

In any case, none of these attribution methods will mean anything, if we don’t have any decent data to play with. Touch data starts with those little pixels on web pages in the digital world. Pages must be carefully tagged, and if you want to find out “what worked,” then, well, you must put in tracking requests properly for all channels.

A simple example: In a UTM tag, we see Medium coded with values such as Paid Social. A good start. Then, under Source, we would see entries like Facebook, Instagram, Twitter, Pinterest, etc. So far, so good. But the goal is to figure out how much one must spend on “paid” social media. Without differentiating (1) the company’s own social media page, (2) paid ads on social media sites, and (3) referrals by users on social media (on their Facebook Wall, for example), we won’t be able to figure out the value of “Paid Social.” That means all of the differentiation must be done at the time of data collection.
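
As an illustration, that differentiation can be carried in the UTM parameters themselves at collection time. The utm_medium values below are hypothetical conventions, not a standard; the point is simply that owned, paid and earned social are distinguishable before the hit is ever recorded.

  • Company’s own page post: https://www.example.com/landing?utm_medium=owned-social&utm_source=facebook&utm_campaign=blade-subscription
  • Paid ad on Facebook: https://www.example.com/landing?utm_medium=paid-social&utm_source=facebook&utm_campaign=blade-subscription
  • User share on their own wall: https://www.example.com/landing?utm_medium=earned-social&utm_source=facebook&utm_campaign=blade-subscription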

And while at it, please keep the data consistent, too. I’ve seen at least 10 different ways to say Facebook, starting with “fb.”
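
A minimal normalization pass, run before any report is built, keeps those variants from splitting one channel into ten. The alias table below is a made-up starting point; every organization will have its own list of offenders.

    SOURCE_ALIASES = {
        "fb": "facebook", "fbook": "facebook", "face book": "facebook",
        "facebook.com": "facebook", "m.facebook.com": "facebook",
        "ig": "instagram", "insta": "instagram",
    }

    def normalize_source(raw):
        """Lower-case, trim and map known aliases to one canonical source name."""
        key = raw.strip().lower()
        return SOURCE_ALIASES.get(key, key)

    assert normalize_source(" FB ") == "facebook"
    assert normalize_source("Instagram") == "instagram"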

Further, let’s not stop at traditional digital tags, either. There are too many attribution projects that completely block out offline efforts, like direct mail. If we need to understand where the marketing dollars must go, why settle for one type of tracking mechanism? Any old marketer would know that there is a master mail file behind every direct mailing campaign. With all those pieces of PII in it, we can convert them into yet another type of touch data — easily.

Yes, collecting such touch data for general media won’t be easy; but that doesn’t mean that we keep the wall up between online and offline worlds indefinitely. Let’s start with all of the known contact lists, online or offline.

Attribution Should Be Done in Multiple Steps

Attribution is difficult enough when we try to assign credit to “1” transaction, when there could be multiple touchpoints before the conversion. Now let’s go one step further, and try to call a buyer a “Social Media” responder, when we “know” that she must have been exposed to the brand at least 20 times through multiple media channels including Facebook, Instagram, paid search through Google, organic search through some default search engine on a phone, a series of banner ads on various websites, campaign emails and even a postcard. Now imagine she purchased multiple times from the brand — each time as a result of a different series of inbound and outbound exposures. What is she really? Just a buyer from Facebook?

We often get requests to produce customer value — present and future — by each channel. To do that, we should be able to assign a person to a channel. But must we? Why not apply the attribution options for transactions, as I listed in this article, to buyers as well?

That means we must think about attribution in steps. In terms of programming, it may not exactly be like that, but for us to determine the optimal way to assign channels to an individual, we need to think about it in steps.
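
Here is a rough sketch of that second step, under the assumption that transaction-level credits have already been computed (for example, with the proportional rule sketched earlier): instead of forcing each buyer into a single channel, roll the credits up into a per-buyer channel mix.

    from collections import defaultdict

    def buyer_channel_mix(transaction_credits):
        """Step 2: aggregate per-transaction channel credits into a per-buyer mix.
        transaction_credits: list of (buyer_id, {channel: credit}) pairs."""
        mix = defaultdict(lambda: defaultdict(float))
        for buyer_id, credits in transaction_credits:
            for channel, credit in credits.items():
                mix[buyer_id][channel] += credit
        # Normalize so each buyer's mix sums to 1.0 across channels.
        return {
            buyer: {c: v / sum(channels.values()) for c, v in channels.items()}
            for buyer, channels in mix.items()
        }

    sample = [
        ("buyer_1", {"Paid Social": 0.25, "Organic Search": 0.25, "Direct": 0.5}),
        ("buyer_1", {"Email": 0.75, "Paid Search": 0.25}),
    ]
    print(buyer_channel_mix(sample))  # buyer_1 is a mix, not simply "a Facebook buyer"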

Conclusion

Now, if you are just settling for last-touch attribution, you may save some headaches that come with all of these attribution methods. But I hope that I intrigued you enough that you won’t settle so easily.

Analyzing the Marketing Value of Onsite Resources

As a business, your primary marketing priority is, of course, your products. Those sales keep you afloat and fulfill key user needs. As you establish a core customer base, however, how do you keep them coming back? Businesses need to offer something new or supplementary to build customer loyalty, sustain site traffic and further develop your reputation as a high-value brand. And this is where your onsite resources come into the picture.

Your onsite resources are a valuable tool for keeping the line of communication open with clients and maintaining your reputation during lulls in the formal business process. If you’re going to make the most of these tools, however, you can’t just create them and hope for the best. No, you need to set goals and measure the performance of those supplementary materials if they’re going to benefit your entire operation.

These three KPIs pair with different design styles and can help you assess how well your add-ons are performing.

Time on Page

Time on page is a common KPI, described by the marketers at Vertical Measures as a key way of measuring user engagement. Historically, sites have used this measure to determine if their blog content was interesting, if people were reading sales offers, or even to measure the value of news-style content. Similar principles apply when using time on page to measure the effectiveness of supplemental resources.

At IncFile, a company whose primary business is helping small businesses incorporate, supplemental material both attracts new customers and assists current ones — and time on page is a vital measure in both cases. For example, their page on forming an LLC in Florida provides a step-by-step breakdown explaining the process, followed by an array of links to added resources. IncFile, then, has the ability to measure both time on page for the initial post and clickthrough and bounce rates for the resources that follow — and that information can be used to measure conversion rates.

Counting on Downloads

For many companies, supplementary resources are designed to go beyond the page and into the real world — users are encouraged to download them. Downloadable resources are a powerful format for marketers because it’s very easy to measure how many users select the file. And unlike some measures — including time on page, which may count false positives because a user left the page open on their computer, for example — downloads are deliberate acts of intention. In both cases, though, the user activity is a measure of overall engagement.

Scholastic, the publisher of such well-known books as “Clifford the Big Red Dog” and “The Magic School Bus,” is recognized for their educational resources, but at first glance, their Teacher’s Tool Kit page appears to consist largely of blog posts.

Look further, though, and you’ll find countless downloadable lesson plans. These individual files allow Scholastic to measure the efficacy of individual lessons, teaching styles, and topics, as well as what age group’s lessons are downloaded most frequently. That’s a lot of data packed into each interaction.

Measure by Media Type

Though the majority of content marketing focuses on written materials, modern outreach demands a multimedia approach. Videos, infographics, and other visually enticing content hold user attention longer and can convey a greater amount of information with fewer words. They can be harder to measure, however.

When it comes to developing KPIs for non-standard content types, you may need to combine several different data points. For example, you might try launching video via both website and social media platforms and comparing engagement rates. Social media analytics can provide greater real-time feedback and direct user commentary than onsite variants of the same material. With video, you should also measure the length of video watched and determine when viewers tend to bounce out so that you can maximize content within the limited view time.

Don’t undercut your supplementary resources by ignoring the resultant data — you invested time and money into creating those tools for your customers. Now, take the time to apply the same key metrics you would use to measure activity on the rest of your site to assess this content, and then use it to enhance your monetized offerings. There’s a lot you can learn from those add-ons if you take the time to read more deeply.

Omnichannel Customers Are 2X as Valuable – How to Make Them Yours

With so many trying to sort out an “omnichannel” marketing strategy, I thought it would make the most sense this month to provide some structure around what it is, the best way to take the “buzz” out of the term, and provide a framework for thinking strategically about this new mandate in marketing and strategy.

For starters, here’s a simple idea, or “true north,” you can use to drive your own marketing strategy as you embrace the omnichannel consumer. “Put the Customer First” and build your “omnichannel strategy” around them.

Let’s remember, connecting with, engaging and finding the right new customers are where customer value is created and realized in omnichannel marketing. Optimizing that value comes through studying and tuning communications, improving your relevance and becoming more creatively authentic, not in the boardroom, but in the eyes of your customer.

Today, marketers appreciate that consumers engage on multiple platforms, devices and channels—the ones they want, when they want, with mobile devices serving as a spontaneous window into their thoughts and an outlet for their wants and needs as they arise. What’s a bit more subtle, and more often missed, is the objective and capability to respect the way your customers choose to engage and buy across those channels in a scalable manner—as it will either fragment their relationship with your brand or galvanize it.

Consider Kohl’s. Not exactly a high-tech player in most folks’ minds. However, they now deliver an omnichannel experience that deepens customer relationships. Recently, my wife received a promotion by direct mail (I doubt she remembers the first time they asked for her phone number, making the connection between the POS and her online purchases), and she had it in hand as she went to the website to browse. Later, she used another promotion from her email right at the POS with her iPhone.

In a single engagement with the brand, she hopped across three channels, not including a customer service call by phone. As a consumer, she didn’t even notice—she just expected it to work.

Similarly, OpenTable will consistently get you to a good restaurant based on where you’ve dined before, and what your current online browsing and mobile location is. You probably do it all the time. Your relationship with that brand hops between mobile, desktop and point of sale effortlessly—but as a consumer, you’re not exactly impressed: You expect it to work.

As a result, effective omnichannel organizations have become “stitched into” the lifestyles of their customers. Moreover, this supports the creation of competitive advantage in the measurable, trackable, digital age.

Omnichannel Means Understanding the Customer
Putting the customer first necessitates really knowing and understanding your customer in more meaningful and actionable ways. Not just with an anecdote of the “average customer,” but with legitimate, fact-based methods that are built on a statistical and logical foundation. This is the basis for the “absolute truth” that your omnichannel strategy is dependent on.

This, too, is no small task for many organizations, but it’s becoming more “doable.” And it has to be—because your competition is thinking about and investing in this path, and not having an actionable strategy means missing the boat on knowing your customer in a way that is valuable, actionable and profitable, which is not a viable long-term position.

But first, let’s clear up some of the confusion that we’ve been hearing for at least a year now: Is omnichannel merely the buzzword of 2015, or is it something much more important?

Multichannel
At the most basic level, “multi” means many. As soon as you adopted your second or third channel, be it a catalog or an e-commerce website, your organization became a multichannel organization. Multichannel came quickly—as it’s not uncommon that the majority of a customer base has made a purchase across more than one channel—whether you have that resolution or not is another matter, and often requires a smarter approach to collection.

Digital growth is accelerating channel expansion. With the explosion of online and digital channels and the rapid adoption of mobile smartphones, tablets and now wearables, digital can no longer be viewed as a single channel. We now have the merging and proliferation of digital, physical and traditional channels.

Many marketers have experienced as much challenge as opportunity in juggling an increasing number of channels. But digital channels, of course, are more measurable, and they challenge the traditional approaches by bringing greater resolution and visibility for some, and confusion for others.

Key factors in leveraging, managing, and maximizing those channels include:

  • Competencies developed in the organization
  • Identifying third-party competencies, especially in digital partnerships
  • The culture of the organization
  • Support for change and innovation in marketing
  • The depth of technical capability in an organization

As channel usage expands, data assets “pile up,” though most of the data in its raw format is of limited practical use and less actionable than one would hope. From inside dozens of IT organizations, the refrain is common: “We’re just capturing everything right now.” Creating marketing value from that data requires the strategists and the business units to get involved.

Omnichannel Is the Way Forward
While most organizations are still working through mastering their channels and the data they perpetually generate, the next wave of both competitive advantage and threats has come with them. The customer learns what works for them relatively quickly and easily, adopting new channels and buying where they want, how they want. Those touches are often lower touch, introduce intermediaries, and are surrounded by contextual advertising, often from competitors.

Omnichannel buyers aren’t just more complex, they are substantially more valuable. We’ve seen them be as much as twice as valuable as those whose relationship is on a single channel. Perhaps this is a reflection of their greater engagement with the brand.

Delivering that omnichannel experience will require more thought, focus and expertise than before. It requires the integration of systems, apps and experiences in a way that’s meaningful—to the customer—and that of course requires an integration of the data about those purchases and experiences.

To serve the business, the Omnichannel Readiness Process has six components, each of which requires thoughtful consideration:

1. Capture—Many organizations are aware that they need to capture “the data.” The challenge here is shifting to what to capture, and what they may be missing. The key challenge is: It’s impossible to capture “everything” without understanding how it can and should be used and leveraged. How that data is captured, in terms of format and organization, is of great importance.

2. Consolidate—In order to act on the omnichannel reality, we must have all our data in one place. In the ongoing effort to find the balance between cost, speed and value, “silos” have been built to house various data components. Those data sources must be consolidated through a process that is not quite trivial if those data sources are to create value in the customer experience and over the customer lifetime.

3. Enhance—Even after we’ve pulled our data together into an intelligent framework and model, built to support the business needs, virtually every marketer is missing data that consumers generally don’t provide, or don’t provide reliably on a self-reported basis. “Completing the customer record” requires planning and investing in appropriate third-party data. This will be a requirement if we’re to utilize tools and technology to mine for opportunity in our customer base.

4. Transform—Much of the data needed to perform the kinds of analysis and create the kinds of communication that maximize response now, and customer value over time, must be derived as new data points from the data you already have. Here is one example: inter-order purchase time. Calculating the number of days between purchases for every customer in your base allows you to see whose purchase cadences are similar, faster, slower or in decline (a small sketch of this derivation follows this list). On average, we’ll derive hundreds of such fields. This is one example of how a marketer can “mine” data for evidence of opportunity worth acting on and investing in.

5. Summarize—The richest view of a customer with the best data in its most complete state is a lot to digest. So to help make it actionable, we must roll it up into logical and valuable cohorts and components. Call them what you will—segments, personas or models—they are derivative groups that have value and potential that you can act on and learn from.

Many marketers traditionally spend 80 percent to 90 percent of their time and effort on getting their data to a point where it serves both the omnichannel customer and their brand. However, marketers can do better with emerging tools and technologies. There is no replacement for a solid data strategy that is built around the customer, but efficiencies can be gained that speed time-to-value in an omnichannel environment.

6. Communicate—The prep work has been done, you’ve found the pockets of opportunity, and now it’s time to deliver on the expectations the omnichannel customer holds for marketers. At this juncture, we need to quickly craft and deploy messages that resonate and shape the way consumers think about their situation and your brand. The messages must address the concerns they have and the desires and opportunities they perceive.
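
A minimal sketch of the inter-order purchase time derivation mentioned under Transform, assuming nothing more than a flat list of (customer_id, order_date) pairs; a real implementation would derive hundreds of such fields, but the mechanics are the same.

    from datetime import date
    from collections import defaultdict
    from statistics import mean

    orders = [  # (customer_id, order_date) pairs, illustrative data only
        ("A100", date(2015, 1, 5)), ("A100", date(2015, 3, 2)), ("A100", date(2015, 4, 28)),
        ("B200", date(2015, 2, 14)), ("B200", date(2015, 6, 1)),
    ]

    by_customer = defaultdict(list)
    for cust, order_date in orders:
        by_customer[cust].append(order_date)

    # Derived field: average number of days between consecutive purchases.
    inter_order_days = {}
    for cust, dates in by_customer.items():
        dates.sort()
        gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
        inter_order_days[cust] = mean(gaps) if gaps else None

    print(inter_order_days)  # e.g. {'A100': 56.5, 'B200': 107}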

Omnichannel customers expect you will recognize them for their loyalty and their engagement with your brand at multiple levels, and that those experiences will be tailored in small ways that can make a bigger difference.

They expect your story to better-fit with their own, if not complete it. That sounds like a dramatic promise, but the ability to know your customers and engage them in the way they prefer, and at scale, is upon us.

Keep It Relevant to Your Business
This entire process must, of course, include the answers to key business questions about the types of discoveries we’d make and the questions we’d answer with the data—for example, does the Web cannibalize our traditional channels? (Hint: It surely doesn’t have to.)

That said, we’ve learned to start with the most basic questions—and are not surprised when there are no robust answers:

1. How many customers do you have today?

2. Do you have a working definition of a High Value or Most Valuable Customer?

3. If so, how many of those customers do you have?

4. How many customers did you gain this past quarter? How many did you lose?

a. Assuming you know how many you lost, what was the working definition of a lost customer?

5. How many customers have bought more than once?

6. What’s the value of your “average” customer, understanding that averages are misleading and synthetic numbers are not to be trusted on their own? Still, we can measure where other customers stand in terms of their distance from the mean.

7. Who paid full price? Who bought at discount? Who did both? How many of all the above?

8. For those who bought “down-market,” did they trade up?

9. How many times does a customer or logical customer group (let’s call them “segments,” for now) buy? How long, on average, is it between their purchases? And the order sizes, all channels included?

10. All this, of course, gets back to understanding more deeply, “Who is your customer?” While all this information about how they engage and buy from us is powerful, how old are they? Where are they from? What is relevant to them?

Now, even if a marketer could get the answers to all of these questions, how does this relate to this “Omnichannel” Evolution?

Simple. It only relates to your customer. Of course, they are the most important actors in this business of marketing—in fact in the business of business. What this really means is deceptively simple, often overlooked, and awesomely powerful:

Omnichannel Is Singularly Focused on Customers, Not Channels
It’s about the customer, and having the resources, data and insights at your disposal to serve that customer better. Virtually all of your customers are “multichannel” already. Granted, some are more dominantly influenced by a single channel. For example, online through the voice of the “crowd.” But even then, the point of omnichannel only means one thing: Know your customers across all the channels on which they engage with you. Note the chasm between having the dexterity to examine and serve customers across all the channels, and just knowing their transactions, behaviors or directional, qualitative descriptors.

So “knowing the customer” really means having ready access to actionable customer data. Think about it. If your understanding of your customer data isn’t actionable, how well do you really know your customer in the first place?

Considering the 10 questions above, and evaluating the answers in terms of the most important questions about your customers, is a solid starting point.

When you’ve worked through all of these, you’re now ready to create experiences and communications for customers that are not only relevant, but valuable—to your customer and to the business.

When you’re adding value and are channel-agnostic, as you must become, you’ve achieved the coveted omnichannel distinction that market leaders are bringing to bear already.

This is an impressive accomplishment professionally, it surely is—but remember, it’s the customer we have to impress.

Exciting New Tools for B-to-B Prospecting

Finding new customers is a lot easier these days, what with innovative, digitally based ways to capture and collect data. Early examples of this exciting new trend in prospecting were Jigsaw, a business card swapping tool that allowed salespeople to trade contacts, and ZoomInfo, which scrapes corporate websites for information about businesspeople and merges the information into a vast pool of data for analysis and lead generation campaigns. New ways to find prospects continue to come on the scene—seemingly daily.

One big new development is the trend away from static name/address lists, and towards dynamic sourcing of prospect names complete with valuable indicators of buying readiness culled from their actual behavior online. Companies such as InsideView and Leadspace are developing solutions in this area. Leadspace’s process begins with constructing an ideal buyer persona by analyzing the marketer’s best customers, which can be executed by uploading a few hundred records of name, company name and email address. Then, Leadspace scours the Internet, social networks and scores of contact databases for look-alikes and immediately delivers prospect names, fresh contact information and additional data about their professional activities.

Another dynamic data sourcing supplier with a new approach is Lattice, which also analyzes current customer data to build predictive models for prospecting, cross-sell and churn prevention. The difference from Leadspace is that Lattice builds the client models using their own massive “data cloud” of B-to-B buyer behavior, fed by 35 data sources like LexisNexis, Infogroup, D&B, and the US Government Patent Office. CMO Brian Kardon says Lattice has identified some interesting variables that are useful in prospecting, for example:

  • Juniper Networks found that a company that has recently “signed a lease for a new building” is likely to need new networks and routers.
  • American Express’s foreign exchange software division identified that “opened an office in a foreign country” suggests a need for foreign exchange help.
  • Autodesk searches for companies who post job descriptions online that seek “design engineers with CAD/CAM experience.”

Lattice faces competition from Mintigo and Infer, which are also offering prospect scoring models—more evidence of the growing opportunity for marketers to take advantage of new data sources and applications.

Another new approach is using so-called business signals to identify opportunity. As described by Avention’s Hank Weghorst, business signals can be any variable that characterizes a business. Are they growing? Near an airport? Unionized? Minority owned? Susceptible to hurricane damage? The data points are available today, and can be harnessed for what Weghorst calls “hyper segmentation.” Avention’s database of information flowing from 70 suppliers, overlaid by data analytics services, intends to identify targets for sales, marketing and research.

Social networks, especially LinkedIn, are rapidly becoming a source of marketing data. For years, marketers have mined LinkedIn data by hand, often using low-cost offshore resources to gather targets in niche categories. Recently, a gaggle of new companies—like eGrabber and Social123—are experimenting with ways to bring social media data into CRM systems and marketing databases, to populate and enhance customer and prospect records.

Then there’s 6Sense, which identifies prospective accounts that are likely to be in the market for particular products, based on the online behavior of their employees, anonymous or identifiable. 6Sense analyzes billions of rows of 3rd party data, from trade publishers, blogs and forums, looking for indications of purchase intent. If Cisco is looking to promote networking hardware, for example, 6Sense will come back with a set of accounts that are demonstrating an interest in that category, and identify where they were in their buying process, from awareness to purchase. The account data will be populated with contacts, indicating their likely role in the purchase decision, and an estimate of the likely deal size. The data is delivered in real-time to whatever CRM or marketing automation system the client wants, according to CEO and founder Amanda Kahlow.

Just to whet your appetite further, have a look at CrowdFlower, a start-up company in San Francisco, which sends your customer and prospect records to a network of over five million individual contributors in 90 countries, to analyze, clean or collect the information at scale. Crowd sourcing can be very useful for adding information to, and checking on the validity and accuracy of, your data. CrowdFlower has developed an application that lets you manage the data enrichment or validity exercises yourself. This means that you can develop programs to acquire new fields whenever your business changes and still take advantage of their worldwide network of individuals who actually look at each record.

The world of B-to-B data is changing quickly, with exciting new technologies and data sources coming available at record pace. Marketers can expect plenty of new opportunity for reaching customers and prospects efficiently.

A version of this article appeared in Biznology, the digital marketing blog.

Not All Databases Are Created Equal

Not all databases are created equal. No kidding. That is like saying that not all cars are the same, or not all buildings are the same. But somehow, “judging” databases isn’t so easy. First off, there is no tangible “tire” that you can kick when evaluating databases or data sources. Actually, kicking the tire is quite useless, even when you are inspecting an automobile. Can you really gauge the car’s handling, balance, fuel efficiency, comfort, speed, capacity or reliability based on how it feels when you kick “one” of the tires? I can guarantee that your toes will hurt if you kick it hard enough, and even then you won’t be able to tell the tire pressure within 20 psi. If you really want to evaluate an automobile, you will have to sign some papers and take it out for a spin (well, more than one spin, but you know what I mean). Then, how do we take a database out for a spin? That’s when the tool sets come into play.

However, even when the database in question is attached to analytical, visualization, CRM or drill-down tools, it is not so easy to evaluate it completely, as such practice reveals only a few aspects of a database, hardly all of them. That is because such tools are like window treatments of a building, through which you may look into the database. Imagine a building inspector inspecting a building without ever entering it. Would you respect the opinion of the inspector who just parks his car outside the building, looks into the building through one or two windows, and says, “Hey, we’re good to go”? No way, no sir. No one should judge a book by its cover.

In the age of Big Data (you should know by now that I am not too fond of that term), everything digitized is considered data. And data reside in databases. And databases are supposed to be designed to serve specific purposes, just like buildings and cars are. Although many modern databases are just mindless piles of accumulated data, granted that the database design is decent and functional, we can still imagine many different types of databases, depending on their purposes and contents.

Now, most of the Big Data discussions these days are about the platform, environment, or tool sets. I’m sure you heard or read enough about those, so let me boldly skip all that and their related techie words, such as Hadoop, MongoDB, Pig, Python, MapReduce, Java, SQL, PHP, C++, SAS or anything related to that elusive “cloud.” Instead, allow me to show you the way to evaluate databases—or data sources—from a business point of view.

For businesspeople and decision-makers, it is not about NoSQL vs. RDB; it is just about the usefulness of the data. And the usefulness comes from the overall content and database management practices, not just platforms, tool sets and buzzwords. Yes, tool sets are important, but concert-goers do not care much about the types and brands of musical instruments that are being used; they just care if the music is entertaining or not. Would you be impressed with a mediocre guitarist just because he uses the same brand of guitar that his guitar hero uses? Nope. Likewise, the usefulness of a database is not about the tool sets.

In my past column, titled “Big Data Must Get Smaller,” I explained that there are three major types of data, with which marketers can holistically describe their target audience: (1) Descriptive Data, (2) Transaction/Behavioral Data, and (3) Attitudinal Data. In short, if you have access to all three dimensions of the data spectrum, you will have a more complete portrait of customers and prospects. Because I already went through that subject in-depth, let me just say that such types of data are not the basis of database evaluation here, though the contents should be on top of the checklist to meet business objectives.

In addition, throughout this series, I have been repeatedly emphasizing that the database and analytics management philosophy must originate from business goals. Basically, the business objective must dictate the course for analytics, and databases must be designed and optimized to support such analytical activities. Decision-makers—and all involved parties, for that matter—suffer a great deal when that hierarchy is reversed. And unfortunately, that is the case in many organizations today. Therefore, let me emphasize that the evaluation criteria that I am about to introduce here are all about usefulness for decision-making processes and supporting analytical activities, including predictive analytics.

Let’s start digging into key evaluation criteria for databases. This list would be quite useful when examining internal and external data sources. Even databases managed by professional compilers can be examined through these criteria. The checklist could also be applicable to investors who are about to acquire a company with data assets (as in, “Kick the tire before you buy it.”).

1. Depth
Let’s start with the most obvious one. What kind of information is stored and maintained in the database? What are the dominant data variables in the database, and what is so unique about them? Variety of information matters for sure, and uniqueness is often related to specific business purposes for which databases are designed and created, along the lines of business data, international data, specific types of behavioral data like mobile data, categorical purchase data, lifestyle data, survey data, movement data, etc. Then again, mindless compilation of random data may not be useful for any business, regardless of the size.

Generally, data dictionaries (the lack of one is a sure sign of trouble) reveal the depth of the database, but we need to dig deeper, as transaction and behavioral data are much more potent predictors and harder to manage in comparison to demographic and firmographic data, which are very much commoditized already. Likewise, lifestyle variables that are derived from surveys that may have been conducted a long time ago are far less valuable than actual purchase history data, as what people say they do and what they actually do are two completely different things. (For more details on the types of data, refer to the second half of “Big Data Must Get Smaller.”)

Innovative ideas should not be overlooked, as data packaging is often very important in the age of information overflow. If someone or some company transformed many data points into user-friendly formats using modeling or other statistical techniques (imagine pre-developed categorical models targeting a variety of human behaviors, or pre-packaged segmentation or clustering tools), such effort deserves extra points, for sure. As I emphasized numerous times in this series, data must be refined to provide answers to decision-makers. That is why the sheer size of the database isn’t so impressive, and the depth of the database is not just about the length of the variable list and the number of bytes that go along with it. So, data collectors, impress us—because we’ve seen a lot.

2. Width
No matter how deep the information goes, if the coverage is not wide enough, the database becomes useless. Imagine well-organized, buyer-level POS (point of sale) data coming from actual stores in “real-time” (though I am sick of this word, as it is also overused). The data go down to SKU-level details and payment methods. Now imagine that the data in question are collected in only two stores—one in Michigan, and the other in Delaware. This, by the way, is not a completely made-up story, and I faced similar cases in the past. Needless to say, we had to make many assumptions that we didn’t want to make in order to make the data useful, somehow. And I must say that it was far from ideal.

Even in the age when data are collected everywhere by every device, no dataset is ever complete (refer to “Missing Data Can Be Meaningful“). The limitations are everywhere. It could be about brand, business footprint, consumer privacy, data ownership, collection methods, technical limitations, distribution of collection devices, and the list goes on. Yes, Apple Pay is making a big splash in the news these days. But would you believe that the data collected only through Apple iPhone can really show the overall consumer trend in the country? Maybe in the future, but not yet. If you can pick only one credit card type to analyze, such as American Express for example, would you think that the result of the study is free from any bias? No siree. We can easily assume that such analysis would skew toward the more affluent population. I am not saying that such analyses are useless. And in fact, they can be quite useful if we understand the limitations of data collection and the nature of the bias. But the point is that the coverage matters.

Further, even within multisource databases in the market, the coverage should be examined variable by variable, simply because some data points are really difficult to obtain even by professional data compilers. For example, any information that crosses between the business and the consumer world is sparsely populated in many cases, and the “occupation” variable remains mostly blank or unknown on the consumer side. Similarly, any data related to young children is difficult or even forbidden to collect, so a seemingly simple variable, such as “number of children,” is left unknown for many households. Automobile data used to be abundant on a household level in the past, but a series of laws made sure that the access to such data is forbidden for many users. Again, don’t be impressed with the existence of some variables in the data menu, but look into it to see “how much” is available.

3. Accuracy
In any scientific analysis, a “false positive” is a dangerous enemy. In fact, it is worse than not having the information at all. Many folks just assume that any data coming out of a computer are accurate (as in, “Hey, the computer says so!”). But data are not completely free from human errors.

Sheer accuracy of information is hard to measure, especially when the data sources are unique and rare. And the errors can happen in any stage, from data collection to imputation. If there are other known sources, comparing data from multiple sources is one way to ensure accuracy. Watching out for fluctuations in distributions of important variables from update to update is another good practice.

Nonetheless, the overall quality of the data is not just up to the person or department who manages the database. Yes, in this business, the last person who touches the data is responsible for all the mistakes that were made to it up to that point. However, when the garbage goes in, the garbage comes out. So, when there are errors, everyone who touched the database at any point must share in the burden of guilt.

Recently, I was part of a project that involved data collected from retail stores. We ran all kinds of reports and tallies to check the data, and edited many data values out when we encountered obvious errors. The funniest one that I saw was the first name “Asian” and the last name “Tourist.” As an openly Asian-American person, I was semi-glad that they didn’t put in “Oriental Tourist” (though I still can’t figure out who decided that word is for objects, but not people). We also found names like “No info” or “Not given.” Heck, I saw in the news that this refugee from Afghanistan (he was a translator for the U.S. troops) obtained a new first name as he was granted an entry visa, “Fnu.” That would be short for “First Name Unknown” as the first name in his new passport. Welcome to America, Fnu. Compared to that, “Andolini” becoming “Corleone” on Ellis Island is almost cute.

Data entry errors are everywhere. When I used to deal with data files from banks, I found that many last names were “Ira.” Well, it turned out that it wasn’t really the customers’ last names, but they all happened to have opened “IRA” accounts. Similarly, movie phone numbers like 777-555-1234 are very common. And fictitious names, such as “Mickey Mouse,” or profanities that are not fit to print are abundant, as well. At least fake email addresses can be tested and eliminated easily, and erroneous addresses can be corrected by time-tested routines, too. So, yes, maintaining a clean database is not so easy when people freely enter whatever they feel like. But it is not an impossible task, either.

We can also train employees regarding data entry principles, to a certain degree. (As in, “Do not enter your own email address,” “Do not use bad words,” etc.). But what about user-generated data? Search and kill is the only way to do it, and the job would never end. And the meta-table for fictitious names would grow longer and longer. Maybe we should just add “Thor” and “Sponge Bob” to that Mickey Mouse list, while we’re at it. Yet, dealing with this type of “text” data is the easy part. If the database manager in charge is not lazy, and if there is a bit of a budget allowed for data hygiene routines, one can avoid sending emails to “Dear Asian Tourist.”
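
A crude “search and kill” pass might look like the sketch below; the block lists are deliberately tiny and hypothetical, and in practice they live in a meta-table that never stops growing.

    FAKE_NAMES = {"mickey mouse", "donald duck", "test test", "no info",
                  "not given", "asian tourist", "thor", "sponge bob"}
    FAKE_PHONES = {"5551234", "1234567", "0000000"}

    def looks_fake(record):
        """Flag records whose name or phone matches the known-bad meta-table."""
        full_name = f"{record.get('first', '')} {record.get('last', '')}".strip().lower()
        phone_digits = "".join(ch for ch in record.get("phone", "") if ch.isdigit())
        return full_name in FAKE_NAMES or phone_digits[-7:] in FAKE_PHONES

    record = {"first": "Mickey", "last": "Mouse", "phone": "777-555-1234"}
    print(looks_fake(record))  # True: route to review instead of the mail file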

Numeric errors are much harder to catch, as numbers do not look wrong to human eyes. That is when comparison to other known sources becomes important. If such examination is not possible on a granular level, then median value and distribution curves should be checked against historical transaction data or known public data sources, such as U.S. Census Data in the case of demographic information.

When it’s about the companies’ own data, follow your instincts and get rid of data that look too good or too bad to be true. We all can afford to lose a few records in our databases, and there is nothing wrong with deleting the “outliers” with extreme values. Erroneous names, like “No Information,” may be attached to a seven-figure lifetime spending sum, and you know that can’t be right.

The main takeaways are: (1) Never trust the data just because someone bothered to store them in computers, and (2) Constantly look for bad data in reports and listings, at times using old-fashioned eye-balling methods. Computers do not know what is “bad,” until we specifically tell them what bad data are. So, don’t give up, and keep at it. And if it’s about someone else’s data, insist on data tallies and data hygiene stats.

4. Recency
Outdated data are really bad for prediction or analysis, and that is a different kind of badness. Many call it a “Data Atrophy” issue, as no matter how fresh and accurate a data point may be today, it will surely deteriorate over time. Yes, data have a finite shelf-life, too. Let’s say that you obtained a piece of information called “Golf Interest” on an individual level. That information could be coming from a survey conducted a long time ago, or some golf equipment purchase data from a while ago. In any case, someone who is attached to that flag may have stopped shopping for new golf equipment, as he doesn’t play much anymore. Without a proper database update and a constant feed of fresh data, irrelevant data will continue to drive our decisions.

The crazy thing is that, the harder it is to obtain certain types of data—such as transaction or behavioral data—the faster they will deteriorate. By nature, transaction or behavioral data are time-sensitive. That is why it is important to install time parameters in databases for behavioral data. If someone purchased a new golf driver, when did he do that? Surely, having bought a golf driver in 2009 (“Hey, time for a new driver!”) is different from having purchased it last May.
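
One way to “install time parameters,” sketched below: store the event date next to every behavioral flag and let the shelf life vary by data type, so that a driver bought in 2009 and one bought last May are never treated as equals. The decay windows here are illustrative guesses, not research-backed figures.

    from datetime import date

    # Assumed shelf life, in days, by data type; placeholders to be tuned per business.
    SHELF_LIFE = {"hotline": 75, "purchase": 365, "survey": 1095}

    def is_still_warm(flag_type, event_date, as_of=None):
        """A behavioral flag is only usable while it is younger than its shelf life."""
        as_of = as_of or date.today()
        return (as_of - event_date).days <= SHELF_LIFE[flag_type]

    print(is_still_warm("purchase", date(2009, 6, 1), as_of=date(2015, 5, 1)))   # False
    print(is_still_warm("purchase", date(2014, 5, 20), as_of=date(2015, 5, 1)))  # True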

So-called “Hot Line Names” literally cease to be hot after two to three months, or in some cases much sooner. The evaporation period may be different for different product types, as one may stay longer in the market for an automobile than for a new printer. Part of the job of a data scientist is to defer the expiration date of data, finding leads or prospects who are still “warm,” or even “lukewarm,” with available valid data. But no matter how much statistical work goes into making the data “look” fresh, eventually the models will cease to be effective.

For decision-makers who do not make real-time decisions, a real-time database update could be an expensive solution. But the databases must be updated constantly (I mean daily, weekly, monthly or even quarterly). Otherwise, someone will eventually end up making a wrong decision based on outdated data.

5. Consistency
No matter how much effort goes into keeping the database fresh, not all data variables will be updated or filled in consistently. And that is the reality. The interesting thing is that, especially when using them for advanced analytics, we can still provide decent predictions if the data are consistent. It may sound crazy, but even not-so-accurate-data can be used in predictive analytics, if they are “consistently” wrong. Modeling is developing an algorithm that differentiates targets and non-targets, and if the descriptive variables are “consistently” off (or outdated, like census data from five years ago) on both sides, the model can still perform.

Conversely, if there is a huge influx of a new type of data, or any drastic change in data collection or in a business model that supports such data collection, all bets are off. We may end up predicting such changes in business models or in methodologies, not the differences in consumer behavior. And that is one of the worst kinds of errors in the predictive business.

Last month, I talked about dealing with missing data (refer to “Missing Data Can Be Meaningful“), and I mentioned that data can be inferred via various statistical techniques. And such data imputation is OK, as long as it returns consistent values. I have seen so many so-called professionals messing up popular models, like “Household Income,” from update to update. If the inferred values jump dramatically due to changes in the source data, there is no amount of effort that can save the targeting models that employed such variables, short of re-developing them.

That is why a time-series comparison of important variables in databases is so important. Any changes of more than 5 percent in distribution of variables when compared to the previous update should be investigated immediately. If you are dealing with external data vendors, insist on having a distribution report of key variables for every update. Consistency of data is more important in predictive analytics than sheer accuracy of data.
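
The 5 percent rule can be automated with a simple distribution comparison between updates, along the lines of the sketch below. The threshold comes straight from the rule of thumb above; the income buckets are placeholders.

    def distribution(values):
        """Share of records falling into each category of one variable."""
        total = len(values)
        return {v: values.count(v) / total for v in set(values)}

    def flag_shifts(prev_values, curr_values, threshold=0.05):
        """Return the categories whose share moved by more than the threshold."""
        prev, curr = distribution(prev_values), distribution(curr_values)
        categories = set(prev) | set(curr)
        return {c: (prev.get(c, 0.0), curr.get(c, 0.0))
                for c in categories
                if abs(curr.get(c, 0.0) - prev.get(c, 0.0)) > threshold}

    last_update = ["<50K"] * 40 + ["50-100K"] * 40 + ["100K+"] * 20
    this_update = ["<50K"] * 30 + ["50-100K"] * 45 + ["100K+"] * 25
    print(flag_shifts(last_update, this_update))  # '<50K' moved 0.40 -> 0.30: investigate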

6. Connectivity
As I mentioned earlier, there are many types of data. And the predictive power of data multiplies as different types of data get to be used together. For instance, demographic data, which is quite commoditized, still plays an important role in predictive modeling, even when dominant predictors are behavioral data. It is partly because no one dataset is complete, and because different types of data play different roles in algorithms.

The trouble is that many modern datasets do not share any common matching keys. On the demographic side, we can easily imagine using PII (Personally Identifiable Information), such as name, address, phone number or email address, for matching. Now, if we want to add some transaction data to the mix, we would need some match “key” (or a magic decoder ring) by which we can link it to the base records. Unfortunately, many modern databases completely lack PII, right from the data collection stage. The result is that such a data source remains in a silo. It is not as if all is lost in that situation, as the data can still be used for trend analysis. But to employ multisource data for one-to-one targeting, we really need to establish the connection among various data worlds.
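To show what a match key buys you, here is a toy sketch that joins a demographic source and a transaction source on a shared key (an email address here; in practice it might be a hashed email, a name-and-address key or an internal customer ID). All of the fields and values are made up for illustration.

    # Two hypothetical sources keyed by email address.
    demographics = {
        "jane@example.com": {"age_band": "35-44", "home_owner": True},
        "mark@example.com": {"age_band": "25-34", "home_owner": False},
    }
    transactions = {
        "jane@example.com": {"last_purchase": "2014-09-12", "total_spend": 412.50},
        "sam@example.com": {"last_purchase": "2014-10-02", "total_spend": 89.99},
    }

    merged = {}
    for key in set(demographics) | set(transactions):
        # Records found in only one source stay in their silo; records found in
        # both sources are the ones usable for one-to-one targeting.
        merged[key] = {**demographics.get(key, {}), **transactions.get(key, {})}

    print(merged)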

Even if the connection cannot be made at the household, individual or email level, I would not give up entirely, as we can still target based on IP addresses, which may lead us to geographic units, such as ZIP codes. I’d take ZIP-level targeting over no targeting at all anytime, even though there are many analytical and summarization steps required to get there (more on that subject in future articles).
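For the sake of illustration, a ZIP-level rollup can be as simple as the sketch below: records that cannot be tied to a person are aggregated by ZIP code, and the summary figures become the targeting input. Field names and values are hypothetical, and real summarization involves many more variables and quality checks.

    from collections import defaultdict

    # Hypothetical visit records that carry a ZIP code but no PII.
    records = [
        {"zip": "10001", "converted": 1},
        {"zip": "10001", "converted": 0},
        {"zip": "94107", "converted": 1},
        {"zip": "94107", "converted": 1},
    ]

    by_zip = defaultdict(lambda: {"count": 0, "conversions": 0})
    for r in records:
        by_zip[r["zip"]]["count"] += 1
        by_zip[r["zip"]]["conversions"] += r["converted"]

    for zip_code, stats in by_zip.items():
        # Conversion rate per ZIP becomes the (admittedly coarse) targeting signal.
        print(zip_code, round(stats["conversions"] / stats["count"], 2))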

Not having PII or any hard match key is not a complete deal-breaker, but without it, the maneuvering space for analysts and marketers shrinks significantly. That is why the existence of PII, or even ZIP codes, is the first thing I check when looking into a new data source. I would like to free such data from isolation.

7. Delivery Mechanisms
Users judge databases based on the visualization or reporting tool sets that are attached to them. As I mentioned earlier, that is like judging an entire building based just on the window treatments. But for many users, that is the reality. After all, how would a casual user without a programming or statistical background even “see” the data? Through tool sets, of course.

But that is only one end of it. There are so many types of platforms and devices, and the data must flow through them all. The important point is that data is useless if it is not in the hands of decision-makers, through the device of their choice, at the right time. Such flow can be actualized via API feeds, FTP, or good, old-fashioned batch installments, and no database should stay too far away from the decision-makers. In my earlier column, I emphasized that data players must be good at (1) Collection, (2) Refinement, and (3) Delivery (refer to “Big Data is Like Mining Gold for a Watch—Gold Can’t Tell Time”). Delivering the answers to inquirers properly closes one iteration of the information flow. And the answers must continue to flow to the users.

8. User-Friendliness
Even when state-of-the-art (I apologize for using this cliché) visualization, reporting or drill-down tool sets are attached to the database, if the data variables are too complicated or not intuitive, users will get frustrated and eventually move away from it. If that happens after pouring a sick amount of money into a data initiative, that would be a shame. But it happens all the time. In fact, I am not going to name names here, but I have seen a ridiculously hard-to-understand data dictionary from a major data broker in the U.S.; it looked as if the data layout was designed for robots, by robots. Please. Data scientists must try to humanize the data.

This whole Big Data movement has momentum now, and in the interest of not killing it, data players must make every aspect of this data business easier for the users, not harder. Simpler data fields, intuitive variable names, meaningful value sets, pre-packaged variables in the form of answers, and a complete data dictionary are not too much to ask after the hard work of developing and maintaining the database.

This is why I insist that data scientists and professionals must be businesspeople first. The developers should never forget that end-users are not trained data experts. And guess what? Even professional analysts would appreciate intuitive variable sets and complete data dictionaries. So, pretty please, with sugar on top, make things easy and simple.

9. Cost
I saved this important item for last for a good reason. Yes, the dollar sign is a very important factor in all business decisions, but it should not be the sole deciding factor when it comes to databases. That means CFOs should not dictate the decisions regarding data or databases without considering the input from CMOs, CTOs, CIOs or CDOs who should be, in turn, concerned about all the other criteria listed in this article.

Playing with the data costs money. And, at times, a lot of money. When you add up all the costs for hardware, software, platforms, tool sets, maintenance and, most importantly, the man-hours for database development and maintenance, the sum becomes very large very fast, even in the age of the open-source environment and cloud computing. That is why many companies outsource the database work to share the financial burden of having to create infrastructures. But even in that case, the quality of the database should be evaluated based on all criteria, not just the price tag. In other words, don’t just pick the lowest bidder and hope to God that it will be alright.

When you purchase external data, you can apply these same evaluation criteria. A test-match job with a data vendor will reveal many of the details listed here; metrics such as match rate and variable fill rate, along with the completeness of the data dictionary, should be carefully examined. In short, what good is a lower unit price per 1,000 records if the match rate is horrendous and even the matched data are filled with missing or sub-par inferred values? Also consider that, once you commit to an external vendor and start building models and an analytical framework around its data, it becomes very difficult to switch vendors later on.
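For what it’s worth, the two metrics mentioned above are simple to compute once you have the vendor’s return file. The sketch below shows the arithmetic with made-up numbers and a hypothetical field name; the point is to look at both figures together, not just the price per thousand.

    def match_rate(records_sent, records_matched):
        # Share of our records the vendor could identify at all.
        return records_matched / records_sent

    def fill_rate(matched_records, field):
        # Among matched records, how often the variable actually has a value.
        filled = sum(1 for r in matched_records if r.get(field) not in (None, ""))
        return filled / len(matched_records)

    matched = [
        {"household_income": "75-100K"},
        {"household_income": ""},
        {"household_income": "50-75K"},
        {"household_income": None},
    ]

    print("Match rate:", match_rate(records_sent=10000, records_matched=6200))  # 0.62
    print("Income fill rate:", fill_rate(matched, "household_income"))          # 0.5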

When shopping for external data, consider the following when it comes to pricing options:

  • Number of variables to be acquired: Don’t just go for the full option. Pick the ones that you need (involve your analysts), unless you get a fantastic deal for an all-inclusive option. Most vendors provide multiple packaging options.
  • Number of records: Processed vs. Matched. Some vendors charge based on “processed” records, not just matched records. Depending on the match rate, it can make a big difference in total cost.
  • Installment/update frequency: Real-time, weekly, monthly, quarterly, etc. Think carefully about how often you would need to refresh “demographic” data, which doesn’t change as rapidly as transaction data, and how big the incremental universe would be for each update. Obviously, a real-time API feed can be costly.
  • Delivery method: API vs. Batch Delivery, for example. Both the price and the data menu change quite a bit based on the delivery option.
  • Availability of a full-licensing option: When the internal database becomes really big, full installment becomes a good option. But you would need the internal capability for a match-and-append process that involves “soft-match,” using similar names and addresses (imagine good old name-and-address merge routines; a minimal sketch of the idea follows this list). It becomes a bit of a commitment, as the match-and-append becomes part of the internal database update process.
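As promised above, here is a toy version of the “soft-match” idea: normalize names and addresses, then accept near-matches above a similarity threshold instead of demanding exact equality. Real merge/purge routines are far more elaborate; the fields, threshold and similarity measure here are only illustrative.

    from difflib import SequenceMatcher

    def normalize(text):
        # Lowercase, strip periods and collapse whitespace before comparing.
        return " ".join(text.lower().replace(".", "").split())

    def soft_match(record_a, record_b, threshold=0.85):
        a = normalize(record_a["name"] + " " + record_a["address"])
        b = normalize(record_b["name"] + " " + record_b["address"])
        return SequenceMatcher(None, a, b).ratio() >= threshold

    internal = {"name": "Robert T. Smith", "address": "123 Main Street Apt 4"}
    vendor = {"name": "Robert Smith", "address": "123 Main St. Apt. 4"}

    print(soft_match(internal, vendor))  # likely True despite the formatting differences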

Business First
Evaluating a database is a project in itself, and these nine evaluation criteria make a good guideline. Depending on the business, of course, more conditions could be added to the list. And that leads to the final point, which I did not even include in the list: The database (or all data, for that matter) should be useful in meeting the business goals.

I have been saying that “Big Data Must Get Smaller,” and this whole Big Data movement should be about (1) Cutting down on the noise, and (2) Providing answers to decision-makers. If the data sources in question do not serve the business goals, cut them out of the plan, or cut loose the vendor if they are from external sources. It would be an easy decision if you “know” that the database in question is filled with dirty, sporadic and outdated data that cost lots of money to maintain.

But if that database is needed for your business to grow, clean it, update it, expand it and restructure it to harness better answers from it. Just like the way you’d maintain your cherished automobile to get more mileage out of it. Not all databases are created equal for sure, and some are definitely more equal than others. You just have to open your eyes to see the differences.

Mobile’s Impact on the Consumer Path to Purchase

One in three ad dollars will go to digital advertising next year, meaning digital media spending will be almost equal to television spending. Digital strategies will help drive the U.S. advertising market to $172 billion in 2015, according to new research from Magna Global. Additional research shows that digital advertising will overtake television advertising by 2017, due in large part to the growing popularity of online video on sites like YouTube and Netflix. This—in combination with mobile and social networking—will push digital to the forefront.

A digital strategy is no longer a nice-to-have, but a must-have for retailers and brands. If you don’t believe that, then you need to take a hard look at the following data points:

  • Mobile devices lead to in-store purchases. 52 percent of U.S. shoppers have used a mobile device to research products while browsing in a store.
  • Tablets are the cornerstone of online shopping. Tablets are expected to bring in $76 billion in online sales, two times that of mobile devices.
  • Digital content and mobile devices go hand in hand. According to eMarketer, U.S. adults will spend 23 percent of their time consuming media on a mobile device this year.
  • Mobile advertising is at its tipping point. Ad spend is expected to hit $31.45 billion this year. By 2018, it will top $94 billion.

How Do You Get There From Here?
Effective digital strategies take a cross-channel approach that integrates the various mobile channels, such as SMS, app, Web and social.

Value comes from behind the scenes, as brands can learn useful information from mobile interactions. For example, customers reveal their operating system when they download an app or open their Web browser. Smart marketers collate such data points into one centralized customer profile—an ideal asset for maximizing personalization on mobile.
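As a hedged sketch of that idea, the snippet below records one such data point, the operating system revealed by a Web browser’s user-agent string, into a centralized customer profile. Real user-agent parsing is messier than a keyword check, and the profile structure and customer IDs here are hypothetical.

    def detect_os(user_agent):
        ua = user_agent.lower()
        if "android" in ua:
            return "Android"
        if "iphone" in ua or "ipad" in ua:
            return "iOS"
        return "Other"

    profiles = {}  # stand-in for the centralized customer profile store

    def record_visit(customer_id, user_agent):
        profile = profiles.setdefault(customer_id, {"devices": set()})
        profile["devices"].add(detect_os(user_agent))

    record_visit("cust-42", "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) ...")
    record_visit("cust-42", "Mozilla/5.0 (Linux; Android 4.4.2) ...")
    print(profiles["cust-42"]["devices"])  # {'iOS', 'Android'} (set order may vary)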

Companies just getting started with cross-channel mobile marketing should focus on small wins. True cross-channel takes time and iteration, so commit to integrating what makes sense in the short, medium and long terms instead of trying to do everything simultaneously. Below you will find some key areas to consider when building out a mobile strategy:

1. Tablets, Smartphones and Watches, Oh My!
It will be vital for brands to take different form factors into account as they roll out their mobile campaigns. Mobile campaigns can quickly be compromised if brands don’t think about the impact on visuals and the call to action across various screen sizes.

2. The Mobile Marketing Tipping Point
Mobile marketing is evolving into more than just a tactic and is being embraced as a core part of the marketing strategy. The goals are largely the same as in traditional marketing: attract, engage and retain new and existing customers. The difference is that marketers can now target their audiences with highly relevant content based on location, interests and interactions throughout the mobile lifecycle.

3. Deliver a Seamless Experience From Discovery to Purchase
Brands have to make a conscious effort to remove the silos across organizations to be successful at mobile marketing. The goal for marketers should be to collaborate across initiatives, taking into account different screen sizes, channels, designs and messages to deliver ONE consistent experience to consumers.

4. Connecting the Dots Across the Entire Consumer Lifecycle
As digital becomes a more integral part of the marketing strategy, it will be vitally important to understand how mobile campaigns are performing across the entire customer lifecycle—including mobile ads and messaging, QR Codes, mobile websites, branded apps and social media. With these insights, marketers will be able to optimize their campaigns and better understand the triggers that lead consumers down the path to purchase.

People everywhere are becoming more reliant on mobile devices and mobile websites to provide them with instant access to product information, deals and the opportunity to purchase in an easy, straightforward manner. Brands have to make it easy for their customers to navigate mobile sites and quickly decide to purchase, regardless of what device they are on.

Are We Too Nosy?

A few days ago a fellow Android user recommended an app[1] enabling me to view what risk each of my smartphone’s apps represents—what information they collect; what they share; and what degree of security they exhibit in collecting, storing and transmitting the data they collect. After the install, I was flabbergasted to learn how reckless these companies are, and I promptly performed sweeping deletes.

On the heels of this disconcerting revelation, I started to think through our recent vetting of marketing-automation applications. Each vendor was quick to illustrate how its application, for instance, embeds Google Analytics, augments those data points with individual user information, and boasts additional features that build ever more complete profiles of visitors and subscribers. As a person hell-bent on keeping my private life private, I have to ask: Have we marketers gone too far? Are we too nosy?

I think perhaps we have and we are.

Our Responsibility
With today’s technology, we are able to collect more information about individual users than we could possibly process and certainly more than we could find useful—but that in itself isn’t irresponsible; it’s when we’re not careful with or careful to protect this data to the extent we should. Yes, the big data bandwagon is here and many marketers are eager to jump on, but we also have an ethical responsibility to our constituents.

I see a great failure in our wastefulness; we collect data we don’t need and, in doing so, run the risk of eroding the trust of our constituents. When we collect data and send personalized campaigns referencing information our constituents have not knowingly provided us, they become aware of our activities, and in a way that may not be welcomed.

I received an email after visiting a website, and it started with, “Not to be creepy, but we saw that you visited our website …” Not to be creepy? The only possible reason for starting an email in such a manner is that the sender is acutely aware its clients probably do not know they can be personally tracked at a website. Most people believe their browsing activity is private, and that by clearing their browser history, no one is the wiser about where they’ve been. We, of course, know differently.

Just because we can collect data doesn’t mean we should. We should be careful, useful, respectful and protective in our collection and use of both implicit and explicit data.

Full Disclosure
When I tell people that I am a behavioral marketer—I create campaigns designed to disclose the recipients’ behavior and interests—and describe how the process works, they are always shocked to learn how much information can be collected through drip and nurture marketing and website visits. They immediately draw parallels between the NSA, Google, and Spider Trainers or the clients for whom we work. They also swear off ever clicking another link—as if it were that simple.

Is full disclosure in our future? In the same way that website owners must adhere to what is commonly known as the cookie law[2], will we marketers be required to disclose what we are collecting about our subscribers, how it is used and allow them to opt out without unsubscribing? If we continue on our path of increased surveillance, wasteful collection and irresponsible use, yes, I believe we will.


[1] Clueful is the Android app. It is also available for iOS on iPhones.

[2] A law that started as an E.U. directive, adopted by all E.U. countries in 2011, designed to protect online privacy by making consumers aware of how information about them is collected and used online, and by giving them a choice to allow it or not.

Updating Your Marketing Database

It’s amazing how quickly things go obsolete these days. For those of us in the business of customer data, times and technologies have both changed. Some of it has to do with the advent of new technologies; some of it has to do with changing expectations. Let’s take a look at how the landscape has changed and what it means for marketers.

For marketing departments, maintaining and updating customer data has always been a major headache. One way to update data is by relying on sales team members to make the updates themselves as they go about their jobs. For lack of a better term, let’s call this method internal crowd-sourcing, and there are two reasons why it has its limitations.

The first reason is technology. Typically, customer data is stored in a data hub or data warehouse, which is usually a home-grown and oftentimes proprietary database built using one of many popular database architectures. Customer databases tend to be proprietary because each organization sells different products and services, to different types of firms, and consequently collects different data points. Additionally, customer databases are usually grown organically over many years, and as a result tend to contain disparate information, often collected from different sources during different timeframes, of varying degrees of accuracy.

It’s one thing having data stored in a data warehouse somewhere. It’s quite another to give salespeople access to a portal where the edits can be made—that’s been the real challenge. The database essentially needs to be integrated with or housed in some kind of tool, such as enterprise resource planning (ERP) or customer relationship management (CRM) software, that gives sales teams the capability to update customer records on the fly with front-end read/write/edit capabilities.

Cloud-based CRM technology (such as SalesForce.com) has grown by leaps and bounds in recent years to fill this gap. Unlike purpose-built customer databases, however, out-of-the-box cloud-based CRM tools are developed for a mass market, and without customizations contain only a limited set of standard data fields plus a finite set of “custom fields.” Without heavy customizations, in other words, data stored in a cloud-based CRM solution only contains a subset of a company’s customer data file, and is typically only used by salespeople and customer service reps. Moreover, data in the CRM is usually not connected to that of other business units, such as marketing or finance, which require a more complete data set to do their jobs.

The second challenge to internal crowd-sourcing has more to do with the very nature of salespeople themselves. Anyone who has worked in marketing knows firsthand that it’s a monumental challenge to get salespeople to update contact records on a regular basis—or do anything else, for that matter, that doesn’t involve generating revenue or commissions.

Not surprisingly, this gives marketers fits. Good luck sending out effective (and hopefully highly personalized) CRM campaigns if customer records are either out of date or flat-out wrong. Anyone who has used Salesforce.com has seen the “Stay in Touch” function, which gives salespeople an easy and relatively painless method for scrubbing contact data: sending an email inviting contacts in the database to “update” their contact details. The main problem with this tool is that it requires a correct email address in the first place.

Assuming your salespeople are diligently updating data in the CRM, another issue with this approach is it essentially limits your data updates to whatever the sales team happens to know or glean from each customer. It assumes, in other words, that your people are asking the right questions in the first place. If your salesperson does not ask a customer how many employees they have globally or at a particular location, it won’t get entered into the CRM. Nor, for that matter, will data on recent mergers and acquisitions or financial statements—unless your sales team is extremely inquisitive and is speaking with the right people in your customers’ organizations.

The other way to update customer data is to rely on a third-party data provider to do it for you—to cleanse, correct, append and replace the data on a regular basis. This process usually involves taking the entire database and uploading it to an FTP site somewhere. The file is then grabbed by the third party, who works its magic—comparing the file against a central database that is presumably updated quite regularly—and returns it so it can be merged back into the database on the data hub or residing in the CRM.
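A bare-bones sketch of that round trip might look like the following: push the customer file to the vendor’s FTP site, then, once the vendor has worked its magic, pull the cleansed file back down for the merge. The host, credentials and file names are placeholders, and a real job would wrap this in scheduling, logging and error handling.

    from ftplib import FTP

    FTP_HOST = "ftp.vendor.example.com"   # hypothetical vendor FTP site
    FTP_USER = "acme_corp"                # placeholder credentials
    FTP_PASS = "secret"

    def upload_customer_file(local_path):
        with FTP(FTP_HOST) as ftp:
            ftp.login(user=FTP_USER, passwd=FTP_PASS)
            with open(local_path, "rb") as f:
                ftp.storbinary("STOR customer_file.csv", f)

    def download_cleansed_file(local_path):
        with FTP(FTP_HOST) as ftp:
            ftp.login(user=FTP_USER, passwd=FTP_PASS)
            with open(local_path, "wb") as f:
                ftp.retrbinary("RETR customer_file_cleansed.csv", f.write)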

Because this process involves technology, has a lot of moving parts and requires several steps, it is generally set up as an automated job and allowed to run on a schedule. Moreover, because the process involves overwriting an entire database (even though it is automated), it requires having IT staff around to supervise the process in a best-case scenario, or to jump in if something goes wrong and it blows up completely. Not surprisingly, because we’re dealing with large files, multiple stakeholders and plenty of room for technology meltdowns, most marketers tend to shy away from running a batch update more than once per month. Some even run them quarterly. Needless to say, given the current pace of change, many feel that’s not frequent enough.

It’s interesting to note that not very long ago, sending database updates quarterly via FTP file dump was seen as state-of-the-art. Not any longer; you see, FTP is soooo 2005. What has replaced FTP is what we call a “transactional” database update system. Unlike an FTP setup, which requires physically transferring a file from one server to another, transactional data updates rely on an Application Programming Interface, or API, to get the data from one system to another.

For those of you unfamiliar with the term, an API is a pre-established set of rules that different software programs can use to communicate with each other. An apt analogy might be the way a user interface (UI) facilitates interaction between humans and computers. Using an API, data can be updated in real time, either on a record-by-record basis or in bulk. If Company A wants to update a record in its CRM with fresh data from Company B, for instance, all it needs to do is transmit a unique identifier for the record in question to Company B, which will then return the updated information to Company A via the API.
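A simplified sketch of that exchange is below: Company A sends a unique identifier to Company B’s API and receives the refreshed record in return. The endpoint URL, authentication header and field names are hypothetical (a real integration would follow the provider’s documentation), and the example assumes the third-party requests library is installed.

    import requests

    def fetch_updated_record(record_id, api_key):
        # Placeholder endpoint; the provider's docs would define the real one.
        response = requests.get(
            "https://api.dataprovider.example.com/v1/records/" + record_id,
            headers={"Authorization": "Bearer " + api_key},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # e.g. {"company_name": "...", "employees": ...}

    # updated = fetch_updated_record("crm-000123", api_key="YOUR_KEY")
    # The updated fields would then be written back to the matching CRM record.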

Perhaps the best part of the transactional update architecture is that it can be set up to connect with the data pretty much anywhere it resides—in a cloud-based CRM solution or in a purpose-built data warehouse sitting in your data center. For those using a cloud-based solution, a huge advantage of this architecture is that once a data provider builds hooks into popular CRM solutions, there are usually no additional costs for integration, and transactional updates can be initiated in bulk by the CRM administrator, or on a transaction-by-transaction basis by salespeople themselves. It’s quite literally plug and play.

For those with an on-site data hub, integrating with the transactional data provider is usually pretty straightforward as well, because most APIs not only rely on standard Web technology, but also come with API keys and easy-to-follow instructions. Setting up the integration, in other words, can usually be handled by a small team in a short timeframe and for a surprisingly small budget. And once it’s set up, it will pretty much run on its own. Problem solved.

Creepy Marketing—When Database Marketing Goes Awry

With Halloween over and the holidays here, I thought that Creepy Marketing made a timely subject for today’s blog. Now I’m not referring to marketing for ghouls, witches or mummies. I’m talking about adding a creepy factor to your marketing program—a major pitfall of 1:1 marketing.

Creeping people out is, after all, contrary to what we’re trying to achieve as marketers, which is to use promotion to advance the brand’s sales and branding objectives. That is, of course, unless it’s your goal to damage your brand and drive away customers. Assuming that’s not the case, let’s agree that creepy is bad. Very bad. In the age of social media, one creeped-out customer can very easily spread the word to hundreds of thousands of customers and prospects. In other words, better safe than sorry.

But before we go any further, let’s attempt to define creepy. This is important because many marketers I speak with note that there is often a razor-thin line between casual and its inappropriate Cousin Creepy, between making a sale and detonating a potential long-term relationship. Fair enough. Creepiness is also a bit slippery because, like fashion tastes, standards for creepiness tend to change with time. To quote Sean Parker, Facebook’s founding president, “Today’s creepy is tomorrow’s necessity.”

When it comes to detecting creepiness, I’m a firm believer in what I’ll call the ad oculos school of thought. For those of you who do not know Latin, ad oculos means “to the eyes,” and roughly translates to “obvious to anyone who sees it.” In other words, if it looks creepy and feels creepy, then it probably is creepy and you shouldn’t do it.

You shouldn’t, for example, write out your customers’ names on a postcard or landing page—or anywhere that might be, or seem, visible to the general public. Nor, for that matter, should you display a customer’s age, marital status or medical condition on any piece of marketing collateral. This doesn’t mean that you shouldn’t send offers for dating services to a customer you know is single, or information on chiropractors to someone who has acknowledged a back problem. It means you need to be careful with the language you use in those offers, taking care not to publicize information your customers want to keep in the private sphere.

It’s also important to keep in mind that 1:1 marketing works because it focuses like a laser on your customer’s interests and presents them with compelling and compatible product information and offers. Personalized communication is not an exercise in regurgitating your customer’s personal data in an effort to prove to them how much you know.

Remember, successful database marketers use profile data to run highly compelling and relevant campaigns for their customers. What makes a campaign successful is that the offer and marketing message contain relevant information the recipient will have a strong affinity for—not simply that it is personalized. Personalization for the sake of personalization is nothing but a gimmick—it might work once, but that’s it. Successful and sustainable personalized marketing programs ultimately find a formula for identifying customer interests based on key data points and indicators, and use that formula to create and disseminate offers that will strike a chord with prospects and customers.

Have you ever been creeped out? If so, I’d love to find out how and get your feedback.