Dysrationalia and Other Consumer Disorders

It’s true. Most consumers suffer from a bad case of dysrationalia, which, according to Keith Stanovich, emeritus professor of applied psychology and human development at the University of Toronto, is the “inability to think and behave rationally, despite having adequate intelligence.” He should know; he coined the term.

Yep. I’m guilty, as well. And assuming you have adequate intelligence, you are, too. My own case came out when I bought a Suburban a couple of years ago. The slightly used car sat on the dealer’s lot for about four months, while other Suburbans that were older and had more miles were selling within days of showing up on the lot. The only noticeable difference, other than the year and miles, was price. The older models that sold almost immediately were priced higher! For four months, I toyed with testing the car that wouldn’t sell, but my rational mind decided it would be a waste of time because, after all, it was priced nearly $6,500 below Kelley Blue Book value. Clearly, something had to be wrong with it.

WAKE UP!!! How did my rational mind know that there was something wrong with the lower-priced, newer car with fewer miles, unless my irrational mind was telling it so? And why does my irrational mind have more influence over my actions than my rational mind?

Upon discovering I, too, suffered from dysrationalia, I bought the car. And two years later, we have discovered absolutely nothing wrong with the car, other than the time my husband didn’t put the brake on and it rolled into a neighbor’s garage.

This same type of decision-making thought process and resulting behavior takes place daily among consumers of all ages, in all cultures, in all parts of the world. It’s human nature. For the most part, consumers never become aware that they are driven by irrational thinking and therefore, it never changes. So the reality is that we marketers have to address it, instead.

A great example of dysrationalia is found in a book by another one of my favorite psychologists, Daniel Kahneman. In “Thinking, Fast and Slow,” he asks the question: “A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?”

Your first thought, if you’re like the majority of students at MIT, Harvard, Princeton and other prestigious schools, was 10 cents. Admit it. That’s the smart person’s answer. But it’s wrong. Do the math: if the ball cost 10 cents, the bat would have to cost $1.10 to be $1 more than the ball, and together they would total $1.20. Wrong. The right answer is 5 cents: the ball costs $0.05, the bat costs $1.05, and together they cost exactly $1.10.
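
The bat-and-ball arithmetic is easy to verify in a few lines. A quick sketch in Python, working in whole cents to sidestep floating-point rounding (the variable names are my own):

```python
# Bat-and-ball puzzle, in whole cents to avoid floating-point noise.
TOTAL = 110  # bat + ball = $1.10
DIFF = 100   # the bat costs $1.00 more than the ball

# The intuitive guess: the ball costs 10 cents.
ball = 10
bat = ball + DIFF
assert ball + bat == 120  # $1.20 total -- a dime too much, so 10 cents is wrong

# Solve it properly: ball + (ball + DIFF) = TOTAL  =>  ball = (TOTAL - DIFF) / 2
ball = (TOTAL - DIFF) // 2
bat = ball + DIFF
print(ball, bat)  # 5 105: the ball costs 5 cents, the bat $1.05
assert ball + bat == TOTAL
```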

So why do most intelligent people get it wrong intuitively? Because we rely on our first thoughts to guide us. In most of life’s circumstances, we don’t want to bother to really work out solutions; we just want to go with our “gut.” Just like my “gut” told me the car was a lemon because it was priced below value, many consumers convince themselves to make poor choices daily.

Think back on the last time you went grocery shopping. Did you really stop to look at the shelf notes telling you the price per ounce of various items so you could see which brand and which price offered the best value for the money? And did you take the time to compare the price per ounce of the generic brands vs. the advertised brands? If so, get a life! Of course we don’t spend hours comparing price per value for every purchase we make. We rely on our “rational” thinking to do that for us quickly, and that “rational” thinking tells us that bigger boxes give us more value, and generic brands cost less. Pay attention next time you shop and you’ll realize it just isn’t always so, even at those big box warehouse stores.

Not only do you make irrational shopping choices daily, but so do your customers. To compensate, we marketers must present our messages in a way that fits our consumers’ irrational decision processes. As Dan Ariely pointed out in his book, “Predictably Irrational,” there are many ways we can do this. For one, when giving customers three options to choose from, put the one you want them to purchase in the middle. Consumers are not going to do the math to see which offer provides the best price for the money; instead, they are going to make a quick “gut” choice and purchase the one in the middle, because our intuitive mind tells us the first option is too basic, and the third option is likely extravagant or superfluous. So the middle option is most practical and therefore intelligent.

Another way we can appeal to irrational thinking is through price. Most of us will never buy the highest-priced option, as it seems irresponsible. But we will buy something less expensive and feel good about it, even if it, too, was overpriced. Many studies show that if a salesperson shows us a clearly overpriced item, say a Lady Date Pearlmaster Rolex watch for $38,000, we will say “no, thanks.” But when they immediately afterward show us the Cartier Santos Demoiselle watch for roughly $15,000, it’s suddenly a bargain we have to have. Really? $15,000 for a watch is a bargain? My conscious mind tells me that it isn’t intelligent for anyone, regardless of income. (Yes, call me cheap.) The difference is that our “rational” mind kicked into “irrational” thinking because of pre-set reference points created by someone trying to sell us something: $15,000 is less than half of $38,000, so it feels like a practical and intelligent decision. For some.

So how do you, as a brand, create sales outside of marketing campaigns through psychologically driven pricing strategies, and how do you as a marketer position your products to sell precisely the items you want to sell most? Offering sales and promotional pricing can often backfire, as you saw in my car purchasing example.

Appealing to how our mind thinks, processes information, and calculates solutions — rational or not — is the key to “winning customers and influencing behavior” for life. Integrating other psychological drivers, such as authority and reward, will keep those same customers coming back for more.

Some key takeaways:

  • Never assume your target consumer is really going to read all the details of your message. Make it clear and actionable with a scan of the eyeballs.
  • Price according to what is reasonable and credible for the generation and mindset of customers you are targeting. Not too low and not too high.
  • Don’t make your customers think. Create a promotion that is simple and appeals to key psychological drivers: social proof, our need for rewards, and authority.
  • Immediately recognize your customers with a “Thank You” (there are no excuses with automated emails today) and reward them at least with a gesture of appreciation for their business.
  • Finally, spend some time studying shopping patterns of your most valuable customers to identify rational and irrational behavioral trends. Plan future promotions accordingly and enjoy a strong ROI!

Google’s Mobile Algorithm Update: What You Need to Know

Google announced some very big news about a major algorithm update that landed on April 21, 2015, just yesterday. As I’m sure you know, mobile traffic and the number of Google searches from mobile devices are on the rise. Well, it’s more than rising: mobile is about to surpass desktop computer traffic online by the end of the year.

Due to this shift in how people are searching and surfing the internet, Google has updated its search algorithm to take into account more mobile signals.

What Does This Mean for Your Business?
Zineb Ait Bahajji, a member of Google’s Webmaster Trends team, said this Google update will have a bigger impact on search rankings than the infamous Panda or Penguin updates. If you’ve been following SEO for a while, then I’m sure you’ve heard of the Panda and Penguin updates, which both caused massive changes in website rankings.

In other words, April 21 should have sent a lot of ill-prepared businesses off of the first page of Google!

What Do You Need to Do to Fix It?
If you haven’t already, then now is the time to get serious about your mobile website strategy. Answer these three questions to determine if you were ready for the April 21 update:

  1. Do you have a mobile version of your website?

  2. Can Google’s mobile bots crawl your website?

  3. Are your mobile webpages easy to use and navigate?

If you answered no to any of those questions, then you need to take action ASAP. Let’s go through each one in more detail.

1. Mobile Website
The two most common options to create a mobile website are:

  1. Create a separate mobile website on a subdomain like m.yourdomain.com. This is a great option if you have a limited budget. In fact, you can set this up for free using DudaMobile.com. If you have a complex website or a large e-commerce website, then this is not going to be a good option for you and I would recommend Option No. 2.
  2. Create a responsive website. A responsive website responds automatically to the device requesting the pages and displays the page differently depending on the device. Many popular CMS systems like WordPress have themes that are already responsive, so I recommend using an existing theme whenever possible. If you’re just getting started, then I highly recommend creating a responsive website rather than a separate mobile website because it’s easier to maintain in the long run.

2. Allow Google’s Mobile Bots
This should be fairly obvious, but if Google’s mobile bots can’t crawl your website, then Google is not going to show your website in the search results. It’s like your website doesn’t exist!

To check if Google can crawl your website, go to Google Webmaster Tools. Create an account (if you don’t have one already) and then go to the Crawl section in the left navigation. Then click “Fetch as Google”, select “Mobile: Smartphone” and click the big red button that says “Fetch.” Google will then tell you if there are any issues crawling your mobile website.

3. Mobile Website Usability
Finally, go to every page on your mobile website and make sure the pages are easy to use and navigate. Google started using mobile usability as a ranking factor as of April 21. Check all the images, hyperlinks, videos, and any other functionality normally available on the desktop version and fix anything that’s broken on the mobile version because that will drag down your rankings.

5 Types of Google AdWords Conversion Tracking

When I first started using Google AdWords in 2006, conversion tracking was in its infancy. There was only one type of conversion pixel code and there was no option to customize anything.

Oh boy, have the times changed. AdWords now gives advertisers five different conversion types, along with options to customize exactly how conversions are tracked in your account. For example, you can now track all conversions, or you can track only unique conversions to exclude the instances when prospects complete multiple forms on your website.

In this article, I’m going to bring you up to speed on all five different conversion types:

  1. Webform Submissions
  2. Online Sales with Revenue
  3. Calls from Website
  4. Calls from Ads [Call Extensions]
  5. Offline Sales [Import]

1. Webform Submissions:
Again, this was the only option for me back in 2006. Webform submissions like quote requests, demo requests, or any other key action on your website should be tracked as a conversion in your AdWords campaign. This can be easily set up by adding the conversion code to the “thank you” page of all your webforms.

2. Online Sales with Revenue:
Eventually, Google introduced the ability to assign a value to your conversions, which revolutionized campaign management. If your business sells anything online, then you absolutely must set up revenue tracking for your shopping cart. Once set up, you’ll start to see revenue data in AdWords so you can calculate your profit per keyword, placement or ad.
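
Once revenue is tracked, the per-keyword profit math is plain subtraction. A minimal sketch (the keywords and figures are made up for illustration; real numbers would come from your AdWords reports):

```python
# Hypothetical per-keyword stats: cost = ad spend, revenue = tracked conversion value.
keywords = {
    "running shoes":      {"cost": 420.00, "revenue": 1150.00},
    "trail shoes review": {"cost": 310.00, "revenue": 240.00},
}

for kw, stats in keywords.items():
    profit = stats["revenue"] - stats["cost"]
    roas = stats["revenue"] / stats["cost"]  # return on ad spend
    print(f"{kw}: profit ${profit:,.2f}, ROAS {roas:.2f}")
# The second keyword loses money despite generating revenue -- exactly the
# kind of insight revenue tracking makes visible.
```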

3. Calls from Website:
Just last year, Google launched website call tracking so that advertisers can see how many phone calls are generated from their AdWords ads. This code is fairly technical, so I recommend assigning the setup to your webmaster. Once installed, you’ll start to see conversions in your AdWords account any time a prospect calls after clicking on one of your ads.

4. Calls from Ads:
Most people do not call directly from the phone number listed in an ad, but some do. In AdWords you can track these calls by using a Call Extension, which is one of the many Ad Extensions available in AdWords. When you set up your Call Extension, make sure to click on the advanced options and check the box to track phone calls using a Google forwarding number.

5. Offline Sales [Import]:
Up to this point, all the conversion tracking options sound great, but they don’t solve the major problem for non-e-commerce businesses: tracking sales generated off of the internet. Luckily, Google recognized this problem and introduced the Offline Sales Import conversion option. This is the most technical of them all, but it’s well worth the effort to have your webmaster set it up. Here’s how it works:

  • Your webmaster will have to edit all the forms on your website to add a hidden field called “GCLID” (stands for “Google Click ID”)
  • Your webmaster will set the value of this hidden field using the URL parameter called “gclid”. For example, when someone clicks on one of your ads, Google automatically adds the “gclid” URL parameter, which looks like this: 123ABC567DEF. This is the unique tracking code you’ll use to track sales back to your ads.
  • You’ll need to send the GCLID code to your sales team and/or your customer relationship management (CRM) tool like Salesforce.
  • On a monthly basis, you’ll need to find all the sales that have a corresponding GCLID code and import those codes, along with the sales revenue, into Google AdWords.
  • AdWords will automatically match the GCLID codes to the keywords, placements and ads that the customers originally clicked on before ultimately making a purchase off of the internet.
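
Mechanically, the monthly import in the last two steps amounts to matching closed sales against their stored GCLID codes and writing them out in an import file. A rough sketch, assuming your CRM can export sales as Python dictionaries; the field names are my own, and the exact column headings should be verified against Google’s current import template:

```python
import csv

# Hypothetical closed sales exported from a CRM; each record carries the
# gclid captured by the hidden webform field when the lead first came in.
sales = [
    {"gclid": "123ABC567DEF", "closed": "2015-04-02 10:15:00", "amount": 1200.00},
    {"gclid": "", "closed": "2015-04-03 09:00:00", "amount": 450.00},  # walk-in, no click ID
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time", "Conversion Value"])
    for sale in sales:
        if sale["gclid"]:  # only ad-driven sales can be matched back to a click
            writer.writerow([sale["gclid"], "Offline Sale", sale["closed"], sale["amount"]])
```

The resulting file would then be uploaded through the AdWords conversion import screen, where each GCLID is matched back to the original click.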

If that didn’t make sense, then just send your webmaster this page and he or she will be able to help. Trust me, it sounds more complicated than it is.

Go through the five conversion types again and make sure you have them all set up in your AdWords campaigns. These are all critical to maximizing the performance of your campaigns.

Want more free Google AdWords tips? Click here to get my Google AdWords checklist.

Not All Databases Are Created Equal

Not all databases are created equal. No kidding. That is like saying that not all cars are the same, or not all buildings are the same. But somehow, “judging” databases isn’t so easy. First off, there is no tangible “tire” that you can kick when evaluating databases or data sources. Actually, kicking the tire is quite useless, even when you are inspecting an automobile. Can you really gauge the car’s handling, balance, fuel efficiency, comfort, speed, capacity or reliability based on how it feels when you kick “one” of the tires? I can guarantee that your toes will hurt if you kick it hard enough, and even then you won’t be able to tell the tire pressure within 20 psi. If you really want to evaluate an automobile, you will have to sign some papers and take it out for a spin (well, more than one spin, but you know what I mean). Then, how do we take a database out for a spin? That’s when the tool sets come into play.

However, even when the database in question is attached to analytical, visualization, CRM or drill-down tools, it is not so easy to evaluate it completely, as such practice reveals only a few aspects of a database, hardly all of them. That is because such tools are like window treatments of a building, through which you may look into the database. Imagine a building inspector inspecting a building without ever entering it. Would you respect the opinion of the inspector who just parks his car outside the building, looks into the building through one or two windows, and says, “Hey, we’re good to go”? No way, no sir. No one should judge a book by its cover.

In the age of Big Data (you should know by now that I am not too fond of that term), everything digitized is considered data. And data reside in databases. And databases are supposed to be designed to serve specific purposes, just like buildings and cars are. Although many modern databases are just mindless piles of accumulated data, granted that the database design is decent and functional, we can still imagine many different types of databases, depending on their purposes and contents.

Now, most of the Big Data discussions these days are about the platform, environment, or tool sets. I’m sure you heard or read enough about those, so let me boldly skip all that and their related techie words, such as Hadoop, MongoDB, Pig, Python, MapReduce, Java, SQL, PHP, C++, SAS or anything related to that elusive “cloud.” Instead, allow me to show you the way to evaluate databases—or data sources—from a business point of view.

For businesspeople and decision-makers, it is not about NoSQL vs. RDB; it is just about the usefulness of the data. And the usefulness comes from the overall content and database management practices, not just platforms, tool sets and buzzwords. Yes, tool sets are important, but concert-goers do not care much about the types and brands of musical instruments that are being used; they just care if the music is entertaining or not. Would you be impressed with a mediocre guitarist just because he uses the same brand of guitar that his guitar hero uses? Nope. Likewise, the usefulness of a database is not about the tool sets.

In my past column, titled “Big Data Must Get Smaller,” I explained that there are three major types of data, with which marketers can holistically describe their target audience: (1) Descriptive Data, (2) Transaction/Behavioral Data, and (3) Attitudinal Data. In short, if you have access to all three dimensions of the data spectrum, you will have a more complete portrait of customers and prospects. Because I already went through that subject in-depth, let me just say that such types of data are not the basis of database evaluation here, though the contents should be on top of the checklist to meet business objectives.

In addition, throughout this series, I have been repeatedly emphasizing that the database and analytics management philosophy must originate from business goals. Basically, the business objective must dictate the course for analytics, and databases must be designed and optimized to support such analytical activities. Decision-makers—and all involved parties, for that matter—suffer a great deal when that hierarchy is reversed. And unfortunately, that is the case in many organizations today. Therefore, let me emphasize that the evaluation criteria that I am about to introduce here are all about usefulness for decision-making processes and supporting analytical activities, including predictive analytics.

Let’s start digging into key evaluation criteria for databases. This list would be quite useful when examining internal and external data sources. Even databases managed by professional compilers can be examined through these criteria. The checklist could also be applicable to investors who are about to acquire a company with data assets (as in, “Kick the tire before you buy it.”).

1. Depth
Let’s start with the most obvious one. What kind of information is stored and maintained in the database? What are the dominant data variables in the database, and what is so unique about them? Variety of information matters for sure, and uniqueness is often related to specific business purposes for which databases are designed and created, along the lines of business data, international data, specific types of behavioral data like mobile data, categorical purchase data, lifestyle data, survey data, movement data, etc. Then again, mindless compilation of random data may not be useful for any business, regardless of the size.

Generally, data dictionaries (the lack of one is a sure sign of trouble) reveal the depth of the database, but we need to dig deeper, as transaction and behavioral data are much more potent predictors, and harder to manage, than demographic and firmographic data, which are very much commoditized already. Likewise, lifestyle variables derived from surveys that may have been conducted a long time ago are far less valuable than actual purchase history data, as what people say they do and what they actually do are two completely different things. (For more details on the types of data, refer to the second half of “Big Data Must Get Smaller.”)

Innovative ideas should not be overlooked, as data packaging is often very important in the age of information overflow. If someone or some company transformed many data points into user-friendly formats using modeling or other statistical techniques (imagine pre-developed categorical models targeting a variety of human behaviors, or pre-packaged segmentation or clustering tools), such effort deserves extra points, for sure. As I emphasized numerous times in this series, data must be refined to provide answers to decision-makers. That is why the sheer size of the database isn’t so impressive, and the depth of the database is not just about the length of the variable list and the number of bytes that go along with it. So, data collectors, impress us—because we’ve seen a lot.

2. Width
No matter how deep the information goes, if the coverage is not wide enough, the database becomes useless. Imagine well-organized, buyer-level POS (point-of-sale) data coming from actual stores in “real-time” (though I am sick of this word, as it is also overused). The data go down to SKU-level details and payment methods. Now imagine that the data in question are collected in only two stores: one in Michigan, and the other in Delaware. This, by the way, is not a completely made-up story; I faced similar cases in the past. Needless to say, we had to make many assumptions that we didn’t want to make in order to make the data useful, somehow. And I must say that it was far from ideal.

Even in the age when data are collected everywhere by every device, no dataset is ever complete (refer to “Missing Data Can Be Meaningful”). The limitations are everywhere. It could be about brand, business footprint, consumer privacy, data ownership, collection methods, technical limitations, distribution of collection devices, and the list goes on. Yes, Apple Pay is making a big splash in the news these days. But would you believe that data collected only through the Apple iPhone can really show the overall consumer trend in the country? Maybe in the future, but not yet. If you can pick only one credit card type to analyze, such as American Express, would you think that the result of the study is free from any bias? No siree. We can easily assume that such analysis would skew toward the more affluent population. I am not saying that such analyses are useless. In fact, they can be quite useful if we understand the limitations of data collection and the nature of the bias. But the point is that coverage matters.

Further, even within multisource databases in the market, the coverage should be examined variable by variable, simply because some data points are really difficult to obtain even by professional data compilers. For example, any information that crosses between the business and the consumer world is sparsely populated in many cases, and the “occupation” variable remains mostly blank or unknown on the consumer side. Similarly, any data related to young children is difficult or even forbidden to collect, so a seemingly simple variable, such as “number of children,” is left unknown for many households. Automobile data used to be abundant on a household level in the past, but a series of laws made sure that the access to such data is forbidden for many users. Again, don’t be impressed with the existence of some variables in the data menu, but look into it to see “how much” is available.

3. Accuracy
In any scientific analysis, a “false positive” is a dangerous enemy. In fact, false positives are worse than not having the information at all. Many folks just assume that any data coming out of a computer are accurate (as in, “Hey, the computer says so!”). But data are not completely free from human errors.

Sheer accuracy of information is hard to measure, especially when the data sources are unique and rare. And errors can happen at any stage, from data collection to imputation. If there are other known sources, comparing data from multiple sources is one way to ensure accuracy. Watching out for fluctuations in the distributions of important variables from update to update is another good practice.

Nonetheless, the overall quality of the data is not just up to the person or department who manages the database. Yes, in this business, the last person who touches the data is responsible for all the mistakes that were made to it up to that point. However, when the garbage goes in, the garbage comes out. So, when there are errors, everyone who touched the database at any point must share in the burden of guilt.

Recently, I was part of a project that involved data collected from retail stores. We ran all kinds of reports and tallies to check the data, and edited many data values out when we encountered obvious errors. The funniest one that I saw was the first name “Asian” and the last name “Tourist.” As an openly Asian-American person, I was semi-glad that they didn’t put in “Oriental Tourist” (though I still can’t figure out who decided that word is for objects, but not people). We also found names like “No info” or “Not given.” Heck, I saw in the news that this refugee from Afghanistan (he was a translator for the U.S. troops) obtained a new first name as he was granted an entry visa, “Fnu.” That would be short for “First Name Unknown” as the first name in his new passport. Welcome to America, Fnu. Compared to that, “Andolini” becoming “Corleone” on Ellis Island is almost cute.

Data entry errors are everywhere. When I used to deal with data files from banks, I found that many last names were “Ira.” Well, it turned out that it wasn’t really the customers’ last names, but they all happened to have opened “IRA” accounts. Similarly, movie phone numbers like 777-555-1234 are very common. And fictitious names, such as “Mickey Mouse,” or profanities that are not fit to print are abundant, as well. At least fake email addresses can be tested and eliminated easily, and erroneous addresses can be corrected by time-tested routines, too. So, yes, maintaining a clean database is not so easy when people freely enter whatever they feel like. But it is not an impossible task, either.

We can also train employees regarding data entry principles, to a certain degree. (As in, “Do not enter your own email address,” “Do not use bad words,” etc.). But what about user-generated data? Search and kill is the only way to do it, and the job would never end. And the meta-table for fictitious names would grow longer and longer. Maybe we should just add “Thor” and “Sponge Bob” to that Mickey Mouse list, while we’re at it. Yet, dealing with this type of “text” data is the easy part. If the database manager in charge is not lazy, and if there is a bit of a budget allowed for data hygiene routines, one can avoid sending emails to “Dear Asian Tourist.”
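
That “search and kill” routine is, at bottom, a growing block-list applied at every update. A toy sketch (the names, phone pattern and records are purely illustrative):

```python
# Ever-growing block-list of fictitious names and placeholder values.
FAKE_NAMES = {"mickey mouse", "asian tourist", "no info", "not given", "test test"}

def looks_fake(record):
    """Flag records with block-listed names or movie-style 555 phone exchanges."""
    full_name = f"{record.get('first', '')} {record.get('last', '')}".strip().lower()
    parts = record.get("phone", "").split("-")
    has_555_exchange = len(parts) == 3 and parts[1] == "555"
    return full_name in FAKE_NAMES or has_555_exchange

records = [
    {"first": "Asian", "last": "Tourist", "phone": "313-822-4411"},
    {"first": "Jane", "last": "Miller", "phone": "777-555-1234"},
    {"first": "Jane", "last": "Miller", "phone": "313-822-4411"},
]
clean = [r for r in records if not looks_fake(r)]
print(len(clean))  # 1 -- only the last record survives the hygiene pass
```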

Numeric errors are much harder to catch, as numbers do not look wrong to human eyes. That is when comparison to other known sources becomes important. If such examination is not possible on a granular level, then median value and distribution curves should be checked against historical transaction data or known public data sources, such as U.S. Census Data in the case of demographic information.

When it’s about the companies’ own data, follow your instincts and get rid of data that look too good or too bad to be true. We all can afford to lose a few records in our databases, and there is nothing wrong with deleting the “outliers” with extreme values. Erroneous names, like “No Information,” may be attached to a seven-figure lifetime spending sum, and you know that can’t be right.

The main takeaways are: (1) Never trust the data just because someone bothered to store them in computers, and (2) Constantly look for bad data in reports and listings, at times using old-fashioned eye-balling methods. Computers do not know what is “bad,” until we specifically tell them what bad data are. So, don’t give up, and keep at it. And if it’s about someone else’s data, insist on data tallies and data hygiene stats.

4. Recency
Outdated data are really bad for prediction or analysis, and that is a different kind of badness. Many call it a “Data Atrophy” issue, as no matter how fresh and accurate a data point may be today, it will surely deteriorate over time. Yes, data have a finite shelf-life, too. Let’s say that you obtained a piece of information called “Golf Interest” on an individual level. That information could be coming from a survey conducted a long time ago, or some golf equipment purchase data from a while ago. In any case, someone who is attached to that flag may have stopped shopping for new golf equipment, as he doesn’t play much anymore. Without a proper database update and a constant feed of fresh data, irrelevant data will continue to drive our decisions.

The crazy thing is that, the harder it is to obtain certain types of data—such as transaction or behavioral data—the faster they will deteriorate. By nature, transaction or behavioral data are time-sensitive. That is why it is important to install time parameters in databases for behavioral data. If someone purchased a new golf driver, when did he do that? Surely, having bought a golf driver in 2009 (“Hey, time for a new driver!”) is different from having purchased it last May.

So-called “Hot Line Names” literally cease to be hot after two to three months, or in some cases much sooner. The evaporation period may be different for different product types, as one may stay longer in the market for an automobile than for a new printer. Part of the job of a data scientist is to defer the expiration date of data, finding leads or prospects who are still “warm,” or even “lukewarm,” with available valid data. But no matter how much statistical work goes into making the data “look” fresh, eventually the models will cease to be effective.

For decision-makers who do not make real-time decisions, a real-time database update could be an expensive solution. But the databases must be updated constantly (I mean daily, weekly, monthly or even quarterly). Otherwise, someone will eventually end up making a wrong decision based on outdated data.

5. Consistency
No matter how much effort goes into keeping the database fresh, not all data variables will be updated or filled in consistently. And that is the reality. The interesting thing is that, especially when using them for advanced analytics, we can still provide decent predictions if the data are consistent. It may sound crazy, but even not-so-accurate data can be used in predictive analytics, if they are “consistently” wrong. Modeling is about developing an algorithm that differentiates targets from non-targets, and if the descriptive variables are “consistently” off (or outdated, like census data from five years ago) on both sides, the model can still perform.

Conversely, if there is a huge influx of a new type of data, or any drastic change in data collection or in a business model that supports such data collection, all bets are off. We may end up predicting such changes in business models or in methodologies, not the differences in consumer behavior. And that is one of the worst kinds of errors in the predictive business.

Last month, I talked about dealing with missing data (refer to “Missing Data Can Be Meaningful“), and I mentioned that data can be inferred via various statistical techniques. And such data imputation is OK, as long as it returns consistent values. I have seen so many so-called professionals messing up popular models, like “Household Income,” from update to update. If the inferred values jump dramatically due to changes in the source data, there is no amount of effort that can save the targeting models that employed such variables, short of re-developing them.

That is why a time-series comparison of important variables in databases is so important. Any changes of more than 5 percent in distribution of variables when compared to the previous update should be investigated immediately. If you are dealing with external data vendors, insist on having a distribution report of key variables for every update. Consistency of data is more important in predictive analytics than sheer accuracy of data.
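
The time-series check described above can be sketched in a few lines: compare the value distribution of a key variable between two updates and flag any category whose share moved by more than 5 percentage points. The variable and counts below are made up for illustration:

```python
def distribution(counts):
    """Convert raw category counts into shares of the total."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def flag_shifts(prev_counts, curr_counts, threshold=0.05):
    """Return categories whose share moved more than the threshold between updates."""
    prev, curr = distribution(prev_counts), distribution(curr_counts)
    return {
        k: round(curr.get(k, 0.0) - prev.get(k, 0.0), 4)
        for k in set(prev) | set(curr)
        if abs(curr.get(k, 0.0) - prev.get(k, 0.0)) > threshold
    }

# Hypothetical "Household Income" distribution, last update vs. this update
prev = {"<50K": 400, "50-100K": 350, "100K+": 250}
curr = {"<50K": 300, "50-100K": 350, "100K+": 350}
print(flag_shifts(prev, curr))  # both shifted categories get flagged for review
```

Anything this check flags should trigger a conversation with the data source before the update goes anywhere near a model.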

6. Connectivity
As I mentioned earlier, there are many types of data. And the predictive power of data multiplies as different types of data get to be used together. For instance, demographic data, which is quite commoditized, still plays an important role in predictive modeling, even when dominant predictors are behavioral data. It is partly because no one dataset is complete, and because different types of data play different roles in algorithms.

The trouble is that many modern datasets do not share any common matching keys. On the demographic side, we can easily imagine using PII (Personally Identifiable Information), such as name, address, phone number or email address for matching. Now, if we want to add some transaction data to the mix, we would need some match “key” (or a magic decoder ring) by which we can link it to the base records. Unfortunately, many modern databases completely lack PII, right from the data collection stage. The result is that such a data source would remain in a silo. It is not like all is lost in such a situation, as they can still be used for trend analysis. But to employ multisource data for one-to-one targeting, we really need to establish the connection among various data worlds.
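
Establishing that connection typically means deriving a shared match key from whatever PII both sources carry. A minimal sketch, using a normalized, hashed email address as the “magic decoder ring” (all records and field names here are hypothetical):

```python
import hashlib

def match_key(email):
    """Normalize an email address and hash it into a privacy-safer match key."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Two silos: demographic data keyed by match key, raw transaction data with email
demographics = {match_key("Jane@Example.com"): {"age_range": "35-44"}}
transactions = [{"email": "jane@example.com", "amount": 89.99}]

for txn in transactions:
    demo = demographics.get(match_key(txn["email"]))
    if demo:  # connection made: the transaction is enriched with demographics
        print({**txn, **demo})
```

The normalization step matters as much as the hash; “Jane@Example.com” and “ jane@example.com ” must land on the same key, or the silos stay silos.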

Even if the connection cannot be made to household, individual or email levels, I would not give up entirely, as we can still target based on IP addresses, which may lead us to some geographic denominations, such as ZIP codes. I’d take ZIP-level targeting anytime over no targeting at all, even though there are many analytical and summarization steps required for that (more on that subject in future articles).

Not having PII or any hard matchkey is not a complete deal-breaker, but the maneuvering space for analysts and marketers decreases significantly without it. That is why the existence of PII, or even ZIP codes, is the first thing that I check when looking into a new data source. I would like to free them from isolation.

7. Delivery Mechanisms
Users judge databases based on visualization or reporting tool sets that are attached to the database. As I mentioned earlier, that is like judging the entire building based just on the window treatments. But for many users, that is the reality. After all, how would a casual user without a programming or statistical background even “see” the data? Through tool sets, of course.

But that is only one end of it. There are so many types of platforms and devices, and the data must flow through them all. The important point is that data are useless if they are not in the hands of decision-makers through the device of their choice, at the right time. Such flow can be actualized via API feed, FTP, or good, old-fashioned batch installments, and no database should stay too far away from the decision-makers. In my earlier column, I emphasized that data players must be good at (1) Collection, (2) Refinement, and (3) Delivery (refer to “Big Data is Like Mining Gold for a Watch—Gold Can’t Tell Time“). Delivering the answers to inquirers properly closes one iteration of information flow. And they must continue to flow to the users.

8. User-Friendliness
Even when state-of-the-art (I apologize for using this cliché) visualization, reporting or drill-down tool sets are attached to the database, if the data variables are too complicated or not intuitive, users will get frustrated and eventually move away from it. If that happens after pouring a sick amount of money into any data initiative, that would be a shame. But it happens all the time. In fact, I am not going to name names here, but I once saw a ridiculously hard-to-understand data dictionary from a major data broker in the U.S.; it looked like the data layout was designed for robots, by robots. Please. Data scientists must try to humanize the data.

This whole Big Data movement has a momentum now, and in the interest of not killing it, data players must make every aspect of this data business easy for the users, not harder. Simpler data fields, intuitive variable names, meaningful value sets, pre-packaged variables in forms of answers, and completeness of a data dictionary are not too much to ask after the hard work of developing and maintaining the database.

This is why I insist that data scientists and professionals must be businesspeople first. The developers should never forget that end-users are not trained data experts. And guess what? Even professional analysts would appreciate intuitive variable sets and complete data dictionaries. So, pretty please, with sugar on top, make things easy and simple.

9. Cost
I saved this important item for last for a good reason. Yes, the dollar sign is a very important factor in all business decisions, but it should not be the sole deciding factor when it comes to databases. That means CFOs should not dictate the decisions regarding data or databases without considering the input from CMOs, CTOs, CIOs or CDOs who should be, in turn, concerned about all the other criteria listed in this article.

Playing with the data costs money. And, at times, a lot of money. When you add up all the costs for hardware, software, platforms, tool sets, maintenance and, most importantly, the man-hours for database development and maintenance, the sum becomes very large very fast, even in the age of the open-source environment and cloud computing. That is why many companies outsource the database work to share the financial burden of having to create infrastructures. But even in that case, the quality of the database should be evaluated based on all criteria, not just the price tag. In other words, don’t just pick the lowest bidder and hope to God that it will be alright.

When you purchase external data, you can also apply these evaluation criteria. A test-match job with a data vendor will reveal many of the details listed here; metrics such as match rate and variable fill-rate, along with a complete data dictionary, should be carefully examined. In short, what good is a lower unit price per 1,000 records if the match rate is horrendous and even the matched data are filled with missing or sub-par inferred values? Also consider that, once you commit to an external vendor and start building models and an analytical framework around their data, it becomes very difficult to switch vendors later on.

When shopping for external data, consider the following when it comes to pricing options:

  • Number of variables to be acquired: Don’t just go for the full option. Pick the ones that you need (involve analysts), unless you get a fantastic deal for an all-inclusive option. Generally, most vendors provide multiple packaging options.
  • Number of records: Processed vs. Matched. Some vendors charge based on “processed” records, not just matched records. Depending on the match rate, it can make a big difference in total cost.
  • Installment/update frequency: Real-time, weekly, monthly, quarterly, etc. Think carefully about how often you would need to refresh “demographic” data, which doesn’t change as rapidly as transaction data, and how big the incremental universe would be for each update. Obviously, a real-time API feed can be costly.
  • Delivery method: API vs. Batch Delivery, for example. Price, as well as the data menu, changes quite a bit based on the delivery options.
  • Availability of a full-licensing option: When the internal database becomes really big, full installment becomes a good option. But you would need internal capability for a match and append process that involves “soft-match,” using similar names and addresses (imagine good-old name and address merge routines). It becomes a bit of a commitment, as the match and append becomes part of the internal database update process.
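
The “processed vs. matched” pricing point above is easy to see with back-of-the-envelope arithmetic. In this hypothetical comparison, the vendor with the lower unit price ends up costing more per matched record:

```python
def cost_per_matched_thousand(price_per_k, records, match_rate, charge_on="matched"):
    """Effective cost per 1,000 *matched* records under either billing scheme."""
    matched = records * match_rate
    billed = records if charge_on == "processed" else matched
    total = billed / 1000 * price_per_k
    return total / (matched / 1000)

# Hypothetical vendors, 1 million records submitted, 50% match rate:
# Vendor A: $20 per thousand, but bills on every *processed* record.
# Vendor B: $30 per thousand, but bills only on *matched* records.
a = cost_per_matched_thousand(price_per_k=20, records=1_000_000, match_rate=0.5, charge_on="processed")
b = cost_per_matched_thousand(price_per_k=30, records=1_000_000, match_rate=0.5, charge_on="matched")
print(a, b)  # Vendor A effectively costs $40 per matched thousand; Vendor B, $30
```

The lower the match rate, the wider that gap gets, which is exactly why the test-match job should come before the price negotiation.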

Business First
Evaluating a database is a project in itself, and these nine evaluation criteria will be a good guideline. Depending on the businesses, of course, more conditions could be added to the list. And that is the final point that I did not even include in the list: That the database (or all data, for that matter) should be useful to meet the business goals.

I have been saying that “Big Data Must Get Smaller,” and this whole Big Data movement should be about (1) Cutting down on the noise, and (2) Providing answers to decision-makers. If the data sources in question do not serve the business goals, cut them out of the plan, or cut loose the vendor if they are from external sources. It would be an easy decision if you “know” that the database in question is filled with dirty, sporadic and outdated data that cost lots of money to maintain.

But if that database is needed for your business to grow, clean it, update it, expand it and restructure it to harness better answers from it. Just like the way you’d maintain your cherished automobile to get more mileage out of it. Not all databases are created equal for sure, and some are definitely more equal than others. You just have to open your eyes to see the differences.

I’m a Black Widow … What Spider Are You?

Over the last couple of months, I’ve noticed a growing Facebook trend—an increase in those annoyingly stupid quizzes.

What flower are you? What actress would play you in the movie version of your life? What Rolling Stones song describes you? Are we so bored with our lives that we have to take a quiz to help us with self-actualization?

It always surprises me how many of my seemingly intelligent friends participate in these time-wasters. And I’m not sure I care that, if my neighbor were a flower, she’d be a lily … or that, if my sister were a dog, she’d be a Lab.

What else is surprising is that there is a marketing method behind this madness.

As Americans, we love games, trivia, puzzles, quizzes—anything where we can demonstrate our superiority or prowess. I’ll admit that The New York Times Crossword puzzle is sometimes the sole reason I purchase a newsstand copy of the Times (and if you’re a regular reader of my blog, you already know that I’m obsessed with Words With Friends).

It should come as no surprise that smart brands have figured out how to turn this obsession into a marketing opportunity. Long before Facebook came into our lives, magazines used quizzes to entice readers to purchase—right from the front cover that screamed to us in the grocery check-out lane: “Are you a good kisser? Take this quiz and find out!” Cosmo turned quizzes into an art form starting in the early 1960s.

Online quizzes are simply a means to a financial end for popular quiz-maker BuzzFeed. They’ve figured out how to use the data to help brands market things to you.

When you take a quiz about “American Idol,” for example, you’re not just telling the network that you’re a viewer. By connecting the dots to your profile data, now the network knows your age range, gender, marital status and other habits like favorite alcohol, or food—and that can be a goldmine.

But Facebook isn’t the only one to cash in on quizzes to drive advertising sales; LinkedIn is also guilty. Recently, we created a digital banner campaign for a B-to-C client that ran on LinkedIn for a few weeks. We tested different messages and offers, and our clicks (and subsequent registration) data was good, but not great. Then we leveraged their quiz option.

On LinkedIn, you create a single question with multiple response options, and the collective response results are posted in real time. After the targets answer the quiz, they are exposed to the results—and to your banner ads—and the results were impressive: a much higher number of clicks, a much higher percentage of clicks and a much higher number of registrants—all at a much lower CPC. Now that’s an ROI that makes much more sense to this marketer.

If a reader has figured out how to really leverage the Facebook quizzes for marketing gain, I’d love to hear about it.

And, for the record, if I were a city, I’d be …

The Future of Online Is Offline

I find it offensive when marketers call anyone an “online person.” Let’s get this straight: At the end of some not-so-memorable transaction with you, if I opt in for your how-bad-can-it-be email promotions, or worse, neglect to uncheck the pre-checked check-box that says “You will hear from us from time to time” (which could turn into a daily commitment for the rest of my cognitive life, or, until I decide finding that invisible unsubscribe link presented in the font size of a few pixels is a better option than hitting the delete key every day), I get to be an online person to you? How nice.

What if I receive an email offer from you, research the heck out of the product on the Internet, and then show up at a store to have instant gratification? Does that make me an offline person now? Sorry to break your channel-oriented marketing mind, but hey, I am just a guy. I am neither an online person nor an offline person; which, by the way, happens to be a dirty word in some pretentious marketing circles (as in “Eew, you’re in the offline space?!”).

Marketers often forget to recognize that all this “Big Data” stuff (or any size data, for that matter) and channel management tools are just tools to get to people. In the age of Big Data, it shouldn’t be so hard to know “a lot” about a person, and tailor messages and offers for that person. Then why is it that I get confusing offers all the time? How is it that I receive multiple types of credit card offers from the same bank within weeks? Don’t they know all about my banking details? Don’t they have some all-inclusive central data depository for all that kind of stuff?

The sad and short answer to all this is that it really doesn’t matter if the users of such databases still think only in terms of their own division, their channel assignment, and only through to the very next campaign. And such mindsets may even alter the structure of the marketing database, where everything is organized by division, product or channel. That is how one becomes an online person, who might as well be invisible when it comes to his offline activities.

What is the right answer, then? Both database and users of such databases should be “buyer-centric” or “individual-centric” at the core. In a well-designed marketing database, every variable should be a descriptor for the individual, regardless of the data sources or channels through which she happens to have navigated to end up in the database. There, what she has been buying, her typical spending level, her pricing threshold, channels that she uses to listen, channels that she employs to make purchases or to express herself, stores she visited, lapsed time since her last activities by each channel, contact/response history, her demographic profile, etc. should all be nicely lined up as “her” personal record. That is how modern marketing databases should be structured. Just putting various legacy datasets in one place isn’t going to cut it, even if some individual ID is assigned to everyone in every table. Through some fancy Big Data tools, you may be able to store and retrieve records for every transaction for the past 20 years, but such records describe transactions, not people. Again, it’s all about people.
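
The buyer-centric rollup described above can be sketched in a few lines: transaction-level records are summarized into one personal record per individual, with spend, channels and recency all lined up as descriptors of “her,” not of the transactions. All field names and figures here are illustrative:

```python
from collections import defaultdict
from datetime import date

# Hypothetical legacy transaction log: one row per transaction, any channel
transactions = [
    {"person_id": "P1", "channel": "online", "amount": 120.0, "date": date(2013, 4, 2)},
    {"person_id": "P1", "channel": "store",  "amount": 45.0,  "date": date(2013, 9, 18)},
]

# Roll transactions up into one buyer-centric record per individual
profiles = defaultdict(lambda: {"total_spend": 0.0, "channels_used": set(), "last_activity": None})
for t in transactions:
    p = profiles[t["person_id"]]
    p["total_spend"] += t["amount"]
    p["channels_used"].add(t["channel"])
    p["last_activity"] = max(filter(None, [p["last_activity"], t["date"]]))

print(profiles["P1"])  # one record describing the person, not the transactions
```

Notice that the rolled-up record answers “who is she?” in one read, with no joins, which is exactly what makes a database model-ready.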

Why should marketing databases be “buyer-centric”? (1) Nobody is one-dimensional, locked into one channel or division of some marketer, and (2) Individualized targeting and messaging can only be actualized through buyer-centric data platforms. Want to use advanced statistical models? You would need individualized structure because the main goal of any model for marketing is to rank “people” in terms of your target’s susceptibility to certain offers or products. If an individual’s information is scattered all over the database, requiring lots of joins and manipulations, then that database simply isn’t model-ready.

Further, when I look into the future, I see the world where one-click checkout is the norm, even in the offline world. The technology to identify ourselves and to make payment will be smaller and more ubiquitous. Today, when we go to a drug store, we need to bring out the membership card, coupons and our credit card to finish the transaction. Why couldn’t that be just one step? If I identify myself with an ID card or with some futuristic device that I would wear, such as a phone, glasses or a wristwatch, shouldn’t that be enough to finish the deal and let me out of the store? When that kind of future becomes a reality (in the not-too-distant future), will marketers still think and behave within that channel-centric box? Will we even attempt to link what just happened at the store to other activities the person engaged in online or offline? Not if some guy is in charge of that “one” new channel, no matter how fancy that department title would be.

I have been saying this all along, but let me say it again. The future of online is offline. The distinction would be as meaningless as debating whether the interactive TV of the future should be called a TV or a computer. Is an iPhone a phone or a mobile computer? My answer? Who cares? We should be concentrating our efforts on talking to the person who is looking at the device, whether it is through a computer screen, mobile screen or TV screen. That is the first step toward the buyer-centric mindset; it is, and always has been, about people, not channels or devices that come and go. And it is certainly not about some marketing department that may handle just one channel or one product at a time.

The Big Data movement should be about the people. The only difference this new wave brings is the amount of data that we need to deal with and the speed at which we need to operate. Soon, marketers should be able to do things in less than a second that used to take three months. Displaying an individually customized real-time offer built with past and present data through a fancy statistical model via hologram won’t be just a scene in a science fiction movie (remember the department store scene in “Minority Report”?). And if marketing databases are not built in a buyer-centric structure, someone along the line will waste a lot of time just to understand what the target individual is all about. That could have been OK in the last century, but not in the age of abundant and ubiquitous data.

Irrational Customers and 2013’s Tip Top Marketing Campaign

Exhale. I just landed from a jam-packed Direct Marketing Association DMA13 conference.

New Zealanders are Diamond
You have to hand it to New Zealanders. For two years’ running, that nation’s marketing practitioners have nailed a Diamond ECHO from the Direct Marketing Association’s International ECHO Awards, which were presented last week during DMA13, the association’s annual conference in Chicago.

This year’s top data-driven marketing campaign in the world was for ice cream maker Tip Top (Fonterra Brands Ltd), in a campaign created by Colenso BBDO/Proximity New Zealand called “Feel Tip Top.” According to the ECHO Award entry:

A 75-year-old local ice cream brand in New Zealand aimed to regain relevance and brand momentum using customer experience. New Zealanders flocked to Facebook for the opportunity to nominate friends, family members or colleagues to receive a personally addressed, hand-delivered ice cream. By encouraging folks to ‘feel tip-top’ and indulge in a sweet treat and fond memory with friends, Tip Top highlighted new flavors and sub-brands, exceeded its nomination goal by more than fifteen-fold, and turned a 17.6 percent decline into 16.7 percent growth across all categories.

I guess I ought to “like” Tip Top on Facebook.

Solidifying DMA’s Books
During the Annual Business Meeting of the association, it was announced that DMA has streamlined and simplified its annual dues structure into six tiers—from less than $800 on the low end (startups, consultants and the like) up to $75,000 for US and global direct marketing leaders. DMA generated $22.5 million in revenue last year, compared to $20.7 million in expenses.

While at the Annual Business Meeting, President & CEO Linda Woolley spoke to the recently approved Strategic Plan of the association, where she reported advocacy, networking and compliance services are the three areas of focus for association activity in the year ahead. DMA recently (in late May) launched a DMA Litigation Center, which will look to help businesses cope with privacy litigation, and to fight patent abuse, among other legal issues. Outgoing DMA Chairman Matt Blumberg, CEO & chairman of ReturnPath, also announced that the new DMA Chairman for 2013-2015 (a two-year term) is Alliant President & CEO JoAnne Monfradi Dunn (congratulations to my client), who told members she plans to serve as an ambassador between DMA’s management and its members.

(Ir)rational Consumers
Dan Ariely, in a keynote session sponsored by The Wilde Agency, presented case after case in which consumers act irrationally, and showed that marketers can influence outcomes (and response) markedly by designing and testing creative offers and incentives. One of my favorites was the offer by The Economist (I’m an avid reader), where potential subscribers were offered the print magazine for $125 or an online-only subscription for $59, and the online-only offer won. But when a third option was added—both the print and online magazine for $125—that option was the clear winner.

When an insurance company wanted to sell life insurance policies and convince customers to upgrade, it tried repeatedly to sell the benefits of more coverage in copy—but with little success. When it decided to include a chart that clearly showed the higher amounts of coverage available—that the consumer was foregoing at his or her existing amount of coverage—well, it resulted in a 500 percent lift. My takeaways: Always test, find a clever way to visualize data and offers, and always expect the irrational as much as the rational. “Standard Economics is not the same as Behavioral Economics,” he said. Indeed.

Well, that was just from two pages of notes from the conference—I’m still dissecting a dozen more sessions. I have to say, this was the first conference in many years where I was accompanied by a “newbie,” a practitioner on the brand side making her first DMA appearance. She had a lot to complain about—there were way too many great sessions on offer at the same time, and we tag-teamed a bit to cover them simultaneously where we could. I think next year, she’ll be bringing some of her colleagues.

Mark your calendar for San Diego for the last week of October 2014.

7 Email Tactics That Improve Customer Service

The quality of a customer service program determines the effectiveness of marketing. Enticing people to make additional purchases is much easier when they know that the experience will be pleasant. Service has long been considered a necessary expense of doing business. What if that view were changed to, “Service is an opportunity to solidify relationships and improve retention?”

Customer service is the best marketing tool available. People who have a positive shopping experience come back and bring their friends. The open forum known as social media is a perfect venue for sharing good and not-so-good experiences. Smart marketers partner with the service team to create great experiences and encourage sharing.

Email offers a one to one service connection at minimal cost. If your company isn’t using email to provide a better experience, you are missing an excellent opportunity. Use email to:

  1. Connect personally with customers and prospects. Capture as much information as possible so you can stay in touch with the people who make your company successful. Sending personal messages that say “happy birthday” or “thank you” without heavy promotions differentiates your company from the competition and improves relationships.
  2. Show how to use your products and services. The more people know about your products and services, the more likely they will use them. Consumption is a key component of the sales funnel. Unused items don’t need accessories or replacements. If your customers are buying without using, there are problems that need resolving.
  3. Keep customers informed about order statuses and new items. This is as close as you can get to automated service. Set up triggers that are driven by every change in order status. This significantly reduces “where is my order” calls. Create business rules that send emails when new items arrive that fit customers’ buying history.
  4. Remind people of special events. Birthdays, anniversaries and holidays have a way of sneaking up on people. Create a tool or app for customers to input special events and request reminders. The reminder messages can be automated and should include suggestions for the perfect gift along with an order timeline.
  5. Make life easier. Studies have shown that people prefer easy to exceptional when it comes to service. Providing the information needed to resolve issues without calling your company pleases customers and reduces operating costs. Most people will start with the self-serve options; including the customer care number as a back-up option is still a good practice.
  6. Automate the ordering process. Simplify ordering to as few steps as possible. If your company sells consumable products, offer the opportunity for automated ordering. The process should include advance emails with options to delay or cancel, confirmation of order placement and updates throughout the shipping process. This improves sales and productivity.
  7. Share industry news. The things that happen in your industry affect the people who use your products and services. Is there a new trend or invention that changes things? Share the information with your customers and prospects. It keeps them informed and your company in their mind.
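
The status-change triggers in tactic No. 3 above can be sketched roughly as follows. The statuses, templates and the send_email function are all hypothetical stand-ins for a real email service integration:

```python
# Hypothetical message templates, one per order status worth announcing
TEMPLATES = {
    "shipped": "Good news -- order {order_id} is on its way!",
    "delivered": "Order {order_id} was delivered. Enjoy!",
}

def send_email(to, body):
    """Stand-in for a call to a real email service provider."""
    print(f"To {to}: {body}")

def on_status_change(order, new_status):
    """Update the order and fire a templated email if this status has one."""
    order["status"] = new_status
    template = TEMPLATES.get(new_status)
    if template:  # only statuses with a template trigger a message
        send_email(order["email"], template.format(order_id=order["id"]))

order = {"id": "A-1001", "email": "customer@example.com", "status": "placed"}
on_status_change(order, "shipped")
```

Because every status change fires its own message, the customer never has to ask “where is my order?”—which is precisely the call volume the tactic is meant to reduce.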