Smart Data – Not Big Data

As a concerned data professional, I am already plotting an exit strategy from this Big Data hype. Because, like any bubble, it will surely burst. That inevitable doomsday could be a couple of years away, but I can feel it coming. At the risk of sounding too much like Yoda the Jedi Grand Master: hype leads to over-investment, over-investment leads to disappointment, and disappointment leads to blame. Yes, in a few years, lots of blame will go around, and lots of heads will roll.

So, why would I stay on the troubled side? Well, because, for now, this Big Data thing is creating lots of opportunities, too. I am writing this on my way back from Seoul, Korea, where I presented this Big Data idea nine times in just two short weeks, trotting from large venues to small gatherings. Just a few years back, I used to have a hard time explaining what I do for a living. Now, I just have to say “Hey, I do this Big Data thing,” and the doors start to open. In my experience, this is the best “Open Sesame” moment for all data specialists. But it will last only if we play it right.

Nonetheless, I also know that I will somehow continue to make a living setting data strategies, fixing bad data, designing databases and leading analytical activities, even after the hype cools down. Just with a different title, under a different banner. I’ve seen buzzwords come and go, and this data business has been carried on by the people who cut through each hype (and the gargantuan amount of BS that comes along with it) and create real revenue-generating opportunities. At the end of the day (I apologize for using this cliché), it is all about the bottom line, whether it comes from a revenue increase or a cost reduction. It is never about the buzzwords that may have created the business opportunities in the first place; it has always been about the substance that turned those opportunities into money-making machines. And substance needs no fancy title or buzzword attached to it.

Have you heard Google or Amazon calling themselves “Big Data” companies? They are the ones with sick amounts of data, but they also know that it is not about the sheer amount of data; it is all about the user experience. “Wannabes” who cannot grasp the core values often hang onto buzzwords and hype. As if Big Data, Cloud Computing or the coding language du jour will come and save the day. But they are just words.

Even the name “Big Data” is all wrong, as it implies that bigger is always better. The 3 Vs of Big Data—volume, velocity and variety—are also misleading. That could be a meaningful distinction for existing data players, but for decision-makers, it creates the notion that size and speed are the ultimate quest. For the users, though, small is better. They don’t have time to analyze big sets of data. They need small answers in fun-size packages. Plus, what is so new about big and fast? Since the invention of modern computers, has there been any year when processing speed did not get faster and storage capacity did not get bigger?

Lest we forget, it is the software industry that came up with this Big Data thing. It was created as a marketing tagline. We should have read it as, “Yes, we can now process really large amounts of data, too,” not as, “Big Data will make all your dreams come true.” If you are in the business of selling toolsets, of course, that is how you present your product. If guitar companies kept emphasizing how hard it is to be a decent guitar player, would that help their businesses? It is a lot more effective to say, “Hey, this is the same guitar that your guitar hero plays!” But you don’t become Jeff Beck just because you bought a white Fender Stratocaster with a rosewood neck. The real hard work begins “after” you purchase a decent guitar. However, this obvious connection is often lost in the data business. Toolsets never provide solutions on their own. They may make your life easier, but you still have to formulate the question in a logical fashion, and you still have to make decisions based on the provided data. And harnessing meaning out of mounds of data requires training your mind, much like the way musicians practice incessantly.

So, before business people even consider venturing into this Big Data hype, they should ask themselves, “Why data?” What are the burning questions that you are trying to answer with the data? If you can’t answer this simple question, then don’t jump into it. Forget about it. Don’t get into it just because everyone else seems to be getting into it. Yeah, it’s a big party, but why are you going there? Besides, if you formulate the question properly, you will often find that you don’t need Big Data all the time. In fact, Big Data can be a terrible detour if your question can be answered by “small” data. But that detour happens all the time, because people approach their business questions through the processes set by the toolsets. Big Data should be about the business, not about the IT or the data.

Smart Data, Not Big Data
So, how do we get over this hype? All too often, perception rules, and a replacement word becomes necessary to summarize the essence of the concept for the general public. In my opinion, “Big Data” should have been “Smart Data.” Piles of unorganized dumb data aren’t worth a damn thing. Imagine a warehouse full of boxes with no labels, collecting dust since 1943. Would you be impressed with the sheer size of the warehouse? Great, the ark that Indiana Jones procured (or did he?) may be stored in there somewhere. But if no one knows where it is—or even if it can be located, if no one knows what to do with it—who cares?

Then, how do data get smarter? Smart data are bite-sized answers to questions. A thousand variables could have been considered to provide the weather forecast that calls for a “70 percent chance of scattered showers in the afternoon,” but that one line that we hear is the smart piece of data. Not the list of all the variables that went into the formula that created that answer. Emphasizing the raw data would be like giving paints and brushes to a person who wants a picture on the wall. As in, “Hey, here are all the ingredients, so why don’t you paint the picture and hang it on the wall?” Unfortunately, that is how the Big Data movement looks now. And too often, even the ingredients aren’t all that great.

I visit many companies only to find that the databases in question are just messy piles of unorganized and unstructured data. And please do not assume that such disarray is good for my business. I’d rather spend my time harnessing meaning out of data and creating value, not taking care of someone else’s mess all the time. Really smart data are small, concise, clean and organized. Big Data should only be seen in “Behind the Scenes” types of documentaries for maniacs, not by everyday decision-makers.

I have been saying for some time that Big Data must get smaller (refer to “Big Data Must Get Smaller”), and I will repeat it until it becomes a movement of its own. The Big Data movement must be about:

  1. Cutting down the noise
  2. Providing the answers

There is too much noise in the data, and cutting it out is the first step toward making the data smaller and smarter. The trouble is that the definition of “noise” is not static. The rock music that I grew up with was certainly noise to my parents’ generation. In turn, some music that my kids listen to is pure noise to me. Likewise, “product color,” which is essential for a database designed for an inventory management system, may or may not be noise if the goal is to sell more apparel items. In such cases, more important variables could be style, brand, price range, target gender, etc., while color could be just peripheral information at best, or even noise (as in, “It’s not like she is going to buy only red shoes all the time”). How do we then determine the differences? First, set clear goals (as in, “Why are we playing with the data to begin with?”), define the goals using logical expressions, and let mathematics take care of it. Then you can drop the noise with conviction (even if it may look important to human minds).
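To make that “let mathematics take care of it” step a bit more concrete, here is a minimal sketch of noise-cutting, assuming a hypothetical apparel question and made-up file and variable names (style, brand, price_range, target_gender, product_color). It simply ranks candidate variables by how much they tell us about the stated goal and drops the ones that carry little signal; the cutoff is illustrative, not a rule.

```python
# A minimal sketch: rank candidate variables against a clearly defined goal
# (here, a hypothetical "purchased_apparel" flag) and drop the low-signal ones.
# File names, variable names and the cutoff are illustrative assumptions.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("customer_history.csv")          # hypothetical input file
goal = df["purchased_apparel"]                    # the business question, as a 0/1 flag
candidates = df[["style", "brand", "price_range", "target_gender", "product_color"]]

# Encode categorical variables numerically so the scoring function can handle them.
encoded = candidates.apply(lambda col: col.astype("category").cat.codes)

# Mutual information measures how much each variable tells us about the goal.
scores = pd.Series(
    mutual_info_classif(encoded, goal, discrete_features=True),
    index=candidates.columns,
).sort_values(ascending=False)

keep = scores[scores > 0.01].index.tolist()       # illustrative cutoff
print("Variables kept:", keep)
print("Dropped as noise:", [c for c in candidates.columns if c not in keep])
```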

If we continue down that mathematical path, we reach the second part, which is “providing answers to the question.” And the smart answers come in the form of yes/no flags, probability figures or some type of score. As in the weather forecast example, the question would be “the chance of rain on a certain day,” and the answer would be “70 percent.” Statistical modeling is not easy or simple, but it is the essential part of making the data smarter, as models are the most effective way to summarize complex and abundant data into compact forms (refer to “Why Model?”).
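As an illustration of what “summarizing abundant data into a compact form” looks like in practice, here is a minimal sketch, assuming a hypothetical past-campaign file and made-up variable names. Many inputs go in; one probability—the bite-sized answer—comes out.

```python
# A minimal sketch of a model collapsing many variables into one probability.
# The file and feature names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("past_campaign.csv")        # hypothetical past outcomes
X = history[["recency_days", "frequency", "avg_order_value"]]
y = history["responded"]                          # 1 if the person responded, else 0

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a new prospect, the many-variable story collapses into one number.
new_prospect = pd.DataFrame(
    [{"recency_days": 30, "frequency": 4, "avg_order_value": 85.0}]
)
chance = model.predict_proba(new_prospect)[0, 1]
print(f"Likelihood to respond: {chance:.0%}")     # e.g. "70%", the bite-sized answer
```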

Most people do not have degrees in mathematics or statistics, but they all know what to do with a piece of information such as “70 percent chance of rain” on the day of a company outing. Some may complain that it is not a definite yes/no answer, but all would agree that providing information in this form is more humane than dumping all the raw data onto users. Sales folks are not necessarily mathematicians, but they would certainly appreciate scores attached to each lead, as in “more or less likely to close.” No, that is not a definite answer, but now sales people can start calling the leads in the order of relative importance to them.

So, all the Big Data players and data scientists must try to “humanize” the data, instead of bragging about the size of the data, making things more complex, and providing irrelevant pieces of raw data to users. Make things simpler, not more complex. Some may think that complexity is their job security, but I strongly disagree. That is a sure way to bring down this Big Data movement to the ground. We are already living in a complex world, and we certainly do not need more complications around us (more on “How to be a good data scientist” in a future article).

It’s About the Users, Too
On the flip side, the decision-makers must change their attitude about the data, as well.

1. Define the goals first: The main theme of this series has been that the Big Data movement is about the business, not IT or data. But I’ve seen too many business folks who willingly take a hands-off approach to data. They just fund the database; do not define clear business goals for the developers; and hope to God that someday, somehow, some genius will show up and clear up the mess for them. Guess what? That cavalry is never coming if you are not even praying properly. If you do not know what problems you want to solve with data, don’t even get started; you will get nowhere, really slowly, bleeding lots of money and time along the way.

2. Take the data seriously: You don’t have to be a scientist to have a scientific mind. It is not ideal if someone blindly subscribes to anything computers spew out (there is plenty of inaccurate information in databases; refer to “Not All Databases Are Created Equal”). But too many people do not take data seriously and continue to follow their gut feelings. Even if the customer profile coming out of a serious analysis does not match your preconceived notions, do not blindly reject it; instead, treat it as a newly found gold mine. Gut feelings are even more overrated than Big Data.

3. Be logical: Illogical questions do not lead anywhere. There is no toolset that reads minds—at least not yet. Even if we get to have such amazing computers—as seen on “Star Trek” or in other science fiction movies—you would still have to ask questions in a logical fashion for them to be effective. I am not asking decision-makers to learn how to code (or be like Mr. Spock or his loyal follower, Dr. Sheldon Cooper), but to have some basic understanding of logical expressions and try to learn how analysts communicate with computers. This is not a data geek vs. non-geek world anymore; we all have to be a little geekier. Knowing Boolean expressions may not be as cool as being able to throw a curveball, but it is necessary to survive in the age of information overload (see the short example after this list).

4. Shoot for small successes: Start with a small proof of concept before fully investing in large data initiatives. Even with a small project, one gets to touch all the necessary steps to finish the job. Understanding the flow of information is as important as each specific step, as most breakdowns occur between steps, due to a lack of proper connections. There was the Gemini program before the Apollo missions. Learn how to dock spaceships in space before plotting the course to the moon. Often, over-investments are committed when the discussion is led by IT. Outsource even major components in the beginning, as the initial goal should be mastering the flow of things.

5. Be buyer-centric: No customer is bound by the channel of the marketer’s choice, and yet many businesses act exactly that way. No one is an online person just because she did not refuse your email promotions yet (refer to “The Future of Online is Offline”). No buyer is just one-dimensional. So get out of brand-, division-, product- or channel-centric mindsets. Even well-designed, buyer-centric marketing databases become ineffective if users are trapped in their channel- or division-centric attitudes, as in “These email promotions must flow!” or “I own this product line!” The more data we collect, the more chances marketers gain to impress their customers and prospects. Do not waste those opportunities by imposing your own myopic views on them. The Big Data movement is not there to fortify marketers’ bad habits. Thanks to the size of the data and the speed of machines, we are now capable of disappointing a lot of people really fast.
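To go with point No. 3 above, here is a small, hypothetical example of what “asking the question in a logical fashion” looks like. The field names and the audience definition are made up; the point is that the question must be spelled out as an unambiguous Boolean expression before any toolset can answer it.

```python
# A small illustration of a logical (Boolean) question:
# "lapsed golf buyers in the Northeast, excluding recent promo responders."
# File and column names are hypothetical.
import pandas as pd

customers = pd.read_csv("customers.csv")          # hypothetical customer table

target = customers[
    (customers["golf_buyer"] == 1)
    & (customers["months_since_purchase"] > 12)
    & (customers["region"].isin(["NY", "NJ", "CT", "MA"]))
    & ~(customers["responded_to_last_promo"] == 1)
]
print(len(target), "people match the question as stated")
```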

What Did This Hype Change?
So, what did this Big Data hype change? First off, it changed people’s attitudes about the data. Some are no longer afraid of large amounts of information being thrown at them, and some actually started using them in their decision-making processes. Many realized that we are surrounded by numbers everywhere, not just in marketing, but also in politics, media, national security, health care and the criminal justice system.

Conversely, some people became more afraid—often with good reasons. But even more often, people react based on pure fear that their personal information is being actively exploited without their consent. While data geeks are rejoicing in the age of open source and cloud computing, many more are looking at this hype with deep suspicions, and they boldly reject storing any personal data in those obscure “clouds.” There are some people who don’t even sign up for EZ Pass and voluntarily stay on the long lane to pay tolls in the old, but untraceable way.

Nevertheless, not all is lost in this hype. The data got really big, and types of data that were previously unavailable, such as mobile and social data, became available to many marketers. Focus groups are now the size of the Twitter following of a company or a subject matter. The collection rate of POS (point of service) data has been increasing steadily, and some data players became virtuosi in using such fresh and abundant data to impress their customers (though some crossed that “creepy” line inadvertently). Different types of data are being used together now, and such merging activities will compound the predictive power even further. Analysts are dealing with less missing data, though no dataset will ever be totally complete. Developers in open source environments are now able to move really fast with new toolsets that run on any device. Simply put, things that our forefathers of direct marketing used to take six months to complete can be done in a few hours, and in the near future, maybe within a few seconds.

And that may be a good thing and a bad thing. If we do this right, without creating too many angry consumers and without burning holes in our budgets, we are in a position to achieve a great many things in terms of predicting the future and making everyone’s lives a little more convenient. If we screw it up badly, we will end up creating lots of angry customers by abusing sensitive data and, at the same time, wasting a whole lot of investors’ money. Then this Big Data thing will go down in history as a great money-eating hype.

We should never do things just because we can; data is a powerful tool that can hurt real people. Do not even get into it if you don’t have a clear goal in terms of what to do with the data; it is not some piece of furniture that you buy just because your neighbor bought it. Living with data is a lifestyle change, and it requires a long-term commitment; it is not some fad that you try once and give up. It is a continuous loop where people’s responses to marketer’s data-based activities create even more data to be analyzed. And that is the only way it keeps getting better.

There Is No Big Data
And all that has nothing to do with “Big.” If done right, small data can do plenty. And in fact, most companies’ transaction data for the past few years would easily fit in an iPhone. It is about what to do with the data, and that goal must be set from a business point of view. This is not just a new playground for data geeks, who may care more for new hip technologies that sound cool in their little circle.

I recently went to Brazil to speak at a data conference called QIBRAS, and I was pleasantly surprised that the main theme of it was the quality of the data, not the size of the data. Well, at least somewhere in the world, people are approaching this whole thing without the “Big” hype. And if you look around, you will not find any successful data players calling this thing “Big Data.” They just deal with small and large data as part of their businesses. There is no buzzword, fanfare or a big banner there. Because when something is just part of your everyday business, you don’t even care what you call it. You just do. And to those masters of data, there is no Big Data. If Google all of a sudden starts calling itself a Big Data company, it would be so uncool, as that word would seriously limit it. Think about that.

Not All Databases Are Created Equal

Not all databases are created equal. No kidding. That is like saying that not all cars are the same, or not all buildings are the same. But somehow, “judging” databases isn’t so easy. First off, there is no tangible “tire” that you can kick when evaluating databases or data sources. Actually, kicking the tire is quite useless, even when you are inspecting an automobile. Can you really gauge the car’s handling, balance, fuel efficiency, comfort, speed, capacity or reliability based on how it feels when you kick “one” of the tires? I can guarantee that your toes will hurt if you kick it hard enough, and even then you won’t be able to tell the tire pressure within 20 psi. If you really want to evaluate an automobile, you will have to sign some papers and take it out for a spin (well, more than one spin, but you know what I mean). Then, how do we take a database out for a spin? That’s when the tool sets come into play.

However, even when the database in question is attached to analytical, visualization, CRM or drill-down tools, it is not so easy to evaluate it completely, as such practice reveals only a few aspects of a database, hardly all of them. That is because such tools are like window treatments of a building, through which you may look into the database. Imagine a building inspector inspecting a building without ever entering it. Would you respect the opinion of the inspector who just parks his car outside the building, looks into the building through one or two windows, and says, “Hey, we’re good to go”? No way, no sir. No one should judge a book by its cover.

In the age of Big Data (you should know by now that I am not too fond of that word), everything digitized is considered data. And data reside in databases. And databases are supposed to be designed to serve specific purposes, just as buildings and cars are. Although many modern databases are just mindless piles of accumulated data, provided that the database design is decent and functional, we can still imagine many different types of databases depending on their purposes and contents.

Now, most of the Big Data discussions these days are about the platform, environment, or tool sets. I’m sure you heard or read enough about those, so let me boldly skip all that and their related techie words, such as Hadoop, MongoDB, Pig, Python, MapReduce, Java, SQL, PHP, C++, SAS or anything related to that elusive “cloud.” Instead, allow me to show you the way to evaluate databases—or data sources—from a business point of view.

For businesspeople and decision-makers, it is not about NoSQL vs. RDB; it is just about the usefulness of the data. And the usefulness comes from the overall content and database management practices, not just platforms, tool sets and buzzwords. Yes, tool sets are important, but concert-goers do not care much about the types and brands of musical instruments that are being used; they just care if the music is entertaining or not. Would you be impressed with a mediocre guitarist just because he uses the same brand of guitar that his guitar hero uses? Nope. Likewise, the usefulness of a database is not about the tool sets.

In my past column, titled “Big Data Must Get Smaller,” I explained that there are three major types of data, with which marketers can holistically describe their target audience: (1) Descriptive Data, (2) Transaction/Behavioral Data, and (3) Attitudinal Data. In short, if you have access to all three dimensions of the data spectrum, you will have a more complete portrait of customers and prospects. Because I already went through that subject in-depth, let me just say that such types of data are not the basis of database evaluation here, though the contents should be on top of the checklist to meet business objectives.

In addition, throughout this series, I have been repeatedly emphasizing that the database and analytics management philosophy must originate from business goals. Basically, the business objective must dictate the course for analytics, and databases must be designed and optimized to support such analytical activities. Decision-makers—and all involved parties, for that matter—suffer a great deal when that hierarchy is reversed. And unfortunately, that is the case in many organizations today. Therefore, let me emphasize that the evaluation criteria that I am about to introduce here are all about usefulness for decision-making processes and supporting analytical activities, including predictive analytics.

Let’s start digging into key evaluation criteria for databases. This list would be quite useful when examining internal and external data sources. Even databases managed by professional compilers can be examined through these criteria. The checklist could also be applicable to investors who are about to acquire a company with data assets (as in, “Kick the tire before you buy it.”).

1. Depth
Let’s start with the most obvious one. What kind of information is stored and maintained in the database? What are the dominant data variables in the database, and what is so unique about them? Variety of information matters for sure, and uniqueness is often related to specific business purposes for which databases are designed and created, along the lines of business data, international data, specific types of behavioral data like mobile data, categorical purchase data, lifestyle data, survey data, movement data, etc. Then again, mindless compilation of random data may not be useful for any business, regardless of the size.

Generally, data dictionaries (the lack of one is a sure sign of trouble) reveal the depth of a database, but we need to dig deeper, as transaction and behavioral data are much more potent predictors—and harder to manage—than demographic and firmographic data, which are very much commoditized already. Likewise, lifestyle variables derived from surveys that may have been conducted a long time ago are far less valuable than actual purchase history data, as what people say they do and what they actually do are two completely different things. (For more details on the types of data, refer to the second half of “Big Data Must Get Smaller.”)

Innovative ideas should not be overlooked, as data packaging is often very important in the age of information overflow. If someone or some company transformed many data points into user-friendly formats using modeling or other statistical techniques (imagine pre-developed categorical models targeting a variety of human behaviors, or pre-packaged segmentation or clustering tools), such effort deserves extra points, for sure. As I emphasized numerous times in this series, data must be refined to provide answers to decision-makers. That is why the sheer size of the database isn’t so impressive, and the depth of the database is not just about the length of the variable list and the number of bytes that go along with it. So, data collectors, impress us—because we’ve seen a lot.

2. Width
No matter how deep the information goes, if the coverage is not wide enough, the database becomes useless. Imagine well-organized, buyer-level POS (Point of Service) data coming from actual stores in “real-time” (though I am sick of this word, as it is also overused). The data go down to SKU-level details and payment methods. Now imagine that the data in question are collected in only two stores—one in Michigan, and the other in Delaware. This, by the way, is not a completely made-up story, and I have faced similar cases in the past. Needless to say, we had to make many assumptions that we didn’t want to make in order to make the data useful, somehow. And I must say that it was far from ideal.

Even in the age when data are collected everywhere by every device, no dataset is ever complete (refer to “Missing Data Can Be Meaningful”). The limitations are everywhere. It could be about brand, business footprint, consumer privacy, data ownership, collection methods, technical limitations, distribution of collection devices, and the list goes on. Yes, Apple Pay is making a big splash in the news these days. But would you believe that data collected only through Apple iPhones can really show the overall consumer trend in the country? Maybe in the future, but not yet. If you can pick only one credit card type to analyze—American Express, for example—would you think that the result of the study is free from any bias? No siree. We can easily assume that such an analysis would skew toward the more affluent population. I am not saying that such analyses are useless. In fact, they can be quite useful if we understand the limitations of data collection and the nature of the bias. But the point is that coverage matters.

Further, even within multisource databases in the market, the coverage should be examined variable by variable, simply because some data points are really difficult to obtain even by professional data compilers. For example, any information that crosses between the business and the consumer world is sparsely populated in many cases, and the “occupation” variable remains mostly blank or unknown on the consumer side. Similarly, any data related to young children is difficult or even forbidden to collect, so a seemingly simple variable, such as “number of children,” is left unknown for many households. Automobile data used to be abundant on a household level in the past, but a series of laws made sure that the access to such data is forbidden for many users. Again, don’t be impressed with the existence of some variables in the data menu, but look into it to see “how much” is available.

3. Accuracy
In any scientific analysis, a “false positive” is a dangerous enemy. In fact, it is worse than not having the information at all. Many folks just assume that any data coming out of a computer is accurate (as in, “Hey, the computer says so!”). But data are not completely free from human errors.

Sheer accuracy of information is hard to measure, especially when the data sources are unique and rare. And the errors can happen in any stage, from data collection to imputation. If there are other known sources, comparing data from multiple sources is one way to ensure accuracy. Watching out for fluctuations in distributions of important variables from update to update is another good practice.

Nonetheless, the overall quality of the data is not just up to the person or department who manages the database. Yes, in this business, the last person who touches the data is responsible for all the mistakes that were made to it up to that point. However, when garbage goes in, garbage comes out. So, when there are errors, everyone who touched the database at any point must share in the burden of guilt.

Recently, I was part of a project that involved data collected from retail stores. We ran all kinds of reports and tallies to check the data, and edited many data values out when we encountered obvious errors. The funniest one that I saw was the first name “Asian” and the last name “Tourist.” As an openly Asian-American person, I was semi-glad that they didn’t put in “Oriental Tourist” (though I still can’t figure out who decided that word is for objects, but not people). We also found names like “No info” or “Not given.” Heck, I saw in the news that this refugee from Afghanistan (he was a translator for the U.S. troops) obtained a new first name as he was granted an entry visa, “Fnu.” That would be short for “First Name Unknown” as the first name in his new passport. Welcome to America, Fnu. Compared to that, “Andolini” becoming “Corleone” on Ellis Island is almost cute.

Data entry errors are everywhere. When I used to deal with data files from banks, I found that many last names were “Ira.” Well, it turned out that it wasn’t really the customers’ last names, but they all happened to have opened “IRA” accounts. Similarly, movie phone numbers like 777-555-1234 are very common. And fictitious names, such as “Mickey Mouse,” or profanities that are not fit to print are abundant, as well. At least fake email addresses can be tested and eliminated easily, and erroneous addresses can be corrected by time-tested routines, too. So, yes, maintaining a clean database is not so easy when people freely enter whatever they feel like. But it is not an impossible task, either.

We can also train employees regarding data entry principles, to a certain degree. (As in, “Do not enter your own email address,” “Do not use bad words,” etc.). But what about user-generated data? Search and kill is the only way to do it, and the job would never end. And the meta-table for fictitious names would grow longer and longer. Maybe we should just add “Thor” and “Sponge Bob” to that Mickey Mouse list, while we’re at it. Yet, dealing with this type of “text” data is the easy part. If the database manager in charge is not lazy, and if there is a bit of a budget allowed for data hygiene routines, one can avoid sending emails to “Dear Asian Tourist.”
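For what it’s worth, here is a minimal sketch of that “search and kill” routine, assuming hypothetical file and column names. The block list of fictitious names is illustrative and, as noted above, it only ever grows.

```python
# A minimal sketch of flagging obviously fictitious entries.
# File name, column names and the block list are illustrative assumptions.
import re
import pandas as pd

FAKE_NAMES = {"mickey mouse", "no info", "not given", "asian tourist", "thor", "sponge bob"}
FAKE_PHONE = re.compile(r"555-\d{4}$")            # movie-style phone numbers
EMAIL_OK = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"          # very rough email sanity check

records = pd.read_csv("retail_names.csv")         # hypothetical input file

full_name = (records["first_name"] + " " + records["last_name"]).str.lower().str.strip()
bad = (
    full_name.isin(FAKE_NAMES)
    | records["phone"].fillna("").str.contains(FAKE_PHONE)
    | ~records["email"].fillna("").str.contains(EMAIL_OK, regex=True)
)

clean = records[~bad]
print(f"Dropped {bad.sum()} suspicious records out of {len(records)}")
```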

Numeric errors are much harder to catch, as numbers do not look wrong to human eyes. That is when comparison to other known sources becomes important. If such examination is not possible on a granular level, then median value and distribution curves should be checked against historical transaction data or known public data sources, such as U.S. Census Data in the case of demographic information.

When it’s about the companies’ own data, follow your instincts and get rid of data that look too good or too bad to be true. We all can afford to lose a few records in our databases, and there is nothing wrong with deleting the “outliers” with extreme values. Erroneous names, like “No Information,” may be attached to a seven-figure lifetime spending sum, and you know that can’t be right.
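A minimal sketch of that gut-check follows, assuming a hypothetical lifetime-spend table. The percentile cutoffs are illustrative; the point is to flag the too-good-to-be-true records for review, not to prescribe an exact threshold.

```python
# A minimal sketch of dropping records that look "too good or too bad to be true."
# File name, columns and cutoffs are illustrative assumptions.
import pandas as pd

spend = pd.read_csv("customer_spend.csv")         # hypothetical lifetime-spend table

low, high = spend["lifetime_spend"].quantile([0.001, 0.999])
suspicious = (spend["lifetime_spend"] < low) | (spend["lifetime_spend"] > high)

# A seven-figure sum attached to "No Information" is exactly what this catches.
print(spend[suspicious][["first_name", "last_name", "lifetime_spend"]].head())
trimmed = spend[~suspicious]
```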

The main takeaways are: (1) Never trust the data just because someone bothered to store them in computers, and (2) Constantly look for bad data in reports and listings, at times using old-fashioned eye-balling methods. Computers do not know what is “bad,” until we specifically tell them what bad data are. So, don’t give up, and keep at it. And if it’s about someone else’s data, insist on data tallies and data hygiene stats.

4. Recency
Outdated data are really bad for prediction or analysis, and that is a different kind of badness. Many call it a “Data Atrophy” issue, as no matter how fresh and accurate a data point may be today, it will surely deteriorate over time. Yes, data have a finite shelf-life, too. Let’s say that you obtained a piece of information called “Golf Interest” on an individual level. That information could be coming from a survey conducted a long time ago, or some golf equipment purchase data from a while ago. In any case, someone who is attached to that flag may have stopped shopping for new golf equipment, as he doesn’t play much anymore. Without a proper database update and a constant feed of fresh data, irrelevant data will continue to drive our decisions.

The crazy thing is that, the harder it is to obtain certain types of data—such as transaction or behavioral data—the faster they will deteriorate. By nature, transaction or behavioral data are time-sensitive. That is why it is important to install time parameters in databases for behavioral data. If someone purchased a new golf driver, when did he do that? Surely, having bought a golf driver in 2009 (“Hey, time for a new driver!”) is different from having purchased it last May.
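Here is a minimal sketch of installing such a time parameter, assuming a hypothetical purchase file; the recency bands are illustrative. The same “bought a golf driver” flag tells very different stories depending on how long ago it happened.

```python
# A minimal sketch of attaching a time parameter to a behavioral flag.
# File name, columns and band boundaries are illustrative assumptions.
import pandas as pd

purchases = pd.read_csv("golf_purchases.csv", parse_dates=["purchase_date"])

today = pd.Timestamp.today().normalize()
purchases["days_since_purchase"] = (today - purchases["purchase_date"]).dt.days

# The same flag, three very different stories.
purchases["recency_band"] = pd.cut(
    purchases["days_since_purchase"],
    bins=[0, 90, 365, 10_000],
    labels=["hot", "warm", "stale"],
)
print(purchases["recency_band"].value_counts())
```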

So-called “Hot Line Names” literally cease to be hot after two to three months, or in some cases much sooner. The evaporation period may be different for different product types, as one may stay longer in the market for an automobile than for a new printer. Part of the job of a data scientist is to defer the expiration date of data, finding leads or prospects who are still “warm,” or even “lukewarm,” with available valid data. But no matter how much statistical work goes into making the data “look” fresh, eventually the models will cease to be effective.

For decision-makers who do not make real-time decisions, a real-time database update could be an expensive solution. But the databases must be updated constantly (I mean daily, weekly, monthly or even quarterly). Otherwise, someone will eventually end up making a wrong decision based on outdated data.

5. Consistency
No matter how much effort goes into keeping the database fresh, not all data variables will be updated or filled in consistently. And that is the reality. The interesting thing is that, especially when using them for advanced analytics, we can still provide decent predictions if the data are consistent. It may sound crazy, but even not-so-accurate-data can be used in predictive analytics, if they are “consistently” wrong. Modeling is developing an algorithm that differentiates targets and non-targets, and if the descriptive variables are “consistently” off (or outdated, like census data from five years ago) on both sides, the model can still perform.

Conversely, if there is a huge influx of a new type of data, or any drastic change in data collection or in a business model that supports such data collection, all bets are off. We may end up predicting such changes in business models or in methodologies, not the differences in consumer behavior. And that is one of the worst kinds of errors in the predictive business.

Last month, I talked about dealing with missing data (refer to “Missing Data Can Be Meaningful“), and I mentioned that data can be inferred via various statistical techniques. And such data imputation is OK, as long as it returns consistent values. I have seen so many so-called professionals messing up popular models, like “Household Income,” from update to update. If the inferred values jump dramatically due to changes in the source data, there is no amount of effort that can save the targeting models that employed such variables, short of re-developing them.

That is why a time-series comparison of important variables in databases is so important. Any changes of more than 5 percent in distribution of variables when compared to the previous update should be investigated immediately. If you are dealing with external data vendors, insist on having a distribution report of key variables for every update. Consistency of data is more important in predictive analytics than sheer accuracy of data.
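As a minimal sketch of that time-series comparison, assuming hypothetical update files and variable names, the check below computes the distribution of a key variable in the previous and current updates and flags any category that moved by more than five percentage points.

```python
# A minimal sketch of a distribution-drift check between two updates.
# File names, variable names and the 5-point threshold follow the text; the rest is assumed.
import pandas as pd

prev = pd.read_csv("update_previous.csv")         # hypothetical prior update
curr = pd.read_csv("update_current.csv")          # hypothetical current update

def share(df, col):
    # Share of records in each category of the variable.
    return df[col].value_counts(normalize=True)

for col in ["household_income_band", "homeowner_flag"]:
    drift = (share(curr, col) - share(prev, col)).abs()
    drift = drift.fillna(1.0)                     # categories that appear or vanish count as maximum drift
    moved = drift[drift > 0.05]
    if not moved.empty:
        print(f"Investigate '{col}':")
        print((moved * 100).round(1).astype(str) + " pt shift")
```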

6. Connectivity
As I mentioned earlier, there are many types of data. And the predictive power of data multiplies as different types of data get to be used together. For instance, demographic data, which is quite commoditized, still plays an important role in predictive modeling, even when dominant predictors are behavioral data. It is partly because no one dataset is complete, and because different types of data play different roles in algorithms.

The trouble is that many modern datasets do not share any common matching keys. On the demographic side, we can easily imagine using PII (Personally Identifiable Information), such as name, address, phone number or email address for matching. Now, if we want to add some transaction data to the mix, we would need some match “key” (or a magic decoder ring) by which we can link it to the base records. Unfortunately, many modern databases completely lack PII, right from the data collection stage. The result is that such a data source would remain in a silo. It is not like all is lost in such a situation, as they can still be used for trend analysis. But to employ multisource data for one-to-one targeting, we really need to establish the connection among various data worlds.
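Here is a minimal sketch of establishing that connection, assuming a hashed email address serves as the match key; the file and column names are hypothetical. Whatever fails to match stays in its silo, usable for trend analysis but not for one-to-one targeting.

```python
# A minimal sketch of linking two data worlds through a common match key.
# A hashed email serves as the key here; file and column names are assumptions.
import hashlib
import pandas as pd

def hash_email(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

demo = pd.read_csv("demographics.csv")            # has PII (an email column)
trans = pd.read_csv("transactions.csv")           # collected with a hashed email only

demo["email_hash"] = demo["email"].map(hash_email)
linked = trans.merge(demo, on="email_hash", how="left")

match_rate = linked["household_income"].notna().mean()
print(f"Matched {match_rate:.0%} of transactions to a known household")
```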

Even if the connection cannot be made to household, individual or email levels, I would not give up entirely, as we can still target based on IP addresses, which may lead us to some geographic denominations, such as ZIP codes. I’d take ZIP-level targeting anytime over no targeting at all, even though there are many analytical and summarization steps required for that (more on that subject in future articles).

Not having PII or any hard matchkey is not a complete deal-breaker, but the maneuvering space for analysts and marketers decreases significantly without it. That is why the existence of PII, or even ZIP codes, is the first thing that I check when looking into a new data source. I would like to free them from isolation.

7. Delivery Mechanisms
Users judge databases based on the visualization or reporting tool sets that are attached to them. As I mentioned earlier, that is like judging an entire building based just on the window treatments. But for many users, that is the reality. After all, how would a casual user without a programming or statistical background even “see” the data? Through tool sets, of course.

But that is only one end of it. There are so many types of platforms and devices, and the data must flow through them all. The important point is that data are useless if they are not in the hands of decision-makers, through the device of their choice, at the right time. Such flow can be actualized via API feeds, FTP, or good, old-fashioned batch installments, and no database should stay too far away from the decision-makers. In my earlier column, I emphasized that data players must be good at (1) Collection, (2) Refinement, and (3) Delivery (refer to “Big Data is Like Mining Gold for a Watch—Gold Can’t Tell Time”). Delivering the answers to inquirers properly closes one iteration of the information flow. And the answers must continue to flow to the users.

8. User-Friendliness
Even when state-of-the-art (I apologize for using this cliché) visualization, reporting or drill-down tool sets are attached to the database, if the data variables are too complicated or not intuitive, users will get frustrated and eventually move away from it. If that happens after pouring a sick amount of money into a data initiative, that would be a shame. But it happens all the time. In fact, I am not going to name names here, but I saw a ridiculously hard-to-understand data dictionary from a major data broker in the U.S.; it looked like the data layout was designed by robots, for robots. Please. Data scientists must try to humanize the data.

This whole Big Data movement has a momentum now, and in the interest of not killing it, data players must make every aspect of this data business easy for the users, not harder. Simpler data fields, intuitive variable names, meaningful value sets, pre-packaged variables in forms of answers, and completeness of a data dictionary are not too much to ask after the hard work of developing and maintaining the database.

This is why I insist that data scientists and professionals must be businesspeople first. The developers should never forget that end-users are not trained data experts. And guess what? Even professional analysts would appreciate intuitive variable sets and complete data dictionaries. So, pretty please, with sugar on top, make things easy and simple.

9. Cost
I saved this important item for last for a good reason. Yes, the dollar sign is a very important factor in all business decisions, but it should not be the sole deciding factor when it comes to databases. That means CFOs should not dictate the decisions regarding data or databases without considering the input from CMOs, CTOs, CIOs or CDOs who should be, in turn, concerned about all the other criteria listed in this article.

Playing with the data costs money. And, at times, a lot of money. When you add up all the costs for hardware, software, platforms, tool sets, maintenance and, most importantly, the man-hours for database development and maintenance, the sum becomes very large very fast, even in the age of the open-source environment and cloud computing. That is why many companies outsource the database work to share the financial burden of having to create infrastructures. But even in that case, the quality of the database should be evaluated based on all criteria, not just the price tag. In other words, don’t just pick the lowest bidder and hope to God that it will be alright.

When you purchase external data, you can also apply these evaluation criteria. A test-match job with a data vendor will reveal many of the details that are listed here; metrics such as the match rate and variable fill rates, along with a complete data dictionary, should be carefully examined. In short, what good is a lower unit price per 1,000 records if the match rate is horrendous and even the matched data are filled with missing or sub-par inferred values? Also consider that, once you commit to an external vendor and start building models and an analytical framework around their data, it becomes very difficult to switch vendors later on.
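As a minimal sketch of what to look at in a test-match job, assuming hypothetical file names, variables and a made-up price quote: the match rate, the fill rates among matched records, and the effective cost per usable record.

```python
# A minimal sketch of test-match arithmetic: match rate, fill rates and
# effective cost per matched record. File, columns and the quote are assumptions.
import pandas as pd

test = pd.read_csv("vendor_test_match.csv")       # hypothetical file returned by the vendor

processed = len(test)
matched = int(test["matched_flag"].sum())         # 1 if the vendor found a match, else 0
match_rate = matched / processed

fill_rates = (
    test.loc[test["matched_flag"] == 1, ["est_income", "home_value", "num_children"]]
    .notna()
    .mean()
)

price_per_thousand = 40.00                        # illustrative quote, per 1,000 processed records
effective_cost = (price_per_thousand / 1000) * processed / matched

print(f"Match rate: {match_rate:.0%}")
print("Fill rates among matched records:")
print((fill_rates * 100).round(1))
print(f"Effective cost per matched record: ${effective_cost:.3f}")
```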

When shopping for external data, consider the following when it comes to pricing options:

  • Number of variables to be acquired: Don’t just go for the full option. Pick the ones that you need (involve analysts), unless you get a fantastic deal for an all-inclusive option. Generally, most vendors provide multiple-packaging options.
  • Number of records: Processed vs. matched. Some vendors charge based on “processed” records, not just matched records. Depending on the match rate, this can make a big difference in total cost (see the sketch after this list).
  • Installment/update frequency: Real-time, weekly, monthly, quarterly, etc. Think carefully about how often you would need to refresh “demographic” data, which doesn’t change as rapidly as transaction data, and how big the incremental universe would be for each update. Obviously, a real-time API feed can be costly.
  • Delivery method: API vs. batch delivery, for example. The price, as well as the data menu, changes quite a bit based on the delivery options.
  • Availability of a full-licensing option: When the internal database becomes really big, full installment becomes a good option. But you would need internal capability for a match and append process that involves “soft-match,” using similar names and addresses (imagine good-old name and address merge routines). It becomes a bit of commitment as the match and append becomes a part of the internal database update process.
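And here is the “processed vs. matched” arithmetic from the list above, as a minimal sketch with purely illustrative figures. At a modest match rate, the “cheaper” unit price can easily turn out to be the more expensive bill.

```python
# A minimal sketch of processed-vs.-matched pricing. All figures are illustrative.
processed_records = 1_000_000
match_rate = 0.55
matched_records = int(processed_records * match_rate)

vendor_a_rate = 30.00   # per 1,000 processed records
vendor_b_rate = 45.00   # per 1,000 matched records

cost_a = vendor_a_rate * processed_records / 1000
cost_b = vendor_b_rate * matched_records / 1000

print(f"Vendor A (per processed): ${cost_a:,.0f}")
print(f"Vendor B (per matched):   ${cost_b:,.0f}")
# At a 55% match rate, the lower unit price produces the larger bill.
```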

Business First
Evaluating a database is a project in itself, and these nine evaluation criteria will be a good guideline. Depending on the businesses, of course, more conditions could be added to the list. And that is the final point that I did not even include in the list: That the database (or all data, for that matter) should be useful to meet the business goals.

I have been saying that “Big Data Must Get Smaller,” and this whole Big Data movement should be about (1) Cutting down on the noise, and (2) Providing answers to decision-makers. If the data sources in question do not serve the business goals, cut them out of the plan, or cut loose the vendor if they are from external sources. It would be an easy decision if you “know” that the database in question is filled with dirty, sporadic and outdated data that cost lots of money to maintain.

But if that database is needed for your business to grow, clean it, update it, expand it and restructure it to harness better answers from it. Just like the way you’d maintain your cherished automobile to get more mileage out of it. Not all databases are created equal for sure, and some are definitely more equal than others. You just have to open your eyes to see the differences.

Who’s Your Scapegoat?

I find it interesting that machines and procedures often become scapegoats for “human” errors. Remember the time when the word “mainframe” was a dirty word? As if those pieces of hardware were contaminated by some failure-inducing agents. Yeah, sure. All your worries will disappear along with those darn mainframes. Or did they? I don’t know what specific hardware is running behind those intangible “clouds” nowadays, but in an age when anyone can run any operating system on any type of hardware, the fact that such distinctions caused so much mayhem in organizations is just ridiculous. I mean really, when most computing and storage are taken care of in the big cloud, how is the screen that you’re looking at any different from a dummy terminal from the old days? Well, of course, the screens are in (or near) retina display now, but I mean conceptually. The machines were just doing the work that they were designed to do. Someone started blaming the hardware for their own shortcomings, and soon, another dirty word was created.

In some circles of marketers, you don’t want to utter “CRM” either. I wasn’t a big fan of that word even when it was indeed popular. For a while, everything was CRM this or CRM that. Companies spent seven-figure sums on automated CRM solution packages, or hired a whole bunch of specialists whose titles included the word CRM. Evidently, not every company broke even on that investment, and the very concept of “CRM” became the scapegoat in many places. When the procedure itself is the bad guy, I guess fewer heads will roll—unless, of course, one’s title includes that dirty word. But really, how is it that “Customer Relationship Management” could be all that bad? Delivering the right products and offers to the right person through the right channel can’t be that wrong, can it? Isn’t that the whole premise of one-to-one marketing, after all?

Now, if someone overinvested in some it-can-walk-on-water automated system, or just poorly managed the whole thing, let’s get the record straight. Someone just messed it all up. But the concept of taking care of customers with data-based marketing and sales programs was never the problem. If an unqualified driver causes a major car accident, is that the car’s fault? It would be easier to blame the internal combustion engine for human errors, but it just ain’t fair. Fair or not, however, over-investment or blind investment in anything will inevitably call for a scapegoat. If not now, in the near future. My prediction? The next scapegoat will be “Big Data,” if that concept doesn’t create steady revenue streams for investors soon. But more on that later.

I’ve seen some folks who think “analytics” is bad, too. That one is tricky, as the word “analytics” doesn’t mean just one thing. It could be about knowing what is going on around us (like having a dashboard in a car). Or it could be about describing the target (where are the customers and what do they look like?). Or it could be about predicting the future (who is going to buy what and where?). So, when I hear that “analytics” didn’t work out for them, I am immediately thinking someone screwed things up dearly after overspending on that thing called “analytics,” and then started blaming everything else but themselves. But come on, if you bought a $30,000 grand piano for your kids to play chopsticks on it, is that the piano’s fault?

In the field of predictive analytics for marketing, the main goals come down to these two:

  1. To whom should you be talking, and
  2. If you decided to talk to someone, what are you going to offer? (Please don’t tell me “the same thing for everyone”.)

And that’s really it. Sure, we can talk about products and channels too, but those are all part of No. 2.

No. 1 is relatively simple. Let’s say you have an opportunity to talk to 1 million people, and let’s assume it will cost about $1 to talk to each of them. Now, if you can figure out who is more likely to respond to your offer “before” you start talking to them, you can obviously save a lot of money. Even with a rudimentary model with some clunky data, we can safely cut that list down to 1/10 without giving up much opportunity and save you $900,000. Even if your cost is a fraction of that figure, there still is a thing called “opportunity cost,” and you really don’t want to annoy people by over-communicating (as in “You’re spamming me!”). This has been the No. 1 reason why marketers have been employing predictive models, going back to the punchcard age of the ’60s. Of course, there have been carpet-bombers like AOL, but we can agree that such a practice calls for a really deep pocket.
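The arithmetic behind that example, spelled out as a minimal sketch using the same illustrative figures from the text: 1 million names, about $1 per contact, and a model that lets you keep roughly the top tenth of the list.

```python
# The savings arithmetic from the example above. All figures are the
# illustrative ones from the text, not a claim about any real campaign.
audience_size = 1_000_000
cost_per_contact = 1.00

full_blast_cost = audience_size * cost_per_contact

selected = int(audience_size * 0.10)              # keep roughly the top decile by model score
targeted_cost = selected * cost_per_contact

print(f"Talk to everyone:     ${full_blast_cost:,.0f}")
print(f"Talk to the top 10%:  ${targeted_cost:,.0f}")
print(f"Savings:              ${full_blast_cost - targeted_cost:,.0f}")   # $900,000
```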

No. 2 gets more interesting. In the age of ubiquitous data and communication channels, it must become the center of attention. Analytics are no longer about marketers deciding to whom to talk, as marketers are no longer the sole dictators of the communication. Now that it is driven by the person behind the screen in real-time, marketers don’t even get to decide whether they should talk to her or not. Yes, in traditional direct marketing or email channels, “selection” may still matter, but the age of “marketers ranking the list of prospects” is being rapidly replaced by “marketers having to match the right product and offer to the person behind the screen in real-time.” If someone is giving you about half a second to respond, then you’d best find the most suitable offer in that time, too. It’s all about the buyers now, not the marketers or the channels. And analytics drive such personalization. Without the analytics, everyone who lands on some website or passes by some screen will get the same offer. That is so “1984,” isn’t it?

Furthermore, the analytics that truly drive personalization at this level are not some simple segmentation techniques either. By design, segmentation techniques put millions of people in the same bucket, if a few commonalities are found among them. And such common variables could be as basic as age, income, region and number of children—hardly the whole picture of a person. The trouble with that type of simplistic approach is also very simple: Nobody is one-dimensional. Just because a few million other people in the same segment to which I happen to be assigned are more “likely” to be into outdoor sports, should I be getting camping equipment offers whenever I go to ESPN.com? No siree. Someone can be a green product user, avid golfer, gun owner, children’s product buyer, foreign traveler, frequent family restaurant visitor and conservative investor, all at the same time. And no, he may not even have multiple personalities; and no, don’t label him with this “one” segment name, no matter how cute that name may be.

To deal with this reality, marketers must embrace analytics even more. Yes, we can estimate the likelihood measures of all these human characteristics, and start customizing our products and offers accordingly. Once complex data variables are summarized into the form of “personas” based on model scores, one doesn’t have to be a math genius to know this particular guy would appreciate the discount offer for cruise tickets more than a 10 percent-off coupon for home theater systems.
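As a minimal sketch of that matching step, assuming hypothetical personas, scores and offers: once the complex variables are summarized into persona-level model scores, picking the next best offer is the easy part.

```python
# A minimal sketch of choosing the next best offer from persona-level model scores.
# The personas, scores and offers are all hypothetical.
persona_scores = {
    "avid_golfer": 0.81,
    "foreign_traveler": 0.74,
    "home_theater_enthusiast": 0.22,
    "conservative_investor": 0.65,
}

offers = {
    "avid_golfer": "New-season driver pre-order",
    "foreign_traveler": "Discounted cruise tickets",
    "home_theater_enthusiast": "10% off home theater systems",
    "conservative_investor": "No-fee index fund promotion",
}

best_persona = max(persona_scores, key=persona_scores.get)
print(f"Best next offer: {offers[best_persona]} (score {persona_scores[best_persona]:.2f})")
```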

Often people are afraid of the unknowns. But that’s OK. We all watch TV without really understanding how HD quality pictures show up on it. Let’s embrace the analytics that way, too. Let’s not worry about all the complex techniques and mystiques behind it. Making it easy for the users should be the job of analysts and data scientists, anyway. The only thing that the technical folks would want from the marketers is asking the right questions. That still is the human element in all this, and no one can provide a right answer to a wrong question. Then again, is that how analytics became a dirty word?

Winner of the 2012 Presidential Election: Data

Now that the contentious 2012 election has finally ended, we get a chance to look back and assess what happened and why. Regardless of who you voted for, it’s impossible not to acknowledge that the real winner of the 2012 election was data.

For the first time in history, this election demonstrated the power of using analytics and numbers crunching for politics. What I find most remarkable is the rapid evolution of this change. If you look back just a few years ago, Karl Rove was widely regarded as the political mastermind of the universe. Rove’s primary innovation was the use of highly targeted direct mail campaigns to get out the evangelical and rural vote to win the 2004 election for George W. Bush. Fast-forward a few short years, and not only did Rove’s candidate lose, but the master strategist was reduced to challenging his network’s numbers geeks live on the air, only to be rebuffed.

In every way, the old guard was bested by a new generation of numbers crunchers, nerds and data geeks who leveraged data science, analytics, predictive modeling and a highly sophisticated online marketing campaign to poll, raise money and get out the vote in an unprecedented manner.

On the subject of polling, I was intrigued by Nate Silver’s incredibly accurate FiveThirtyEight blog, which used a sophisticated system to synthesize dozens of national polls in a rolling average to predict the actual election results. In the run-up to the election, he even received a lot of flak from various pundits who claimed he was wrong, basing their perceptions on voter “enthusiasm,” “momentum” and other non-scientific observations. At the end of the day, however, data won out over hot air and punditry, big time. Silver’s final tally was absolutely dead on, crushing most other national polls by a wide margin.
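For illustration only, here is a much-simplified sketch of the general idea of synthesizing many polls into one estimate, weighting each poll by sample size and recency. This is not Silver’s actual model, and all of the numbers are hypothetical.

```python
# A much-simplified poll-averaging sketch: bigger and fresher polls count more.
# Not Silver's method; the polls and weights are illustrative assumptions.
from datetime import date

polls = [  # (poll date, sample size, candidate share in %)
    (date(2012, 11, 1), 1200, 50.2),
    (date(2012, 11, 3),  800, 48.9),
    (date(2012, 11, 5), 1500, 50.8),
]
election_day = date(2012, 11, 6)

def weight(poll_date, sample_size):
    days_old = (election_day - poll_date).days
    return sample_size / (1 + days_old)

total_weight = sum(weight(d, n) for d, n, _ in polls)
estimate = sum(weight(d, n) * share for d, n, share in polls) / total_weight
print(f"Weighted estimate: {estimate:.1f}%")
```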

I especially love his Nov. 10 post in which Silver analyzes the various polls and shows which ones fared the best and which ones weren’t worth the paper they were printed on. It’s shocking to see that the Gallup Poll—in many people’s minds the oldest and most trusted name in polling—was skewed Republican by a whopping 7.2 points when averaged across all 11 of their polls. Ouch. For an organization that specializes in polling, their long-term viability must be called into question at this point.

One thing I find highly interesting when looking at the various poll results is that when you examine their methodologies, it’s not too surprising that Gallup fell flat on its face, relying on live phone surveys as its primary polling method. When you consider that many young, urban and minority voters don’t have a landline and only have a cellphone, it doesn’t take a rocket scientist to conclude that any poll that doesn’t include a large number of cellphones in its cohort is going to skew wildly Republican … which is exactly what happened to Gallup, Rasmussen and several other prominent national polls.
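
A quick back-of-the-envelope calculation shows why leaving out cellphone-only voters matters so much. The shares and support levels below are invented purely to illustrate the mechanics of the skew, not to describe any actual poll.

```python
# Back-of-the-envelope illustration (all numbers invented): if a poll reaches
# only landline households, and cellphone-only voters lean differently,
# the published topline drifts away from the true electorate-wide figure.
landline_share = 0.70        # assumed share of the electorate reachable by landline
cell_only_share = 0.30       # assumed share that is cellphone-only (excluded)
dem_support_landline = 0.47  # assumed Democratic support among landline voters
dem_support_cell_only = 0.60 # assumed Democratic support among cell-only voters

true_support = (landline_share * dem_support_landline
                + cell_only_share * dem_support_cell_only)
landline_only_poll = dem_support_landline

print(f"True electorate support:  {true_support:.1%}")
print(f"Landline-only poll reads: {landline_only_poll:.1%}")
print(f"Skew: {(landline_only_poll - true_support) * 100:.1f} points")
```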

Then there’s the Obama campaign’s incredible Get Out The Vote (GOTV) machine, which turned out more people in more places than anyone could have predicted. There’s no doubt in anyone’s mind that, for data-driven marketers, the 2012 U.S. election was a watershed moment in history.

According to a recent article in Time titled “Inside the Secret World of the Data Crunchers Who Helped Obama Win,” the secret sauce behind Obama’s big win was a massive data effort that helped him raise $1 billion, remade the process of targeting TV ads, and created detailed models of swing-state voters that could be used to increase the effectiveness of everything from phone calls and door-knocks to direct mailings and social media.

What’s especially interesting is that, similarly to a tech company, Obama’s campaign actually had a large in-house team of geeks, data scientists and online marketers. Composed of elite and senior tech talent from Twitter, Google, Facebook, Craigslist and Quora, the program enabled the campaign to turn out more volunteers and donors than it had in 2008, mostly by making it simpler and easier for anyone to engage with the President’s reelection effort. If you’d like to read more about it, there’s a great article recently published in The Atlantic titled “When the Nerds Go Marching In” that describes the initiative in great detail.

Well, looks like I’m out of space. One thing’s for sure though, I’m going to be very interested to see what happens in coming elections as these practices become more mainstream and the underlying techniques are further refined.

If you have any observations about the use of data and analytics in the election you’d like to share, please let me know in your comments.

—Rio

MDM: Big Data-Slayer

There’s quite a bit of talk about Big Data these days across the Web … it’s the meme that just won’t quit. The reasons why are pretty obvious. Besides a catchy name, Big Data is a real issue faced by virtually every firm in business today.

But what’s frequently lost in the shuffle is the fact that Big Data is the problem, not the solution. Big Data is what marketers are facing—mountains of unstructured data accumulating on servers and in stacks, across various SaaS tools, in spreadsheets and everywhere else you look in the firm and on the cloud. In fact, the actual definition of Big Data is simply a data set that has grown so large it becomes awkward or impossible to work with, or make sense out of, using standard database management tools and techniques.

The solution to the Big Data problem is to implement a system that collects and sifts through those mountains of unstructured data from different buckets across the organization, combines them together into one coherent framework, and shares this business intelligence with different business units, all of which have varying delivery needs, mandates, technologies and KPIs. Needless to say, it’s not an easy task.

The usual refrain most firms chirp about when it comes to tackling Big Data is a bold plan to hire a team of data scientists—essentially a bunch of database administrators or statisticians who have the technical skills to sift through the 1s and 0s and make sense out of them.

This approach is wrong, however, as it misses the forest for the trees. Sure, having the right technology team is essential to long-term success in the data game. But truth be told, if you’re going to go to battle against the Big Data hydra, you need a much more formidable weapon in your arsenal. Your organization needs a Master Data Management (MDM) strategy in order to succeed.

A concept still unknown to many marketers, MDM comprises a set of tools and processes that manage an organization’s information on a macro scale. Essentially, MDM’s objective is to provide processes for collecting, aggregating, matching, consolidating, quality-assuring and distributing data throughout the organization to ensure consistency and control in the ongoing maintenance and application use of this information. No, I didn’t make up that definition myself. Thanks, Wikipedia.

The reason why the let’s-bring-in-the-developers approach is wrong is that it gets it backwards. Having consulted in this space for quite some time, I can tell you that technology is one of the least important pieces in the puzzle when it comes to implementing a successful MDM strategy.

In fact, when listing out MDM priorities, I put technology toward the end of the decision tree, after Vision, Scope, Data Governance, Workflow/Process and definition of Business Unit Needs. As such, aside from the CTO or CIO, IT staff should not be brought in until after many of these preliminary decisions have been made. To support this view, I suggest reading John Radcliffe’s groundbreaking ‘The Seven Building Blocks of MDM: A Framework for Success,’ published by Gartner in 2007. If you’re interested in MDM and haven’t read it yet, it’s worth a look; it includes an excellent chart laying out the framework.

You see, Radcliffe places MDM Technology Infrastructure near the end of the process, following Vision, Strategy, Governance and Processes. The crux of the argument is that technology decisions cannot be made until the overall strategy has been mapped out.

The rationale is that at a high-level, MDM architecture can be structured in different ways depending on the underlying business it is supporting. Ultimately, this is what will drive the technology decisions. Once the important strategic decisions have been made, a firm can then bring in the development staff and pull the trigger on any one of a growing number of technology providers’ solutions.

At the end of 2011, Gartner put out an excellent report on the Magic Quadrant for Master Data Management of Customer Data Solutions. This detailed paper identified solutions by IBM, Oracle (Siebel) and Informatica as the clear-cut industry leaders, with SAP, Tibco, DataFlux and VisionWare receiving honorable mention. Though these solutions vary in capability, cost and other factors, I think it’s safe to say that they all present a safe and robust platform for any company that wishes to implement an MDM solution, as all boast strong technology, brand and financial resources, not to mention thousands of MDM customers already on board.

Interestingly, regarding technology there’s been an ongoing debate about whether MDM should be single-domain or multi-domain—a “domain” being a framework for data consolidation. This is important because MDM requires that records be merged or linked together, usually necessitating some kind of master data format as a reference. The diversity of the data sets being combined, as well as the format (or formats) of the outputs required, both drive this decision.

For companies selling products, a product-specific approach is usually called for that features a data framework built around product attributes, while on the other hand service businesses tend to gravitate toward a customer-specific architecture. Following that logic, an MDM for a supply chain database would contain records aligned to supplier attributes.
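
To make the “merging or linking” idea a bit more tangible, here is a toy Python sketch of consolidating customer records from two silos against an assumed master format, matching on a normalized email address. The field names, the match rule and the first-in-wins survivorship rule are all hypothetical; real MDM platforms use far richer matching, survivorship and governance logic.

```python
# Hypothetical illustration of consolidating customer records from two silos
# into one master format, keyed on a normalized email address.
crm_records = [
    {"email": "Jane.Doe@example.com", "name": "Jane Doe", "phone": "555-0100"},
]
billing_records = [
    {"email": "jane.doe@example.com", "name": "J. Doe", "lifetime_value": 1250.0},
]

def to_master(records, source):
    """Map a source system's records into the (assumed) master customer format."""
    master = {}
    for rec in records:
        key = rec["email"].strip().lower()      # simple match key
        entry = master.setdefault(key, {"sources": []})
        entry["sources"].append(source)
        for field, value in rec.items():
            entry.setdefault(field, value)      # first-in wins (toy survivorship rule)
    return master

def merge(*masters):
    """Combine per-source master views into golden records."""
    combined = {}
    for m in masters:
        for key, entry in m.items():
            target = combined.setdefault(key, {"sources": []})
            target["sources"] = sorted(set(target["sources"] + entry["sources"]))
            for field, value in entry.items():
                if field != "sources":
                    target.setdefault(field, value)
    return combined

golden_records = merge(to_master(crm_records, "CRM"),
                       to_master(billing_records, "Billing"))
print(golden_records)
```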

While it is most certainly true that MDM solutions are architected differently for different types of firms, I find the debate to be a red herring. On that note, a fantastic article by my colleague Steve Jones in the UK dispels the entire single-versus-multi-domain debate altogether. I agree wholeheartedly with Jones in that, by definition, an MDM is an MDM regardless of scope. The breadth of data covered is simply a decision that needs to be made by the governance team when the project is in the planning stages—well before a single dollar has been spent on IT equipment or resources. If anything, this reality serves to strengthen the hypothesis of this piece, which is that vision, more than technology, drives the MDM implementation process.

Now, of course, an organization may discover that it’s simply not feasible (or desirable) to combine together customer, product and supplier information in one centralized place, and in one master format. But it’s important to keep in mind that the stated goal of any MDM solution is to make sense out of and standardize the organization’s data—and that’s it.

Of course there’s much more I can cover on this topic, but I realize this is a lot to chew on, so I’ll end this post here.

Has your firm implemented, or are you in the process of implementing, an MDM solution? If so, what process did you follow, and what solution did you decide upon? I’d love to hear about it, so please let me know in your comments.

The Great Marketing Data Revolution

I think it’s safe to say that “Big Data” is enjoying its 15 minutes of fame. It’s a topic we’ve covered in this blog, as well. In case you missed it, I briefly touched on this topic in a post titled “Deciphering Big Data Is Key to Understanding Buyer’s Journey,” which I published a few weeks back.

For those of you who don’t know what it is, Big Data refers to the massive quantities of information, much of it marketing-related, that firms are currently collecting as they do business. Since the data are being stored in different places and many varying formats, for the most part the state they’re in is what we refer to as “unstructured.” Additionally, because Big Data is also stored in different silos within the organization, it’s generally managed by various teams or divisions. With the recent advent of Web 2.0, the volume of data firms are confronted with has quite literally exploded, and many are struggling to store, manage and make sense out of it.

The breadth of data is simply staggering. In fact, according to Teradata, more data have been created in the last three years than in all past 40,000 years of human history combined! And the pace of data creation is only predicted to accelerate. You see, proliferating channels are providing us with an unprecedented amount of information—too much even to store! In a marketing sense, the term Big Data essentially refers to the collection of unstructured data from across different segments, and the drive to make sense of it all. And it’s not an easy task.

Think about it. How do you compare email opens, clicks and unsubscribes to Facebook “Likes,” or to Twitter followers, tweets and mentions? How does the traffic your main website is receiving relate to the data stored in your CRM? How can you possibly compare the valuable business intelligence you’re tracking in the marketing automation platform you use for demand generation against the detailed customer records stored in the ERP system you use for billing and customer service? Now throw in call center data, point of sale (POS) stats … information provided by Value Added Retailers (VARs), distributors and third-party data providers. More importantly, how do they ALL compare and relate to one another? You get the picture.
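
To picture what answering those questions even starts to look like, here is a minimal Python sketch that rolls engagement events from different channels up into a single per-customer view. The customer IDs, channels and action weights are all invented for illustration; in practice those weights would come out of testing and modeling, not a hard-coded dictionary.

```python
# Toy illustration (all identifiers and weights invented): roll up engagement
# events from different channels into one per-customer view so that email
# clicks, social mentions and CRM activity can at least be compared side by side.
from collections import defaultdict

events = [
    {"customer_id": "C001", "channel": "email",    "action": "click"},
    {"customer_id": "C001", "channel": "facebook", "action": "like"},
    {"customer_id": "C002", "channel": "twitter",  "action": "mention"},
    {"customer_id": "C001", "channel": "crm",      "action": "support_call"},
]

# Assumed relative weights for each action type.
action_weights = {"click": 1.0, "like": 0.5, "mention": 0.75, "support_call": 2.0}

engagement = defaultdict(float)
channels_seen = defaultdict(set)
for event in events:
    engagement[event["customer_id"]] += action_weights.get(event["action"], 0.0)
    channels_seen[event["customer_id"]].add(event["channel"])

for customer, score in sorted(engagement.items()):
    print(f"{customer}: engagement={score:.2f}, channels={sorted(channels_seen[customer])}")
```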

Now this raises the next question: What does this mean for marketers and marketing departments? This is where it gets very interesting. You see, unbeknownst to many, an amazing transformation is just now taking place within many firms as they deal with the endless volumes of unstructured data they are tracking and storing every day across their organizations.

What’s happening is that firms are rethinking the way they store, manage and channel data throughout their entire companies. I call it the Great Marketing Data Revolution. It’s essentially a complete repurposing and reprocessing of the tools they use and how they’re used. This wholesale repurposing aims not only to make sense out of this trove of data, but also to break down the walls separating the various silos where the information is stored. As we speak, pioneering companies are just now leading the charge … and will be the first to reap the immense benefits down the road when the revolution is complete.

Ultimately, success in this crucial endeavor demands a holistic approach, because it essentially requires hammering out a better way of doing business across four major areas: Process Workflow, Human Capital, Technology, and Supply Chain Management. In other words, doing this the right way may require a complete rethinking of the direction data flows within an organization, who manages it, where the information is stored, and which third-party suppliers need to be engaged to assist in the process. We’re talking about a completely new way of looking at marketing process management.

With so many moving parts, it’s not surprising that there are many obstacles in the way. Those obstacles include legacy IT infrastructure, disparate marketing structures scattered across various departments, limited IT budgets and, of course, sheer inertia. But out of all the obstacles companies face, the most important may be the dearth of data-savvy marketing talent firms have on staff.

Firms are having a difficult time staffing up in this area because this transformation is actually a hybrid marketing and IT process. Think about it. The data are being created by the firm’s marketing department. As such, only marketing truly understands not only how the data are being generated, but more importantly why they’re important and how this information can be put to actionable use in the future. At the same time, the data are stored within IT’s domain, sitting on servers or stacks, or else stored in the cloud. And because the process involves a complete rethinking and reprocessing, it really needs a new type of talent—basically a hybrid marketer/technologist—to make it happen.

Many are deeming this new role that of a Data Scientist. Not surprisingly, because this is a new role, employees with these skills aren’t exactly a dime a dozen. You can read more in an article that appeared on AOL Jobs titled “Data Scientist: The Hottest Job You Haven’t Heard Of.” The article reports that, because they’re in such high demand, Data Scientists can expect to earn decent salaries—ranging from $60,000 to $115,000.

Know any Data Scientists? Are you involved in a similar reprocessing transformation for your firm? If so, I’d love to know in your comments.