Data Geeks Must Learn to Speak to Clients

This piece is for aspiring data scientists, analysts or consultants (or any other cool title du jour in this data and analytics business). Then again, people who spend even a single dime on a data project must remember this, as well: “The main goal of any analytical endeavor is to make a difference in business.”

To this, some may say “Duh, keep stating the obvious.” But I am stating the obvious, as too many data initiatives are either for the sake of playing with data at hand, or for the “cool factor” among fellow data geeks. One may sustain such a position for a couple of years if he is lucky, but sooner or later, someone who is paying for all of the data stuff will ask where the money is going. In short, no one will pay for all of those servers, analytical tools and analysts’ salaries so that a bunch of geeks have some fun with data. If you just want the fun part, then maybe you should just stay in academia “paying” tuition for such an experience.

Not too long ago, I encountered a promising resume in a deep pile. Seemingly, this candidate had very impressive credentials. A PhD in statistics from a reputable school, hands-on analytics experience in multiple industries (so he claimed), knowledge in multiple types of statistical techniques, and proficiency in various computing languages and toolsets. But the interview couldn’t have gone worse.

When the candidate was going on and on about the minute details of his mathematical journey for a rather ordinary modeling project, I interrupted and asked a very simple question: “Why did you build that model?” Unbelievably, he couldn’t answer it, and kept falling back on the methodology. Unfortunately for him, I was not looking for a statistician, but for an analytics consultant. There was just no way I would put such a mechanical person in front of a client without risking losing the deal entirely.

When I interview to fill a client-facing position, I am not just looking for technical skills. What I am really looking for is an ability to break down business challenges into tangible analytics projects to meet tangible business goals.

In fact, in the near future, this will be all that is left for us humans to do: “To define the problem statement in the business context.” Machines will do all of the tedious data prep work and mathematical crunching after that. (Well, with some guidance from humans, but without requiring line-by-line instructions.) Now, if number-crunching is the only skill one is selling, well then, he is asking to be replaced by machines sooner than others.

From my experience, I see that the overlap between a business analyst and a statistical analyst is surprisingly small. Further, let me go on and say that most graduates with degrees in statistics are utterly ill-prepared for real-world challenges. Why?

I once read an article somewhere (I do not recall the publication or the author) arguing that colleges are not really helping future data scientists in a practical manner, as:

  1. all of the datasets for school projects are completely clean and free of missing data, and
  2. professors set the goals and targets of modeling exercises.

I completely agree with this statement, as I have never seen a totally clean dataset since my school days (which were a long time ago in a galaxy far, far away), and defining the target of any model is the most difficult challenge in any modeling project. In fact, for most hands-on analysts, data preparation and target definition are the work. If the target is hung in the wrong place, no amount of cool algorithms will save the day.
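To make the point concrete, here is a minimal, hypothetical sketch in pandas of the kind of cleanup a real-world file demands before any modeling can begin. The column names, values, validity range and imputation choice are all illustrative assumptions, not a prescription:

```python
import numpy as np
import pandas as pd

# Hypothetical raw customer file: duplicated keys, missing values, and
# impossible entries -- the shape real-world data actually arrives in.
raw = pd.DataFrame({
    "customer_id": [101, 102, 102, 103, 104],
    "age":         [34, np.nan, np.nan, 290, 41],
    "last_order":  ["2023-01-15", "not available", "2023-02-01", "", "2023-03-10"],
})

clean = raw.drop_duplicates(subset="customer_id")  # one row per customer
clean = clean.assign(
    # null out impossible ages before any statistic touches them
    age=clean["age"].where(clean["age"].between(18, 110)),
    # junk strings become NaT instead of crashing the parse
    last_order=pd.to_datetime(clean["last_order"], errors="coerce"),
)
# one simple (and debatable) imputation choice among many
clean["age"] = clean["age"].fillna(clean["age"].median())

print(clean)
```

Every step above embeds a judgment call (is 290 a typo for 29? should missing ages be imputed at all?), which is exactly the kind of decision a classroom dataset never forces.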

Yet, kids graduate from school thinking that they are ready to take on such challenges in the real world on Day One. Sorry to break it to them this way, but no, mathematical skills do not directly translate into the ability to solve problems in the business world. Such training will definitely give them an upper hand in the job market, though, as no math-illiterate should be called an analyst.

Last summer, my team hired two promising interns, mainly to build a talent pool for the following year. Both were very bright kids, indeed, and we gave them two seemingly straightforward modeling projects. The first assignment was to build a model to approximate customer loyalty in a B2B setting. I don’t remember the second assignment, as they spent the entire summer searching for the definition of a “loyal customer” to go after. They couldn’t even begin the modeling part. So more senior members in the team had to do that fun part after they went back to school. (For more details about this project, refer to “The Secret Sauce for B2B Loyalty Marketing.”)

Of course, we as a team knew what we were doing all along, but I wanted to teach these youngsters how to approach a project from the very beginning, as no client will define the target for consultants and vendors. Technical specs? You’re supposed to write that spec from scratch.

In fact, determining if we even need a model to reach the business goal was a test in itself. Why build a model at all? Because it’s a cool thing on your resume? With what data? For what specific success metrics? If “selling more things by treating valuable customers properly” is the goal, then why not build a customer value model first? Why the loyalty model? Because clients just said so? Why not product propensity models, if there are specific products to push? Why not build multiple models and cover all bases while we’re at it? If so, will we build a one-size-fits-all model in one shot, or should we consider separating the universe for distinct segments in the footprint? If so, how would you determine such segments then? (Ah, that “segmentation of the universe” part was where the interns were stuck.)
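For illustration only, here is one way a “loyal customer” target could be operationalized from raw B2B order data using simple RFM (recency, frequency, monetary) summaries. Every field name and threshold below is a made-up assumption; in a real project, settling on exactly these definitions with the client is the hard part the interns were stuck on, not the code:

```python
import pandas as pd

# Hypothetical B2B order history -- account IDs, dates and revenue
# figures are invented for this sketch.
orders = pd.DataFrame({
    "account_id": ["A", "A", "A", "B", "C", "C"],
    "order_date": pd.to_datetime(
        ["2023-01-05", "2023-04-10", "2023-08-20",
         "2022-02-01", "2023-06-15", "2023-07-01"]),
    "revenue": [1200, 900, 1500, 300, 2000, 1800],
})

snapshot = pd.Timestamp("2023-09-01")  # "as of" date for the target
rfm = orders.groupby("account_id").agg(
    recency_days=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("revenue", "sum"),
)

# One candidate target definition among many: repeat buyers who are
# both recent and substantial. The thresholds are pure assumptions.
rfm["loyal"] = (
    (rfm["recency_days"] <= 180)
    & (rfm["frequency"] >= 2)
    & (rfm["monetary"] >= 2000)
)
print(rfm)
```

Changing any threshold changes who counts as “loyal,” which is why target definition is a business decision dressed up as a technical one.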

Boy, did I wish schools spent more time doing these types of problem-solving exercises with their students. Yes, kids will be uncomfortable as these questions do NOT have clear yes or no answers to them. But in business, there rarely are clear answers to our questions. Converting such ambiguity into measurable and quantifiable answers (such as probability that a certain customer will respond to a certain offer, or sales projection of a particular product line for the next two quarters with limited data) is the required skill. Prescribing the right approach and methodology to solve long- and short-term challenges is the job, not just manipulating data and building algorithms.

In other words, mathematical elegance may be a differentiating factor between a mediocre and excellent analyst, but such is not the end goal. Then what should aspiring analysts keep in mind?

In the business world, the goals of data or analytical work are really clear-cut and simple. We work with the data to (1) increase revenue, (2) decrease cost (hence, maximizing profit), or (3) minimize risk. That’s it.

From that point, a good analyst should:

  • Define clear problem statements (even when ambiguity is all around)
  • Set tangible and attainable goals employing a phased approach (i.e., a series of small successes leading to achievement of long-term goals)
  • Examine quality of available data, and get them ready for advanced analytics (as most datasets are NOT model-ready)
  • Consider specific methodologies best fit to solve goals in each phase (as assumptions and conditions may change drastically for each progression, and one brute-force methodology may not work well in the end)
  • Set the order of operations (as the sequence of events matters in any complex project)
  • Determine success metrics, and think about how to “sell” the results to sponsors of the project (even before any data or math work begins)
  • Go about modeling or any other statistical work (only if the project calls for it)
  • Share knowledge with others and make sure resultant model scores and other findings are available to users through their favorite toolsets (even if the users are non-believers of analytics)
  • Continuously monitor the results and re-fit the models for improvement

As you can see here, even in this simplified list, modeling is just an “optional” step in the whole process. No one should build models because they know how to do it. You’re not in school anymore, where the goal is to get an A at the end of the semester. In the real world (although using this term makes me sound like a geezer), data players are supposed to make money with data, with or without advanced techniques. Methodologies? They are just colors on a palette, and you don’t have to use all of them.

For the folks who are in a position to hire math geeks to maximize the value of data, simply ask the candidates why they would do any of it. If the candidate actually pauses and tries to think from the business perspective, then she is displaying some potential to be a business partner in the future. If the candidate keeps dropping technical jargon in response to this simple question, cut the interview short — unless you have a natural curiosity about the mechanics of models and analytics, and your department’s success is measured purely in the complexity and elegance of its solutions. But I highly doubt that such a goal would rank above increasing profit for the organization in the end.

Watch the Attitude, Data Geeks

One-dimensional techies will be replaced by machines in the near future. So what if they’re the smartest ones in the room? If decision-makers can’t use data, does the information really exist?

Data do not exist just for data geeks and nerds. All of these data activities are inevitably funded by people who want to harness business value out of data. Whether it is about increasing revenue or reducing cost, in the end, the data game is about creating tangible value in the form of dollars, pounds, euros or yuan.

It really has nothing to do with the coolness of the toolsets or latest technologies, but it is all about the business — plain and simple. In other words, the data and analytics field is not some playground reserved for math or technology geeks, who sometimes think that belonging to exclusive clubs with secret codes and languages is the goal in itself. At the risk of sounding like an unapologetic capitalist, data don’t flow if money stops flowing. If you doubt me, watch where the budgets get cut first when going gets rough.

Data and analytics folks may feel secure, as they may know something in which non-technical people may not be well-versed in the age of Big Data. Maybe their bosses leave techies alone in a corner, as technical details and math jargon give them headaches. Their jobs may indeed be secure, for as long as the financial value coming out of the unit is net positive. Others may tolerate some techie talk, condescending attitudes, or mathematical dramas, for as long as data and analytics help them monetarily. Otherwise? Buh-bye geeks!

I am writing this piece to provide a serious attitude adjustment to some data players. If data and analytics are not for geeks, but for the good of businesses (and all of the decision-makers who may not be technical), what does useful information look like?

Allow me to share some ideas for all the beneficiaries of data, not a select few who speak the machine language.

  • Data Must Be in Forms That Are Easy to Understand without mathematical or technical expertise. It should be as simple and easy to understand as a weather report. That means all of the data and statistical modeling to fill in the gaps must be done before the information reaches the users.
  • Data Must Be Small, not mounds of unfiltered and unstructured information. Useful data must look like answers to questions, not something that comes with a 500-page data dictionary. Data players should never brag about the size of the data or speed of processing, as users really don’t care about such details.
  • Data Must Be Accurate. Inaccurate information is worse than not having any at all. Users also must remember that not everything that comes out of computers is automatically accurate. For their part, data players are responsible for fixing all of the previous mistakes made to datasets before the data even reached them. Not fair, but that’s the job.
  • Data Must Be Consistent. It can be argued that consistency is even more important than sheer accuracy. Often, being consistently off may be more desirable than having large fluctuations, as even a dead clock is completely accurate twice a day. This is especially true for information that is inferred via statistical work.
  • Data Must Be Applicable Most of the Time, not just for limited cases. Too many data are locked in silos serving myopic purposes. Data become more powerful when they are consolidated properly, reaching broader audiences.
  • Data Must Be Accessible to users through devices of their choice. Even good information that fits the above criteria becomes useless if it does not reach decision-makers when needed. Data players’ jobs are not done until data are delivered to the right people, in the right format, in a timely manner.

Who are these data players who should be responsible for all of this, and where do they belong? They may have titles such as Chief Data Officer (who would be in charge of data governance), Data Strategist or Analytics Strategist, Data Scientist, Statistical Analyst, or Program Developer. They may belong to IT, marketing, or a separate data or analytics department. No matter. They must be translators of information for the benefit of users, speaking the languages of both business and technology fluently. They should never be just guard dogs of information. Ultimately, they should represent the interests of the business first, not wave around fictitious IT or data rules.

So-called specialists, who habitually spit out reasons why certain information must be locked away somewhere and why it should not be made available to users in a more user-friendly form, must snap out of their technical, analytical or mathematical comfort zone, pronto.

Techies who are that one-dimensional will be replaced by a machine in the near future.

The future belongs to people who can connect dots among different worlds and paradigms, not to some geeks with limited imaginations and skill sets that could become obsolete soon.

So, if self-preservation is an instinct that techies possess, they should figure out who is paying the bills, including their salaries and benefits, and make it absolutely easy for these end-users in all ways listed here. If not for altruistic reasons, for their own benefit in this results-oriented business world.

If information is not used by decision-makers, does the information really exist?

Don’t Do It Just Because You Can

Don’t do it just because you can. No kidding. By the way, I could have gone with Ben Parker’s “With great power comes great responsibility” line, but I didn’t, as it has become an over-quoted cliché. Plus, I’m not much of a fan of “Spiderman.” Actually, I’m kidding this time. (Not the “Spiderman” part, as I’m more of a fan of “Thor.”) But the real reason is that any geek with moderate coding skills, or any overzealous marketer with access to some data, can do real damage to real human beings without any superpowers to speak of. I wouldn’t go so far as to call it permanent damage, but I must say that some marketing messages and practices are really annoying and invasive. Enough to classify them as “junk mail” or “spam.” Yeah, I said that, knowing full well that those words are forbidden in the industry in which I built my career.

All jokes aside, I received a call from my mother a few years ago asking me if an “urgent” letter claiming her car warranty would expire if she did not act “right now” (along with a few exclamation marks) was something to which she must respond immediately. Many of us by now are impervious to such fake urgency and outrageous claims (like “You’ve just won $10,000,000!!!”). But I then realized that there still are plenty of folks who would spend their hard-earned dollars based on such misleading messages. What really made me mad, other than the fact that my own mother was involved, was that someone must have actually targeted her based on her age, ethnicity, housing value and, of course, the make and model of her automobile. I’ve been doing this job for too long to be unaware of the data variables and techniques that must have played a part for my mother to receive a series of such letters. Basically, some jerk must have created a segment that could be named “old and gullible.” Without a doubt, this is a classic example of what should not be done just because one can.

One might dismiss it as an isolated case of a questionable practice by questionable individuals with questionable moral integrity, but can we honestly say that? I, who know the ins and outs of direct marketing practices quite well, have fallen into traps more than a few times, where a supposedly one-time order mysteriously turned into a continuity program without my consent, followed by an extremely cumbersome cancellation process. Further, when I receive calls or emails from shady merchants with dubious offers, I can safely assume my information changed hands in very suspicious ways, if not through outright illegal routes.

Even without the criminal elements, as data become more ubiquitous and targeting techniques become more precise, an accumulation of seemingly inoffensive actions by innocuous data geeks can cause a big ripple in the offline (i.e., “real”) world. I am sure many of my fellow marketers remember the news about a reputable retail chain a few years ago: it accurately predicted pregnancy in households based on product purchase patterns and sent customized marketing messages featuring pregnancy-related products accordingly. It subsequently became a big controversy, as such a targeted message was how one particular head of household found out his teenage daughter was indeed pregnant. An unintended consequence? You bet.

I actually saw the presentation by the instigating statisticians at a predictive analytics conference before the whole incident hit the wire. At the time, the presenters were unaware of the consequences of their actions, so they proudly shared the methodologies they employed with the audience. But when I heard what they were actually trying to predict, I immediately turned my head to look at the lead statistician on my then-analytical team sitting next to me, and saw the same concerned look that I must have had on my own face. And our concern was definitely not about the techniques, as we knew how to do the same when provided with similar sets of data. It was about the human consequences that such a prediction could bring, not just to the eventual targets, but also to the predictors and their fellow analysts in the industry, who would all be lumped together as evil scientists by outsiders. In predictive analytics, there is a price for being wrong; and at times, there is a price to pay for being right, too. Like I said, we shouldn’t do things just because we can.

Analysts do not have superpowers individually, but when technology and ample amounts of data are conjoined, the results can be quite influential and powerful, much like the way bombs can be built with common materials available at any hardware store. Ironically, I have been evangelizing all this time that data and technology should be wielded together to make big and dumb data smaller and smarter. But providing answers to decision-makers in ready-to-use formats, hence “humanizing” the data, may have its downside, too. Simply put, “easy to use” can easily become “easy to abuse.” After all, humans are fallible creatures with ample amounts of greed and ambition. Even without any obvious bad intentions, it is sometimes very difficult to contemplate all the angles, especially regarding those sensitive and squeamish humans.

I talked about the social consequences of the data business last month (refer to “How to Be a Good Data Scientist”), and that is why I emphasized that anyone who is about to get into this data field must possess a deep understanding of both technology and human nature. That little sensor in your stomach that tells you “Oh, I have a bad feeling about this” may not come to everyone naturally, but we all need to be equipped with such safeguards, like angels on our shoulders.

Hindsight is always 20/20, but apparently, those smart analysts who did that pregnancy prediction only thought about the techniques and the bottom line, but did not consider all the human factors. And they should have. Or, if not them, their manager should have. Or their partners in the marketing department should have. Or their public relations people should have. Heck, “someone” in their organization should have, alright? Just like we do not casually approach a woman on the street who “seems” pregnant and say “You must be pregnant.” Only socially inept people would do that.

People consider certain matters extremely private, in case some data geeks didn’t realize that. If I might add, the same goes for ailments such as erectile dysfunction or constipation, or any other personal business related to body parts that are considered private. Unless you are a doctor in an examining room, don’t say things like “You look old, so you must have a hard time having sex, right?” It is already bad enough that we can’t even watch golf tournaments on TV without those commercials that assume golf fans need help in that department. (By the way, having “two” bathtubs “outside” the house at dusk doesn’t make any sense either, when the effect of the drug can last for hours, for heaven’s sake. Maybe the man lost interest because the tubs were too damn heavy?)

While it may vary from culture to culture, we all have some understanding of social boundaries in casual settings. When you are talking to a complete stranger on a plane ride, for example, you know exactly how much information you would feel comfortable sharing with that person. And when someone crosses the line, we call that person inappropriate, or “creepy.” Unfortunately, that creepy line is set differently for each person we encounter (I am sure people like George Clooney or Scarlett Johansson have a really high threshold for what might be considered creepy), but I think we can all agree that such a shady area can be loosely defined, at the least. Therefore, when we deal with large amounts of data affecting a great many people, imagine a rather large common area of such creepiness/shadiness, and do not ever cross it. In other words, when in doubt, don’t go for it.

Now, as a lifelong database marketer, I am not siding with the over-the-top privacy zealots either, as most of them do not understand the nature of data work and can’t tell the difference between informed (and mutually beneficial) messages and Big Brother-like nosiness. This targeting business is never about looking up an individual’s record one at a time, but about finding correlations between users and products and doing some good matchmaking in mass numbers. In other words, we don’t care what questionable sites anyone visits, and honest data players would not steal or abuse information with bad intent. I have heard about waiters who steal credit card numbers from their customers with swiping devices, but would you condemn the entire restaurant industry for that? Yes, there are thieves in any part of society, but not all data players are hackers, just like not all waiters are thieves. Statistically speaking, much like flying being the safest form of travel, I can even argue that handing over your physical credit card to a stranger is more dangerous than entering the number on a website. It only looks much worse when things go wrong, as such incidents affect a great many people all at once, just like when a plane crashes.

Years back, I used to frequent a Japanese restaurant near my office. The owner, who doubled as the head sushi chef, was not a nosy type. So he waited for more than a year to ask me what I did for a living. He had never heard anything about database marketing, direct marketing or CRM (no “Big Data” on the horizon at that time). So I had to find a simple way to explain what I do. As a sushi chef with some local reputation, I presumed that he would know the personal preferences of many frequently visiting customers (or “high-value customers,” as marketers call them). He would know exactly who likes what kind of fish and types of cuts, who doesn’t like raw shellfish, who is allergic to what, who has less of a tolerance for wasabi, or who would indulge in exotic fish roes. When I asked this question, his answer was a simple “yes.” Any diligent sushi chef would care for his or her customers that much. And I said, “Now imagine that you can provide such customized services to millions of people, with the help of computers and collected data.” He immediately understood the benefits of using data and analytics, and murmured, “Ah so …”

Now let’s turn the tables for a second here. From the customer’s point of view, yes, it is very convenient for me that my favorite sushi chef knows exactly how I like my sushi. The same goes for the local coffee barista who knows how you take your coffee every morning. Such knowledge is clearly mutually beneficial. But what if those business owners or service providers started asking about my personal finances, or about my grown daughter, in a “creepy” way? I wouldn’t care if they carried the best yellowtail in town or served the best cup of coffee in the world. I would cease all my interaction with them immediately. Sorry, they’ve just crossed that creepy line.

Years ago, I had more than a few chances to sit closely with Lester Wunderman, widely known as “The Father of Direct Marketing,” as the venture called I-Behavior in which I participated as one of the founders actually originated from an idea on a napkin from Lester and his friends. Having previously worked in an agency that still bears his name, and having only seen him behind a podium until I was introduced to him on one cool autumn afternoon in 1999, meeting him at a small round table and exchanging ideas with the master was like an unknown guitar enthusiast having a jam session with Eric Clapton. What was most amazing was that, at the beginning of the dot.com boom, he was completely unfazed about all those new ideas that were flying around at that time, and he was precisely pointing out why most of them would not succeed at all. I do not need to quote the early 21st century history to point out that his prediction was indeed accurate. When everyone was chasing the latest bit of technology for quick bucks, he was at least a decade ahead of all of those young bucks, already thinking about the human side of the equation. Now, I would not reveal his age out of respect, but let’s just say that almost all of the people in his age group would describe occupations of their offspring as “Oh, she just works on a computer all the time …” I can only wish that I will remain that sharp when I am his age.

One day, Wunderman very casually shared a draft of the “Consumer Bill of Rights for Online Engagement” with a small group of people who happened to be in his office. I was one of the lucky souls who heard about his idea firsthand, and I remember feeling that he was spot-on with every point, as usual. I read it again recently just as this Big Data hype is reaching its peak, just like the dot.com boom was moving with a force that could change the world back then. In many ways, such tidal waves do end up changing the world. But lest we forget, such shifts inevitably affect living, breathing human beings along the way. And for any movement guided by technology to sustain its velocity, people who are at the helm of the enabling technology must stay sensitive toward the needs of the rest of the human collective. In short, there is not much to gain by annoying and frustrating the masses.

Allow me to share Lester Wunderman’s “Consumer Bill of Rights for Online Engagement” verbatim, as it appeared in the second edition of his book “Being Direct”:

  1. Tell me clearly who you are and why you are contacting me.
  2. Tell me clearly what you are—or are not—going to do with the information I give.
  3. Don’t pretend that you know me personally. You don’t know me; you know some things about me.
  4. Don’t assume that we have a relationship.
  5. Don’t assume that I want to have a relationship with you.
  6. Make it easy for me to say “yes” and “no.”
  7. When I say “no,” accept that I mean not this, not now.
  8. Help me budget not only my money, but also my TIME.
  9. My time is valuable, don’t waste it.
  10. Make my shopping experience easier.
  11. Don’t communicate with me just because you can.
  12. If you do all of that, maybe we will then have the basis for a relationship!

So, after more than 15 years of the so-called digital revolution, how many of these are we violating almost routinely? Based on the look of my inboxes and sites that I visit, quite a lot and all the time. As I mentioned in my earlier article “The Future of Online is Offline,” I really get offended when even seasoned marketers use terms like “online person.” I do not become an online person simply because I happen to stumble onto some stupid website and forget to uncheck some pre-checked boxes. I am not some casual object at which some email division of a company can shoot to meet their top-down sales projections.

Oh, and good luck with that kind of mindless mass emailing; your base will soon be saturated and you will learn that irrelevant messages are bad for the senders, too. Proof? How is it that the conversion rate of a typical campaign did not increase dramatically during the past 40 years or so? Forget about open or click-through rate, but pay attention to the good-old conversion rate. You know, the one that measures actual sales. Don’t we have superior databases and technologies now? Why is anyone still bragging about mailing “more” in this century? Have you heard about “targeted” or “personalized” messages? Aren’t there lots and lots of toolsets for that?
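The distinction between the vanity rates and the rate that measures actual sales is simple arithmetic. A tiny sketch with made-up campaign numbers (all figures are invented for illustration):

```python
# Funnel metrics for a hypothetical email campaign. The counts are
# fabricated; the point is which ratio actually measures sales.
sent, opened, clicked, converted = 100_000, 18_000, 2_400, 240

open_rate = opened / sent           # looks healthy on a dashboard
click_rate = clicked / sent         # also easy to brag about
conversion_rate = converted / sent  # the number that pays the bills

print(f"open {open_rate:.1%}, click {click_rate:.1%}, "
      f"conversion {conversion_rate:.2%}")
```

Mailing “more” inflates the first two numbers long before it moves the third, which is why conversion is the honest yardstick.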

As the technology advances, it becomes that much easier and faster to offend people. If the majority of data handlers continue to abuse their power, stemming from the data in their custody, the communication channels will soon run dry. Or worse, if abusive practices continue, the whole channel could be shut down by some legislation, as we have witnessed in the downfall of the outbound telemarketing channel. Unfortunately, a few bad apples will make things a lot worse a lot faster, but I see that even reputable companies do things just because they can. All the time, repeatedly.

Furthermore, in this day and age of abundant data, not offending someone or not violating rules isn’t good enough. In fact, to paraphrase comedian Chris Rock, only losers brag about doing things that they are supposed to do in the first place. The direct marketing industry has long been bragging about the self-governing nature of its tightly knit (and often incestuous) network, but as tools get cheaper and sharper by the day, we all need to be even more careful wielding this data weaponry. Because someday soon, we as consumers will be seeing messages everywhere around us, maybe through our retinas directly, not just in our inboxes. Personal touch? Yes, in the creepiest way, if done wrong.

Visionaries like Lester Wunderman were concerned about the abusive nature of online communication from the very beginning. We should all read his words again, and think twice about social and human consequences of our actions. Google from its inception encapsulated a similar idea by simply stating its organizational objective as “Don’t be evil.” That does not mean that it will stop pursuing profit or cease to collect data. I think it means that Google will always try to be mindful about the influences of its actions on real people, who may not be in positions to control the data, but instead are on the side of being the subject of data collection.

I am not saying all of this out of some romantic altruism; rather, I am emphasizing the human side of the data business to preserve the forward momentum of the Big Data movement, even though I do not care for its name. Because I still believe, even from a consumer’s point of view, that a great amount of efficiency can be achieved by using data and technology properly. No one can deny that modern life in general is much more convenient thanks to them. We do not get lost on streets often, we can translate foreign languages on the fly, and we can talk to people on the other side of the globe while looking at their faces. We are much better informed about products and services that we care about, and we can look up and order anything we want while walking down the street. And heck, we get suggestions before we even think about what we need.

But we can think of many negative effects of data, as well. It goes without saying that data handlers must protect the data from falling into the wrong hands, which may have criminal intentions. Absolutely. That is like banks having to protect their vaults. Going a few steps further, if marketers want to retain the privilege of having ample amounts of consumer information and using such knowledge for their benefit, they must never cross that creepy line. If the Consumer’s Bill of Rights is too much for you to retain, just remember this one line: “Don’t be creepy.”

Smart Data – Not Big Data

As a concerned data professional, I am already plotting an exit strategy from this Big Data hype. Because like any bubble, it will surely burst. That inevitable doomsday could be a couple of years away, but I can feel it coming. At the risk of sounding too much like Yoda the Jedi Grand Master, all hypes lead to over-investments, all over-investments lead to disappointments, and all disappointments lead to blames. Yes, in a few years, lots of blames will go around, and lots of heads will roll.

So, why would I stay on the troubled side? Well, because, for now, this Big Data thing is creating lots of opportunities, too. I am writing this on my way back from Seoul, Korea, where I presented this Big Data idea nine times in just two short weeks, trotting from large venues to small gatherings. Just a few years back, I used to have a hard time explaining what I do for living. Now, I just have to say “Hey, I do this Big Data thing,” and the doors start to open. In my experience, this is the best “Open Sesame” moment for all data specialists. But it will last only if we play it right.

Nonetheless, I also know that I will somehow continue to make a living setting data strategies, fixing bad data, designing databases and leading analytical activities, even after the hype cools down. Just with a different title, under a different banner. I’ve seen buzzwords come and go, and this data business has been carried on by the people who cut through each hype (and the gargantuan amount of BS that comes along with it) and create real revenue-generating opportunities. At the end of the day (I apologize for using this cliché), it is all about the bottom line, whether it comes from a revenue increase or cost reduction. It is never about the buzzwords that may have created the business opportunities in the first place; it has always been about the substance that turned those opportunities into money-making machines. And substance needs no fancy title or buzzwords attached to it.

Have you heard Google or Amazon calling themselves “Big Data” companies? They are the ones with sick amounts of data, but they also know that it is not about the sheer amount of data; it is all about the user experience. “Wannabes” who do not understand the core values often hang onto buzzwords and hype. As if Big Data, Cloud Computing or the coding language du jour will come and save the day. But they are just words.

Even the name “Big Data” is all wrong, as it implies that bigger is always better. The 3 Vs of Big Data—volume, velocity and variety—are also misleading. That could be a meaningful distinction for existing data players, but for decision-makers, it gives the impression that size and speed are the ultimate quest. But for the users, small is better. They don’t have time to analyze big sets of data. They need small answers in fun-size packages. Plus, why are big and fast new? Since the invention of modern computers, has there been any year when processing speed did not get faster and storage capacity did not get bigger?

Lest we forget, it is the software industry that came up with this Big Data thing. It was created as a marketing tagline. We should have read it as, “Yes, we can now process really large amounts of data, too,” not as, “Big Data will make all your dreams come true.” If you are in the business of selling toolsets, of course, that is how you present your product. If guitar companies keep emphasizing how hard it is to be a decent guitar player, would that help their businesses? It is a lot more effective to say, “Hey, this is the same guitar that your guitar hero plays!” But you don’t become Jeff Beck just because you bought a white Fender Stratocaster with a rosewood neck. The real hard work begins “after” you purchase a decent guitar. However, this obvious connection is often lost in the data business. Toolsets never provide solutions on their own. They may make your life easier, but you’d still have to formulate the question in a logical fashion, and still have to make decisions based on provided data. And harnessing meanings out of mounds of data requires training of your mind, much like the way musicians practice incessantly.

So, before business people even consider venturing into this Big Data hype, they should ask themselves, “Why data?” What are the burning questions that you are trying to solve with the data? If you can’t answer this simple question, then don’t jump into it. Forget about it. Don’t get into it just because everyone else seems to be getting into it. Yeah, it’s a big party, but why are you going there? Besides, if you formulate the question properly, you will often find that you don’t need Big Data all the time. In fact, Big Data can be a terrible detour if your question can be answered by “small” data. But that happens all the time, because people approach their business questions through the processes set by the toolsets. Big Data should be about the business, not about the IT or the data.

Smart Data, Not Big Data
So, how do we get over this hype? All too often, perception rules, and a replacement word becomes necessary to summarize the essence of the concept for the general public. In my opinion, “Big Data” should have been “Smart Data.” Piles of unorganized dumb data aren’t worth a damn thing. Imagine a warehouse full of boxes with no labels, collecting dust since 1943. Would you be impressed with the sheer size of the warehouse? Great, the ark that Indiana Jones procured (or did he?) may be stored in there somewhere. But if no one knows where it is—or even if it can be located, if no one knows what to do with it—who cares?

Then, how do data get smarter? Smart data are bite-sized answers to questions. A thousand variables could have been considered to provide the weather forecast that calls for a “70 percent chance of scattered showers in the afternoon,” but that one line that we hear is the smart piece of data. Not the list of all the variables that went into the formula that created that answer. Emphasizing the raw data would be like giving paints and brushes to a person who wants a picture on the wall. As in, “Hey, here are all the ingredients, so why don’t you paint the picture and hang it on the wall?” Unfortunately, that is how the Big Data movement looks now. And too often, even the ingredients aren’t all that great.

I visit many companies only to find that the databases in question are just messy piles of unorganized and unstructured data. And please do not assume that such disarray is good for my business. I’d rather spend my time harnessing meanings out of data and creating value, not taking care of someone else’s mess all the time. Really smart data are small, concise, clean and organized. Big Data should only be seen in “Behind the Scenes” types of documentaries for maniacs, not for everyday decision-makers.

I have been saying for some time that Big Data must get smaller (refer to “Big Data Must Get Smaller”), and I will repeat it until it becomes a movement of its own. The Big Data movement must be about:

  1. Cutting down the noise
  2. Providing the answers

There is too much noise in the data, and cutting it out is the first step toward making the data smaller and smarter. The trouble is that the definition of “noise” is not static. The rock music that I grew up with was certainly noise to my parents’ generation. In turn, some music that my kids listen to is pure noise to me. Likewise, “product color,” which is essential for a database designed for an inventory management system, may or may not be noise if the goal is to sell more apparel items. In such cases, more important variables could be style, brand, price range, target gender, etc., while color could be just peripheral information at best, or even noise (as in, “Uh, she isn’t going to buy just red shoes all the time?”). How do we then determine the differences? First, set clear goals (as in, “Why are we playing with the data to begin with?”), define those goals using logical expressions, and let mathematics take care of the rest. Now you can drop the noise with conviction (even if it may look important to human minds).
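That goal-first, math-second recipe can be sketched in a few lines. This is a hypothetical toy example (every variable name and number here is invented, not from any real project): the goal is apparel purchases, and each candidate variable is kept or dropped based on its measured association with that goal.

```python
# Keep or drop candidate variables by measuring each one's association
# with the stated goal ("did the person buy?"), then cutting the weak ones.

def correlation(xs, ys):
    # Pearson correlation for two equal-length numeric lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Each record: candidate variables plus the outcome defined up front.
records = [
    {"style_score": 8, "price_range": 3, "color_code": 5, "bought": 1},
    {"style_score": 7, "price_range": 2, "color_code": 1, "bought": 1},
    {"style_score": 2, "price_range": 1, "color_code": 4, "bought": 0},
    {"style_score": 3, "price_range": 1, "color_code": 2, "bought": 0},
    {"style_score": 9, "price_range": 3, "color_code": 3, "bought": 1},
    {"style_score": 1, "price_range": 2, "color_code": 5, "bought": 0},
]

outcome = [r["bought"] for r in records]
candidates = ["style_score", "price_range", "color_code"]

# Keep variables whose absolute correlation with the goal clears a threshold.
kept = [v for v in candidates
        if abs(correlation([r[v] for r in records], outcome)) >= 0.3]
```

In this toy data, style and price range survive while color falls out as noise; in a real project the association measure and the threshold are modeling decisions, not constants.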

If we continue with that mathematical path, we would reach the second part, which is “providing answers to the question.” And the smart answers are in the forms of yes/no, probability figures or some type of scores. Like in the weather forecast example, the question would be “chance of rain on a certain day” and the answer would be “70 percent.” Statistical modeling is not easy or simple, but it is the essential part of making the data smarter, as models are the most effective way to summarize complex and abundant data into compact forms (refer to “Why Model?”).

Most people do not have degrees in mathematics or statistics, but they all know what to do with a piece of information such as “70 percent chance of rain” on the day of a company outing. Some may complain that it is not a definite yes/no answer, but all would agree that providing information in this form is more humane than dumping all the raw data onto users. Sales folks are not necessarily mathematicians, but they would certainly appreciate scores attached to each lead, as in “more or less likely to close.” No, that is not a definite answer, but now sales people can start calling the leads in the order of relative importance to them.

So, all the Big Data players and data scientists must try to “humanize” the data, instead of bragging about the size of the data, making things more complex, and providing irrelevant pieces of raw data to users. Make things simpler, not more complex. Some may think that complexity is their job security, but I strongly disagree. That is a sure way to bring this Big Data movement down to the ground. We are already living in a complex world, and we certainly do not need more complications around us (more on “How to be a good data scientist” in a future article).

It’s About the Users, Too
On the flip side, the decision-makers must change their attitude about the data, as well.

1. Define the goals first: The main theme of this series has been that the Big Data movement is about the business, not IT or data. But I’ve seen too many business folks who so willingly take a hands-off approach to data. They just fund the database; do not define clear business goals for developers; and hope to God that someday, somehow, some genius will show up and clear up the mess for them. Guess what? That cavalry is never coming if you are not even praying properly. If you do not know what problems you want to solve with data, don’t even get started; you will get nowhere, really slowly, bleeding lots of money and time along the way.

2. Take the data seriously: You don’t have to be a scientist to have a scientific mind. It is not ideal if someone blindly subscribes to anything computers spew out (there is a lot of inaccurate information in databases; refer to “Not All Databases Are Created Equal”). But too many people do not take data seriously and continue to follow their gut feelings. Even if a customer profile coming out of a serious analysis does not match your preconceived notions, do not blindly reject it; instead, treat it as a newly found gold mine. Gut feelings are even more overrated than Big Data.

3. Be logical: Illogical questions do not lead anywhere. There is no toolset that reads minds—at least not yet. Even if we get to have such amazing computers—as seen on “Star Trek” or in other science fiction movies—you would still have to ask questions in a logical fashion for them to be effective. I am not asking decision-makers to learn how to code (or be like Mr. Spock or his loyal follower, Dr. Sheldon Cooper), but to have some basic understanding of logical expressions and try to learn how analysts communicate with computers. This is not data geek vs. non-geek world anymore; we all have to be a little geekier. Knowing Boolean expressions may not be as cool as being able to throw a curve ball, but it is necessary to survive in the age of information overload.

4. Shoot for small successes: Start with a small proof of concept before fully investing in large data initiatives. Even with a small project, one gets to touch all necessary steps to finish the job. Understanding the flow of information is as important as each specific step, as most breakdowns occur in between steps, due to a lack of proper connections. There was a Gemini program before the Apollo missions. Learn how to dock spaceships in space before plotting a course to the moon. Often, over-investments are committed when the discussion is led by IT. Outsource even major components in the beginning, as the initial goal should be mastering the flow of things.

5. Be buyer-centric: No customer is bound by the channel of the marketer’s choice, and yet many businesses act exactly that way. No one is an online person just because she did not refuse your email promotions yet (refer to “The Future of Online is Offline”). No buyer is just one-dimensional. So get out of brand-, division-, product- or channel-centric mindsets. Even well-designed, buyer-centric marketing databases become ineffective if users are trapped in their channel- or division-centric attitudes, as in “These email promotions must flow!” or “I own this product line!” The more data we collect, the more chances marketers will gain to impress their customers and prospects. Do not waste those opportunities by imposing your own myopic views on them. The Big Data movement is not there to fortify marketers’ bad habits. Thanks to the size of the data and speed of machines, we are now capable of disappointing a lot of people really fast.
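The “basic understanding of logical expressions” called for in point 3 can start this small: a business question phrased as a Boolean filter. All field names and records below are hypothetical, purely for illustration.

```python
# A business question -- "active big spenders we may still contact" --
# written as a logical expression: spend over $200 AND NOT opted out.

customers = [
    {"id": 1, "spend_12m": 350, "opted_out": False, "state": "NY"},
    {"id": 2, "spend_12m": 120, "opted_out": False, "state": "NJ"},
    {"id": 3, "spend_12m": 500, "opted_out": True,  "state": "NY"},
]

eligible = [c["id"] for c in customers
            if c["spend_12m"] > 200 and not c["opted_out"]]
```

This is the level of logic a decision-maker needs to be able to read and verify; the analyst handles the rest.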

What Did This Hype Change?
So, what did this Big Data hype change? First off, it changed people’s attitudes about the data. Some are no longer afraid of large amounts of information being thrown at them, and some actually started using them in their decision-making processes. Many realized that we are surrounded by numbers everywhere, not just in marketing, but also in politics, media, national security, health care and the criminal justice system.

Conversely, some people became more afraid—often with good reasons. But even more often, people react based on pure fear that their personal information is being actively exploited without their consent. While data geeks are rejoicing in the age of open source and cloud computing, many more are looking at this hype with deep suspicion, and they boldly reject storing any personal data in those obscure “clouds.” There are some people who don’t even sign up for EZ Pass and voluntarily stay in the long lane to pay tolls the old, untraceable way.

Nevertheless, not all is lost in this hype. The data got really big, and types of data that were previously unavailable, such as mobile and social data, became available to many marketers. Focus groups are now the size of the Twitter following of a company or a subject matter. The collection rate of POS (point of sale) data has been increasing steadily, and some data players became virtuosi in using such fresh and abundant data to impress their customers (though some crossed that “creepy” line inadvertently). Different types of data are being used together now, and such merging activities will compound the predictive power even further. Analysts are dealing with less missing data, though no dataset would ever be totally complete. Developers in open source environments are now able to move really fast with new toolsets that run on any device. Simply, things that our forefathers of direct marketing used to take six months to complete can be done in a few hours, and in the near future, maybe within a few seconds.

And that may be a good thing and a bad thing. If we do this right, without creating too many angry consumers and without burning holes in our budgets, we are currently in a position to achieve a great many things in terms of predicting the future and making everyone’s lives a little more convenient. If we screw it up badly, we will end up creating lots of angry customers by abusing sensitive data and, at the same time, wasting a whole lot of investors’ money. Then this Big Data thing will go down in history as a great money-eating hype.

We should never do things just because we can; data is a powerful tool that can hurt real people. Do not even get into it if you don’t have a clear goal in terms of what to do with the data; it is not some piece of furniture that you buy just because your neighbor bought it. Living with data is a lifestyle change, and it requires a long-term commitment; it is not some fad that you try once and give up. It is a continuous loop where people’s responses to marketer’s data-based activities create even more data to be analyzed. And that is the only way it keeps getting better.

There Is No Big Data
And all that has nothing to do with “Big.” If done right, small data can do plenty. And in fact, most companies’ transaction data for the past few years would easily fit in an iPhone. It is about what to do with the data, and that goal must be set from a business point of view. This is not just a new playground for data geeks, who may care more for new hip technologies that sound cool in their little circle.

I recently went to Brazil to speak at a data conference called QIBRAS, and I was pleasantly surprised that its main theme was the quality of the data, not the size of the data. Well, at least somewhere in the world, people are approaching this whole thing without the “Big” hype. And if you look around, you will not find any successful data players calling this thing “Big Data.” They just deal with small and large data as part of their businesses. There is no buzzword, fanfare or big banner there. Because when something is just part of your everyday business, you don’t even care what you call it. You just do. And to those masters of data, there is no Big Data. If Google all of a sudden started calling itself a Big Data company, it would be so uncool, as that word would seriously limit it. Think about that.

Big Data Must Get Smaller

Like many folks who worked in the data business for a long time, I don’t even like the words “Big Data.” Yeah, data is big now, I get it. But so what? Faster and bigger have been the theme in the computing business since the first calculator was invented. In fact, I don’t appreciate the common definition of Big Data that is often expressed in the three Vs: volume, velocity and variety. So, if any kind of data are big and fast, it’s all good? I don’t think so. If you have lots of “dumb” data all over the place, how does that help you? Well, as much as all the clutter that’s been piled on in your basement since 1971. It may yield some profit on an online auction site one day. Who knows? Maybe some collector will pay good money for some obscure Coltrane or Moody Blues albums that you never even touched since your last turntable (Ooh, what is that?) died on you. Those oversized album jackets were really cool though, weren’t they?

Seriously, the word “Big” only emphasizes the size element, and that is a sure way to miss the essence of the data business. And many folks are missing even that little point by calling all decision-making activities that involve even small-sized data “Big Data.” It is entirely possible that this data stuff seems all new to someone, but the data-based decision-making process has been with us for a very long time. If you use that “B” word to differentiate between the old-fashioned data analytics of yesteryear and the ridiculously large datasets of the present day, yes, that is a proper usage of it. But we all know most people do not mean it that way. One side benefit of this bloated and hyped-up buzzword is that data professionals like myself no longer have to spend 20 minutes explaining what we do for a living: we simply utter the words “Big Data,” though that is a lot like a grandmother declaring that all her grandchildren work on computers for a living. Better yet, that magic “B” word sometimes opens doors to new business opportunities (or at least a chance to grab a microphone in non-data-related meetings and conferences) that data geeks of the past never dreamed of.

So, I guess it is not all that bad. But lest we forget, all hypes lead to overinvestments, all overinvestments lead to disappointments, and all disappointments lead to the purging of related personnel and vendors that bear that hyped-up dirty word in their titles or division names. If this Big Data stuff does not yield significant profit (or reduction in cost), I am certain that those investment bubbles will burst soon enough. Yes, some data folks may be lucky enough to milk it for another two or three years, but brace for impact if all those collected data do not lead to some serious dollar signs. I know that storage and processing costs have decreased significantly in recent years, but they ain’t totally free, and related man-hours aren’t exactly cheap, either. Also, if this whole data business is a new concept to an organization, any money spent on the promise of Big Data easily becomes a liability for the reluctant bunch.

This is why I open up my speeches and lectures with this question: “Have you made any money with this Big Data stuff yet?” Surely, you didn’t spend all that money to provide faster toys and nicer playgrounds to IT folks? Maybe the head of IT had some fun with it, but let’s ask that question of CFOs, not CTOs, CIOs or CDOs. I know some colleagues (i.e., fellow data geeks) who are already thinking about a new name for this—“decision-making activities, based on data and analytics”—because many of us will still be doing that “data stuff” even after Big Data ceases to be cool come judgment day. Yeah, that Gangnam Style dance was fun for a while, but who still jumps around like a horse?

Now, if you ask me (though nobody did yet), I’d say Big Data should have been “Smart Data,” “Intelligent Data” or something to that extent. Because data must provide insights. Answers to questions. Guidance to decision-makers. To data professionals, piles of data—especially ones that are fragmented, unstructured and unformatted, no matter what fancy names the operating system and underlying database technology may bear—are just a good start. For non-data-professionals, unrefined data—whether big or small—would remain distant and obscure. Offering mounds of raw data to end-users is like providing a painting kit when someone wants a picture on the wall. Bragging about the size of the data with impressive-sounding new measurements that end with “bytes” is like counting grains of rice in California in front of a hungry man.

Big Data must get smaller. People want yes/no answers to their specific questions. If such clarity is not possible, probability figures to such questions should be provided; as in, “There’s an 80 percent chance of thunderstorms on the day of the company golf outing,” “An above-average chance to close a deal with a certain prospect” or “Potential value of a customer who is repeatedly complaining about something on the phone.” It is about easy-to-understand answers to business questions, not a quintillion bytes of data stored in some obscure cloud somewhere. As I stated at the end of my last column, the Big Data movement should be about (1) Getting rid of the noise, and (2) Providing simple answers to decision-makers. And getting to such answers is indeed the process of making data smaller and smaller.

In my past columns, I talked about the benefits of statistical models in the age of Big Data, as they are the best way to compact big and complex information into the form of simple answers (refer to “Why Model?”). Models built to predict (or point out) who is more likely to be into outdoor sports, to be a risk-averse investor, to go on a cruise vacation, to be a member of a discount club, to buy children’s products, to be a big-time donor or to be a NASCAR fan are all providing specific answers to specific questions, while each model score is the result of a serious reduction of information, often compressing thousands of variables into one answer. That simplification process in itself provides incredible value to decision-makers, as most wouldn’t know where to cut out unnecessary information to answer specific questions. Using mathematical techniques, we can cut down the noise with conviction.

In model development, “Variable Reduction” is the first major step after the target variable is determined (refer to “The Art of Targeting”). It is often the most rigorous and laborious exercise in the whole model development process, and it is where the characteristics of a model are often determined, as each statistician has his or her own unique approach to it. Now, I am not about to initiate a debate about the best statistical method for variable reduction (I haven’t met two statisticians who completely agree with each other in terms of methodologies), but I happen to know that many effective statistical analysts separate variables by data type and treat them differently. In other words, not all data variables are created equal. So, what are the major types of data that database designers and decision-makers (i.e., non-mathematical types) should be aware of?
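As a deliberately simplified illustration of variable reduction (one approach among the many that statisticians argue over, with invented toy data): drop a candidate variable when it is nearly a duplicate of one already kept.

```python
# Pairwise-correlation screen: keep a variable only if it is not almost
# perfectly correlated with a variable we have already kept.

def correlation(xs, ys):
    # Pearson correlation for two equal-length numeric lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Hypothetical candidates; "income_k" and "home_value_k" move together.
variables = {
    "income_k":     [40, 55, 90, 120, 60],
    "home_value_k": [210, 260, 450, 600, 300],
    "age":          [25, 62, 41, 38, 55],
}

kept = []
for name, values in variables.items():
    # Drop a variable if it is nearly a duplicate of one already kept.
    if all(abs(correlation(values, variables[k])) < 0.95 for k in kept):
        kept.append(name)
```

Here home value falls out because income already carries almost the same signal; real variable reduction weighs far more than pairwise correlation, but the spirit is the same: fewer, less redundant inputs.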

In the business of predictive analytics for marketing, the following three types of data make up three dimensions of a target individual’s portrait:

  1. Descriptive Data
  2. Transaction Data / Behavioral Data
  3. Attitudinal Data

In other words, if we get to know all three aspects of a person, it will be much easier to predict what the person is about and/or what the person will do. Why do we need these three dimensions? If an individual has a high income and is living in a highly valued home (demographic element, which is descriptive); and if he is an avid golfer (behavioral element often derived from his purchase history), can we just assume that he is politically conservative (attitudinal element)? Well, not really, and not all the time. Sometimes we have to stop and ask what the person’s attitude and outlook on life is all about. Now, because it is not practical to ask everyone in the country about every subject, we often build models to predict the attitudinal aspect with available data. If you got a phone call from a political party that “assumes” your political stance, that incident was probably not random or accidental. Like I emphasized many times, analytics is about making the best of what is available, as there is no such thing as a complete dataset, even in this age of ubiquitous data. Nonetheless, these three dimensions of the data spectrum occupy a unique and distinct place in the business of predictive analytics.

So, in the interest of obtaining, maintaining and utilizing all possible types of data—or, conversely, of reducing the size of the data with conviction by knowing what to ignore—let us dig a little deeper:

Descriptive Data
Generally, demographic data—such as people’s income, age, number of children, housing size, dwelling type, occupation, etc.—fall under this category. For B-to-B applications, “firmographic” data—such as number of employees, sales volume, year started, industry type, etc.—would be considered descriptive data. It is about what the targets “look like” and, generally, it is frozen in the present time. Many prominent data compilers (or data brokers, as the U.S. government calls them) collect, compile and refine the data and make hundreds of variables available to users in various industry sectors. They also fill in the blanks using predictive modeling techniques. In other words, the compilers may not know the income range of every household, but using statistical techniques and other available data—such as age, home ownership, housing value and many other variables—they provide their best estimates in case of missing values. People often have an allergic reaction to such data compilation practices, citing privacy concerns, but these types of data are not about looking up one person at a time; they are about analyzing and targeting groups (or segments) of individuals and households. In terms of predictive power, they are quite effective, and the results are very consistent. The best part is that most of the variables are available for every household in the country, whether actual or inferred.
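A bare-bones sketch of how that “fill in the blanks” step might work, with entirely made-up households: when a household’s income is unknown, infer it from households with similar known attributes. Real compilers use far more variables and proper statistical models; this nearest-neighbor average is purely illustrative.

```python
# Households where income is already known (all numbers invented).
known = [
    {"age": 34, "home_value_k": 250, "income_k": 60},
    {"age": 52, "home_value_k": 480, "income_k": 110},
    {"age": 29, "home_value_k": 220, "income_k": 55},
    {"age": 58, "home_value_k": 520, "income_k": 125},
]

def impute_income(age, home_value_k, k=2):
    # Estimate as the average income of the k most similar known households.
    ranked = sorted(known, key=lambda h: abs(h["age"] - age)
                    + abs(h["home_value_k"] - home_value_k))
    return sum(h["income_k"] for h in ranked[:k]) / k

# A household with missing income gets a best estimate, not a blank.
estimate = impute_income(age=55, home_value_k=500)
```

The point is exactly the one in the paragraph above: the value is inferred for a segment-level purpose, not looked up for one identified person.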

Other types of descriptive data include geo-demographic data, and the Census Data from the U.S. Census Bureau fall under this category. These datasets are organized by geographic denominations such as Census Block Group, Census Tract, County or ZIP Code Tabulation Area (ZCTA, much like postal ZIP codes, but not exactly the same). Although they are not available on an individual or household level, the Census data are very useful in predictive modeling, as every target record can be enhanced with them, even when name and address are not available, and the data themselves are very stable. The downside is that while the datasets are free through the Census Bureau, the raw datasets contain more than 40,000 variables. Plus, due to budget cuts and changes in survey methods during the past decade, the sample size (yes, they sample) decreased significantly, rendering some variables useless at lower geographic denominations, such as the Census Block Group. There are professional data companies that have narrowed down the list of variables to manageable sizes (300 to 400 variables) and filled in the missing values. Because they are geo-level data, variables come in the form of percentages, averages or median values of elements such as gender, race, age, language, occupation, education level, real estate value, etc. (as in percent male, percent Asian, percent white-collar professionals, average income, median school years, median rent, etc.).
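Geo-level enhancement, in miniature: every target record carries a geographic key, and the census-style variables for that geography are appended to it. The block-group codes and figures below are made up for illustration; real keys come from address geocoding.

```python
# Geo-level variables keyed by (hypothetical) Census Block Group code.
census_by_block_group = {
    "360610112001": {"pct_college": 0.62, "median_income_k": 95},
    "360610113002": {"pct_college": 0.38, "median_income_k": 58},
}

# Target records need only a geographic key -- no name or address required.
targets = [
    {"id": "A", "block_group": "360610112001"},
    {"id": "B", "block_group": "360610113002"},
]

# Append the geo-level variables to each individual record.
enhanced = [{**t, **census_by_block_group[t["block_group"]]} for t in targets]
```

This is why geo-demographic data enhance every record: the join key is geography, which is available even when personal identity is not.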

There are many instances where marketers cannot pinpoint the identity of a person due to privacy issues or challenges in data collection, and the Census data serve as an effective substitute for individual- or household-level demographic data. In predictive analytics, duller variables that are available nearly all the time are often more valuable than precise information with limited availability.
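The enhancement step described above—attaching geo-level variables to a target record even when no name is available—amounts to a simple lookup keyed by geography. The ZIP codes, variable names and values below are invented for illustration.

```python
# Hypothetical geo-level enhancement: enrich target records with
# Census-style neighborhood variables keyed by ZIP. All values invented.

census_by_zip = {
    "10001": {"pct_college": 0.62, "median_income": 78_000},
    "30301": {"pct_college": 0.41, "median_income": 52_000},
}

targets = [
    {"customer_id": 1, "zip": "10001"},   # no name or address needed
    {"customer_id": 2, "zip": "30301"},
]

# Merge each target with its neighborhood profile (empty dict if no match).
enhanced = [{**t, **census_by_zip.get(t["zip"], {})} for t in targets]
print(enhanced[0]["median_income"])  # → 78000
```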

Transaction Data/Behavioral Data
While descriptive data are about what the targets look like, behavioral data are about what they actually did. Often, behavioral data come in the form of transactions, so many simply call them transaction data. What marketers commonly refer to as RFM (Recency, Frequency and Monetary) data fall under this category. In terms of predictive power, they are truly at the top of the food chain. Yes, we can build models to guess who potential golfers are with demographic data, such as age, gender, income, occupation, housing value and other neighborhood-level information, but if you get to “know” that someone buys a box of golf balls every six weeks or so, why guess? Further, models built with transaction data can even predict the nature of future purchases, in terms of monetary value and frequency intervals. Unfortunately, many who have access to RFM data use them only for rudimentary filtering, as in “select everyone who spent more than $200 in the gift category during the past 12 months,” or something like that. But we can do so much more with rich transaction data at every stage of the marketing life cycle: prospecting, cultivating, retaining and winning back.
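The RFM idea itself fits in a few lines of code; the transactions and the reference date below are made up for illustration, and a real implementation would of course compute this per category and at database scale.

```python
# Toy RFM computation from raw transactions. Records are invented.
from datetime import date

transactions = [
    {"customer": "A", "date": date(2013, 3, 1),  "amount": 120.0},
    {"customer": "A", "date": date(2013, 6, 15), "amount": 95.0},
    {"customer": "B", "date": date(2012, 9, 2),  "amount": 40.0},
    {"customer": "B", "date": date(2013, 6, 30), "amount": 300.0},
]
today = date(2013, 7, 1)

def rfm(customer):
    txns = [t for t in transactions if t["customer"] == customer]
    recency = min((today - t["date"]).days for t in txns)  # days since last purchase
    frequency = len(txns)                                  # number of purchases
    monetary = sum(t["amount"] for t in txns)              # total spend
    return recency, frequency, monetary

print(rfm("A"))  # → (16, 2, 215.0)
print(rfm("B"))  # → (1, 2, 340.0)
```

The rudimentary filter quoted above is just `monetary > 200` on one category; the richer uses come from treating all three dimensions, and their trends over time, as model inputs.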

Other types of behavioral data include non-transaction data, such as click data, page views, abandoned shopping baskets or movement data. This type of behavioral data is getting a lot of attention because it is truly “big.” Before the emergence of new technology to capture and store them, such data were out of reach for most decision-makers. Nevertheless, in terms of predictive power, they are not as strong as real transaction data. Non-transaction data may provide directional guidance, as they are what some data geeks call “a-camera-on-everyone’s-shoulder” data. But we all know that there is a clear dividing line between people’s intentions and their commitments. And it can be very costly to follow every breath you take, every move you make and every step you take. Due to their distinct characteristics, transaction data and non-transaction data must be managed separately. And if used together in models, they should be clearly labeled, so analysts never treat them the same way by accident. You really don’t want to mix intentions and commitments.

The trouble with behavioral data is that (1) they are difficult to compile and manage; (2) they get big, sometimes really big; (3) they are generally confined within divisions or companies; and (4) they are not easy to analyze. In fact, most of the examples I used in this series are about transaction data. Now, No. 3 here can be really troublesome, as it equates to availability (or lack thereof). Yes, you may know everything that happened with your customers, but do you know where else they are shopping? Fortunately, there are co-op companies that can answer that question, as they compile transaction data across multiple merchants and sources. And combined data can be exponentially more powerful than data in silos. Because transaction data are not always available for every person in a database, analysts often combine behavioral data and descriptive data in their models. Transaction data usually become the dominant predictors in such cases, while descriptive data play supporting roles, filling in the gaps and smoothing out the predictive curves.
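The gap-filling idea—behavioral data dominating when present, descriptive data stepping in when they are not—might look like this in miniature. The scoring rule and its weights are purely illustrative, not a real model.

```python
# Illustrative sketch of descriptive data "filling in the gaps" when
# transaction data are missing. Fields and weights are invented.

def spend_score(record):
    """Prefer actual 12-month spend; fall back to a demographic estimate."""
    if record.get("spend_12m") is not None:
        return record["spend_12m"]            # behavioral: dominant predictor
    # Descriptive fallback: crude estimate from income and home ownership.
    return record["income"] / 500 + 50 * record["owns_home"]

customers = [
    {"id": 1, "spend_12m": 420.0, "income": 60_000, "owns_home": 1},
    {"id": 2, "spend_12m": None,  "income": 90_000, "owns_home": 1},
]
print([spend_score(c) for c in customers])  # → [420.0, 230.0]
```

In a real regression or tree-based model the blending happens inside the model rather than in an explicit if/else, but the division of labor is the same.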

As I have stated repeatedly, predictive analytics in marketing is all about finding out (1) whom to engage, and (2) once you decide to engage someone, what to offer to that person. There are supermarket chains that, using carefully collected transaction data for most of their customers, have achieved 100 percent customization of their coupon books. That means no two coupon books are exactly the same, which is quite an impressive accomplishment. That is all transaction data in action, and it is a great example of “Big Data” (or rather, “Smart Data”).

Attitudinal Data
In the past, attitudinal data came from surveys, primary research and focus groups. Now, virtually all social media channels function as gigantic focus groups. Through places such as Facebook, Twitter and other social media networks, people freely volunteer what they think and feel about certain products and services, and many marketers are learning how to “listen” to them. Sentiment analysis falls under this category of analytics, and many automatically think of this type of analytics when they hear “Big Data.”
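The simplest form of the “listening” described above is lexicon-based sentiment scoring: count positive and negative words and net them out. Real systems use far larger lexicons or trained models; the tiny word lists below are invented for illustration.

```python
# Crude lexicon-based sentiment sketch. Word lists are illustrative only.

POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "broken"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))   # → positive
print(sentiment("The service was terrible"))    # → negative
```

Even this toy version shows why the approach scales to social feeds—it needs no knowledge of who wrote the text—and also why it struggles with sarcasm, negation and context.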

The trouble with social data is:

  1. We often do not know who’s behind the statements in question, and
  2. They are in silos, and it is not easy to combine such data with transaction or demographic data, because the identities of their sources are unknown.

Yes, we can see that a certain political candidate is trending high after an impressive speech, but how would we connect that piece of information to the people who will actually donate money to the candidate’s causes? If we can find out “where” the target is via an IP address and related ZIP codes, we may be able to connect the voter to geo-demographic data, such as the Census. But, generally, personally identifiable information (PII) is accessible only to the data compilers, if they even bothered to collect it.

Therefore, most such studies are on a macro level, citing trends and directions, and the analysts in that field are quite different from the micro-level analysts who deal with behavioral and descriptive data. The former provide important insights into the “why” part of the equation, which is often the hardest thing to predict, while the latter provide answers to “who, what, where and when.” (“Who” is the easiest to answer, and “when” is the hardest.) The “why” part may dictate the product development side of the decision-making process at the conceptual stage (as in, “Why would customers care for a new type of dishwasher?”), while “who, what, where and when” are more about selling the developed products (as in, “Let’s sell those dishwashers in the most effective ways.”). So, it can be argued that these different types of data call for different types of analytics at different stages of the decision-making process.

Obviously, there are more types of data out there. But for marketing applications dealing with humans, these three types of data complete the buyers’ portraits. Depending on what marketers are trying to do with the data, they can prioritize where to invest first and what to ignore (for now). If they are early in the marketing cycle, trying to develop a new product for the future, they need to understand why people want something and behave in certain ways. If signing up as many new customers as possible is the immediate goal, finding out who and where the ideal prospects are becomes the most pressing task. If maximizing customer value is the ongoing objective, then you’d better start analyzing transaction data more seriously. If preventing attrition is the goal, then you will have to line up the transaction data in time-series format for further analysis.

The business goals must dictate the analytics, the analytics call for specific types of data to meet those goals, and the supporting datasets should be in “analytics-ready” formats. Not the other way around, where businesses are dictated by the limitations of analytics, and analytics are hampered by inadequate, cluttered data. That business-oriented hierarchy should be the main theme of effective data management; with clear goals and a proper data strategy, you will know where to invest first and what data to ignore as a decision-maker, not necessarily as a mathematical analyst. And that is the first step toward making Big Data smaller. Don’t be impressed by the sheer size of the data: volume often blurs the big picture, and not all data are created equal.

Winner of the 2012 Presidential Election: Data

Now that the contentious 2012 election has finally ended, we get a chance to look back and assess what happened and why. Regardless of who you voted for, it’s impossible not to acknowledge that the real winner of the 2012 election was data.

For the first time in history, this election demonstrated the power of analytics and number crunching in politics. What I find most remarkable is how rapid the change has been. Just a few years ago, Karl Rove was widely regarded as the political mastermind of the universe. Rove’s primary innovation was the use of highly targeted direct mail campaigns to get out the evangelical and rural vote and win the 2004 election for George W. Bush. Fast-forward a few short years, and not only did Rove’s candidate lose, but the master strategist was reduced to challenging his own network’s numbers geeks live on the air, only to be rebuffed.

In every way, the old guard was bested by a new generation of numbers crunchers, nerds and data geeks who leveraged data science, analytics, predictive modeling and a highly sophisticated online marketing campaign to poll, raise money and get out the vote in an unprecedented manner.

On the subject of polling, I was intrigued by Nate Silver’s incredibly accurate FiveThirtyEight blog, which used a sophisticated system to synthesize dozens of national polls into a rolling average to predict the actual election results. In the run-up to the election, he even received a lot of flak from various pundits who claimed he was wrong, based on their perceptions of voter “enthusiasm,” “momentum” and other non-scientific observations. At the end of the day, however, data won out over hot air and punditry, big time. Silver’s final tally was absolutely dead-on, crushing most other national polls by a wide margin.
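The general idea of synthesizing many polls into one estimate can be sketched as a sample-size-weighted rolling average. To be clear, this is not Silver’s actual methodology (which also adjusts for pollster house effects, among other things); the polls, numbers and window below are invented.

```python
# Toy poll aggregation: sample-size-weighted average over a recent window.
# All poll names and figures are invented for illustration.
from datetime import date

polls = [  # (pollster, date, candidate share %, sample size)
    ("Poll A", date(2012, 11, 1), 50.0, 800),
    ("Poll B", date(2012, 11, 3), 48.5, 1200),
    ("Poll C", date(2012, 11, 5), 51.0, 1000),
]

def weighted_average(polls, as_of, window_days=7):
    """Average recent polls, weighting each by its sample size."""
    recent = [(share, n) for _, d, share, n in polls
              if 0 <= (as_of - d).days <= window_days]
    total_n = sum(n for _, n in recent)
    return sum(share * n for share, n in recent) / total_n

print(round(weighted_average(polls, date(2012, 11, 6)), 2))  # → 49.73
```

Even this crude average illustrates the core insight: many noisy polls combined are far more stable than any single poll, however venerable its brand.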

I especially love his Nov. 10 post, in which Silver analyzes the various polls and shows which ones fared the best and which ones weren’t worth the paper they were printed on. It’s shocking to see that the Gallup Poll—in many people’s minds the oldest and most trusted name in polling—skewed Republican by a whopping 7.2 points when averaged across all 11 of its polls. Ouch. For an organization that specializes in polling, its long-term viability must be called into question at this point.

One thing I find highly interesting is that when you examine the various polls’ methodologies, it’s not too surprising that Gallup fell flat on its face, as it relied on live phone surveys as its primary polling method. Considering that many young, urban and minority voters don’t have a landline and only have a cellphone, it doesn’t take a rocket scientist to conclude that any poll that doesn’t include a large number of cellphones in its sample is going to skew wildly Republican … which is exactly what happened to Gallup, Rasmussen and several other prominent national polls.

Then there’s the Obama campaign’s incredible Get Out The Vote (GOTV) machine, which turned out more people in more places than anyone could have predicted. There’s no doubt that, for data-driven marketers, the 2012 U.S. election was a watershed moment in history.

According to a recent article in Time titled “Inside the Secret World of the Data Crunchers Who Helped Obama Win,” the secret sauce behind Obama’s big win was a massive data effort that helped him raise $1 billion, remade the process of targeting TV ads, and created detailed models of swing-state voters that could be used to increase the effectiveness of everything from phone calls and door-knocks to direct mailings and social media.

What’s especially interesting is that, much like a tech company, Obama’s campaign had a large in-house team of geeks, data scientists and online marketers. Composed of elite and senior tech talent from Twitter, Google, Facebook, Craigslist and Quora, the team enabled the campaign to turn out more volunteers and donors than it had in 2008, mostly by making it simpler and easier for anyone to engage with the President’s reelection effort. If you’d like to read more about it, there’s a great article recently published in The Atlantic titled “When the Nerds Go Marching In” that describes the initiative in great detail.

Well, it looks like I’m out of space. One thing’s for sure, though: I’m going to be very interested to see what happens in coming elections as these practices become more mainstream and the underlying techniques are further refined.

If you have any observations about the use of data and analytics in the election you’d like to share, please let me know in your comments.

—Rio