Evangelizing Analytics Through Baseball

A great many people are simply allergic to mathematics, maybe even more so than to public speaking. They just hate the subject, and the very thought of it gives them a big headache. In many cases, though, I have to blame their math teachers for not instilling enough appreciation for the subject in their youth, as the same people have no trouble understanding the baseball stats of their favorite teams and players.

What do you think ERA (Earned Run Average) really is? It is nothing but an index value made of a numerator and a denominator, multiplied by a factor. If you can paint the quality of a baseball pitcher with a bunch of statistics and indices like that, then yes, you do have a basic aptitude to be analytical. Maybe not enough to be a professional analyst, but enough to be a consumer of analytics.
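
For the record, the formula is simple enough to work out on a napkin:

ERA = (earned runs ÷ innings pitched) × 9

Earned runs are the numerator, innings pitched is the denominator, and nine, the number of innings in a regulation game, is the factor.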

In the near future, that kind of basic aptitude may be all we need to navigate this complex world, woven from numbers and figures originating from humans, machines and the networks that connect them all.

All the headachy equational problems will be taken care of by the smart machines anyway, right? Maybe.

The way this old analyst sees it, the answer could be yes or no. Because there is no way for any machine to provide good answers to illogical questions (as Mr. Spock would point out). And a logical mind comes from mathematical training, with a little help from the DNA with which the subject was born.

Why do I worry about such things now? Simple. I see too many decision-makers who say they must get more into analytics, while their behaviors say otherwise. It is unfortunate for them, as the verdict is already in on the effectiveness of good analytics. In fact, the question is no longer whether an organization must embrace more analytics-based decision-making processes, but how deep and complex it must get into them. The winners and losers in the business world will not be determined solely by their business models, but by the effectiveness of execution, enhanced and measured by analytics. Gut feelings may have worked well for many early in the last century, but they won’t be enough when competitors are armed with data and analytical toolsets.

We are undoubtedly living in a complex world now. The difference between people who freely wield technology and toolsets and people who are afraid of change won’t just be a matter of income levels; it may even form new social classes. And yet, the way many are dealing with perceived and real challenges isn’t much different from the past era. No, you can’t just work hard and hope that everything will be alright. There are people who see what is coming before anyone else does. Not all of them may be able to “see” it completely per se, but some are better at predicting the future than others. And those who properly employ predictive analytics will clearly have an edge over those who don’t.

Then, why is it so difficult to “sell” analytics? There are many reasons. The first one, I think, is the fear of the unknown (or of unfamiliar territory). People hate to spend money and resources on things they don’t understand. The majority of the population does not understand how the internal combustion engine works, so the car companies sell coolness and other perceived benefits of their products.

Unfortunately, there is absolutely nothing sexy to the general population (the geeks are another story) about algorithms that may increase sales and reduce costs. So, the analysts must emphasize the benefits of it all; yet too many fall into the trap of believing that everyone will appreciate the beauty of the solution they agonized over. Well, most people simply don’t care about the details. That is why engineers don’t sell cars; salespeople do. Analysts must get to the point fast, possibly within a minute, as most people don’t have the patience for anyone’s mathematical journey.

Another reason why selling the concept of analytics is difficult is collective resistance to change. For most people, change is scary; and even when it’s not, it’s terribly inconvenient. All organizations, and the people in them, are accustomed to existing ways of doing business. Analytics inevitably provokes changes in existing behaviors. It may start with a simple request for some extra data collection (“What do you mean I have to enter more data into my Salesforce account?”), and ultimately move on to changes in decision-making processes (“Oh, now I have to look at those model scores while I’m on the phone with the customer?”).

How to Be a Good Data Scientist

I guess no one wants to be a plain “Analyst” anymore; now “Data Scientist” is the title of the day. Then again, I never thought that there was anything wrong with titles like “Secretary,” “Stewardess” or “Janitor,” either. But somehow, someone decided “Administrative Assistant” should replace “Secretary” completely, and that someone was very successful in that endeavor. So much so that people actually get offended when they are called “Secretaries.” The same goes for “Flight Attendants.” If you want an extra bag of peanuts or a whole can of soda with ice on the side, do not dare call any service personnel by the outdated title. The verdict is still out on the title “Janitor,” as it could be replaced by “Custodial Engineer,” “Sanitary Engineer,” “Maintenance Technician,” or anything that gives the impression that the job requirement includes a degree in engineering. No matter. When the inflation-adjusted income of salaried workers is decreasing, I guess the number of words in the job title should go up instead. Something’s got to give, right?

Please do not ask me to be politically correct here. As an openly Asian person in America, I am not even sure why I should be offended when someone addresses me as an “Oriental.” Someone explained it to me a long time ago. The word is reserved for “things,” not for people. OK, then. I will be offended when someone knowingly addresses me as an Oriental, now that the memo has been out for a while. So, do me this favor and do not call me an Oriental (at least in front of my face), and I promise that I will not call anyone an “Occidental” in return.

In any case, anyone who touches data for a living now wants to be called a Data Scientist. Well, the title is longer than one word, and that is a good start. Did anyone get a raise along with that title inflation? I highly doubt it. But I’ve noticed the qualifications have gotten much longer and more complicated.

I have seen some job requirements for data scientists that call for “all” of the following qualifications:

  • A master’s degree in statistics or mathematics; able to build statistical models proficiently using R or SAS
  • Strong analytical and storytelling skills
  • Hands-on knowledge in technologies such as Hadoop, Java, Python, C++, NoSQL, etc., being able to manipulate the data any which way, independently
  • Deep knowledge in ETL (extract, transform and load) to handle data from all sources
  • Proven experience in data modeling and database design
  • Data visualization skills using whatever tools that are considered to be cool this month
  • Deep business/industry/domain knowledge
  • Superb written and verbal communication skills, being able to explain complex technical concepts in plain English
  • Etc. etc…

I actually cut this list short, as it was already becoming ridiculous. I just want to see the face of the recruiter who got the order to find super-duper candidates based on this list, at the same salary level as a Senior Statistician (another fine title). Heck, while we’re at it, why don’t we add that the candidate must look like Brad Pitt and be able to tap-dance, too? The long and the short of it is that maybe some executive wanted to hire just “1” data scientist with all these skillsets, hoping to God that this mad scientist would be able to make sense out of mounds of unstructured and unorganized data all on her own, and provide business answers without even knowing what the question was in the first place.

Over the years, I have worked with many statisticians, analysts and programmers (notice that they are all one-word titles), dealing with large, small, clean, dirty and, at times, really dirty data (hence the title of this series, “Big Data, Small Data, Clean Data, Messy Data”). And navigating through all those data has always been a team effort.

Yes, there are some exceptional musicians who can write music and lyrics, sing really well, play all instruments, program sequencers, record, mix, produce and sell music—all on their own. But if you insist that only such geniuses can produce music, there won’t be much to listen to in this world. Even Stevie Wonder, who can write and sing, and play keyboards, drums and harmonicas, had close to 100 names on the album credits in his heyday. Yes, the digital revolution changed the music scene as much as the data industry in terms of team sizes, but both aren’t and shouldn’t be one-man shows.

So, if being a “Data Scientist” means being a super businessman/analyst/statistician who can program, build models, write, present and sell, we should all just give up on finding one anytime soon within a typical budget. Realistically, there may be only a handful of qualified candidates in the entire national job market. Too bad that every industry report says we need tens of thousands of them, right now.

Conversely, if it is just a bloated new title for good old data analysts with some knowledge in statistical applications and the ability to understand business needs—yeah, sure. Why not? I know plenty of those people, and we can groom more of them. And I don’t even mind giving them new long-winded titles that are suitable for the modern business world and peer groups.

I have been in the data business for a long time. And even before datasets became really large, I always maintained the following division of labor when dealing with complex data projects involving advanced analytics:

  • Business Analysts
  • Programmers/Developers
  • Statistical Analysts

The reason is very simple: It is extremely difficult to be a master-level expert in even one of these areas. Out of the hundreds of statisticians with whom I’ve worked, I can count only a handful of people who even “tried” to venture into the business side. Of those, even fewer successfully transformed themselves into businesspeople, and they are now business owners of consulting practices or in positions with “Chief” in their titles (Chief Data Officer or Chief Analytics Officer being the titles du jour).

On the other side of the spectrum, fewer than one in 10 decent statisticians is also good at coding to manipulate complex data. And even those are mostly not good enough to be completely independent from professional programmers or developers. The reality is, most statisticians are not very good at setting up workable samples out of really messy data. Simply put, handling data and developing analytical frameworks or models call for different mindsets on a professional level.

The Business Analysts, I think, are the closest to the modern-day Data Scientists, though the ones in the past were less technical, given the toolsets available back then. Nevertheless, since it is much easier to teach business aspects to statisticians or developers than to convert businesspeople or marketers into coders (no offense, but true), many of these “in-between” people, straddling the marketing world and the technology world, for example, are rooted in the technology world (myself included) or at least have a deep understanding of it.

At times labeled as Research Analysts, they are the folks who would:

  • Understand the business requirements and issues at hand
  • Prescribe suitable solutions
  • Develop tangible analytical projects
  • Perform data audits
  • Procure data from various sources
  • Translate business requirements into technical specifications
  • Oversee the progress as project managers
  • Create reports and visual presentations
  • Interpret the results and create “stories”
  • And present the findings and recommended next steps to decision-makers

Sounds complex? You bet it is. And I didn’t even list all the job functions here. To do this job effectively, these Business/Research Analysts (or Data Scientists) must understand the technical limitations of all related areas, including databases, statistics and general analytics, as well as industry verticals, the uniqueness of business models and campaign/transaction channels. But they do not have to be full-blown statisticians or coders; they just have to know what they want and how to ask for it clearly. If they know how to code as well, great. More power to them. But that would be the cherry on top, as the business mindset should come before everything else.

So, now that the data are bigger and more complex than ever in human history, are we about to combine all aspects of the data and analytics business and find people who are good at absolutely everything? Yes, various toolsets have made some aspects of analysts’ lives easier and simpler, but not enough to eliminate the partitions between positions completely. Some third basemen may be able to pitch, too. But they wouldn’t take the mound as starting pitchers, not at a professional level. And yes, the analysts who climb the corporate and socioeconomic ladder are the ones who successfully crossed the boundaries. But we shouldn’t wait around for the ones who are masters of everything. Like I said, even Stevie Wonder needs great sound engineers.

Then, what would be a good path to finding Data Scientists in the existing pool of talent? For a long time, I have been using the following four evaluation criteria to identify individuals with upward mobility in the technology world. Like I said, it is a lot simpler and easier to teach business aspects to people with technical backgrounds than the other way around.

So let’s start with the techies. These are the qualities we need to look for:

1. Skills: When it comes to the technical aspect of it, the skillset is the most important criterion. Generally, a person either has it or doesn’t. If we are talking about a developer, how good is he? Can he develop a database without wasting time? A good coder is not just a little faster than mediocre ones; he can be 10 to 20 times faster. I am talking about the ones who don’t have to look through some manual or the Internet every five minutes, but the ones who just know all the shortcuts and options. The same goes for statistical analysts. How well is she versed in all the statistical techniques? Or is she a one-trick pony? What is her track record? Have her models performed in the market over a prolonged time? The thing about statistical work is that time is the ultimate test; we eventually get to find out how well the prediction holds up in the real world.

2. Attitude: This is a very important aspect, as many techies are locked up in their own little world. Many are socially awkward, like characters in Dilbert or “The Big Bang Theory,” and most much prefer to deal with machines (where things are clean-cut binary) than with people (well, humans can be really annoying). Some do not work well with others and do not know how to compromise at all, as they do not know how to look at the world from a different perspective. And there are a lot of lazy ones. Yes, lazy programmers are the ones who are most motivated to automate processes (primarily to support their laissez-faire lifestyle), but the ones who blow deadlines all the time are just too much trouble for the team. In short, a genius with a really bad attitude won’t be able to move to the business or the management side, regardless of the IQ score.

3. Communication: Many technical folks are not good at written or verbal communication. I am not talking only about the ones who are foreign-born (like me), even though most technically oriented departments are full of them. The issue is that many technical people (yes, even the ones who were born and raised in the U.S., speaking English) do not communicate with the rest of the world very well. Many can’t explain anything without using technical jargon, nor can they summarize messages for decision-makers. Businesspeople don’t need to hear the life story about how complex the project was or how messy the datasets were. Conversely, many techies do not understand marketers or businesspeople who speak plain English. Some fail to grasp the concept that human beings are not robots, and most mortals often fail to phrase every sentence as a logical expression. When a marketer says “Omit customers in New York and New Jersey from the next campaign,” the coder on the receiving end shouldn’t take that as literal Boolean logic (a small sketch of this translation follows after the fourth point). Yes, obviously a state cannot be New York “and” New Jersey at the same time. But most humans don’t (or can’t) distinguish such differences. Seriously, I’ve seen some developers who refuse to work with people whose command of logical expressions isn’t at the level of Mr. Spock’s. That’s the primary reason we need business analysts or project managers who work as translators between these two worlds. And obviously, the translators should be able to speak both languages fluently.

4. Business Understanding: Granted that the candidates in question are qualified in terms of criteria one through three, their eagerness to understand the ultimate business goals behind analytical projects is what would truly set them apart from the rest on the path to becoming data scientists. As I mentioned previously, many technically oriented people do not really care much about the business side of the deal, or even have the slightest curiosity about it. What is the business model of the company for which they are working? How does it make money? What are the major business concerns? What are the long- and short-term business goals of their clients? What makes those clients lose sleep at night? Before complaining about incomplete data, ask why the databases are so messy in the first place. How are the data being collected? What does all this data mean for the bottom line? Can you bring up the “So what?” question after a great scientific finding? And ultimately, how will we make our clients look good in front of “their” bosses? When we deal with technical issues, we often find ourselves at a crossroads. Picking the right path (or the path with the fewest downsides) is not just an IT decision, but more of a business decision. The person with a more holistic view of the world would, without a doubt, make a better decision, even on a minor programming choice about a small feature. Unfortunately, it is very difficult to find IT people with such a balanced view.
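
To make the translation problem from the third point concrete, here is a minimal sketch, in Python with made-up customer records, of what “Omit customers in New York and New Jersey” should become in code:

```python
# The marketer says: "Omit customers in New York and New Jersey."
# A too-literal Boolean reading, (state == "NY" and state == "NJ"),
# is always False, so no one would ever be omitted.
# What the marketer means: exclude a customer if the state is NY *or* NJ.

customers = [
    {"name": "Alice", "state": "NY"},
    {"name": "Bob", "state": "PA"},
    {"name": "Carol", "state": "NJ"},
]

# Correct translation: keep everyone whose state is NOT in the excluded set.
campaign_list = [c for c in customers if c["state"] not in ("NY", "NJ")]

print([c["name"] for c in campaign_list])  # ['Bob']
```

The translator’s job is exactly this: turning everyday English into the logic the machine actually needs.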

And that is the punchline. We want data scientists who have the right balance of business and technical acumen—not just jacks of all trades who can do all the IT and analytical work all by themselves. Just like business strategy isn’t solely set by a data strategist, data projects aren’t done by one super techie. What we need are business analysts or data scientists who truly “get” the business goals and who will be able to translate them into functional technical specifications, with an understanding of all the limitations of each technology piece that is to be employed—which is quite different from being able to do it all.

If the career path for a data scientist ultimately leads to Chief Data Officer or Chief Analytics Officer, it is important for the candidates to understand that such “chief” titles are all about the business, not the IT. As soon as a CDO, CAO or CTO starts representing technology before business, that organization is doomed. They should be executives who understand the technology and employ it to increase profit and efficiency for the whole company. Movie directors don’t necessarily write scripts, hold the cameras, develop special effects or act out scenes. But they understand all aspects of the movie-making process and put all the resources together to create the films that they envision. As soon as a director falls too deep into just one aspect, such as special effects, the resulting movie quickly becomes an unwatchable bore. The data business is the same way.

So what is my advice for young and upcoming data scientists? Master the basics and be a specialist first. Pick a field that fits your aptitude, whether it be programming, software development, mathematics or statistics, and try to be really good at it. But remain curious about other related IT fields.

Then travel the world. Watch lots of movies. Read a variety of books. Not just technical books, but books about psychology, sociology, philosophy, science, economics and marketing, as well. This data business is inevitably related to activities that generate revenue for some organization. Try to understand the business ecosystem, not just technical systems. As marketing will always be a big part of the Big Data phenomenon, be an educated consumer first. Then look at advertisements and marketing campaigns from the promoter’s point of view, not just from an annoyed consumer’s view. Be an informed buyer through all available channels, online or offline. Then imagine how the world will be different in the future, and how the simple concept of a monetary transaction will transform along with other technical advances, which will certainly not stop at Apple Pay. All of those changes will turn into business opportunities for people who understand data. If you see some real opportunities, try to imagine how you would build a startup company around them. You will quickly realize that answering technical challenges is not even half of building a viable business model.

If you are already one of those data scientists, live up to the title and be solution-oriented, not technology-oriented. Don’t be a slave to technologies, or become what we sometimes call a “data plumber” (someone who just moves data from one place to another). Be a master who wields data and technology to provide useful answers. And most importantly, don’t be evil (as Google says), and never do things just because you can. Always think about the social consequences, as actions based on data and technology affect real people, often negatively (more on this subject in a future article). If you want to ride this Big Data wave for the foreseeable future, try not to annoy people who may not understand all the ins and outs of the data business. Don’t be the guy who spoils it for everyone else in the industry.

A while back, I started to see the unemployment rate as the rate of people being left behind by progress (if we consider technical innovation progress). Every evolutionary stage since the Industrial Revolution has created gaps between the supply of and demand for the new skillsets required for the new world. And this wave is not going to be an exception. It is unfortunate that, in this age of high unemployment, we have such a hard time finding good candidates for high-tech positions. On one side, there are too many people who were educated under the old paradigm. And on the other side, there are too few people who can wield new technologies and apply them to satisfy business needs. If this new title “Data Scientist” means the latter, then yes, we need more of them, for sure. But we all need to be more realistic about how to groom them, as it would take a village to do so. And if we can’t even agree on what the job description for a data scientist should be, we will need lots of luck developing armies of them.

Smart Data – Not Big Data

As a concerned data professional, I am already plotting an exit strategy from this Big Data hype. Because like any bubble, it will surely burst. That inevitable doomsday could be a couple of years away, but I can feel it coming. At the risk of sounding too much like Yoda the Jedi Grand Master, all hypes lead to over-investments, all over-investments lead to disappointments, and all disappointments lead to blames. Yes, in a few years, lots of blames will go around, and lots of heads will roll.

So, why would I stay on the troubled side? Well, because, for now, this Big Data thing is creating lots of opportunities, too. I am writing this on my way back from Seoul, Korea, where I presented this Big Data idea nine times in just two short weeks, trotting from large venues to small gatherings. Just a few years back, I used to have a hard time explaining what I do for a living. Now, I just have to say “Hey, I do this Big Data thing,” and the doors start to open. In my experience, this is the best “Open Sesame” moment for all data specialists. But it will last only if we play it right.

Nonetheless, I also know that I will somehow continue to make a living setting data strategies, fixing bad data, designing databases and leading analytical activities, even after the hype cools down. Just with a different title, under a different banner. I’ve seen buzzwords come and go, and this data business has been carried on by the people who cut through each hype (and the gargantuan amount of BS that comes along with it) and create real revenue-generating opportunities. At the end of the day (I apologize for using this cliché), it is all about the bottom line, whether it comes from a revenue increase or a cost reduction. It is never about the buzzwords that may have created the business opportunities in the first place; it has always been about the substance that turned those opportunities into money-making machines. And substance needs no fancy title or buzzword attached to it.

Have you heard Google or Amazon calling themselves “Big Data” companies? They are the ones with sick amounts of data, but they also know that it is not about the sheer amount of data; it is all about the user experience. “Wannabes” who are not able to understand the core values often hang onto buzzwords and hypes, as if Big Data, cloud computing or the coding language du jour will come and save the day. But they are just words.

Even the name “Big Data” is all wrong, as it implies that bigger is always better. The three Vs of Big Data (volume, velocity and variety) are also misleading. That may be a meaningful distinction for existing data players, but for decision-makers, it gives the notion that size and speed are the ultimate quest. For the users, though, small is better. They don’t have time to analyze big sets of data. They need small answers in fun-size packages. Besides, what is so new about big and fast? Since the invention of modern computers, has there been any year when processing speed did not get faster and storage capacity did not get bigger?

Lest we forget, it is the software industry that came up with this Big Data thing. It was created as a marketing tagline. We should have read it as, “Yes, we can now process really large amounts of data, too,” not as, “Big Data will make all your dreams come true.” If you are in the business of selling toolsets, of course, that is how you present your product. If guitar companies keep emphasizing how hard it is to be a decent guitar player, would that help their businesses? It is a lot more effective to say, “Hey, this is the same guitar that your guitar hero plays!” But you don’t become Jeff Beck just because you bought a white Fender Stratocaster with a rosewood neck. The real hard work begins “after” you purchase a decent guitar. However, this obvious connection is often lost in the data business. Toolsets never provide solutions on their own. They may make your life easier, but you’d still have to formulate the question in a logical fashion, and still have to make decisions based on provided data. And harnessing meanings out of mounds of data requires training of your mind, much like the way musicians practice incessantly.

So, before businesspeople even consider venturing into this Big Data hype, they should ask themselves, “Why data?” What are the burning questions that you are trying to answer with the data? If you can’t answer this simple question, then don’t jump into it. Forget about it. Don’t get into it just because everyone else seems to be getting into it. Yeah, it’s a big party, but why are you going there? Besides, if you formulate the question properly, you will often find that you don’t need Big Data every time. In fact, Big Data can be a terrible detour if your question can be answered by “small” data. But that happens all the time, because people approach their business questions through the processes set by the toolsets. Big Data should be about the business, not about the IT or the data.

Smart Data, Not Big Data
So, how do we get over this hype? All too often, perception rules, and a replacement word becomes necessary to summarize the essence of the concept for the general public. In my opinion, “Big Data” should have been “Smart Data.” Piles of unorganized dumb data aren’t worth a damn thing. Imagine a warehouse full of boxes with no labels, collecting dust since 1943. Would you be impressed with the sheer size of the warehouse? Great, the ark that Indiana Jones procured (or did he?) may be stored in there somewhere. But if no one knows where it is—or even if it can be located, if no one knows what to do with it—who cares?

Then, how do data get smarter? Smart data are bite-sized answers to questions. A thousand variables could have been considered to provide the weather forecast that calls for a “70 percent chance of scattered showers in the afternoon,” but that one line that we hear is the smart piece of data. Not the list of all the variables that went into the formula that created that answer. Emphasizing the raw data would be like giving paints and brushes to a person who wants a picture on the wall. As in, “Hey, here are all the ingredients, so why don’t you paint the picture and hang it on the wall?” Unfortunately, that is how the Big Data movement looks now. And too often, even the ingredients aren’t all that great.

I visit many companies only to find that the databases in question are just messy piles of unorganized and unstructured data. And please do not assume that such disarray is good for my business. I’d rather spend my time harnessing meaning out of data and creating value, not taking care of someone else’s mess all the time. Really smart data are small, concise, clean and organized. Big Data should only be seen in “behind the scenes” types of documentaries for maniacs, not by everyday decision-makers.

I have already been saying for some time that Big Data must get smaller (refer to “Big Data Must Get Smaller”), and I will repeat it until it becomes a movement of its own. The Big Data movement must be about:

  1. Cutting down the noise
  2. Providing the answers

There is too much noise in the data, and cutting it out is the first step toward making the data smaller and smarter. The trouble is that the definition of “noise” is not static. The rock music that I grew up with was certainly noise to my parents’ generation. In turn, some of the music that my kids listen to is pure noise to me. Likewise, “product color,” which is essential in a database designed for an inventory management system, may or may not be noise if the goal is to sell more apparel items. In such cases, the more important variables could be style, brand, price range and target gender, while color could be just peripheral information at best, or even noise (as in, “Uh, she isn’t going to buy only red shoes all the time, is she?”). How do we then determine the difference? First, set clear goals (as in, “Why are we playing with the data to begin with?”), define those goals using logical expressions, and let the mathematics take care of it. Then you can drop the noise with conviction (even if it may look important to human minds).
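
Here is a minimal sketch of that idea, with a synthetic dataset built so that one variable tracks the goal and the other is pure noise by construction; the variable names and the cutoff are mine, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# The clearly defined goal: did the customer buy an apparel item? (1/0)
bought = rng.integers(0, 2, n)

# "price_range" is built to track the goal; "color_code" is pure noise.
price_range = bought * 2 + rng.normal(0, 1, n)
color_code = rng.normal(0, 1, n)

# Let the mathematics decide: keep variables whose correlation with
# the goal clears a (deliberately simple) threshold.
for name, values in {"price_range": price_range, "color_code": color_code}.items():
    r = abs(np.corrcoef(values, bought)[0, 1])
    print(f"{name}: |r| = {r:.2f} ->", "keep" if r > 0.1 else "drop as noise")
```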

If we continue down that mathematical path, we reach the second part, which is “providing answers to the question.” And the smart answers come in the form of yes/no calls, probability figures or some type of score. As in the weather forecast example, the question would be “chance of rain on a certain day” and the answer would be “70 percent.” Statistical modeling is not easy or simple, but it is the essential part of making the data smarter, as models are the most effective way to summarize complex and abundant data into compact forms (refer to “Why Model?”).

Most people do not have degrees in mathematics or statistics, but they all know what to do with a piece of information such as a “70 percent chance of rain” on the day of a company outing. Some may complain that it is not a definite yes/no answer, but all would agree that providing information in this form is more humane than dumping all the raw data onto users. Sales folks are not necessarily mathematicians, but they would certainly appreciate scores attached to each lead, as in “more or less likely to close.” No, that is not a definite answer, but now salespeople can start calling the leads in the order of relative importance to them.
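
In practice, the “smart data” a salesperson needs may be nothing more than a ranked call list. A minimal sketch, with hypothetical company names and model probabilities:

```python
# Hypothetical model outputs: probability that each lead will close.
leads = [
    ("Acme Corp", 0.71),
    ("Bluebird LLC", 0.18),
    ("Cobalt Inc", 0.54),
]

# No math degree required: just call in descending order of likelihood.
for name, p_close in sorted(leads, key=lambda lead: lead[1], reverse=True):
    print(f"{name}: {p_close:.0%} likely to close")
```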

So, all the Big Data players and data scientists must try to “humanize” the data, instead of bragging about the size of the data, making things more complex, and handing irrelevant pieces of raw data to users. Make things simpler, not more complex. Some may think that complexity is their job security, but I strongly disagree. That is a sure way to bring this Big Data movement crashing to the ground. We are already living in a complex world, and we certainly do not need more complications around us (more on “How to be a good data scientist” in a future article).

It’s About the Users, Too
On the flip side, the decision-makers must change their attitude about the data, as well.

1. Define the goals first: The main theme of this series has been that the Big Data movement is about the business, not IT or data. But I’ve seen too many business folks who willingly take a hands-off approach to data. They just fund the database; do not define clear business goals for the developers; and hope to God that someday, somehow, some genius will show up and clean up the mess for them. Guess what? That cavalry is never coming if you are not even praying properly. If you do not know what problems you want to solve with data, don’t even get started; you will get nowhere, really slowly, bleeding lots of money and time along the way.

2. Take the data seriously: You don’t have to be a scientist to have a scientific mind. It is not ideal if someone blindly subscribes to anything computers spew out (there is lots of inaccurate information in databases; refer to “Not All Databases Are Created Equal”). But too many people do not take data seriously and continue to follow their gut feelings. Even if the customer profile coming out of a serious analysis does not match your preconceived notions, do not blindly reject it; instead, treat it as a newly found gold mine. Gut feelings are even more overrated than Big Data.

3. Be logical: Illogical questions do not lead anywhere. There is no toolset that reads minds, at least not yet. Even if we get such amazing computers, like those seen on “Star Trek” or in other science fiction movies, you would still have to ask questions in a logical fashion for them to be effective. I am not asking decision-makers to learn how to code (or to be like Mr. Spock or his loyal follower, Dr. Sheldon Cooper), but to have some basic understanding of logical expressions and to try to learn how analysts communicate with computers. This is not a data geek vs. non-geek world anymore; we all have to be a little geekier. Knowing Boolean expressions may not be as cool as being able to throw a curveball, but it is necessary to survive in the age of information overload.

4. Shoot for small successes: Start with a small proof of concept before fully investing in large data initiatives. Even with a small project, you get to touch all the necessary steps to finish the job. Understanding the flow of information is as important as each specific step, as most breakdowns occur between steps, due to a lack of proper connections. There was the Gemini program before the Apollo missions. Learn how to dock spaceships in orbit before plotting a course to the moon. Over-investments are often committed when the discussion is led by IT. Outsource even major components in the beginning, as the initial goal should be mastering the flow of things.

5. Be buyer-centric: No customer is bound by the channel of the marketer’s choice, and yet many businesses act exactly that way. No one is an online person just because she has not yet refused your email promotions (refer to “The Future of Online is Offline”). No buyer is just one-dimensional. So get out of brand-, division-, product- or channel-centric mindsets. Even well-designed, buyer-centric marketing databases become ineffective if users are trapped in their channel- or division-centric attitudes, as in “These email promotions must flow!” or “I own this product line!” The more data we collect, the more chances marketers gain to impress their customers and prospects. Do not waste those opportunities by imposing your own myopic views on them. The Big Data movement is not there to fortify marketers’ bad habits. Thanks to the size of the data and the speed of machines, we are now capable of disappointing a lot of people really fast.

What Did This Hype Change?
So, what did this Big Data hype change? First off, it changed people’s attitudes about data. Some are no longer afraid of large amounts of information being thrown at them, and some have actually started using it in their decision-making processes. Many realized that we are surrounded by numbers everywhere, not just in marketing, but also in politics, media, national security, health care and the criminal justice system.

Conversely, some people became more afraid, often with good reason. But even more often, people react based on pure fear that their personal information is being actively exploited without their consent. While data geeks are rejoicing in the age of open source and cloud computing, many more are looking at this hype with deep suspicion, and they boldly reject storing any personal data in those obscure “clouds.” There are some people who don’t even sign up for E-ZPass and voluntarily stay in the long lane to pay tolls in the old but untraceable way.

Nevertheless, not all is lost in this hype. The data got really big, and types of data that were previously unavailable, such as mobile and social data, became available to many marketers. Focus groups are now the size of the Twitter following of a company or a subject matter. The collection of POS (point of sale) data has been steadily increasing, and some data players became virtuosi at using such fresh and abundant data to impress their customers (though some crossed that “creepy” line inadvertently). Different types of data are being used together now, and such merging activities will compound the predictive power even further. Analysts are dealing with less missing data, though no dataset will ever be totally complete. Developers in open source environments are now able to move really fast with new toolsets that run on any device. Simply put, things that our forefathers of direct marketing used to take six months to complete can be done in a few hours, and in the near future, maybe within a few seconds.

And that may be a good thing or a bad thing. If we do this right, without creating too many angry consumers and without burning holes in our budgets, we are in a position to achieve a great many things in terms of predicting the future and making everyone’s lives a little more convenient. If we screw it up badly, we will end up creating lots of angry customers by abusing sensitive data and, at the same time, wasting a whole lot of investors’ money. Then this Big Data thing will go down in history as a great money-eating hype.

We should never do things just because we can; data is a powerful tool that can hurt real people. Do not even get into it if you don’t have a clear goal for what to do with the data; it is not some piece of furniture that you buy just because your neighbor bought one. Living with data is a lifestyle change, and it requires a long-term commitment; it is not some fad that you try once and give up. It is a continuous loop in which people’s responses to marketers’ data-based activities create even more data to be analyzed. And that is the only way it keeps getting better.

There Is No Big Data
And all that has nothing to do with “Big.” If done right, small data can do plenty. And in fact, most companies’ transaction data for the past few years would easily fit in an iPhone. It is about what to do with the data, and that goal must be set from a business point of view. This is not just a new playground for data geeks, who may care more for new hip technologies that sound cool in their little circle.

I recently went to Brazil to speak at a data conference called QIBRAS, and I was pleasantly surprised that its main theme was the quality of the data, not the size of the data. Well, at least somewhere in the world, people are approaching this whole thing without the “Big” hype. And if you look around, you will not find any successful data players calling this thing “Big Data.” They just deal with small and large data as part of their businesses. There is no buzzword, fanfare or big banner there. Because when something is just part of your everyday business, you don’t even care what you call it. You just do it. And to those masters of data, there is no Big Data. If Google all of a sudden started calling itself a Big Data company, it would be so uncool, as that label would seriously limit it. Think about that.

Why Model?

Why model? Uh, because someone is ridiculously good looking, like Derek Zoolander? No, seriously, why model when we have so much data around?

The short answer is because we will never know the whole truth. That would be the philosophical answer. Physicists construct models to make new quantum field theories more attractive theoretically and more testable physically. If a scientist already knows the secrets of the universe, well, then that person is on a first-name basis with God Almighty, and he or she doesn’t need any models to describe things like particles or strings. And the rest of us should just hope the scientist isn’t one of those evil beings in “Star Trek.”

Another answer to “why model?” is that we don’t really know the future, not even the immediate future. If some object is moving in a certain direction at a certain velocity, we can safely guess where it will end up in one hour. Then again, nothing in this universe is just one-dimensional like that, and there could be a snowstorm brewing in its path, messing up the whole trajectory. And that weather “forecast” that predicted the snowstorm is a result of some serious modeling, isn’t it?

What does all this mean for marketers who are not necessarily masters of mathematics, statistics or theoretical physics? Plenty, actually. And the use of models in marketing goes way back to the days of punch cards and mainframes. If you are too young to know what those things are, well, congratulations on your youth, and let’s just say it was around the time when humans first stepped on the moon using a crude rocket ship equipped with less computing power than an inexpensive passenger car of today.

Anyhow, in that ancient time, some smart folks in the publishing industry figured that they would save tons of money if they could correctly “guess” who the potential buyers were “before” they dropped any expensive mail pieces. Even with basic regression models—and they only had one or two chances to get it right with glacially slow tools before the all-too-important Christmas season came around every year—they could safely cut the mail quantity by 80 percent to 90 percent. The savings added up really fast by not talking to everyone.
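
To see why those savings added up so fast, plug in some hypothetical numbers (mine, purely for illustration):

```python
# Hypothetical numbers, just to show the arithmetic of "not talking to everyone."
universe = 1_000_000          # names on the full mailing list
cost_per_piece = 0.50         # dollars to print and mail one piece
mail_rate_after_model = 0.15  # the model says: mail only the top 15%

full_cost = universe * cost_per_piece
modeled_cost = universe * mail_rate_after_model * cost_per_piece
print(f"Saved: ${full_cost - modeled_cost:,.0f} per drop")  # Saved: $425,000 per drop
```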

Fast-forward to the 21st century. There is still beauty in knowing who the potential buyers are before we start engaging anyone. As I wrote in my previous columns, analytics should answer:

1. To whom you should be talking; and
2. What you should offer once you’ve decided to engage someone.

At least the first part will be taken care of by knowing who is more likely to respond to you.

But in these days, when the cost of contacting a person through various channels is dropping rapidly, deciding to whom to talk can’t be the only reason for all this statistical work. Of course not. There are plenty more reasons why being a statistician (or a data scientist, nowadays) is one of the best career choices of this century.

Here is a quick list of benefits of employing statistical models in marketing. Basically, models are constructed to:

  • Reduce cost by contacting prospects more wisely
  • Increase targeting accuracy
  • Maintain consistent results
  • Reveal hidden patterns in data
  • Automate marketing procedures by being more repeatable
  • Expand the prospect universe while minimizing the risk
  • Fill in the gaps and summarize complex data into an easy-to-use format (a must in the age of Big Data)
  • Stay relevant to your customers and prospects

We talked enough about the first point, so let’s jump to the second one. It is hard to argue about the “targeting accuracy” part, though there still are plenty of non-believers in this day and age. Why are statistical models more accurate than someone’s gut feeling or sheer guesswork? Let’s just say that in my years of dealing with lots of smart people, I have not met anyone who can think about more than two to three variables at the same time, not to mention potential interactions among them. Maybe some are very experienced in using RFM and demographic data. Maybe they have been reasonably successful with choices of variables handed down to them by their predecessors. But can they really go head-to-head against carefully constructed statistical models?

What is a statistical model, and how is it built? In short, a model is a mathematical expression of “differences” between dichotomous groups. Too much of a mouthful? Just imagine two groups of people who do not overlap. They may be buyers vs. non-buyers; responders vs. non-responders; credit-worthy vs. not-credit-worthy; loyal customers vs. attrition-bound, etc. The first step in modeling is to define the target, and that is the most important step of all. If the target is hanging in the wrong place, you will be shooting at the wrong place, no matter how good your rifle is.

And the target should be expressed in mathematical terms, as computers can’t read our minds, not just yet. Defining the target is a job in itself (a small sketch follows the list below):

  • If you’re going after frequent flyers, how frequent is frequent enough for you? Five times a year or 10 times a year? Or somewhere in between? Or should it remain continuous?
  • What if the target is too small or too large? What then?
  • If you are looking for more valuable prospects, how would you express that? In terms of average spending, lifetime spending or sheer number of transactions?
  • What if there is an inverse relationship between frequency and dollar spending (i.e., high spenders shopping infrequently)?
  • And what would be the borderline number to be “valuable” in all this?
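
Here is a minimal sketch of that target-setting step, with a made-up dataset and an arbitrary threshold, turning a fuzzy business notion like “frequent flyer” into the dichotomous target a model can use:

```python
# Flights taken last year, per customer (made-up data).
flights_last_year = {"cust_1": 12, "cust_2": 4, "cust_3": 7, "cust_4": 0}

THRESHOLD = 5  # the pondering happens here: why five and not 10?

# Dichotomous target: 1 if "frequent flyer," 0 otherwise.
target = {cust: int(n >= THRESHOLD) for cust, n in flights_last_year.items()}
print(target)  # {'cust_1': 1, 'cust_2': 0, 'cust_3': 1, 'cust_4': 0}
```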

Once the target is set, after much pondering, the job is to select the variables that describe the “differences” between the two groups. For example, I know how much marketers love to use income variables in various situations. But if that popular variable does not explain the differences between the two groups (target and non-target), the mathematics will mercilessly throw it out. This rigorous exercise of examining hundreds or even thousands of variables is one of the most critical steps, during which many variables go through various types of transformations. Statisticians have different preferences about the ideal number of variables in a model, while non-statisticians like us don’t need to be too concerned, as long as the resultant model works. Who cares if a cat is white or black, as long as it catches mice?
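
To show how the mathematics does that merciless throwing-out, here is a small sketch with synthetic data in which income, by construction, looks the same for targets and non-targets; the screening measure is a deliberately crude one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: income is distributed identically for targets and
# non-targets, so it explains no "difference" between the two groups.
income_target = rng.normal(60_000, 15_000, 500)
income_nontarget = rng.normal(60_000, 15_000, 500)

# A crude screen: the standardized mean difference between the groups.
pooled_sd = np.sqrt((income_target.var() + income_nontarget.var()) / 2)
effect = abs(income_target.mean() - income_nontarget.mean()) / pooled_sd

# Near-zero separation: out goes income, however much the marketer loves it.
print(f"standardized mean difference for income: {effect:.2f}")
```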

Not all selected variables are equally important in model algorithms, either. More powerful variables are assigned higher weights, and the sum of these weighted values is what we call the model score. Now, non-statisticians who have been slightly allergic to math since the third grade only need to know that the higher the score, the more likely the record in question is to resemble the target. To make the matter even simpler, let’s just say that you want higher scores over lower scores. If you are a salesperson, just call the high-score prospects first. And would you care how many variables are packed into that score, as long as you get the good “Glengarry Glen Ross” leads on top?
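
A minimal sketch of that weighted sum, with invented variables and weights standing in for what model fitting would actually produce:

```python
# Invented weights; in practice these come out of model fitting.
weights = {"recency_months": -0.8, "num_purchases": 1.2, "avg_order": 0.5}

prospect = {"recency_months": 2, "num_purchases": 7, "avg_order": 3.1}

# The model score is just the weighted sum of the variables.
score = sum(weights[v] * prospect[v] for v in weights)
print(f"model score: {score:.2f}")  # higher = more like the target
```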

So, let me ask again. Does this sound like something a rudimentary selection rule with two to three variables can beat when it comes to identifying the right target? Maybe someone can get lucky once or twice, but not consistently.

That leads to the next point, “consistency.” Because models do not rely on a few popular variables, they are far less volatile than simple selection rules or queries. In this age of Big Data, there are more transactional and behavioral data in the mix than ever, and they are far more volatile than demographic and geo-demographic data. Put simply, people’s purchasing behavior and preferences change much faster than their family composition or income, and that volatility calls for more statistical work. Plus, all facets of marketing are now more about measurable results (ah, that dreaded ROI, or “Roy,” as I call it), and businesses call for consistent hitters over one-hit wonders.

“Revealing hidden patterns in data” is my favorite. When marketers are presented with thousands of variables, I see a majority of them sticking to a few popular ones all the time. Some basic recency and frequency data are there, and among hundreds of demographic variables, the list often stops after income, age, gender, presence of children and some regional variables. But seriously, do you think that the difference between a luxury car buyer and an SUV buyer is just income and age? You see, these variables are just the ones that human minds are accustomed to. Mathematics has no such preconceived notions. Sticking to a few popular variables is like a child repeatedly using three favorite colors out of a whole box of crayons.

I once saw a neighborhood-level U.S. Census variable called “% Households with Septic Tanks” in a model built for a high-end furniture catalog. Really, the variable was the percentage of houses with septic tanks in the neighborhood. Then I realized it made a lot of sense. That variable was revealing how far the neighborhood was from populous city centers: the higher the percentage of septic tanks, the farther the residents were from the city center. And maybe the folks who live in sparsely populated areas were more likely to shop for furniture through catalogs than the folks who live closer to commercial areas.

This is where we all have that “aha” moment. But you and I would never pick that variable for anything that we do, not in a million years, no matter how effective it may be in finding the target prospects. The word “septic” may scare some people off at “hello.” In any case, modeling procedures reveal hidden connections like that all the time, and that is a very important function in data-rich environments. Otherwise, we will not know what to throw out without fear, and the databases will continue to become larger and more unusable.

Moving on to the next points, “Repeatable” and “Expandable” are somewhat related. Let’s say a marketer has been using a very innovative selection logic that she came across almost by accident. In pursuing special types of wealthy people, she stumbled upon a piece of data called “owner of swimming pool.” Now, she may have even had a few good runs with it, too. But eventually, that success will lead to the question of:

1. Having to repeat that success again and again; and
2. Having to expand that universe, when the “known” universe of swimming pool owners becomes depleted or saturated.

Ah, the chagrin of a one-hit-wonder begins.

Use of statistical models, with the help of multiple variables and scalable scoring, avoids all of those issues. You want to expand the prospect universe? No trouble. Just dial down the scores on the scale a little further. We can even measure the risk of reaching into the lower-scoring groups. And you don’t have to worry about coverage issues related to a few variables, as those won’t be the only ones in the model. Want to automate the selection process? No problem there, as using a score, which is a summary of key predictors, is far simpler than having to carry a long list of data variables into any automated system.
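
Here is a minimal sketch of that “dialing down,” with randomly generated scores standing in for a real scored file; mailing deeper into the file, decile by decile, is the expansion:

```python
import random

random.seed(42)

# Stand-in for a scored prospect file: 10,000 model scores, best first.
scored_file = sorted((random.random() for _ in range(10_000)), reverse=True)

decile_size = len(scored_file) // 10
for depth in (1, 2, 3):  # top decile, top two deciles, top three...
    universe = scored_file[: depth * decile_size]
    print(f"top {depth} decile(s): {len(universe):,} prospects, "
          f"lowest score mailed = {universe[-1]:.2f}")
```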

Now, that leads to the next point, “filling in the gaps and summarizing complex data into an easy-to-use format.” In the age of ubiquitous and “Big” data, this is the single most important point, way beyond the previous examples from traditional 1-to-1 marketing applications. We are definitely going through massive data overloads everywhere, and someone had better refine the data and provide some usable answers.

As I mentioned earlier, we build models because we will never know the whole truth. I believe that the Big Data movement should be all about:

1. Filtering the noise from valuable information; and
2. Filling the gaps.

“Gaps,” you say? Believe me, there are plenty of gaps in any dataset, big or small.

When information continues to get piled on, the resultant database may look big. And it is physically large. But in marketing, as I have repeatedly emphasized in my previous columns, the data must be realigned into “buyer-centric” formats, with every data point describing each individual, as marketing is all about people.

Sure, you may have tons of mobile phone-related data. In fact, it could be quite huge in size. But let me turn that upside down for you (more like sideways-up, in practice). Now, try to describe everyone in your footprint in terms of certain activities. Say, “every smartphone owner who used more than 80 percent of his or her monthly data allowance on average for the past 12 months, regardless of the carrier.” Hey, don’t blame me for asking these questions just because it’s inconvenient for data handlers to answer them. Some marketers would certainly benefit from information like that, and no one cares about mere bits and pieces of data, other than as interesting tidbits at a party.

Here’s the main trouble when you start asking buyer-related questions like that. Once we try to look at the world from the “buyer-centric” point of view, we will realize there are tons of missing data (i.e., a whole bunch of people with not much information). It may be that you will never get this kind of data from all carriers. Maybe not everyone is tracked this way. In terms of individuals, you may end up with less than 10 percent in the database with mobile information attached to them. In fact, many interesting variables may have less than 1 percent coverage. Holes are everywhere in so-called Big Data.

Models can fill in those blanks for you. For all those data compilers who sell age and income data for every household in the country, do you believe that they really “know” everyone’s age and income? A good majority of the information is based on carefully constructed models. And there is nothing wrong with that.

If we don’t get to “know” something, we can still get to a “likelihood” score of “being like” that something. And in that world, every measurement is on a scale, with no missing values. For example, the higher the score of a model built for a telecommunications company, the more likely the prospect is to sign up for a high-speed data plan, or for international long-distance service, depending on the purpose of the model. Or the more likely the person is to buy sports packages via cable or satellite. Or to subscribe to premium movie channels. Etc., etc. With scores like these, a marketer can initiate the conversation with a particular prospect, not just talk at him, with customized product packages in hand.
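
A minimal sketch of that idea: even a record with no observed mobile usage still lands somewhere on the score scale, because the score is computed from whatever fields are available. The function and its weights are invented, standing in for a fitted model:

```python
# Invented weights, standing in for a fitted model that predicts
# the likelihood of signing up for a high-speed data plan.
def data_plan_score(age, urban, tenure_years):
    return 0.03 * (65 - age) + 0.9 * urban + 0.1 * tenure_years

prospects = [
    {"id": "A", "age": 28, "urban": 1, "tenure_years": 2},  # usage data known
    {"id": "B", "age": 61, "urban": 0, "tenure_years": 9},  # usage data missing
]

# Both prospects get a score; there is no "missing" on a score scale.
for p in prospects:
    s = data_plan_score(p["age"], p["urban"], p["tenure_years"])
    print(f"prospect {p['id']}: likelihood score {s:.2f}")
```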

And that leads us to the final point in all this, “Staying relevant to your customers and prospects.” That is what Big Data should be all about—at least for us marketers. We know plenty about a lot of people. And they are asking us why we are still so random about marketing messages. With all these data that are literally floating around, marketers can do so much better. But not without statistical models that fill in the gaps and turn pieces of data into marketing-ready answers.

So, why model? Because a big pile of information doesn’t provide answers on its own, and that pile has more holes than Swiss cheese if you look closely. That’s my final answer.