I recently read a report that only about 20% of all analytics project work turns out to be beneficial to businesses. Such waste. But is that solely the fault of data scientists? After all, even an effective medicine is useless if the patient refuses to take it.
Then again, why would users reject the results of analytics work? At the risk of gross simplification, allow me to break it down into two categories: cases where project goals do not align with business goals, and cases where good intelligence is wasted for lack of the capability, procedure, or will to implement follow-up actions. Basically, poor planning at the front end and poor execution at the back end.
Results of analytics projects often get ignored when the project goal doesn't serve the general strategy or specific needs of the business. Put differently, projects stemming from an analyst's intellectual curiosity may or may not align with business interests. A math geek may be fascinated by the elegance or complexity of a solution, but such intrigue rarely translates directly into monetization of data assets.
In business, faster and simpler answers are far more actionable and valuable. If I ask businesspeople whether they want an answer with an 80% confidence level in the next two days, or an answer with 95% certainty in four weeks, the great majority would choose the quicker but less-than-perfect answer. Why? Because the key word in all this is "actionable," not "certainty."
Analysts who would like to maintain a distance from immediate business needs should instead pursue pure science in the world of academia (a noble cause, without a doubt). In business settings, however, we play with data only to make tangible differences, as in dollars, cents, minutes or seconds. Once such differences in philosophy are accepted and understood by all involved parties, then the real question is: What kind of answers are most needed to improve business results?
Setting Analytics Projects Up for Success
Defining the problem statement is the hardest part for many analysts. Even well-trained ones often struggle with the goal-setting process. Why? Because in school, the professor in charge provides the problems to solve, and students submit solutions to them.
In business, analysts must understand the intentions of decision makers (i.e., their clients), deciphering not-so-logical general statements and anecdotes. Yeah, sure, we need to attract more high-value customers, but how would we express such value via mathematical statements? What would the end result look like, and how will it be deployed to make any difference in the end?
If unchecked, many analytics projects move forward purely on the analysts' assumptions or, worse, on procedural convenience. For example, if the goal of the project is to rank a customer list in order of responsiveness to certain product offers, one may employ all kinds of transactional, behavioral, response, and demographic data to build such models.
All these data types come with different strengths and weaknesses, and even different missing-data ratios. In cases like this, I have encountered many analysts, too many, who simply omit the entire population with missing demographic data from the development universe. Sometimes such an omission adds up to over 30% of the whole file. Are we never going to reach out to those souls just because they lack a few peripheral data points?
Good luck convincing the stakeholders who want to use the entire list for various channel promotions. “Sorry, we can provide model scores for only 70% of your valuable list,” is not going to cut it.
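A minimal pure-Python sketch of the alternative (the field names, values, and thresholds here are hypothetical): rather than discarding records with missing demographic data, flag the missing value and impute it, so every name on the list remains scoreable.

```python
# Hypothetical development sample: demographic fields are often missing,
# while transactional fields are always present.
records = [
    {"id": 1, "recent_orders": 4, "age": 34},
    {"id": 2, "recent_orders": 0, "age": None},
    {"id": 3, "recent_orders": 2, "age": 51},
    {"id": 4, "recent_orders": 7, "age": None},
    {"id": 5, "recent_orders": 1, "age": 42},
]

# Anti-pattern: drop every record with a missing demographic field.
complete_only = [r for r in records if r["age"] is not None]
print(f"dropping incomplete records keeps {len(complete_only)} of {len(records)}")

# Better: keep everyone. Flag the missing value so the model can learn
# from "missingness" itself, then impute with the observed median.
observed = sorted(r["age"] for r in records if r["age"] is not None)
median_age = observed[len(observed) // 2]
for r in records:
    r["age_missing"] = 1 if r["age"] is None else 0
    if r["age"] is None:
        r["age"] = median_age

print(f"after imputation, all {len(records)} records are scoreable")
```

The missing-value indicator matters as much as the imputed value: whether demographic data exists for a customer can itself be predictive.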
More than a few times, I have been asked what analysts should do when they must reach deep into the lower model groups (of response models) to meet marketers' volume demands, knowing that the bottom half won't perform well. My response: forget about the model, no matter how elegant it may be, and develop heuristic rules to eliminate obvious non-targets from the prospect universe. If the model is used anyway, it is almost certain that the modeler in charge will be blamed for its mediocre performance.
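Such heuristic rules can be as plain as a few knock-out conditions. A sketch, with invented fields and cutoffs, of suppressing obvious non-targets before any model scoring:

```python
# Hypothetical prospect records; the fields and the 24-month dormancy
# cutoff are illustrative assumptions, not rules from any real campaign.
prospects = [
    {"id": 1, "email_valid": True,  "opted_out": False, "months_since_last_open": 3},
    {"id": 2, "email_valid": False, "opted_out": False, "months_since_last_open": 1},
    {"id": 3, "email_valid": True,  "opted_out": True,  "months_since_last_open": 2},
    {"id": 4, "email_valid": True,  "opted_out": False, "months_since_last_open": 10},
]

def is_obvious_non_target(p):
    """Knock-out rules: no valid address, opted out, or long dormant."""
    return (not p["email_valid"]) or p["opted_out"] or p["months_since_last_open"] > 24

mailable = [p for p in prospects if not is_obvious_non_target(p)]
print([p["id"] for p in mailable])  # prospects 2 and 3 are suppressed
```

Rules like these are transparent, easy to defend to stakeholders, and do the heavy lifting when a campaign must mail deep into the list anyway.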
Then I firmly warn them to ask about the typical campaign size before they start building fancy models. What is the point of building a response model when the emailer is going to blast emails to as many people as he wants? To prove that the analyst is well-versed in building complex response models? What difference would it make in the "real" world? With that energy, it would be far more prudent to build a series of personas and product affinity models to personalize messages and offers.
Supporting Analytics Results With Marketing
Now, let's pause for a moment and think about the second major reason why the results of analytics are not utilized. Assume that the analytics team developed a series of personas and product affinity models to customize offers on a personal level. Does the marketing team have the ability to display different offers to different targets? Via email, websites, and/or print media? In other words, do they have the capabilities and resources to show a picture of two wine glasses filled with attractive-looking red wine to people who score high in the "Wine Enthusiast" model?
I’ve encountered too many situations where marketers look concerned — rather than getting excited — when talking about personas for personalization. Not because they care about what analysts must go through to produce a series of models, but because they lack creative assets and technical capabilities to make it all happen.
They often complain about a lack of budget to develop multiple versions of creatives, a lack of proper digital asset management tools, a lack of campaign management tools that allow complex versioning, a lack of the ability to serve dynamic content on websites, etc. There is no shortage of reasons why something "cannot" be done.
But, even in a situation like that, it is not the job of a data scientist to suggest increasing investments in various areas, especially when “other” departments have to cough up the money. No one gets to command unlimited resources, and every department has its own priorities. What analytics professionals must do is to figure out all kinds of limitations beyond the little world of analytics, and prioritize the work in terms of actionability.
Consider what can be done with minimal changes to the marketing ecosystem and, for the sake of both the analytics and marketing departments, which efforts will immediately bring tangible results. Basically, what will we be able to brag about in front of CEOs and CFOs?
When to Put Analytics Projects First
Prioritization of analytics projects should never be done solely based on data availability, ease of data crunching or modeling, or “geek” factors. It should be done in terms of potential value of the result, immediate actionability, and most importantly, alignment with overall business objectives.
The fact that only about 20% of analytics work yields business value means that 80% of the work was never even necessary. Sure, data geeks deserve to have some fun once in a while, but the fun factor doesn’t pay for the systems, toolsets, data maintenance, and salaries.
Without proper problem statements on the front end and follow-up actions on the back end, no amount of analytical activity will produce any value for businesses. That is why data and analytics professionals must act as translators between the business world and the technical world. Without that critical consulting layer, prioritizing projects becomes a luck-of-the-draw exercise.
To stay on target, always start with a proper analytics roadmap covering everything from the ideation stage to the application stage. To be valued and appreciated, data scientists must act as business consultants as well.