Marketers Get It Wrong, Too
Nonetheless, the consequences of a wrong prediction are not limited to large, highly publicized cases like this election outcome. Marketers, some of the most casual users of predictive analytics, often pay dearly for erroneous predictions.
For instance, if a billion-dollar company's prediction of unit sales were off by a mere 5 to 6 percent, the error could have dire consequences. The company would be in trouble even if its sales projection were underestimated, as such a conservative estimate would lead to under-production, affecting everyone in the supply chain. Conversely, over-projection may lead to serious inventory problems.
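To put that error margin in perspective, here is a minimal sketch of the arithmetic. The revenue figure and error rates come from the example above; everything else is hypothetical.

```python
# Illustrative only: a 5-6% forecast miss at a billion-dollar company.
annual_sales = 1_000_000_000  # projected annual revenue, USD (from the example)

for error_rate in (0.05, 0.06):
    miss = annual_sales * error_rate
    print(f"A {error_rate:.0%} forecast error amounts to ${miss:,.0f}")
```

A 5 percent miss alone represents $50 million of misallocated production or inventory, which is why even "small" percentage errors trigger serious investigations.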
Often, the predictive analytics business is not just about prediction; it must include all of the investigative work in case the projections are off, even by a few percentage points. When there are decent model algorithms in place (i.e., model-based forecasting with a clear list of predictive variables), marketers will have a good place to start the investigation — variable-by-variable, if necessary.
This is one of the rarely discussed benefits of predictive analytics. Simply maintaining a list of predictors helps decision-makers narrow down the search, even when the prediction turns out to be less than ideal. Conversely, I’ve seen many marketers go on a series of wild goose chases based on gut feelings and guesswork when their rudimentary sales projections fail. Such not-so-scientific investigation methods often lead to blame games without yielding usable answers.
In 1:1 marketing, which is much more personal than mass marketing, marketers must consider the price of wrong predictions at all times. In a simple case of response modeling, marketers are generally happy with a 1 percent response rate, and may be ecstatic with a 2 percent conversion rate in a campaign. But even in such cases, we should never forget that they were wrong for about 98 percent of contacts.
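The economics behind that "wrong for about 98 percent" observation can be sketched in a few lines. The 2 percent conversion rate is from the text above; the contact volume, cost per contact, and revenue per conversion are assumed figures for illustration.

```python
# Illustrative only: hypothetical campaign economics around a 2% conversion rate.
contacts = 100_000            # assumed campaign size
conversion_rate = 0.02        # 2% conversion, per the text
cost_per_contact = 0.50       # assumed cost of each contact, USD
revenue_per_conversion = 40.0 # assumed margin per converted customer, USD

conversions = contacts * conversion_rate          # converted customers
wasted_contacts = contacts - conversions          # the "wrong" predictions
campaign_cost = contacts * cost_per_contact
revenue = conversions * revenue_per_conversion

print(f"Wrong for {wasted_contacts / contacts:.0%} of contacts; "
      f"net result = ${revenue - campaign_cost:,.0f}")
```

Under these assumptions the campaign is still profitable, which is the point: the model only needs to shift the odds enough to cover the cost of all the contacts it got wrong.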
That is why I always consider the price of being wrong when faced with predicting seemingly sensitive subjects or products. For instance, ailment data is notoriously difficult to obtain. And even with the hard data, pushers of marketing campaigns must be really careful, as they may face questions like “Hey, how did you know that I am diabetic?”
That could be the case even when dealing with known, explicit data. Imagine someone actually built a propensity model, such as “Likely to be diabetic,” and ended up using such a prediction to promote medical products. Going further, how about “Likely to suffer from erectile dysfunction”? Users of such sensitive-subject projections — even ones based on sound mathematics — shouldn’t just count on the incremental revenue that predictive modeling would bring them; they must also think about the backlash from the inevitable wrong predictions. Modeling is about increasing the probability of being right, not having a crystal ball that guarantees 100 percent accuracy.
Yes, it is the analyst’s job to minimize the errors using only available datasets. But predictions can go wrong for all kinds of reasons — inadequate input data often being the No. 1 reason for inaccurate outcomes.
Predictions can be wrong, and at times quite wrong, as we’ve just witnessed. That awareness may save users from total shock, even if some disappointment is not completely avoidable.
Nonetheless, we should not give up on predictive analytics altogether, as less-than-perfect estimates are still far better than complete obliviousness. We all get soaked in rain thanks to inaccurate weather forecasts once in a while, but would you ever give up on them entirely?