Election Polls and the Price of Being Wrong 

The thing about predictive analytics is that the quality of a prediction is eventually exposed as clear-cut right or wrong. There are casually wrong outcomes, like a weather report that misses the time the rain will start, and then there are total shockers, like the outcome of the 2016 presidential election.

Marketers Get It Wrong, Too

Nonetheless, the consequences of a wrong prediction are not limited to large, highly publicized cases like this election outcome. Marketers, some of the most casual users of predictive analytics, often pay dearly for erroneous predictions.

For instance, if the prediction of unit sales were off by a mere 5 to 6 percent at a billion-dollar company, the error could have dire consequences. The company would be in trouble even if the sales projection were underestimated, as such a conservative estimate would lead to under-production, affecting everyone in the supply chain. Conversely, over-projection could create serious inventory problems.
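To put that kind of miss in perspective, here is a back-of-the-envelope sketch in Python. The revenue figure and error rate are the illustrative numbers from the scenario above, not actual company data.

```python
# Back-of-the-envelope cost of a forecast miss (illustrative numbers only).
annual_revenue = 1_000_000_000  # a "billion-dollar company"
forecast_error = 0.05           # a "mere" 5 percent miss

misallocated_revenue = annual_revenue * forecast_error
print(f"Revenue tied to the miss: ${misallocated_revenue:,.0f}")
# -> Revenue tied to the miss: $50,000,000
# Under-projection: ~$50MM of demand the supply chain cannot fill.
# Over-projection: ~$50MM of product sitting in warehouses.
```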

Often, the predictive analytics business is not just about prediction; it must also include the investigative work that follows when projections are off, even by a few percentage points. When there are decent model algorithms in place (i.e., model-based forecasting with a clear list of predictive variables), marketers have a good place to start the investigation: variable by variable, if necessary.
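As a minimal sketch of what such a setup looks like, consider a forecast built on an explicit predictor list. The predictor names and data below are hypothetical stand-ins; the point is simply that a named list of variables gives the post-mortem a concrete starting place.

```python
# A minimal sketch of model-based forecasting with an explicit predictor list.
# Predictor names and training data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

predictors = ["ad_spend", "avg_price", "distribution_points", "seasonality_index"]

rng = np.random.default_rng(0)
X_train = rng.random((200, len(predictors)))  # stand-in for historical inputs
y_train = rng.random(200) * 1_000             # stand-in for unit sales

model = LinearRegression().fit(X_train, y_train)

# When a projection misses, the investigation can start here:
# which inputs drifted from their historical ranges, and which
# carry the most weight in the forecast?
for name, coef in zip(predictors, model.coef_):
    print(f"{name:>20}: coefficient {coef:+.2f}")
```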

This is one of the rarely discussed benefits of predictive analytics. Simply maintaining a list of predictors helps decision-makers narrow down the search, even when the prediction turns out to be less than ideal. Conversely, I’ve seen many marketers go on a series of wild-goose chases based on gut feelings and guesswork when their rudimentary sales projections fail. Such not-so-scientific investigation methods often lead to blame games without yielding usable answers.

In 1:1 marketing, which is far more personal than mass marketing, marketers must consider the price of wrong predictions at all times. In a simple case of response modeling, marketers are generally happy with a 1 percent response rate, and may be ecstatic with a 2 percent conversion rate in a campaign. But even in such cases, we should never forget that the prediction was wrong for about 98 percent of contacts.
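The arithmetic behind that "98 percent wrong" point is worth spelling out; here is a quick sketch with made-up campaign volumes.

```python
# The flip side of a "good" direct-marketing campaign (made-up volumes).
contacts = 1_000_000
response_rate = 0.02  # a conversion rate marketers may be ecstatic about

responders = int(contacts * response_rate)
non_responders = contacts - responders
print(f"Right about {responders:,} contacts; wrong about {non_responders:,}.")
# -> Right about 20,000 contacts; wrong about 980,000.
```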

That is why I always consider the price of being wrong when predicting seemingly sensitive subjects or products. For instance, ailment data is notoriously difficult to obtain. And even with hard data in hand, pushers of marketing campaigns must be very careful, as they may face questions like “Hey, how did you know that I am diabetic?”

That could be the case even when dealing with known, explicit data. Imagine someone actually built a propensity model, such as “Likely to be diabetic,” and ended up using such a prediction to promote medical products. Going further, how about “Likely to suffer from erectile dysfunction”? Users of such a sensitive-subject projection, even one based on sound mathematics, shouldn’t just count on the incremental revenue that predictive modeling would bring; they must also think about the backlash from the inevitable wrong predictions. Modeling is about increasing the probability of being right, not wielding a crystal ball that guarantees 100 percent accuracy.

Yes, it is the analyst’s job to minimize errors using the available datasets. But predictions can go wrong for all kinds of reasons, inadequate input data often being the No. 1 cause of inaccurate outcomes.

Predictions can be wrong; at times, quite wrong, as we’ve just witnessed. That awareness may save users from total shock, even if some disappointment cannot be completely avoided.

Nonetheless, we should not give up on predictive analytics altogether, as less-than-perfect estimates are still far better than complete obliviousness. We all get soaked in the rain thanks to an inaccurate weather forecast once in a while, but would you ever give up on forecasts entirely?

Author: Stephen H. Yu

Stephen H. Yu is a world-class database marketer. He has a proven track record in comprehensive strategic planning and tactical execution, effectively bridging the gap between the marketing and technology world with a balanced view obtained from more than 30 years of experience in best practices of database marketing. Currently, Yu is president and chief consultant at Willow Data Strategy. Previously, he was the head of analytics and insights at eClerx, and VP, Data Strategy & Analytics at Infogroup. Prior to that, Yu was the founding CTO of I-Behavior Inc., which pioneered the use of SKU-level behavioral data. “As a long-time data player with plenty of battle experiences, I would like to share my thoughts and knowledge that I obtained from being a bridge person between the marketing world and the technology world. In the end, data and analytics are just tools for decision-makers; let’s think about what we should be (or shouldn’t be) doing with them first. And the tools must be wielded properly to meet the goals, so let me share some useful tricks in database design, data refinement process and analytics.” Reach him at stephen.yu@willowdatastrategy.com.

9 thoughts on “Election Polls and the Price of Being Wrong”

  1. Great overview explaining the many reasons why pollsters were unable to call this election accurately, not the least of which is that while Trump won the electoral vote, he lost the popular vote, so the majority of the pollsters didn’t get everything wrong.

    With regard to Nate Silver and his team at FiveThirtyEight, who achieved “guru” status by predicting the 2012 election outcome perfectly, your comment about his boldly posting a confidence level of 70% before the election is off-base. A 70% confidence level, essentially a one-standard-deviation event, means that the outcome is expected to occur roughly 2/3 of the time, with the opposite outcome occurring the other 1/3 of the time. Accordingly, a 70% confidence level is indicative of a prediction that’s not particularly certain.

    When I ran the MIT Blackjack Team, the minimum confidence level we would accept for each venture before raising funds from investors was a two-standard-deviation event (i.e., a 95% confidence level). In other words, we projected our return on investment to investors while allowing for a break-even outcome 5% or less of the time. Anyone with a background in statistics knows that one-standard-deviation events happen all of the time (i.e., 1/3 of the time) and that two-standard-deviation events happen about 1 in 20 times, which is still too often to ever feel safe.
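    Those coverage figures are easy to verify numerically; here is a quick sketch assuming a standard normal distribution (the snippet is illustrative and not part of the betting math described above).

    ```python
    # Coverage of k standard deviations under a standard normal distribution.
    from scipy.stats import norm

    for k in (1, 2):
        coverage = norm.cdf(k) - norm.cdf(-k)
        print(f"Within {k} standard deviation(s): {coverage:.1%}")
    # Within 1 standard deviation(s): 68.3%  (the roughly-2/3 figure)
    # Within 2 standard deviation(s): 95.4%  (the 95% threshold)
    ```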

    Yes, Nate Silver and his team at FiveThirtyEight, along with almost every other pollster, got this election wrong. But at least FiveThirtyEight knew their prediction had a nearly 1/3 chance of being wrong.

    1. It is an honor to get to have an online chat with someone who ran the Blackjack Team! As for the popular vote, yes, I hear that Hillary is now winning by over 2MM votes, but I still stand by my statement about a “colossal failure,” as all the pollsters, including Mr. Silver, posted projections on a state level. And regarding that 70% (which he changed to 68% the day before the election, coincidentally much closer to 68.2%, or 1 standard deviation), I still think that prediction is considered “way off,” looking at the results. And I’m sure most of his followers knew that FiveThirtyEight had a 1/3 chance of being wrong, too. But I guess wishful thinking got the better of most of us?

      1. If a few pollsters were wrong and a few were right, then we’d consider this a near miss. Individually, Nate Silver was forced to explain his work that way, but hey, he has a business to run, and there are times you just can’t say you made a mistake (I mean, I would not want to be in his shoes). At the same time, all these folks must already be scheming and budgeting to over-sample rural counties with low median years of schooling so this “fail” won’t happen again. It seems there is a whole new cast of swing states screaming out for phone surveys!!! Marketing campaigns and products bomb. The captains sailing Hillary’s ship fed completely wrong coordinates into their GPS. The message was wrong, and so was the target. They’ll need to dry-dock their boat for a total overhaul. Let’s hope they do it quickly and reliably!!! Thanks, Stephen

  2. Another excellent piece, Stephen, and another reminder that when the ‘image’ advertisers bombard us with ‘research,’ both quantitative and qualitative, we would do well not to take it too literally or seriously, and to keep more than a grain of salt at the ready.

    Measurable response with quantifiable data beats ‘research’ every time. We should be glad we are in the accountable marketing business.

    1. Thanks! I remember Lester Wunderman’s quote about “measurement” being the first qualification for being a good direct marketer.

      1. If Lester wasn’t always right, he certainly was right most of the time. He named my book ‘Accountable Marketing’ because, as he said simply, that’s what we do.

        1. Indeed! That is such a good way to put it: “Accountable Marketing.” I remember him talking about measuring the number of steps he climbed as a delivery person and the tip amounts he received. He was “measuring” everything even back then! The interesting conclusion was that the correlation looked like a reverse bell curve, thanks to the income factor for the top-floor dwellers. It is amazing that he did all that in his head.

  3. Sorry for the late comment, BUT we’ve all had some time to think about this. The polling failure might lie in how the questions were asked. My gut feeling is that many Trump supporters didn’t share their true feelings with pollsters. Why? Because the MSM made it clear Trump was undesirable, or at best completely unfit for office. These voters didn’t care what nice people in suits say on TV; they wanted a change. There’s also data indicating many were voting for the first time in their lives, so that group may not be in polling databases.
