Prescriptive Analytics at All Stages

I don’t know who or which analytics company started it, but we often hear that “prescriptive analytics” is at the top of the analytical food chain. I’m sure anyone who is even remotely involved in marketing analytics has seen that pyramid chart, where BI and dashboard reports sit at the bottom and prescriptive analytics sits at the very top.

Creating a meaningful set of reports out of datasets, which are often unstructured, unorganized and unclean, is already a monumental task. Heck, just merging disparate datasets into one place is challenging enough, requiring political savviness on top of data manipulation skills. Knowing what to report based on business requirements is not some simple skill one learns in statistics courses, either. Deriving insights out of a few-hundred-page report and presenting it on “1” page with fewer than five bullet points? That really is a special talent. But, after all that, analysts would still have the “next steps” page to fill out.

In the interest of breaking down the process of deriving insights out of data or large reports, allow me to share this five-step approach. Data hygiene and consolidation steps are skipped here, as they were discussed in detail in this series already (refer to “Is Your Database Marketing Ready?” and “It’s All About Ranking”).

1. Create Small and Concise Versions of Large Reports based on business goals. If multiple teams are subscribing to the reports, develop separate views for each constituent. Any report with a seemingly excessive number of pages or columns should be revisited. There simply is no easy way to derive any insight out of a 70-column report with 700 rows; we must summarize it intelligently first. Such long reports are more like a data dictionary than a report. It is not impossible to profile regions by examining their phone books line by line, but that doesn’t make it a good practice. A minimal summarization sketch follows below.
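
To make this step concrete, here is a minimal pandas sketch of collapsing a wide, row-heavy report into a one-page, goal-oriented view. The file name, the column names and the “channel” grouping are all hypothetical; substitute whatever your own report and business goal dictate.

```python
# A minimal sketch of Step 1, assuming a hypothetical wide campaign report;
# the file name, column names and the "channel" grouping are made up for illustration.
import pandas as pd

report = pd.read_csv("campaign_report.csv")  # e.g., 700 rows x 70 columns

# Keep only the measures that map to the stated business goal (here, conversion
# efficiency), then roll everything up into one view per stakeholder group.
exec_view = (
    report
    .groupby("channel", as_index=False)
    .agg(
        spend=("spend", "sum"),
        conversions=("conversions", "sum"),
        revenue=("revenue", "sum"),
    )
    .assign(cost_per_conversion=lambda d: d["spend"] / d["conversions"])
    .sort_values("revenue", ascending=False)
)

print(exec_view)  # a one-page view instead of the full 70-column dump
```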

2. Reflecting on the business challenges, not just data or technical challenges, Develop New Metrics that were not previously considered, in forms such as (but not limited to):

    • Index Values: Against the previous year or any given time period, other divisions, channels, overall business or even industry metrics (if available). Any number on the report should be translated into “good, bad or ugly.” If the clickthrough rate went down by 2.6 percent compared to the last period (whatever that may be), is that bad? If so, how bad? Create index values for critical measurements against comparison groups (figuring that out is part of analytics), and highlight any value of more than 120 or less than 80. That will serve as a good starting point.
    • Average Values: Not just total counts and dollars, but average amounts and “per x” values. Some segments are naturally “big,” but average values, such as $ per customer, $ per conversion, $ per click, etc., shed a totally different light. If the figures turn out to be too small, multiply them by 1,000, such as “$ per 1,000 emails,” “$ per 1,000 clicks,” etc. Even large revenue segments may reveal “value” or “conversion rate” issues. (A minimal calculation sketch for index and per-1,000 values follows this list.)
    • Create New Dimensions on the Report. Consider divisions, regions, different types of time periods and other major components first. Conversely, if the original reports contain more than two combinations of complex dimensions, summarize them (i.e., perform single-variable analysis). End-users, and even analysts, often have a hard time understanding more than two dimensions at a time. To list a few more factors, consider List/Data Source, Campaign Code/Type, Time Period, Model Group, Offer, Creative, Targeting Criteria, Channel (inbound and outbound), Ad Server, Publisher/Domain, Keyword, Script, Day-part, etc. But do not go crazy combining them all, as the resultant cells may not yield any statistically significant figures.
    • Data Beyond the Original Starting Point. Take typical digital-domain data, for example: the customer journey is never complete at open, click and conversion. Marketers must close the loop and measure channel and overall marketing effectiveness by counting “repeat” purchases (or churns) at the customer level. Continuously search for more data and find ways to connect them all, even at a summary level. Marketing should never be channel-driven, but most marketing organizations are. We should break that chain and rise above such limitations.
    • Consider Third-party Data and Other Vendor Data (e.g., campaign history data), even if they may not be complete and filled with missing values. Different types of data, when used together, often reveal patterns previously undetected.
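
Here is a minimal sketch of how the index and “per 1,000” values above might be computed. The channels, the figures and the 80/120 thresholds are invented purely for illustration; the comparison group is a prior period, but it could just as well be another division, channel or industry benchmark.

```python
# A minimal sketch of index values and "per 1,000" values from Step 2;
# the channels, figures and the 80/120 thresholds are invented for illustration.
import pandas as pd

current = pd.DataFrame({
    "channel": ["email", "search", "display"],
    "revenue": [120_000, 95_000, 40_000],
    "emails_sent": [1_500_000, 0, 0],
})
prior = pd.DataFrame({
    "channel": ["email", "search", "display"],
    "revenue": [100_000, 110_000, 42_000],
})

merged = current.merge(prior, on="channel", suffixes=("", "_prior"))

# Index value: current vs. the comparison period, scaled so 100 means "no change".
merged["revenue_index"] = (merged["revenue"] / merged["revenue_prior"] * 100).round(1)

# Highlight anything over 120 or under 80 as a starting point for investigation.
merged["flag"] = merged["revenue_index"].apply(
    lambda x: "review" if x > 120 or x < 80 else ""
)

# "Per 1,000" values keep small ratios readable (e.g., $ per 1,000 emails).
merged["rev_per_1k_emails"] = (
    merged["revenue"] / merged["emails_sent"].replace(0, float("nan")) * 1_000
)

print(merged[["channel", "revenue_index", "flag", "rev_per_1k_emails"]])
```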

3. Make the Reports Look Pretty. Create charts and graphs, but only if it helps in telling a story.

    • Every Chart Must Tell a Story. Meaningless “pretty” charts are worse than not having any at all. That is why we should not even consider creating graphs before completing Steps 1 and 2 here. No problem will be solved by graphics packages alone, no matter how mighty they are in displaying slick charts. Knowing what to show, and in what format, is much more difficult than wielding graphics packages. Hence, this stepwise process.
    • Create Complex Variables ‘Before’ Getting to the Graphics Software. It forces us to think about more variables and metrics to display, and most graphics packages are not designed for heavy calculation and variable creation anyway. In other words, storyboard it first, like movie-makers do before they actually shoot scenes. (A minimal sketch of this pre-computation step follows this list.)
    • Do Not ‘Lie’ With Graphics, as explained by Edward Tufte in his bestseller “The Visual Display of Quantitative Information.”
    • Make It Action-driven by displaying where the ideal positions should be. It must be easy for end-users to see, “Ah, we need to be there, but we are here now.”
    • Keep It Simple, as much as possible. Bubble charts with five dimensions may be helpful for advanced users, but such things are not for everyone’s daily consumption, though they may look darn impressive at first glance.
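
As an illustration of computing the variables first and then making the chart action-driven, here is a minimal matplotlib sketch. The segments, the conversion figures and the 3.0 percent target line are all hypothetical.

```python
# A minimal sketch of Step 3: derive the metric *before* touching the graphics
# layer, then make the chart action-driven with a target line. The segments,
# figures and the 3.0 percent target are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "segment": ["New", "Repeat", "Lapsed"],
    "conversions": [450, 900, 120],
    "visits": [30_000, 20_000, 12_000],
})

# The "storyboard" step: the conversion-rate variable is created here,
# not inside the charting tool.
df["conv_rate_pct"] = df["conversions"] / df["visits"] * 100
target_pct = 3.0  # hypothetical goal

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(df["segment"], df["conv_rate_pct"], color="steelblue")
ax.axhline(target_pct, linestyle="--", color="firebrick",
           label=f"Target: {target_pct}%")
ax.set_ylabel("Conversion rate (%)")
ax.set_title("Where we are vs. where we need to be")
ax.legend()
fig.tight_layout()
plt.show()
```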

4. Create Stories. Good insights are in the form of good stories.

    • Never State the Obvious. Anyone can read the numbers and percentage values off of the chart. Never write English sentences out of one-dimensional figures and call them “analytics,” as they are not. In fact, that is exactly how analysts get a bad name.
    • Find ‘Business’ Challenges. Imagine analysts are the doctors who must diagnose patients who cannot elaborate on what they are suffering from. For example, do we as a marketing organization have (again, the list is not limited to this):

        • Conversion problems? Overall or channel-specific?
        • Average customer value going down, or fluctuating erratically?
        • Too sensitive to seasonality? Any patterns behind it? Are customers conditioned to a predictable sales cycle?
        • Too many one-time buyers with no repeat purchase?
        • Multiple segments being treated the same way?
        • How are the segments defined, anyway? Statistically, or rule-based? Any disproportionate segment sizes found (as they are sources of reporting biases)?
        • Not meeting goals? In terms of clicks, opens, abandonment, conversions, repeats, total dollars, average values or what else? Each will tell a different story.
        • How were the goals set? With what type of forecasting techniques? With what set of variables? Have those variables been stable? Sometimes we must question the goals themselves.
        • Any changes in the business model or channel strategy, big or small, in any part of the organization? Or in the websites, or the way the email campaigns were managed? Are we seeing the results of marketers’ actions, or customers’ reactions to them?
        • Is the problem persistent, or limited to specific periods? Should we ignore known peak periods to create a level playing field?
        • Any regional biases? At what geographic level?
        • If samples are used for analysis, how were they created? Surely, taking the first 10,000 records out of a data table wouldn’t qualify as scientific sampling. (A minimal sampling sketch follows this list.)

    • Make Connections that are not obvious among multiple reports. There is no such thing as an isolated problem. Dig deep as an investigator.
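
On the sampling question above, here is a minimal pandas sketch of the difference between grabbing the first rows of a table and drawing a random (or stratified) sample. The file name, the “segment” column and the sample sizes are assumptions for illustration only.

```python
# A minimal sketch contrasting "first 10,000 rows" with proper random and
# stratified samples; the file, the "segment" column and the sample sizes
# are assumptions for illustration only.
import pandas as pd

customers = pd.read_csv("customer_table.csv")  # hypothetical source table

# Not scientific: the top of a table often carries load-order, region or
# time-period biases.
naive_sample = customers.head(10_000)

# Better: a simple random sample, with a fixed seed so it is reproducible.
random_sample = customers.sample(n=10_000, random_state=42)

# Better still when segment sizes are disproportionate: sample within each
# segment so small groups are represented (a basic stratified draw).
stratified_sample = (
    customers
    .groupby("segment", group_keys=False)
    .apply(lambda g: g.sample(frac=0.05, random_state=42))
)
```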

Author: Stephen H. Yu

Stephen H. Yu is a world-class database marketer. He has a proven track record in comprehensive strategic planning and tactical execution, effectively bridging the gap between the marketing and technology world with a balanced view obtained from more than 30 years of experience in best practices of database marketing. Currently, Yu is president and chief consultant at Willow Data Strategy. Previously, he was the head of analytics and insights at eClerx, and VP, Data Strategy & Analytics at Infogroup. Prior to that, Yu was the founding CTO of I-Behavior Inc., which pioneered the use of SKU-level behavioral data. “As a long-time data player with plenty of battle experiences, I would like to share my thoughts and knowledge that I obtained from being a bridge person between the marketing world and the technology world. In the end, data and analytics are just tools for decision-makers; let’s think about what we should be (or shouldn’t be) doing with them first. And the tools must be wielded properly to meet the goals, so let me share some useful tricks in database design, data refinement process and analytics.” Reach him at stephen.yu@willowdatastrategy.com.
