Data Analytics Projects Only Benefit Marketers When Properly Applied

A recent report shared that only about 20% of all analytics work turns out to be beneficial to businesses. Such waste. Nonetheless, is that solely the fault of data scientists? After all, even effective medicine is useless if the patient refuses to take it.

Then again, why would users reject the results of analytics work? At the risk of gross simplification, allow me to break it down into two categories: Cases where project goals do not align with the business goals, and others where good intelligence gets wasted due to lack of capability, procedure, or will to implement follow-up actions. Basically, poor planning in the beginning, and poor execution at the backend.

Results of analytics projects often get ignored if the project goal doesn’t serve the general strategy or specific needs of the business. To put it in a different way, projects stemming from the analyst’s intellectual curiosity may or may not align with business interests. Some math geek may be fascinated by the elegance of mathematical precision or complexity of solutions, but such intrigue rarely translates directly into monetization of data assets.

In business, faster and simpler answers are far more actionable and valuable. If I ask business people whether they want an answer with an 80% confidence level in the next 2 days, or an answer with 95% certainty in 4 weeks, the great majority would choose the quicker but less-than-perfect answer. Why? Because the keyword in all this is “actionable,” not “certainty.”
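One way to make that trade-off concrete (a back-of-the-envelope sketch, not a figure from the article): under the usual normal approximation, the sample size needed to estimate a response rate grows with the square of the z-score, so moving from 80% to 95% confidence at the same margin of error more than doubles the data required.

```python
import math

def sample_size(z: float, margin: float, p: float = 0.5) -> int:
    """Approximate sample size for estimating a proportion p
    within +/- margin, at the given z-score (normal approximation)."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# z is about 1.282 for 80% confidence and 1.960 for 95%
n_80 = sample_size(1.282, 0.05)  # the quick, rough answer
n_95 = sample_size(1.960, 0.05)  # the slower, tighter answer
print(n_80, n_95)
```

The tighter answer needs well over twice the sample, which in practice often means the extra weeks of data collection the business cannot wait for.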

Analysts who would like to maintain a distance from immediate business needs should instead pursue pure science in the world of academia (a noble cause, without a doubt). In business settings, however, we play with data only to make tangible differences, as in dollars, cents, minutes or seconds. Once such differences in philosophy are accepted and understood by all involved parties, then the real question is: What kind of answers are most needed to improve business results?

Setting Analytics Projects Up for Success

Defining the problem statement is the hardest part for many analysts. Even the ones who are well-trained often struggle with the goal setting process. Why? Because in school, the professor in charge provides the problems to solve, and students submit solutions to them.

In business, analysts must understand the intentions of decision makers (i.e., their clients), deciphering not-so-logical general statements and anecdotes. Yeah, sure, we need to attract more high-value customers, but how would we express such value via mathematical statements? What would the end result look like, and how will it be deployed to make any difference in the end?

If unchecked, many analytics projects move forward purely based on the analysts’ assumptions, or worse, procedural convenience factors. For example, if the goal of the project is to rank a customer list in the order of responsiveness to certain product offers, then to build models like that, one may employ all kinds of transactional, behavioral, response, and demographic data.

All these data types come with different strengths and weaknesses, and even different missing-data ratios. In cases like this, I’ve encountered many — too many — analysts who would just omit the whole population with missing demographic data in the development universe. Sometimes such omission adds up to be over 30% of the whole. What, are we never going to reach out to those souls just because they lack some peripheral data points?

Good luck convincing the stakeholders who want to use the entire list for various channel promotions. “Sorry, we can provide model scores for only 70% of your valuable list,” is not going to cut it.
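A minimal sketch of the alternative to dropping those records, in plain Python (the field name and default value are hypothetical): keep every record, fill the gap with a neutral value, and add an explicit missing-data flag so the model can treat missingness itself as a signal.

```python
def impute_with_flag(records, field, default):
    """Keep every record: fill missing values of `field` with a default
    and add a companion flag, so a model can still learn from
    'missingness' instead of dropping 30% of the list."""
    out = []
    for r in records:
        r = dict(r)  # don't mutate the caller's data
        r[field + "_missing"] = 1 if r.get(field) is None else 0
        if r.get(field) is None:
            r[field] = default
        out.append(r)
    return out

customers = [
    {"id": 1, "age": 34},
    {"id": 2, "age": None},  # would have been dropped
    {"id": 3},               # field absent entirely
]
scored = impute_with_flag(customers, "age", default=40)
print(scored)
```

With this approach, all three customers stay in the development universe, and the stakeholders get scores for 100% of their list.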

More than a few times, I received questions about what analysts should do when they have to reach deep into lower model groups (of response models) to meet the demand of marketers, knowing that the bottom half won’t perform well. My response would be to forget about the model — no matter how elegant it may be — and develop heuristic rules to eliminate obvious non-targets in the prospect universe. If the model gets to be used, it is almost certain that the modeler in charge will be blamed for mediocre or bad performance, anyway.
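Such heuristic suppression rules can be as simple as a list of predicates applied to the prospect universe. A toy sketch (the rules and field names here are invented for illustration):

```python
# Hypothetical suppression rules: each returns True for an obvious non-target.
RULES = [
    lambda p: p.get("opted_out", False),
    lambda p: p.get("months_since_last_activity", 0) > 36,
    lambda p: p.get("hard_bounces", 0) >= 3,
]

def eliminate_non_targets(prospects):
    """Drop any prospect matching a suppression rule; keep the rest."""
    return [p for p in prospects if not any(rule(p) for rule in RULES)]

universe = [
    {"id": 1, "months_since_last_activity": 2},
    {"id": 2, "opted_out": True},
    {"id": 3, "hard_bounces": 5},
]
print(eliminate_non_targets(universe))  # only id 1 survives
```

Rules like these are transparent, cheap to maintain, and nobody blames a model when the bottom of the list underperforms.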

Then I firmly warn them to ask about the typical campaign size “before” they start building some fancy models. What is the point of building a response model when the emailer would blast emails as much as he wants? To prove that the analyst is well-versed in building complex response models? What difference would it ever make in the “real” world? With that energy, it would be far more prudent to build a series of personas and product affinity models to personalize messages and offers.

Supporting Analytics Results With Marketing

Now, let’s pause for a moment and think about the second major reason why the results of analytics are not utilized. Assume that the analytics team developed a series of personas and product affinity models to customize offers on a personal level. Does the marketing team have the ability to display different offers to different targets? Via email, websites, and/or print media? In other words, do they have capabilities and resources to show “a picture of two wine glasses filled with attractive looking red wine” to people who scored high scores in the “Wine Enthusiast” model?
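Mechanically, the targeting half of that question is a simple lookup from the winning persona score to a creative asset, assuming the assets exist. A toy sketch with hypothetical persona names and file names:

```python
# Hypothetical creative assets keyed by persona.
CREATIVES = {
    "wine_enthusiast": "two-glasses-red-wine.jpg",
    "home_chef": "cast-iron-skillet.jpg",
    "default": "generic-brand-banner.jpg",
}

def pick_creative(persona_scores):
    """Serve the creative for the highest-scoring persona,
    falling back to a generic asset below a minimum score."""
    persona, score = max(persona_scores.items(), key=lambda kv: kv[1])
    return CREATIVES[persona] if score >= 0.5 else CREATIVES["default"]

print(pick_creative({"wine_enthusiast": 0.91, "home_chef": 0.40}))
print(pick_creative({"wine_enthusiast": 0.20, "home_chef": 0.15}))
```

The hard part, as the next paragraphs argue, is not this lookup; it is having the assets and the delivery capability behind it.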

I’ve encountered too many situations where marketers look concerned — rather than getting excited — when talking about personas for personalization. Not because they care about what analysts must go through to produce a series of models, but because they lack creative assets and technical capabilities to make it all happen.

They often complain about lack of budget to develop multiple versions of creatives, lack of proper digital asset management tools, lack of campaign management tools that allow complex versioning, lack of ability to serve dynamic content on websites, etc. There is no shortage of reasons why something “cannot” be done.

But, even in a situation like that, it is not the job of a data scientist to suggest increasing investments in various areas, especially when “other” departments have to cough up the money. No one gets to command unlimited resources, and every department has its own priorities. What analytics professionals must do is to figure out all kinds of limitations beyond the little world of analytics, and prioritize the work in terms of actionability.

Consider what can be done with minimal changes to the marketing ecosystem, and, for the sake of both the analytics and marketing departments, which efforts will immediately bring tangible results. Basically, what will we be able to brag about in front of CEOs and CFOs?

When to Put Analytics Projects First

Prioritization of analytics projects should never be done solely based on data availability, ease of data crunching or modeling, or “geek” factors. It should be done in terms of potential value of the result, immediate actionability, and most importantly, alignment with overall business objectives.

The fact that only about 20% of analytics work yields business value means that 80% of the work was never even necessary. Sure, data geeks deserve to have some fun once in a while, but the fun factor doesn’t pay for the systems, toolsets, data maintenance, and salaries.

Without proper problem statements on the front-end and follow-up actions on the back-end, no amount of analytical activity will produce value for businesses. That is why data and analytics professionals must act as translators between the business world and the technical world. Without that critical consulting layer, prioritizing projects becomes the luck of the draw.

To stay on target, always start with a proper analytics roadmap covering everything from ideation to application. To be valued and appreciated, data scientists must act as business consultants, as well.

 

3 Google Analytics Tips for E-Commerce

There’s a lot more to Google Analytics than looking at basic traffic metrics. These tips will help you make improvements to drive more e-commerce sales from your different marketing channels. 

Many businesses using Google Analytics are only scratching the surface of what Google Analytics can do. By not taking advantage of the platform’s more powerful features, they lose out on getting a lot of valuable insights about their marketing and how to make the most of their budgets.

Covering every aspect of Google Analytics would require an e-book. So in this article, I’ll walk through three steps to get you started and more familiar with Google Analytics.

1. Base Your Website Objectives on Specific Business Needs

You can use Google Analytics to measure how well your website performs in helping you hit your company’s target KPIs. Do not rely on the defaults set up in Google Analytics. Those are meant to cover a broad range of companies, and some of them are not applicable to your business needs.

Instead, take the time to define the important KPIs that your website should be hitting. For example, in addition to online sales, is your goal to generate quote requests for larger/bulk orders? Is another goal to collect email addresses by offering a free report? Where do visitors need to go on your website if they are interested in your products or services?

As you think through these goals, you’ll start to identify conversions that you need to set up in the Google Analytics admin area. This is a critical step that will allow you to monitor the performance of all of your different marketing channels. For example, if your goal is to generate quote requests, then you’ll need to set up a conversion to measure quote requests. Once that’s done, you’ll be able to run reports to see how many quote requests were generated from SEO vs. Google Ads vs. Facebook, or any other marketing channel you’re using.
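Conceptually, that per-channel report is just a count of conversion events grouped by traffic source. A sketch with made-up event records (this is not the Google Analytics API, just the underlying idea):

```python
from collections import Counter

# Hypothetical event log: each hit tagged with its traffic source.
events = [
    {"source": "google / organic", "event": "quote_request"},
    {"source": "google / cpc",     "event": "quote_request"},
    {"source": "facebook / paid",  "event": "page_view"},
    {"source": "google / organic", "event": "quote_request"},
]

# Count quote-request conversions per traffic source.
conversions = Counter(
    e["source"] for e in events if e["event"] == "quote_request"
)
print(conversions.most_common())
```

Once the conversion is defined in the admin area, Google Analytics produces this grouping for you; the sketch just shows what the report is computing.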

We also recommend using the audience reporting views to see if your website visitors are actually your ideal customers. You can create customized segments for tracking important demographic points, like age, gender, and location.

Reviewing the information on your visitors may give you more perspective. Maybe your company needs to change its marketing strategy or website layout to resonate more with your target market.

2. Use E-Commerce Tracking

Google Analytics offers a feature called Enhanced E-Commerce. You should see it when setting up your Google Analytics account. Here are a few ways you can use the feature to get a better understanding of the customer journey through your website and shopping portal.

  1. You can track the shopping and checkout behavior of each visitor to your site. That includes product page-views, shopping cart additions and removals, abandoned items, and completed transactions.
  2. You can view metrics, like revenue generated, average transaction quantity, conversion rates for specific products, and how quickly products get added to a shopping cart. You can see at what point a customer loses interest in the shopping experience. That lets you focus on tactics that keep them engaged and encourage them to complete a purchase.
  3. You can measure the success of various internal and external marketing efforts meant to encourage shopping and checkouts by visitors. For example, you can see whether a new product banner increased conversion rates.

The various reports give you a clear view of the path customers take as they shop on your website.
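The shopping and checkout metrics above reduce to simple funnel arithmetic. A toy version over invented session data:

```python
# Hypothetical sessions: did the visitor view a product, add to cart, buy?
sessions = [
    {"viewed": True, "added_to_cart": True,  "purchased": True},
    {"viewed": True, "added_to_cart": True,  "purchased": False},
    {"viewed": True, "added_to_cart": False, "purchased": False},
    {"viewed": True, "added_to_cart": True,  "purchased": False},
]

carts = sum(s["added_to_cart"] for s in sessions)
purchases = sum(s["purchased"] for s in sessions)

# Where does interest drop off? The cart abandonment rate:
abandonment = (carts - purchases) / carts
print(f"cart abandonment: {abandonment:.0%}")
```

Enhanced E-Commerce computes these rates at each funnel stage automatically; the value is in knowing which stage is leaking.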

3. Sync Google Analytics With Your E-Commerce Platform

Many e-commerce platforms, like Shopify, have the ability to quickly sync with Google Analytics. This can save you and your team a lot of time and frustration trying to set everything up manually.

For example, the e-commerce analytics reporting mentioned above requires knowledge of JavaScript if you want to set it up yourself. Always check with the support team for your e-commerce platform to see if they have already synced up with Google Analytics. If they have, then you could be set up in a matter of minutes.

Look Beyond Surface Data

There’s a lot more to Google Analytics than looking at basic traffic metrics. These tips should allow you to gain a better understanding of where you can make improvements to drive more e-commerce sales from your different marketing channels.

  • First, identify your business goals and set up conversions in the Google Analytics admin area.
  • Second, set up enhanced e-commerce analytics either manually or by syncing your e-commerce platform with Google Analytics.
  • And third, review all the e-commerce reports to see which marketing channels can be improved to increase your sales.

Want more tips on how to use Google Analytics? Click here to grab a copy of our “Ultimate Google Analytics Checklist.”

 

Here’s a Website Performance Checklist to Kick 2020 Off Right

Reviewing your website’s security practices, privacy policies, accessibility, and analytics can help improve performance over the course of the year.

No need to abandon all hope if your New Year’s resolutions have already fallen by the wayside. You can still pledge to get the most from your website in 2020. This website performance checklist can help.

None of these topics are particularly sexy. Nor are they likely to have the kind of top-line impact (read: massive increases in revenue) that lead to promotions and bonuses. But they can save you a ton of pain and regret throughout the year. And without a doubt, they will make those revenue-spiking initiatives that much more successful.

Security Review

Having your domain blacklisted is nobody’s idea of fun. Because there’s no “Undo” button, once you’re in trouble, it’s time-consuming to get out. So, it is well worth reviewing your site’s security to ensure that no evil lurks in the heart of your coding.

Check your traffic logs and firewall settings to make sure you’re still keeping as much malicious activity off your site as possible.

If your site is custom coded, confirm with your developers that the code base is being updated regularly to guard against malware and other attacks. (Even fully customized sites generally rely on code libraries or frameworks that can be the target of attacks.)

If you use a commercial CMS, do a similar check with the vendor. It can be helpful to also do a web search for “[my CMS name] vulnerabilities” and other phrases to find reports of attacks.

An open-source CMS requires a similar review:

  • Do you have the most recent version installed?
  • Are all of the plugins, modules, widgets, and other helper programs up to date?

In all of these cases, you should be on a regularly scheduled maintenance plan with your development team. Now is the time to make sure you have the most appropriate level of protection.

Don’t forget the basics. A quick review is all that should be required to make sure that your registrar and hosting accounts are secure and your domain name and SSL certificate are in order and not at risk of cancellation. If you host internally, review server access to eliminate the chance of former employees making mischief.
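As a small example of automating one of these basics: Python’s standard ssl module reports a certificate’s expiry as a notAfter string, which can be turned into a days-remaining check (a sketch; the date shown is made up):

```python
from datetime import datetime

def cert_days_left(not_after: str, today: datetime) -> int:
    """Days until an SSL certificate expires, given the 'notAfter'
    string in the format Python's ssl module reports in
    getpeercert(), e.g. 'Jun  1 12:00:00 2020 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - today).days

today = datetime(2020, 1, 15)
days = cert_days_left("Jun  1 12:00:00 2020 GMT", today)
print(days, "days left")
```

A scheduled check like this, alerting when the count drops below 30, is cheap insurance against a lapsed certificate taking the site down.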

Privacy Review

If GDPR and CCPA sound like alphabet soup to you, it’s definitely time to review your site’s privacy policy and practices like data retention. This is now true even for non-transactional sites. GDPR may apply only to those of us who work with E.U. residents, but CCPA applies to most firms that interact with California residents. And New York’s SHIELD Act applies to every firm that handles New York residents’ data.

That’s a lot to keep track of and understanding your responsibilities can be overwhelming. Given the potential fines involved, this is not an area where you want to take all of your advice from a marketer, coder, or (ahem) digital strategist. Be sure to have a knowledgeable lawyer review your privacy policies and practices.

Accessibility Review

Making websites accessible to people with disabilities is an area that has grown in importance over the past 18 months or so because of an increase in legal actions, even though the relevant regulations aren’t new.

The good news is that building new websites to be accessible isn’t particularly difficult, nor is maintaining that accessibility as new content is added. Both require an understanding of the requirements and a shift in approach.

The story is not quite as rosy for bringing existing sites into compliance, which tends to be more labor-intensive. Adjustments may include changes to branding and in-depth review of content (image alt tags, for example), as well as less visible coding changes.

There are a number of excellent assessment tools that can help you get an understanding of the effort required to make the site compliant. But a deeper, manual scan will also be required to uncover everything.

Analytics Review

Finally, don’t forget to review your analytics. This is one area that just may uncover insights that lead to revenue growth and a move closer to the corner office, though more likely those improvements will be incremental.

  • Compare statistics year-over-year to see where you’ve improved and where performance has fallen off.
  • Determine whether your mobile audience is growing or holding steady. (It’s probably not shrinking.)
  • Review traffic sources to see how visitors are finding you. That can guide adjustments to your marketing efforts.
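The year-over-year comparison is simple arithmetic once the metrics are exported; a sketch with invented numbers:

```python
def yoy_change(this_year: dict, last_year: dict) -> dict:
    """Percent change per metric, year over year."""
    return {
        metric: round((this_year[metric] - last_year[metric])
                      / last_year[metric] * 100, 1)
        for metric in this_year
        if metric in last_year and last_year[metric]
    }

metrics_2019 = {"sessions": 120_000, "mobile_share": 48.0}
metrics_2018 = {"sessions": 100_000, "mobile_share": 45.0}
print(yoy_change(metrics_2019, metrics_2018))
```

Positive numbers show where you have improved; negative ones show where performance has fallen off and merits a closer look.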

You may be doing quite a bit of this on a monthly or quarterly basis as part of your marketing efforts. Still, it’s worth it to expand beyond that scope to look at broader performance and strive for continual improvement throughout 2020 and beyond.

Navigating Martech Amid the Land of Shiny Solutions

The marketing technology landscape has seen explosive growth the last couple of decades, but even when the field was a bit smaller, it was a challenge for marketers to clearly understand what all the solutions did.

Firms like CabinetM and others, as well as Scott Brinker’s Chief Marketing Technologist Blog, have tracked the growth of marketing technology solutions, with CabinetM cataloging more than 8,000 products across over 300 categories. And the growth doesn’t show signs of slowing or stopping.

This poses a major problem, as marketers must decide where to expend their limited time and energy. Even after categorizing martech solutions by function, the job can feel impossible, because there are several hundred solutions per category.

The pressure to keep up with competitors and fear of missing out are strong impediments to developing a successful martech strategy. But rest assured, there is a method to getting through the madness. Let’s first review two steps any marketer needs to take when considering their marketing technology needs, and then dive into some key categories that marketers should be considering first when it comes to martech investments.

Step 1: Square Away Customer Strategy

The first step is to develop a technology-agnostic, but technology-aware customer strategy.

Knowing what technology to invest in really begins with thinking about what your customer strategy is and what it aspires to be. With thousands of solutions in the market, martech is the land of shiny objects. There are really cool innovations, such as augmented reality, geo beacons, IoT, AI, etc.

It’s natural to be attracted to these innovative solutions. However, investing in solutions based primarily on their cool factor generally results in a confusing customer strategy and poor ROI.

The world of retailer apps is a good example: There are countless innovative and helpful branded mobile apps available for download. According to Statista, however, only a handful of apps are used with any real frequency, and most are deleted within 30 days. This is not to say that brands can’t have success with apps. However, solutions also need to be compelling and well-thought-out components of a larger winning customer strategy.

Target’s app, for example, helps drive a better physical in-store experience by helping you find what you need and informing you of relevant sales. Target could have added VR games or other gimmicks, but it chose to stay focused on improving the shopping experience.

By thinking about the brand, customer strategy, and customer pain points first, the martech universe becomes significantly easier to navigate.

Step 2: Decide on Investment vs. Outsource

The next step is to decide what tech solutions you want to invest in and which ones you will outsource. There are three questions to ask:

  • Is the solution essential to my customer strategy? In other words, would your brand be fundamentally impacted by the solution? Customer experience solutions would be prime examples, because customer experience has a straight-line relationship to how your brand is perceived today.
  • Does the solution require intense domain expertise? Some capabilities are constantly in flux. SEO, for example, is always a moving target. Staying ahead of search engine algorithms and how digital assistants — such as Alexa and Google Assistant — find information for their users takes some focused dedication.
  • Do I have or can I hire the appropriate talent? This can sometimes be the ultimate arbiter when deciding to invest time and energy on a solution. For example, while analytics and measurement solutions would qualify as essential to customer strategy, the ability to hire, retain, and manage an analytics capability can be very difficult. As a result, brands frequently outsource at least some of their analytical solutions.
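The three questions above can be summarized as a rough decision rubric (a deliberately simplified sketch, not a formal framework):

```python
def invest_or_outsource(essential: bool,
                        needs_deep_expertise: bool,
                        can_staff: bool) -> str:
    """Toy rubric for the three questions: invest in-house only when
    the capability is strategic AND you can actually staff it."""
    if essential and can_staff:
        return "invest"
    if needs_deep_expertise and not can_staff:
        return "outsource"
    return "outsource"  # default: don't build what you can't support

# Customer experience tech, with the right hires available:
print(invest_or_outsource(essential=True, needs_deep_expertise=False, can_staff=True))
# Analytics, essential but impossible to staff:
print(invest_or_outsource(essential=True, needs_deep_expertise=True, can_staff=False))
```

The point of the rubric is the ordering: strategic fit and staffing reality both have to say yes before an in-house investment makes sense.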

Martech Categories Marketers Must Consider

While working through those steps can help to guide martech investments, there are four (plus one) solution categories that merit near-universal attention from marketers.

These solutions not only dominate tech-driven marketing, but also are constantly integrating more specialized solutions under their umbrella to provide end-to-end capabilities. (That said, even these dominant categories do not play in distinct sandboxes, and often overlap.)

Investing time and energy on these larger solutions is a great way to begin forming the foundation of a good marketing technology stack.

Customer Relationship Management (CRM)
This should be the central repository of important customer information and behavioral data. Most CRM solutions also integrate modules that help make customer decisions based on the data. Some CRM solutions, such as Salesforce, have so many modules that it’s nearly impossible for one person to understand the full ecosystem. Nevertheless, understanding how to manage and utilize CRM systems will continue to be the foundation of managing brands well.

Customer Experience (CX)
These solutions help connect, measure, and improve the customer journey. Today, most brands are defined by their customer experience and less by what they advertise. Most CX solutions enable highly personalized interactions with customers and increase loyalty, making CX tech a critical investment for marketers. What’s more, each interaction increases knowledge of customer preferences and behaviors to be applied in future experiences.

Sales Automation
These solutions are focused on helping marketers complete time-consuming and repetitive tasks, such as sending communications or selecting the next offer based on customer behavior. Today, sales automation solutions make intelligent decisions on millions of marketing interactions at the individual customer level. This is also the technology segment most likely to make certain marketing jobs obsolete. For marketers worried about job security, developing skills in managing and executing automation software will be valuable insurance.

Analytics and Reporting
Data-driven marketing decisions are now the norm, along with measurement and ROI. Most martech solutions have a strong data foundation and generate appropriate reports automatically. That said, there is still a need to understand the larger analytical story, and solutions such as web and social analytics, data visualization, and BI tools provide a critical view into marketing success. Not all marketers need a degree in data science. However, all marketers should understand the role of analytical solutions in driving marketing decisions, from content to budget allocations.

Adtech (the Plus-One)
This category is purposefully separated from the other four. It contains ad buying solutions for programmatic display, search, social, mobile, and digital video advertising. Some large internal marketing departments may choose to invest in building this capability and there are real cost benefits involved. However, the digital ad industry is complex, in constant flux and highly algorithmic. While in-house marketers should be familiar with adtech trends, they should consider adtech investments carefully. In many cases, adtech is probably best left to digital ad agencies.

Navigating the Martech Landscape

Focusing on the dominant martech categories leaves many valuable solutions on the table, such as content and asset management, SEO, geo and proximity-based marketing, social management, and chatbots. They all have an important role to play, but are more likely to be integrated into larger solutions over time. Unless these solutions are mission-critical to your customer strategy, it is better to outsource solution expertise.

Billions of venture capital dollars have been invested in martech this decade, and most industry insiders agree that there are too many solutions. The expectation is that the landscape will eventually shrink as winners separate from losers, but there is no sign of this happening soon.

Nevertheless, the overwhelming landscape can’t be a deterrent to jumping in and getting comfortable with marketing technology. It is being used by most marketers today and will only grow in influence. What is important is to keep focused and not let the land of shiny objects distract you from executing your customer strategy.

Stop Expecting Data Scientists to Be Magical: Analytics Is a Team Sport

Many organizations put unreasonable expectations on data scientists. Their job descriptions and requirements are often at a super-human level. “They” say — and who are they? — that modern-day data scientists must be good at absolutely everything. Okay, then, what’s “everything,” in this case?

First, data scientists have to have a deep understanding of mathematics and statistics, covering regression models, machine learning, decision trees, clustering, forecasting, optimization, etc. Basically, if you don’t have a post-graduate degree in statistics, you will fail at “hello.” The really bad news is that even people with statistics degrees are not well-versed in every technique and subject matter. They all have their specialties, like medical doctors.

Then data scientists have to have advanced programming skills and deep knowledge in database technologies. They must be fluent in multiple computer languages in any setting, easily handling all types of structured and unstructured databases and files in any condition. This alone is a full-time job, requiring expert-level experience, as most databases are NOT in analytics-ready form. It is routinely quoted that most data scientists spend over 80% of their time fixing the data. I am certain that these folks didn’t get an advanced degree in statistics to do data plumbing and hygiene work all of the time. But that is how it is, as they won’t see what we call a “perfect” dataset outside schools.

Data scientists also have to have excellent communication and data visualization skills, being able to explain complex ideas in plain English. It is hard enough to derive useful insights out of mounds of data; now they have to construct interesting stories out of them, filled with exciting punchlines and actionable recommendations at the end. Because most mortals don’t understand technical texts and numbers very well — many don’t even try, and some openly say they don’t want to think — data scientists must develop eye-popping charts and graphs, as well, using the popular visualization tool du jour. (Whatever that tool is, they’d better learn it fast).

Finally, to construct the “right” data strategies and solutions for the business in question, the data scientist should have really deep domain and industry knowledge, at the level of a management and/or marketing consultant. On top of all of that, most job requirements also mention soft skills, as “they” don’t want some data geeks with nerdy attitudes. In other words, data scientists must come with kind and gentle bedside manners, while being passionate about the business and boring stuff like mathematics. Some even ask for child-like curiosity and the ability to learn things extremely fast. At the same time, they must carry authority like a professor, being able to influence non-believers and evangelize the mind-numbing subject of analytics.

This last part about business acumen, by the way, is the single most important factor that divides excellent data scientists, who add value every time they touch data, from data plumbers, who just move data around all day long. It is all about being able to give the right type of homework to themselves.

Now, let me ask you: Do you know anyone like this, having all of these skills and qualities in “one” body? If you do, how many of them do you personally know? I am asking this question in the sincerest manner (though I am quite sarcastic, by nature), as I keep hearing that we need tens of thousands of such data scientists, right now.

There are musicians who can write music and lyrics, determine the musical direction as a producer, arrange the music, play all necessary instruments, sing the song, record, mix and master it, publish it, and promote the product, all by themselves. It is not impossible to find such talents. But if you insist that only such geniuses can enter the field of music, there won’t be much music to listen to. The data business is the same way.

So, how do we divide the task up? I have been using this three-way division of labor — as created by my predecessors — for a long time, as it has been working very well in any circumstance:

  • A Statistical Analyst will have deep knowledge in statistical modeling and machine learning. They would be at the core of what we casually call analytics, which goes way beyond some rule-based decision-making. But these smart people need help.
  • A Master Data Manipulator will have excellent coding skills. These folks will provide analytics-ready datasets on silver platters for the analysts. They will essentially take care of all of the “before” and “after” steps around statistical modeling and other advanced analytics. It is important to remember that most projects go wrong in data preparation and post-analytics application stages.
  • A Business Analyst will need to have a deep understanding of business challenges and the industry landscape, as well as functional knowledge in modeling and database technologies. These are the folks who will prescribe solutions to business challenges, create tangible projects out of vague requests, evaluate data sources and data quality, develop model specifications, apply the results to businesses, and present all of this in the form of stories, reports, and data visualization.

Now, achieving master-level expertise in one of these areas is really difficult. People who are great in two of these three areas are indeed rare, and they will already have “chief” or “head” titles somewhere, or have their own analytics practices. If you insist on procuring only data scientists who are great at everything? Good luck to you.

Too many organizations that are trying to jump onto this data bandwagon hire just one or two data scientists, dump all kinds of unorganized and unstructured data on them, and ask them to produce something of value, all on their own. Figuring out what type of data or analytics activity will bring monetary value to the organization isn’t a simple task. Many math geeks won’t be able to jump that first hurdle by themselves. Most business goals are not in the form of logical expressions, and the majority of data they will encounter in that analytics journey won’t be ready for analytics, either.

Then again, strategic consultants who develop a data and analytics roadmap may not be well-versed in actual modeling, machine learning implementation, or database constructs. But such strategists should operate on a different plane, by design. Evaluating them based on coding or math skills would be like judging an architect based on his handling of building materials. Should they be aware of values and limitations of data-related technologies and toolsets? Absolutely. But that is not the same as being hands-on, at a professional level, in every area.

Analytics has always been a team sport. It was like that when the datasets were smaller and the computers were much slower, and it is like that when databases are indeed huge and computing speed is lightning fast. What remains constant is that, in data play, someone must see through the business goals and data assets around them to find the best way to create business value. In executing such plans, they will inevitably encounter many technical challenges and, of course, they will need expert-level technicians to plow through data firsthand.

Like any creative work, such as music producing or movie-making, data and analytics work must start with a vision, tangible business goals, and project specifications. If these elements are misaligned, no amount of mathematical genius will save the day. Even the best rifles will be useless if the target is hung in the wrong place.

Technical aspects of the work matter only when all stakeholders share the idea of what the project is all about. Simple statements like “maximizing the customer value” need a translation by a person who knows both business and technology, as the value can be expressed in dollars, visits, transactions, dates, intervals, status, and any combination of these variables. These seemingly simple decisions must be methodically made with a clear purpose, as a few wrong assumptions by the analyst at hand — who may have never met the end-user — can easily steer the project in the wrong direction.

Yes, there are people who can absolutely see through everything and singlehandedly take care of it all. But if your business plan requires such superheroes and nothing but such people, you must first examine your team development roadmap, org chart, and job descriptions. Pushing those poor, unfortunate recruiters to find unicorns within your budget won’t get you anywhere; that is not how you’re supposed to play this data game in the first place.

How to Use Sentiment Analysis to Transform Your Digital Marketing Strategy

The goal of sentiment analysis is to increase customer acquisition, retention, and satisfaction. Moreover, it helps put the right brand messaging in front of the most interested eyes.

Sentiment analysis is a fascinating concept.

Brands use it to better understand customer reactions, behaviors, and opinions toward their products, services, reputation, and more. The goal of sentiment analysis is to increase customer acquisition, retention, and satisfaction. Moreover, it helps put the right brand messaging in front of the most interested eyes.

Before the digital age, gauging and understanding sentiment was an incredibly cumbersome process. It typically involved sending out surveys manually, going to the streets and asking people, or gathering focus groups in one place at one time. The big data-infused model of sentiment analysis we know today hit its stride on the political scene in 2010. Since then, it has morphed into a key tactic in marketing plans. These days, most of the grunt work is automated.

However, even with all of the advances in areas like martech, voice search, conversational commerce on social media, virtual assistants, and big data analytics, understanding how to actually use sentiment analysis to improve the bottom line is a complicated task.

Here are a few key approaches to help you get the value you need.

Know the Terms and Phrases That Indicate Intent

Most businesses today (hopefully) don’t even begin their digital branding and marketing efforts without a list of keywords relevant to their industry and a plan on how to target their audiences. You should have a good idea of the terms and variations that bring you traffic to your website, when used in conjunction with your brand and products. If you run an auto repair shop, people are likely finding you on the web through terms such as: body shop near me, auto repair, replace brake pads, etc.

Google Search Console gives you a great, fairly accurate idea of what’s bringing people to your website:

google search console
Credit: Author’s own

In terms of sentiment analysis, to gain actionable insight, you need to know how people are using these keywords in a way that indicates interest and engagement potential. Now, this is perhaps the biggest gray area in sentiment analysis, because not all positive sentiment equates to sales. Just because there are a lot of positive words around luxury cars doesn’t necessarily mean people are about to buy.

However, there are certain terms and phrases that signal people have entered your buyer’s journey. Let’s say you run an SEO agency and one of the terms you’re tracking for sentiment analysis is “Google update.” If you notice that a lot of people are searching for things like “what to do after a google algorithm update?” or “how to recover from a google penalty?” it’s a good indicator that they might need your services at the moment; you should target them accordingly.
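As a rough illustration, intent-flagging can start as simple phrase matching against the queries you pull from Search Console. This is a minimal sketch; the phrase list here is purely hypothetical, and you would build yours from your own keyword research:

```python
# Sketch only: flag search queries that contain high-intent phrases.
# The phrase list is illustrative, not a recommendation.
HIGH_INTENT_PHRASES = [
    "what to do after",
    "how to recover from",
    "near me",
]

def has_buying_intent(query: str) -> bool:
    """Return True if the query contains any known high-intent phrase."""
    q = query.lower()
    return any(phrase in q for phrase in HIGH_INTENT_PHRASES)

queries = [
    "what to do after a google algorithm update?",
    "history of google search",
    "how to recover from a google penalty?",
]
hot_leads = [q for q in queries if has_buying_intent(q)]
```

Queries that match would feed a retargeting or outreach list; everything else stays in the general nurture pool.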

Spot Patterns in Product Reviews

At its core, sentiment analysis is a game of pinpointing patterns and reading between the lines. Simply put, the more genuine and meaningful feedback you get on your product, the better insights you will gain into your customers.

Of course, gathering such high-quality feedback is easier planned than executed, especially for newer or smaller companies. Only 10% of customers will review or rate a business after a purchase, while half of consumers will leave a review only some of the time. However, the number of reviews jumps significantly, to 68%, when a company asks the customer directly to leave one.

In order to find fruitful, up-to-date patterns, you need to make it a marketing process to consistently seek out new reviews. Then, you’ll want to start by searching for common adjectives. These should include words like:

  • great, simple, easy,
  • or awful, difficult, poor, etc.

trustpilot review
Credit: Capterra.com

In the above image, a good number of reviews include the word “great” for this product. Looking at the context around this term, we notice recurring patterns around components like features and usability, and “not so great” opinions on customer service.

Finding recurring themes in customer sentiment will give you a better picture of the positive and negative aspects of your business or product. These can indicate the level of trust people have in your brand and how likely they are to recommend you. When you are looking for patterns, try to come up with several adjectives that shed light on both sides of the spectrum.

  • What words are commonly used to describe their experience?
  • Is there an issue that forces multiple people to leave negative reviews?
  • What part delights them the most?
  • What’s preventing you from solving common problems?
  • Which products or solutions are users comparing yours to?

The answers to these important questions can help you understand user sentiment better and build a customer-focused marketing strategy.
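A first pass at this kind of pattern-spotting can be automated with a simple word tally. The sketch below is a minimal illustration; the adjective sets and review snippets are hypothetical seeds, not an exhaustive lexicon:

```python
from collections import Counter
import re

# Sketch only: tally sentiment-bearing adjectives across reviews.
POSITIVE = {"great", "simple", "easy"}
NEGATIVE = {"awful", "difficult", "poor"}

def sentiment_counts(reviews):
    """Count positive and negative adjective occurrences across all reviews."""
    words = Counter()
    for review in reviews:
        words.update(re.findall(r"[a-z']+", review.lower()))
    pos = {w: words[w] for w in POSITIVE if words[w]}
    neg = {w: words[w] for w in NEGATIVE if words[w]}
    return pos, neg

reviews = [  # hypothetical review snippets
    "Great features and easy to use.",
    "Great product, but customer service was poor.",
    "Setup was difficult and support was awful.",
]
pos, neg = sentiment_counts(reviews)
```

Once the counts are in hand, reading the sentences around the most frequent adjectives answers those questions far faster than skimming every review.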

Look to Social Media for Unabashed (Unfiltered) Opinions

Oftentimes, social media is one of the best places to get raw opinions, where people don’t hold back — in both positive and negative lights. Knowing how people feel in an unfiltered environment can be a great way to tell which parts of your business are working very well — and not so well.

A social listening platform is an important tool to keep in your portfolio for monitoring online mentions and gathering important datasets. Tools like Mention, Talkwalker, and Brand24 not only keep an ear on social mentions, but also turn these comments and hashtags into valuable customer analytics to help your marketing team understand your customers even better.

For instance, the online gaming developer Wargaming used brand monitoring techniques to analyze its customers’ desires and see which products performed best. The company tracked its users’ social media conversations to see what they were looking for, what parts of the games they liked or disliked, and any suggestions they offered for improvements.

Similarly, you can use a social listening tool to combine all your brand mentions into one database, giving your marketing team a bird’s-eye view of audience sentiment on social platforms and helping it identify areas to work on.

talkwalker
Credit: Talkwalker.com

While gathering this sentiment is good, the most important thing is knowing what to do with it. About 83% of customers who make a social mention of a brand — specifically, a negative one — expect a response within a day, and 18% want one immediately. Unfortunately, a majority of these mentions go unanswered, which can really impact a brand’s image. By utilizing an effective real-time social listening program, you can not only stay on top of social buzz, but also intervene and reply to any negative sentiment right away.

Some of the next steps will be fairly obvious, especially when you’re dealing with negative feedback. For instance, if your customer sentiment from social listening reveals that people are having trouble updating their software or there are issues with the product itself, this indicates that some redesign is necessary. However, don’t get too comfortable when you are getting positive reactions —  these tend to trick companies into thinking that no improvements are needed.

This kind of feedback can support a stronger marketing strategy. Let’s say your business sells pool supplies. While your customers may not be tweeting about your great chlorine chemicals, they are more likely talking about the fun pool floaties and games your website sells. Therefore, it would be helpful to highlight these fun accessories, as well, by listing them more prominently on your page and even including UGC to promote them.

poolfloatz
Credit: Instagram

Use Predictive Analysis to Spot Trends and Automate Actions

Now that you have all these valuable insights, you need to know how you can use them to shape your current and future business strategies.

Plugging your sentiment analysis into a predictive model is crucial for spotting trends, getting a feel for how opinions are progressing, and determining your next steps. Predictive analytics uses machine learning and AI technology to not only gather, but also analyze, loads of consumer data and make projections. These systems draw on historical behavioral data to help determine the best plan of action for the future.
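At its simplest, "getting a feel for how opinions are progressing" is a trend fit over time. The sketch below fits a least-squares line to hypothetical weekly sentiment averages (scored from -1 to 1) and projects the next week; production systems would use far richer models:

```python
# Sketch only: fit a straight line to weekly sentiment and project ahead.
def fit_line(ys):
    """Ordinary least-squares slope and intercept for y over x = 0..n-1."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys)) / \
        sum((x - mean_x) ** 2 for x in range(n))
    return slope, mean_y - slope * mean_x

weekly_sentiment = [0.42, 0.38, 0.31, 0.27, 0.22]  # hypothetical averages
slope, intercept = fit_line(weekly_sentiment)
forecast = slope * len(weekly_sentiment) + intercept  # next week's projection

if slope < 0:
    print(f"Sentiment is trending down; projected next week: {forecast:.2f}")
```

A persistent negative slope like this one is exactly the kind of early-warning signal worth wiring to an automated alert or campaign adjustment.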

In fact, customer segmentation and targeting (which is the logical next step after you analyze your audience’s sentiments) is one of the areas where applying AI and predictive analytics has the highest chance of working well for business.

applications of AI
Credit: Emerj.com

In order to develop an optimal predictive model for sentiment analysis, ask yourself:

  • What do you want to know?
  • What is the expected outcome? What do you think your customers are thinking?
  • What actions will you take to improve overall sentiment when you get the answers? How will you automate these actions?
  • What are the success metrics for these actions?

The Wrap

Chances are, your customers are already telling you what you need to make improvements to your business. By gathering as much data as possible on customer sentiment, your marketing team can understand just what needs to be done to provide a better experience, tweak campaigns accordingly, and acquire and retain more customers in the process.

Be sure you know what data to collect, how to mine it, and how to apply it to keep raking in the revenue.

When You Fail, Don’t Blame Data Scientists First — or Models

The first step in analytics should be “formulating a question,” not data-crunching. I can even argue formulating the question is so difficult and critical, that it is the deciding factor dividing analysts into seasoned data scientists and junior number-crunchers.

Last month, I talked about ways marketing automation projects go south (refer to “Why Many Marketing Automation Projects Go South”). This time, let’s be more specific about modeling, which is an essential element in converting mounds of data into actionable solutions to challenges.

Without modeling, all automation efforts would remain at the level of rudimentary rules. And that is one of the fastest routes to automate wrong processes, leading to disappointing results in the name of marketing automation.

Nonetheless, when statistically sound models are employed, users tend to blame the models first when the results are less than satisfactory. As a consultant, I often get called in when clients suspect the model performance. More often than not, however, I find that the model in question was the only thing that was done correctly in a long series of processes, from data manipulation and target setting to model scoring and deployment. I guess it is just easier to blame some black box, but most errors happen before and after modeling.

A model is nothing but an algorithmic expression measuring likelihood of an object resembling — or not resembling — the target. As in, “I don’t know for sure, but that household is very likely to purchase high-end home electronics products,” only based on the information that we get to have. Or on a larger scale, “How many top-line TV sets over 65 inches will we sell during the Christmas shopping season this year?” Again, only based on past sales history, current marcom spending, some campaign results, and a few other factors — like seasonality and virality rate.

These are made-up examples, of course, but I tried to make them as specific and realistic as possible here. Because when people think that a model went wrong, often it is because a wrong question was asked in the first place. Those “dumb” algorithms, unfortunately, only provide answers to specific questions. If a wrong question is presented? The result would seem off, too.

That is why the first step in analytics should be “formulating a question,” not data-crunching. Jumping into a data lake — or any other form of data depository, for that matter — without a clear definition of goals and specific targets is often a shortcut to the demise of the initiative itself. Imagine a case where one starts building a house without a blueprint. Just as a house is not a random pile of building materials, a model is not an arbitrary combination of raw data.

I can even argue that formulating the question is so difficult and critical that it is the deciding factor dividing analysts into seasoned data scientists and junior number-crunchers. Defining proper problem statements is challenging, because:

  • business goals are often far from perfectly constructed logical statements, and
  • available data are most likely incomplete or inadequate for advanced analytics.

Basically, good data players must be able to translate all those wishful marketing goals into mathematical expressions, only using the data handed to them. Such skill is far beyond knowledge in regression models or machine learning.

That is why we must follow these specific steps for data-based solutioning:

data scientists use this roadmap
Credit: Stephen H. Yu
  1. Formulating Questions: Again, this is the most critical step of all. What are the immediate issues and pain points? For what type of marketing functions, and in what context? How will the solution be applied, and how will it be used, by whom, through what channel? What are the specific domains where the solution is needed? I will share more details on how to ask these questions later in this series, but having a specific set of goals must be the first step. Without proper goal-setting, one can’t even define success criteria against which the results would be measured.
  2. Data Discovery: It is useless to dream up a solution with data that are not even available. So, what is available, and what kind of shape are they in? Check the inventory of transaction history; third-party data, such as demographic and geo-demographic data; campaign history and response data (often not in one place); user interaction data; survey data; marcom spending and budget; product information, etc. Now, dig through everything, but don’t waste time trying to salvage everything, either. Depending on the goal, some data may not even be necessary. Too many projects get stuck right here, not moving forward an inch. The goal isn’t having a perfect data depository — CDP, Data Lake, or whatever — but providing answers to questions posed in Step 1.
  3. Data Transformation: You will find that most data sources are NOT “analytics-ready,” no matter how clean and organized they may seem (they are often NOT well-organized, either). Disparate data sources must be merged and consolidated, inconsistent data must be standardized and categorized, different levels of information must be summarized onto the level of prediction (e.g., product, email, individual, or household levels), and intelligent predictors must be methodically created. Otherwise, the modelers would spend the majority of their time fixing and massaging the data. I often call this step creating an “Analytics Sandbox,” where all “necessary” data are in pristine condition, ready for any type of advanced analytics.
  4. Analytics/Model Development: This is where algorithms are created, considering all available data. This is the highlight of this analytics journey, and key to proper marketing automation. Ironically, this is the easiest part to automate, in comparison to previous steps and post-analytics steps. But only if the right questions — and right targets — are clearly defined, and data are ready for this critical step. This is why one shouldn’t just blame the models or modelers when the results aren’t good enough. There is no magic algorithm that can save ill-defined goals and unusable messy data.
  5. Knowledge Share: The models may be built, but the game isn’t over yet. It is one thing to develop algorithms with a few hundred thousand sample records, and it’s quite another to apply them to millions of live data records. There are many things that can go wrong here. Even slight differences in data values, categorization rules, or missing-data ratios will render well-developed models ineffective. There are good reasons why many vendors charge high prices for model scoring. Once the scoring is done and proven correct, resultant model scores must be shared with all relevant systems, through which decisions are made and campaigns are deployed.
  6. Application of Insights: Just because model scores are available, it doesn’t mean that decision-makers and campaign managers will use them. They may not even know that such things are available to them; or, even if they do, they may not know how to use them. For instance, let’s say that there is a score for “likely to respond to emails with no discount offer” (to weed out habitual bargain-seekers) for millions of individuals. What do those scores mean? The lower the better, or the higher the better? If 10 is the best score, is seven good enough? What if we need to mail to the whole universe? Can we differentiate offers, depending on other model scores — such as, “likely to respond to free-shipping offers”? Do we even have enough creative materials to do something like that? Without proper applications, no amount of mathematical work will seem useful. This is why someone in charge of data and analytics must serve as an “evangelist of analytics,” continually educating and convincing the end-users.
  7. Impact Analysis: Now, one must ask the ultimate question, “Did it work?” And, “If it did, what elements worked (and didn’t work)?” Like all scientific approaches, marketing analytics and applications are about small successes and improvements, with continual hypothesizing and learning from past trials and mistakes. I’m sure you remember the age-old term “closed-loop” marketing. All data and analytics solutions must be seen as continuous efforts, not some one-off thing that you try once or twice and forget about. No solution will just double your revenue overnight; that is more like wishful thinking than a data-based solution.
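To make the application step concrete: raw model scores rarely mean anything to end-users, so a common practice is to convert them into rank groups of 1 through 10, letting a campaign manager simply say "mail groups 8 and up." This is a minimal sketch with hypothetical scores, not any vendor's actual scoring method:

```python
# Sketch only: convert raw model scores into 1-10 rank groups.
def to_rank_groups(scores, groups=10):
    """Assign each record a group from 1 (least likely) to `groups` (most likely)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for position, i in enumerate(order):
        ranks[i] = position * groups // len(scores) + 1
    return ranks

raw_scores = [0.91, 0.12, 0.55, 0.78, 0.05, 0.33, 0.67, 0.21, 0.84, 0.49]
rank_groups = to_rank_groups(raw_scores)
# "Mail the top 30%" becomes a one-liner the end-user can reason about:
mailable = [i for i, g in enumerate(rank_groups) if g >= 8]
```

A simple, documented convention (the higher the group, the better) prevents exactly the "is seven good enough?" confusion described above.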

As you can see, there are many “before” and “after” steps around modeling and algorithmic solutioning. This is why one should not just blame the data scientist when things don’t work out as expected, and why even casual users must be aware of basic ins and outs of analytics. Users must understand that they should not employ models or solutions outside of their original design specifications, either. There simply is no way to provide answers to illogical questions, now or in the future.

Why Many Marketing Automation Projects Go South

There are so many ways to mess up data or analytics projects, be they CDP, Data Lake, Digital Transformation, Marketing Automation, or whatever sounds cool these days. First off, none of these items are simple to develop, or something that you just buy off the shelf.

As a data and analytics consultant, I often get called in when things do not work out as planned or expected. I guess my professional existence is justified by someone else’s problems. If everyone follows the right path from the beginning and everything goes smoothly all of the time, I would not have much to clean up after.

In that sense, maybe my role model should be Mr. Wolf in the movie “Pulp Fiction.” Yeah, that guy who thinks fast and talks fast to help his clients get out of trouble pronto.

So, I get to see all kinds of data, digital, and analytical messes. The keyword in the title of this series “Big Data, Small Data, Clean Data, Messy Data” is definitely not “Big” (as you might have guessed already), but “Messy.” When I enter the scene, I often see lots of bullet holes created by blame games and traces of departed participants of the projects. Then I wonder how things could have gone so badly.

There are so many ways to mess up data or analytics projects, be they CDP, Data Lake, Digital Transformation, Marketing Automation, or whatever sounds cool these days. First off, none of these items are simple to develop, or something that you just buy off the shelf. Even if you did buy one, someone would have to tweak more than a few buttons to customize the toolset to meet your unique requirements.

What did I say about those merchants of buzzwords? I don’t remember the exact phrase, but I know I wouldn’t have used those words.

Like a veteran cop, I’ve developed some senses to help me figure out what went wrong. So, allow me to share some common traps that many marketing organizations fall into.

No Clear Goal or Blueprint

Surprisingly, a great many organizations get into complex data or analytics projects with only vague ideas or wish lists. Imagine putting up a building without any clear purpose or a blueprint. What is the building for? For whom, and for what purpose? Is it a residential building, an office building, or a commercial property?

Just like a building is not a simple sum of raw materials, databases aren’t random piles of data, either. But do you know how many times I get to sit in on a meeting where “putting every data source together in one place” is the goal in itself? I admit that would be better than data scattered all over the place, but the goal should be defined much more precisely: how the data are going to be used, by whom, for what, through what channel, using what types of toolsets, etc. Otherwise, it just becomes a monster that no one wants to get near.

I’ve even seen so-called data-oriented companies go out of business thanks to monstrous data projects. Like any major development project, what you don’t put in is as important as what you put in. In other words, the sum of absolutely everyone’s wish list is no blueprint at all, but the first step toward the inevitable demise of the project. The technical person in charge must be business-oriented, and be able to say “no” to some requests, looking 10 steps down the line. Let’s just say that I’ve seen too many projects get hopelessly stuck, thanks to features that would barely matter in practice (as in “You want what in real-time?!”). Might as well design a car that flies, too.

No Predetermined Success Metrics

Sometimes, the project goes well, but executives and colleagues still define it as a failure. For instance, a predictive model, no matter how well it is constructed mathematically, cannot single-handedly overcome bad marketing. Even with effective marketing messages, it cannot just keep doubling the performance level indefinitely. Huge jumps in KPI (e.g., doubling the response rate) may be possible for the very first model ever (as it would be, compared to the previous campaigns without any precision targeting), but no one can expect such improvement year after year.

Before a single byte of data is manipulated, project champions must determine the success criteria for the project: coverage, accuracy, speed of execution, engagement level, revenue improvement (by channel), etc. Yes, it would be hard to sell the idea with lots of disclaimers attached to the proposal, but maybe not starting the project at all would be better than being called a failure after spending lots of precious time and money.

Some goals may be in conflict with each other, too. For instance, response rate is often inversely related to the value of the transaction. So, if the blame game starts, how are you going to defend the predictive model that is designed primarily to drive the response rate, not necessarily the revenue per transaction? Set clear goals in numeric format, and more importantly, share the disclaimers upfront. Otherwise, “something” will look wrong to someone.
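A toy calculation shows how sharp this conflict can be. All numbers here are hypothetical:

```python
# Sketch only: the model that wins on response rate can lose on revenue.
campaigns = {
    "response-optimized": {"response_rate": 0.050, "avg_order": 40.0},
    "revenue-optimized": {"response_rate": 0.032, "avg_order": 75.0},
}

mail_qty = 100_000
revenue = {}
for name, c in campaigns.items():
    responders = mail_qty * c["response_rate"]
    revenue[name] = responders * c["avg_order"]
    print(f"{name}: {responders:.0f} responders, ${revenue[name]:,.0f} revenue")
```

The response-optimized model produces more responders but less money; unless that trade-off was agreed upon upfront, someone will eventually call one of these models "broken."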

But what if your scary boss wants to boost rate of acquisition, customer value, and loyalty all at the same time, no matter what? Maybe you should look for an exit.

Top-Down Culture

By nature, analytics-oriented companies are flatter and less hierarchical in structure. In such places, data and empirical evidence win the argument, not the organizational rank of the speaker. It gets worse when the highest-ranking officer has very little knowledge of data or analytics in general. In a top-down culture, no one would question that C-level executive in a nice suit. Foremost, the executive wouldn’t question his own gut feelings, as those gut feelings put him in that position in the first place. How can he possibly be wrong?

Trouble is that the world is rapidly changing around any organization. And monitoring the right data from the right place is the best way to keep informed and take actions preemptively. I haven’t encountered any gut-feeling — including my own — that stood the test of time better than data-based decision-making.

Now, sometimes the top-down culture is a good thing, though. If the organizational goals are clearly set, and if the top executive supports the big data project instead of launching blame games (no pun intended here), then an indefinite amount of inter-departmental conflict will be mitigated upfront (as in, “Hey, everyone, we are doing this, alright?”).

Conflicts Among Teams — No Buy-in, No Use

But no amount of executive force can eliminate all infighting that easily. Some may say “Yeah, yeah, yeah” in front of the CEO or CMO, but sabotage the whole project behind the scenes. In fact, I’ve seen many IT departments get in the way of the noble idea of “Customer-360.”

Why? It could be a data ownership issue, security concerns, or a lack of understanding of 1:1 marketing or advanced analytics. Maybe they just want the status quo, or see any external influence on data-related matters as a threat. In any case, imagine a situation where the very people who hold the key to the source data are NOT cooperating with data or analytics projects for the benefit of other departments. Or worse, maybe you have “seen” such cases, as they are so common.

Another troublesome example would be on the user side. Imagine a situation where sales or marketing personnel do not buy into any new way of doing things, such as using model scores to understand the target better. Maybe they got burned by bad models in the past. Or maybe they just don’t want to change things around, like those old school talent scouts in the movie “Moneyball.” Regardless, no buy-in, no use. So much for that shiny marketing automation project that sucked up seven-figure numbers to develop and deploy.

Every employee puts their continued employment above any dumb or smart project. Do not underestimate people’s desire to keep their jobs with minimal changes.

Players Haven’t Seen Really Messy Situations Before

As you can see, data or analytics projects are not just about technologies or mathematics. Further, data themselves can be a hindrance. I’ve written many articles about “good” data, but they are indeed quite rare in real life. Data must be accurate, consistent, up-to-date, and applicable in most cases, without an excessive amount of missing values. And keeping them that way is a team sport, not something a lone tech genius can handle.

Unfortunately, most graduates with degrees in computer science or statistics don’t get to see a real bloody mess before they get thrown onto the battlefield. In school, problems are nicely defined by the professors, and the test data are always in pristine condition. But I don’t think I have seen such clean and error-free data since my school days, which were indeed a lifetime ago.

Dealing with organizational conflicts, vague instructions, and messy data is part of the job of any data professional. It requires quite a balancing act to consistently provide “the least wrong answers” to constituents who have vastly different interests. If the balance is even slightly off, you may end up with a technically sound solution that no one adopts into their practices. Forget about full automation of anything in that situation.

Already Spent Money on Wrong Things

This one is a heart-breaker for me, personally. I get onto the scene, examine the case, and provide step-by-step solutions to get to the goal, only to find out that the client company spent money on the wrong things already and has no budget left to remedy the situation. We play with data to make money, but playing with data and technology costs money, too.

There are so many snake oil salespeople out there, over-promising left and right with lots of sweet-to-the-ears buzzwords. Yeah, if you buy this marketing automation toolset armed with state-of-the-art machine-learning features, you will get actionable insights out of any kind of data in any form through any channel. Sounds too good to be true?

Marketing automation is really about the “combination” of data, analytics, digital content, and display technologies (for targeted messaging). It is not just one thing, and there is no silver bullet. Even if some other company has found one, will it be applicable to your unique situation, as is? I highly doubt it.

The Last Word on How to Do Marketing Automation Right

There are so many reasons why marketing automation projects go south (though I don’t understand why going “south” is a bad thing). But one thing is for sure. Marketing automation — or any data-related project — is not something that one or two zealots in an organization can achieve single-handedly with some magic toolset. It requires organizational commitment to get it done, get it utilized, and get it improved over time. Without understanding what it should be about, you will end up automating the wrong things. And you definitely don’t want to get to the wrong answer any faster.

Even AI Needs Clean Data in Order to Be the Shiny Object

Users are quickly realizing that investing in AI is not the end of the road. Then again, in this analytics journey, there really is no end anyway; much like the scientific journey, it is a constant series of hypothesis, testing, and course corrections.

If there is a book out there — many have asked me about it — it would look more like a long series of case studies, not some definitive roadmap for all. Why? Because prescribing analytics is much like a doctor’s work. It depends as much on the unique situation of the patient as on the list of solutions.

That is the main reason why one cannot just install AI and call it a day. Who’d give it a purpose, guide it, and constantly fine-tune it? Not itself, for sure.

Then there is a question about what goes into it. AI — or any type of analytics tool, for that matter — depends on clean and error-free data. If the data are dirty and unusable, you may end up automating inadequate decision-making processes, getting wrong answers really fast. I’d say that would be worse than not having any answer at all.

So far, you may say I am just stating the obvious here. Of course, AI or machine learning requires clean, error-free data. The real trouble is that such data preparation often takes up as much as 80% (if not more) of the whole process of applying data-based intelligence to decision-making. In fact, users are finding out that the algorithmic part of the equation is the simplest to automate. The data refinement process is far more complicated, as it really depends on the shape of the available data. And some are really messy (hence the title of my series in this fine publication, “Big Data, Small Data, Clean Data, Messy Data”).

So, why aren’t data readily usable?

  • Data Are in Silos: This is so common that “siloed data” is a term we use routinely in meeting rooms. Simply put, if the data are locked up somewhere, they won’t be of much use to anyone. Worse, each silo may sit on a unique platform, with data formats incompatible with the others.
  • Data Are in One Place, But Not Connected: Putting the data in one place isn’t enough if they are not properly connected. Let’s say an organization is pursuing the coveted “Customer 360” (or, more properly, a “360-degree view of the customer”) for personalized marketing. The first thing to do is to define what a “person” means in the eyes of the machine and the algorithms. It could be any form of PII, or even biometric data, through which all related data would be merged and consolidated. If a person’s online and offline shopping histories aren’t connected properly, algorithms will treat them as two separate entities, devaluing the target customer. This is just one example; all kinds of analytics — whether forecasting, segmentation, or product analysis — perform better with more than one type of data, and those data should be in one place to be useful.
  • Data Are Connected, But Many Fields Are Wrong or Empty: So what if the data are merged in one place? If the data are mostly empty or incorrect, they will be worse than not having any at all. Good luck forecasting or predicting anything with data fields that have really low fill rates. Unfortunately, we encounter tons of missing values in the pursuit of “Customer 360.” What we call Big Data have lots of holes in them once everything is lined up around the target (i.e., it is nearly impossible to know everything about everyone). Plus, remember that most modern databases record and maintain only what is available; in predictive analytics, what we don’t know is equally important.
  • Data Are There, But They Are Not Readily Usable, as They Are in Free-Form Formats: You may have the data, but they may need serious standardization, refinement, categorization, and transformation to be useful. Many times I have encountered hundreds, at times over a thousand, offer and promotion codes. To find out “what marketing efforts worked,” we would have to go through some serious data categorization to make them useful. (Refer to “The Art of Data Categorization.”) This is just one example of many. Too often, analytics work gets stuck in the middle of too much free-form, unstructured data.
  • Data Are Usable, But They Are One-Dimensional: Bits and pieces of data, even if they are clean and accurate, do not provide a holistic portrait of target individuals (if the job is about 1:1 marketing). Most predictive analytics work requires data of diverse natures, and only after proper data consolidation and summarization can we obtain a multi-dimensional view. So-called relational databases and unstructured databases do not provide such a perspective without data summarization (or de-normalization) processes, as entities in such databases are just lists of events and transactions (e.g., on such and such date, this individual clicked some email link and bought a particular item for so much).
  • Data Are Cleaned, Consolidated, and Summarized, But There Is No Built-in Intelligence: To predict what the target individual is interested in, data players must rearrange the data to describe the person, not just events or transactions. Why do you think even large retailers, like Amazon, treat you as if you are only about the very last transaction, sending you the “likes” of the last item you purchased while ignoring years of interaction history? Because their data are not describing “you” as a target. And you are not just a sum of past transactions, either. For instance, your days between purchases in the home electronics category may be far greater than those in the apparel category, even though your average spending in electronics is higher. This type of insight only comes out when the data are summarized properly to describe the buyer, not each transaction. Further, summarized data should be in the form of answers to questions, acting as building blocks of predictive scores. Intelligent variables always increase the predictive power of models, machine-based or not.
  • Data Variables Include Intelligence, But It Is Still Difficult to Derive Insights: Lists of intelligent variables are just basic necessities for advanced analytics, which in turn leads us to deeper, actionable insights. Even statisticians and analysts require a long training period to derive meaning from seemingly beautiful charts and to effectively develop stories around them. Yes, we can see that certain product sales went down, even with heavy promotion. But what does that really mean, and what should we do about it? For a machine to catch up with that level of storytelling, the data had best be served on a silver platter, in pristine condition, first. Changing assumptions based on “what is not there” or “what looks suspicious” is still in the realm of human intuition. Machines, for now, will read the results as if every bit of input data is correct and carries equal weight.
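To make the event-to-person rearrangement concrete, here is a minimal Python sketch of turning a transaction-level log into buyer-level “intelligent variables” (days between purchases, average spend). Every customer ID, category, date, and amount below is hypothetical; the point is the shape of the transformation, not the data.

```python
from datetime import date
from collections import defaultdict

# Hypothetical transaction log: one row per purchase event, not per buyer.
transactions = [
    (1, "electronics", date(2023, 1, 5), 900.0),
    (1, "electronics", date(2023, 7, 5), 1100.0),
    (1, "apparel",     date(2023, 7, 20), 80.0),
    (1, "apparel",     date(2023, 8, 1),  60.0),
]

# Group the event-level rows by (customer, category).
grouped = defaultdict(list)
for cust, cat, day, amount in transactions:
    grouped[(cust, cat)].append((day, amount))

# Summarize each group into variables that describe the buyer, not the events.
profiles = {}
for key, rows in grouped.items():
    rows.sort()  # chronological order, so gaps are well-defined
    gaps = [(b[0] - a[0]).days for a, b in zip(rows, rows[1:])]
    profiles[key] = {
        "avg_days_between": sum(gaps) / len(gaps) if gaps else None,
        "avg_spend": sum(amount for _, amount in rows) / len(rows),
    }
```

With this toy data, the electronics profile shows a long gap between purchases but high average spend, while apparel shows the reverse, exactly the kind of buyer-describing signal the bullet above argues for.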

There are schools of thought that machines should be able to take raw data in any form and somehow spit out answers for us mortals. But I do not subscribe to such a brute-force approach. Even if there is no human intervention in the data refinement process, machines will have to clean data in steps, as we have been doing. Simply put, a machine that is really good at identifying target individuals will be trained separately from one that is designed for predictions of any kind.

So, what does clean and useful data mean? Just reverse the list above. In summary, good data must be:

  • Free from silos
  • Properly connected, if coming from disparate sources
  • Free from errors and too many missing values (i.e., must have good coverage)
  • Readily usable by non-specialists without having to manipulate them extensively
  • Multi-dimensional as a result of proper data summarization
  • In forms of variables with built-in intelligence
  • Presented in ways that provide insights, beyond a simple list of data points

Then, what are the steps of the data refinement process? Again, if I may summarize the key steps from the list above:

  1. Data collection (from various sources)
  2. Data consolidation (around the key object, such as individual target)
  3. Data hygiene and standardization
  4. Data categorization
  5. Data summarization
  6. Creation of intelligent variables
  7. Data visualization and/or modeling for business insights
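As a rough illustration, the numbered steps can be walked through on a toy data set in Python. All source names, offer codes, and the categorization rule below are made up for the sketch; a real pipeline would replace each step with far heavier machinery.

```python
from collections import defaultdict

# 1. Data collection (from two hypothetical sources)
raw_web = [{"email": " JANE@X.COM ", "offer": "SPRING-10-EM", "spend": "25.00"}]
raw_store = [{"email": "jane@x.com", "offer": "LOYALTY5", "spend": "40.00"}]
rows = raw_web + raw_store

# 2-3. Consolidation around the individual, plus hygiene/standardization:
# normalize the join key (email, standing in for PII) and the numeric type.
for r in rows:
    r["email"] = r["email"].strip().lower()
    r["spend"] = float(r["spend"])

# 4. Categorization: collapse free-form offer codes into a few buckets.
def categorize(code):
    return "email_promo" if "EM" in code else "loyalty"

# 5-6. Summarization into person-level variables with built-in intelligence.
profile = defaultdict(lambda: {"total_spend": 0.0, "channels": set()})
for r in rows:
    p = profile[r["email"]]
    p["total_spend"] += r["spend"]
    p["channels"].add(categorize(r["offer"]))
# 7. Visualization and/or modeling would consume `profile` from here.
```

Note that without step 3, the two rows would never merge into one person, and without step 4, a thousand raw offer codes would stay unusable, which is exactly why skipping a step breaks everything downstream.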

Conclusion

I have covered all of these steps in detail through this column over the years. Nevertheless, I wanted to share them on a high level again, as the list will serve as a checklist of sorts. Why? Because I see too many organizations — even the advanced ones — that miss entire categories of necessary activities. How many times have I seen unstructured and uncategorized data, and how many times have I seen very clean data, but only on an event and transaction level? How can anyone predict a target individual’s future behavior that way, with or without the help of machines?

The No. 1 reason why AI or machine learning does not reach its full potential is inadequate input data. Imagine pouring unrefined oil into a brand-new Porsche as fuel or lubricant. If the engine stalls, is that the car’s fault? To that point, please remember that even the machines require clean and organized data. And if you are about to have machines do the clean-up, also remember that machines are not that smart (yet), and they work better when trained for a specific task, such as pattern recognition (for data categorization).

One last parting thought: I am not at all saying that one must wait for a perfect set of data. Such a day will never come. Errors are inevitable, and some data will be missing. There will be all kinds of collection problems, and the limitations of data collection mechanisms cannot be fully overcome, thanks to those annoying humans who don’t comply well with the system. Or, it could be that the target individual simply has not created an event for the category yet (i.e., data will be missing for the Home Electronics category if the buyer in question simply has not done anything in that category).

So, collect and clean the data as much as possible, but don’t pursue 100%, either. Analytics — with or without machines — has always been about making the most of what we have. Leave it at “good enough,” though a machine wouldn’t understand what that means.

How to Employ Segmentation to Improve Your Content Marketing

Evaluating your content marketing specifically for each audience segment will yield insights that a program-wide analysis won’t capture. Audience segmentation isn’t just good for reaching the right people with the right message. Done well, it can help you learn more about your audience.

Beyond that, good segmentation can help you learn how to better meet your audience’s needs, and your content marketing should be a part of that process.

  • First, if you aren’t creating content specifically for different audience segments, please start doing so now.
  • Second, if you aren’t creating your audience segments based on their attributes and behavior, that’s another change you should make immediately. (“People who buy Product A from us” is not an effective audience segment.)

Assuming you do have useful audience segmentation in place, here’s how you can use it to learn more about your audience.

All Content Is Not Created Equal

Begin by evaluating your content marketing efforts on their own. Identify the 20% of your content that performs best and the 20% that performs worst. (We’ll come back to those bottom-of-the-barrel content elements in a bit.)

Your evaluation can be based on key performance metrics, running the gamut from page views to revenue generated. But you should include a range of both process metrics and outcomes metrics.

We define process metrics as those data points, like page views, time on page, CTRs, etc., that can provide valuable insight into your audience’s interests, but don’t measure actual business performance. Outcomes metrics are those that relate to revenue generation, lead quality, lead volume, and so on.

You may be tempted to lean more heavily on outcomes metrics, if you have them. They are clearly more important in the long run. Page likes don’t pay the bills, after all. But our goal is to evaluate the health and effectiveness of our entire content marketing program. So, understanding how well we’re doing with early-stage prospects is important. The data points from early funnel activity are almost always going to be process oriented — content consumption, measures of engagement, micro-conversion numbers.
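The 20%/20% split described above can be sketched in a few lines. The content names and scores below are hypothetical, and any single metric (or a blended score combining process and outcomes metrics) can stand in for “performance.”

```python
# Hypothetical per-content performance scores (e.g., a blended metric).
scores = {
    "post-a": 900, "post-b": 120, "post-c": 450, "post-d": 60,
    "post-e": 700, "post-f": 300, "post-g": 50,  "post-h": 880,
    "post-i": 150, "post-j": 500,
}

# Rank content from best to worst by score.
ranked = sorted(scores, key=scores.get, reverse=True)

# Take the top and bottom 20% (at least one item each).
k = max(1, len(ranked) // 5)
top_20 = ranked[:k]       # your strongest performers
bottom_20 = ranked[-k:]   # the bottom-of-the-barrel items revisited later
```

In practice you would run this per metric and compare the lists; a piece that is bottom-20% on page views but top-20% on lead quality is telling you something.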

Cross-Referencing Your Results

With your raw performance information in-hand, look at these numbers again — broken down by audience segment. Here’s where you’ll see real value. If you can identify what content is resonating best with each audience segment, you can tailor programs to those audience segments, based on what they’re most interested in.

Rather than simply trying to double down on your best-performing content, you can provide content that performs best for each audience segment.
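Here is a toy illustration of that cross-referencing, finding the best-performing content per segment rather than overall. The content names, segment labels, and click-through rates are all hypothetical.

```python
# Hypothetical engagement data broken down by audience segment:
# (content, segment, click-through rate).
rows = [
    ("how-to-guide", "practitioners", 0.12),
    ("how-to-guide", "executives",    0.03),
    ("roi-report",   "practitioners", 0.04),
    ("roi-report",   "executives",    0.10),
]

# For each segment, keep the content with the highest CTR.
best = {}
for content, segment, ctr in rows:
    if segment not in best or ctr > best[segment][1]:
        best[segment] = (content, ctr)
```

Note that neither piece of content “wins” overall; each wins with a different segment, which is exactly the insight a program-wide rollup would hide.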

What About Your Underperforming Content?

You may also find that, rather than eliminating your less effective content, you can tailor it to a specific audience segment and have it perform much more effectively. This will likely require a deeper dive: are the laggards weak across the board, or strong in one particular area despite poor overall numbers?

Some of this work may require updates to your coding or your analytics reporting. Discuss what that investment is going to be with your tech team. Chances are, costs will be recouped quickly.

Once you get the hang of this approach, you’ll see benefits beyond your immediate results. This kind of deeper dive into your analytics data can help you evaluate information from disparate parts of your marketing efforts and yield insights that can impact your broader marketing effectiveness.