How to Outsource Analytics

In this series, I have been emphasizing the importance of statistical modeling in almost every article. While there are plenty of benefits of using statistical models in a more traditional sense (refer to “Why Model?”), in the days when “too much” data is the main challenge, I would dare to say that the most important function of statistical models is that they summarize complex data into simple-to-use “scores.”

The next important feature would be that models fill in the gaps, transforming “unknowns” to “potentials.” You see, even in the age of ubiquitous data, no one will ever know everything about everybody. For instance, out of 100,000 people you have permission to contact, only a fraction will be “known” wine enthusiasts. With modeling, we can assign scores for “likelihood of being a wine enthusiast” to everyone in the base. Sure, models are not 100 percent accurate, but I’ll take “70 percent chance of afternoon shower” over not knowing the weather forecast for the day of the company picnic.
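
For the technically curious, here is what such scoring might look like in its most skeletal form. This is only an illustrative sketch in Python; the customer table, column names and the "known wine enthusiast" flag are all made up, and real projects involve far more data preparation and validation.

```python
# Minimal sketch: assigning a "likelihood of being a wine enthusiast" score
# to everyone in a contact base. All columns and values are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.DataFrame({
    "age": [34, 52, 41, 28, 60, 45],
    "income_k": [60, 120, 85, 40, 150, 95],
    "gourmet_purchases": [1, 8, 3, 0, 12, 5],
    "known_wine_enthusiast": [0, 1, 0, 0, 1, 1],  # known for only a fraction of the base
})

features = ["age", "income_k", "gourmet_purchases"]
model = LogisticRegression().fit(customers[features], customers["known_wine_enthusiast"])

# Score everyone, including the "unknowns": a probability between 0 and 1
customers["wine_enthusiast_score"] = model.predict_proba(customers[features])[:, 1]
print(customers[["known_wine_enthusiast", "wine_enthusiast_score"]])
```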

I’ve already explained other benefits of modeling in detail earlier in this series, but if I may cut it really short, models will help marketers:

1. In deciding whom to engage, as they cannot afford to spam the world and annoy everyone who can read, and

2. In determining what to offer once they decide to engage someone, as consumers are savvier than ever and they will ignore and discard any irrelevant message, no matter how good it may look.

OK, then. I hope you are sold on this idea by now. The next question is, who is going to do all that mathematical work? In a country where jocks rule over geeks, it is clear to me that many folks are more afraid of mathematics than public speaking; which, in its own right, ranks higher than death in terms of the fear factor for many people. If I may paraphrase “Seinfeld,” many folks are figuratively more afraid of giving a eulogy than being in the coffin at a funeral. And thanks to a sub-par math education in the U.S. (and I am not joking about this, having graduated high school on foreign soil), yes, the fear of math tops them all. Scary, heh?

But that’s OK. This is a big world, and there are plenty of people who are really good at mathematics and statistics. That is why I purposefully never got into the mechanics of modeling techniques and related programming issues in this series. Instead, I have been emphasizing how to formulate questions, how to express business goals in a more logical fashion and where to invest to create analytics-ready environments. Then the next question is, “How will you find the right math geeks who can make all your dreams come true?”

If you have a plan to create an internal analytics team, there are a few things to consider before committing to that idea. Too many organizations just hire one or two statisticians, dump all the raw data onto them, and hope to God that they will figure out some way to make money with the data, somehow. Good luck with that idea, as:

1. I’ve seen so many failed attempts like that (in fact, I’d be shocked if it actually worked), and

2. I am sure God doesn’t micromanage statistical units.

(Similarly, I am almost certain that she doesn’t care much for football or baseball scores of certain teams, either. You don’t think God cares more for the Red Sox than the Yankees, do ya?)

The first challenge is locating good candidates. If you post any online ad for “Statistical Analysts,” you will receive a few hundred resumes per day. But the hiring process is not that simple, as you should ask the right questions to figure out who is the real deal and who is a poser (and there are many posers out there). Even among qualified candidates with ample statistical knowledge, there are differences between the “Doers” and “Vendor Managers.” Depending on your organizational goal, you must differentiate the two.

Then the next challenge is keeping the team intact. In general, mathematicians and statisticians are not solely motivated by money; they also want constant challenges. Like any smart and creative folks, they will simply pack up and leave if “they” determine that the job is boring. Just a couple of modeling projects a year with some rudimentary sets of data? Meh. Boring! Promises of upward mobility only work for a fraction of them, as the majority would rather deal with numbers and figures, showing no interest in managing other human beings. So, coming up with interesting and challenging projects, which will also benefit the whole organization, becomes a job in itself. If there are not enough challenges, the smart ones will quit on you first. They also need constant mentoring, as even the smartest statisticians will not know everything about the challenges associated with marketing, target audiences and the business world, in general. (If you stumble upon a statistician who is even remotely curious about how her salary is paid for, start with her.)

Further, you would need to invest in setting up an analytical environment, as well. That includes software, hardware and supporting staff. Toolsets are becoming much cheaper, but they are not exactly free yet. In fact, some famous statistical software, such as SAS, can be quite expensive year after year, although there are plenty of alternatives now. And the team needs an “analytics-ready” data environment, as I have emphasized countless times in this series (refer to “Chicken or the Egg? Data or Analytics?” and “Marketing and IT; Cats and Dogs”). Such data preparation work is not for statisticians, and most of them are not even good at cleaning up dirty data, anyway. That means you will need different types of developers/programmers on the analytics team. I pointed out that analytical projects call for a cohesive team, not some super-duper analyst who can do it all (refer to “How to Be a Good Data Scientist”).

By now you may be saying, “Jeez Louise, enough already,” as all this is just too much to manage just to build a few models. Suddenly, outsourcing may sound like a great idea. Then you would realize there are many things to consider when outsourcing analytical work, as well.

First, where would you go? Everyone in the data industry and their cousins claim that they can take care of analytics. But in reality, it is a scary place where many who have “analytics” in their taglines do not even touch “predictive analytics.”

Analytics is a word that is abused as much as “Big Data,” so we really need to differentiate among its meanings. “Analytics” may mean:

  • Business Intelligence (BI) Reporting: This is mostly about the present, such as the display of key success metrics and dashboard reporting. While it is very important to know about the current state of business, much of so-called “analytics” unfortunately stops right here. Yes, it is good to have a dashboard in your car now, but do you know where you should be going?
  • Descriptive Analytics: This is about how the targets “look.” Common techniques such as profiling, segmentation and clustering fall under this category. These techniques are mainly for describing the target audience to enhance and optimize messages to them. But using these segments as a selection mechanism is not recommended, though many dare to do exactly that (more on this subject in future articles).
  • Predictive Modeling: This is about answering questions about the future. Who would be more likely to behave certain ways? What communication channels will be most effective for whom? How much is the potential spending level of a prospect? Who is more likely to be a loyal and profitable customer? What are their preferences? Response models, various types of cloning models, value models, revenue models, attrition models, etc., all fall under this category, and they require hardcore statistical skills. Plus, as I emphasized earlier, these model scores compact large amounts of complex data into nice bite-size packages.
  • Optimization: This is mostly about budget allocation and attribution. Marketing agencies (or media buyers) generally deal with channel optimization and spending analysis, at times using econometrics models. This type of statistical work calls for different types of expertise, but many still insist on calling it simply “analytics.”

Let’s say that for the purpose of customer-level targeting and personalization, we decided to outsource the “predictive” modeling projects. What are our options?

We may consider:

  • Individual Consultants: In-house consultants are dedicated to your business for the duration of the contract, guaranteeing full access like an employee. But they are there for you only temporarily, with one foot out the door all the time. And when they do leave, all the knowledge walks away with them. Depending on the rate, the costs can add up.
  • Standalone Analytical Service Providers: Analytical work is all they do, so you get focused professionals with broad technical and institutional knowledge. Many of them are entrepreneurs, but that may work against you, as they could often be understaffed and stretched thin. They also tend to charge for every little step, with not many freebies. They are generally open to using any type of data, but the majority of them do not have secure sources of third-party data, which could be essential for certain types of analytics involving prospecting.
  • Database Service Providers: Almost all data compilers and brokers have statistical units, as they need to fill in the gap within their data assets with statistical techniques. (You didn’t think that they knew everyone’s income or age, did you?) For that reason, they have deep knowledge in all types of data, as well as in many industry verticals. They provide a one-stop shop environment with deep resource pools and a variety of data processing capabilities. However, they may not be as agile as smaller analytical shops, and analytics units may be tucked away somewhere within large and complex organizations. They also tend to emphasize the use of their own data, as after all, their main cash cows are their data assets.
  • Direct Marketing Agencies: Agencies are very strategic, as they touch all aspects of marketing and control creative processes through segmentation. Many large agencies boast full-scale analytical units, capable of all types of analytics that I explained earlier. But some agencies have very small teams, stretched really thin—just barely handling the reporting aspect, not any advanced analytics. Some just admit that predictive analytics is not part of their core competencies, and they may outsource such projects (not that it is a bad thing).

As you can see here, there is no clear-cut answer to “with whom you should work.” Basically, you will need to check out all types of analysts and service providers to determine the partner best suited to your long- and short-term business purposes, not just analytical goals. Often, many marketers just go with the lowest bidder. But pricing is just one of many elements to be considered. Here, allow me to introduce “10 Essential Items to Consider When Outsourcing Analytics.”

1. Consulting Capabilities: I put this at the top of the list, as being a translator between the marketing and the technology world is the most important differentiator (refer to “How to Be a Good Data Scientist”). They must understand the business goals and marketing needs, prescribe suitable solutions, convert such goals into mathematical expressions and define targets, making the best of available data. If they lack the strategic vision to set up the data roadmap, statistical knowledge alone will not be enough to achieve the goals. And such business goals vary greatly depending on the industry, channel usage and related success metrics. Good consultants always ask questions first, while sub-par ones will try to force-fit marketers’ goals into their toolsets and methodologies.

Translating marketing goals into specific courses of action is a skill in itself. A good analytical partner should be capable of building a data roadmap (not just statistical steps) with a deep understanding of the business impact of resultant models. They should be able to break down larger goals into smaller steps, creating proper phased approaches. The plan may call for multiple models, all kinds of pre- and post-selection rules, or even external data acquisition, while remaining sensitive to overall costs.

The target definition is the core of all these considerations, which requires years of experience and industry knowledge. Simply, the wrong or inadequate targeting decision leads to disastrous results, no matter how sound the mathematical work is (refer to “Art of Targeting”).

Another important quality of a good analytical partner is the ability to create usefulness out of seemingly chaotic and unstructured data environments. Modeling is not about waiting for the perfect set of data, but about making the best of available data. In many modeling bake-offs, the winners are often decided by the creative usage of provided data, not just statistical techniques.

Finally, the consultative approach is important, as models do not exist in a vacuum; they have to fit into the marketing engine. Beware of the ones who want to change the world around their precious algorithms, as they are geeks, not strategists. And the ones who understand the entire marketing cycle will give advice on what the next phase should be, as marketing efforts must be perpetual, not transient.

So, how will you find consultants? Ask the following questions:

  • Are they “listening” to you?
  • Can they repeat “your” goals in their own words?
  • Do their roadmaps cover both short- and long-term goals?
  • Are they confident enough to correct you?
  • Do they understand “non-statistical” elements in marketing?
  • Have they “been there, done that” for real, or just in theories?

2. Data Processing Capabilities: I know that some people look down upon the word “processing.” But data manipulation is the most important key step “before” any type of advanced analytics even begins. Simply, “garbage-in, garbage out.” And unfortunately, most datasets are completely unsuitable for analytics and modeling. In general, easily more than 80 percent of model development time goes into “fixing” the data, as most are unstructured and unrefined. I have been repeatedly emphasizing the importance of a “model-ready” (or “analytics-ready”) environment for that reason.

However, the reality dictates that the majority of databases are indeed NOT model-ready, and most of them are not even close to it. Well, someone has to clean up the mess. And in this data business, the last one who touches the dataset becomes responsible for all the errors and mistakes made to it thus far. I know it is not fair, but that is why we need to look at the potential partner’s ability to handle large and really messy data, not just the statistical savviness displayed in glossy presentations.

Yes, that dirty work includes data conversion, edit/hygiene, categorization/tagging, data summarization and variable creation, encompassing all kinds of numeric, character and freeform data (refer to “Beyond RFM Data” and “Freeform Data Aren’t Exactly Free”). It is not the most glorious part of this business, but data consistency is the key to successful implementation of any advanced analytics. So, if a model-ready environment is not available, someone had better know how to make the best of whatever is given. I have seen too many meltdowns in “before” and “after” modeling steps due to inconsistencies in databases.
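
To make “variable creation” a bit more concrete, here is a bare-bones sketch of rolling raw transactions up to customer-level summary variables. The table, column names and dates are all hypothetical; this is only meant to illustrate the kind of summarization work involved.

```python
# Minimal sketch: rolling raw transactions up into customer-level,
# "model-ready" variables (recency/frequency/monetary-style summaries).
# Table and column names are hypothetical.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2024-01-05", "2024-03-20", "2023-11-02", "2024-02-14", "2024-04-01", "2022-07-30"]),
    "amount": [45.0, 120.0, 30.0, 75.0, 60.0, 200.0],
})

as_of = pd.Timestamp("2024-05-01")
customer_vars = transactions.groupby("customer_id").agg(
    num_orders=("amount", "size"),
    total_spend=("amount", "sum"),
    avg_order_value=("amount", "mean"),
    last_order_date=("order_date", "max"),
)
customer_vars["days_since_last_order"] = (as_of - customer_vars["last_order_date"]).dt.days
print(customer_vars)
```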

So, grill the candidates with the following questions:

  • Do they support file conversions, edit/hygiene, categorization and summarization?
  • How big a dataset is too big, and how many files/tables are too many for them?
  • How much freeform data is too much for them?
  • Can they share sample model variables that they have created in the past?

3. Track Records in the Industry: It can be argued that industry knowledge is even more crucial for success than statistical know-how, as nuances are often “Lost in Translation” without relevant industry experience. In fact, some may not even be able to carry on a proper conversation with a client without it, leading to all kinds of wrong assumptions. I have seen a case where “real” rocket scientists messed up models for credit card campaigns.

The No. 1 reason why industry experience is important is that everyone’s success metrics are unique. Just to name a few, financial services (banking, credit card, insurance, investment, etc.), travel and hospitality, entertainment, packaged goods, online and offline retail, catalogs, publication, telecommunications/utilities, non-profit and political organizations all call for different types of analytics and models, as their business models and the ways they interact with target audiences are vastly different. For example, building a model (or a database, for that matter) for businesses that hand over merchandise “before” they collect money is fundamentally different from doing so for the ones where the exchange happens simultaneously. Even a simple concept like payment date or transaction date cannot be treated the same way. For retailers, recent dates could be better for business, but for a subscription business, older dates may carry more weight. And these are just some examples with “dates,” before touching any dollar figures or other fun stuff.

Then the job gets even more complicated, if we further divide all of these industries by B-to-B vs. B-to-C, where available data do not even look similar. On top of that, divisional ROI metrics may be completely different, and even terminology and culture may play a role in all of this. When you are a consultant, you really don’t want to stop the flow of a meeting to clarify some unfamiliar acronyms, as you are supposed to know them all.

So, always demand specific industry references and examine client rosters, if allowed. (Many clients specifically ask vendors not to use their names as references.) Basically, watch out for the ones who push one-size-fits-all cookie-cutter solutions. You deserve way more than that.

4. Types of Models Supported: Speaking of cookie-cutter stuff, we need to be concerned with types of models that the outsourcing partner would support. Sure, nobody employs every technique, and no one can be good at everything. But we need to watch out for the “One-trick Ponies.”

This could be a tricky issue, as we are going into a more technical domain. Plus, marketers should not self-prescribe specific techniques; they should clearly state their business goals instead (refer to “Marketing and IT; Cats and Dogs”). Some of the modeling goals are:

  • Rank and select prospect names
  • Lead scoring
  • Cross-sell/upsell
  • Segment the universe for messaging strategy
  • Pinpoint the attrition point
  • Assign lifetime values for prospects and customers
  • Optimize media/channel spending
  • Create new product packages
  • Detect fraud
  • Etc.

Unless you have successfully dealt with the outsourcing partner in the past (or you have a degree in statistics), do not blurt out words like Neural-net, CHAID, Cluster Analysis, Multiple Regression, Discriminant Function Analysis, etc. That would be like demanding specific medication before your new doctor even asks about your symptoms. The key is meeting your business goals, not fulfilling buzzwords. Let them present their methodology “after” the goal discussion. Nevertheless, see if the potential partner is pushing one or two specific techniques or solutions all the time.

5. Speed of Execution: In modern marketing, speed to action is king. Speed wins, and speed gains respect. However, when it comes to modeling or other advanced analytics, you may be shocked by the wide range of time estimates provided by each outsourcing vendor. To be fair, they are covering themselves, mainly because they have no idea what kind of messy data they will receive. As I mentioned earlier, pre-model data preparation and manipulation are critical components, and they are the most time-consuming part of all; especially when available data are in bad shape. Post-model scoring, audit and usage support may elongate the timeline. The key is to differentiate such pre- and post-modeling processes in the time estimate.

Even for pure modeling elements, time estimates vary greatly, depending on the complexity of assignments. Surely, a simple cloning model with basic demographic data would be much easier to execute than the ones that involve ample amounts of transaction- and event-level data, coming from all types of channels. If time-series elements are added, it will definitely be more complex. Typical clustering work is known to take longer than regression models with clear target definitions. If multiple models are required for the project, it will obviously take more time to finish the whole job.

Now, the interesting thing about building a model is that analysts don’t really finish it, but they just run out of time—much like the way marketers work on PowerPoint presentations. The commonality is that we can basically tweak models or decks forever, but we have to stop at some point.

However, with all kinds of automated tools and macros, model development time has decreased dramatically over the past decades. We have really come a long way since the first application of statistical techniques to marketing, and no one should be quoting a 1980s timeline in this century. But some still do. I know vendors are trained to follow the guideline “always under-promise and over-deliver,” but still.

An interesting aspect of this dilemma is that we can negotiate the timeline by asking for simpler and less sophisticated versions with diminished accuracy. If, hypothetically, it takes a week to be 98 percent accurate, but it only takes a day to be 90 percent accurate, what would you pick? That should be the business decision.

So, what is a general guideline? Again, it really depends on many factors, but allow me to share a version of it:

  • Pre-modeling Processing
    – Data Conversions: from half a day to weeks
    – Data Append/Enhancement: between overnight and two days
    – Data Edit and Summarization: data-dependent
  • Modeling: ranges from half a day to weeks
    – Depends on type, number of models and complexity
  • Scoring: from half a day to one week
    – Mainly depends on number of records and state of the database to be scored

I know these are wide ranges, but watch out for the ones that routinely quote 30 days or more for simple clone models. They may not know what they are doing, or worse, they may be some mathematical perfectionists who don’t understand the marketing needs.

6. Pricing Structure: Some marketers would put this on top of the checklist, or worse, use the pricing factor as the only criterion. Obviously, I disagree. (Full disclosure: I have been on the service side of the fence during my entire career.) Yes, every project must make economic sense in the end, but the budget should not and cannot be the sole deciding factor in choosing an outsourcing partner. There are many specialists under famous brand names who command top dollars, and then there are many data vendors who throw in “free” models, disrupting the ecosystem. Either way, one should not jump to conclusions too fast, as there is no free lunch, after all. In any case, I strongly recommend that no one start the meeting with pricing questions (hence, this article). When you get to the pricing part, ask what the price includes, as the analytical journey could be a series of long and winding roads. Some of the biggest factors that need to be considered are:

  • Multiple Model Discounts—Less for second or third models within a project?
  • Pre-developed (off-the-shelf) Models—These can be “much” cheaper than custom models, while not custom-fitted.
  • Acquisition vs. CRM—Employing client-specific variables certainly increases the cost.
  • Regression Models vs. Other Types—At times, types of techniques may affect the price.
  • Clustering and Segmentations—They are generally priced much higher than target-specific models.

Again, it really depends on the complexity factor more than anything else, and the pre- and post-modeling process must be estimated and priced separately. Non-modeling charges often add up fast, and you should ask for unit prices and minimum charges for each step.

Scoring charges can become expensive over time, too, so negotiate discounts for routine scoring of the same models. Some may offer all-inclusive package pricing for everything. The important thing is that you must be consistent with the checklist when shopping around with multiple candidates.

7. Documentation: When you pay for a custom model (not pre-developed, off-the-shelf ones), you get to own the algorithm. Because algorithms are not tangible items, the knowledge must be transferred in model documents. Beware of the ones who offer “black-box” solutions with comments like, “Oh, it will work, so trust us.”

Good model documents must include the following, at the minimum:

  • Target and Comparison Universe Definitions: What was the target variable (or “dependent” variable) and how was it defined? How was the comparison universe defined? Was there any “pre-selection” for either of the universes? These are the most important factors in any model—even more than the mechanics of the model itself.
  • List of Variables: What are the “independent” variables? How were they transformed or binned? From where did they originate? Often, these model variables describe the nature of the model, and they should make intuitive sense.
  • Model Algorithms: What is the actual algorithm? What are the assigned weights for each independent variable?
  • Gains Chart: We need to examine potential effectiveness of the model. What are the “gains” for each model group, from top to bottom (e.g., 320 percent gain at the top model group in comparison to the whole universe)? How fast do such gains decrease as we move down the scale? How do the gains factors compare against the validation sample? A graphic representation would be nice, too.

For custom models, it is customary to have a formal model presentation, full documentation and scoring script in designated programming languages. In addition, if client files are provided, ask for a waterfall report that details input and output counts of each step. After the model scoring, it is also customary for the vendor to provide a scored universe count by model group. You will be shocked to find out that many so-called analytical vendors do not provide thorough documentation. Therefore, it is recommended to ask for sample documents upfront.
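
To illustrate the “Gains Chart” item above, here is a minimal sketch of how gains by model decile might be computed from a scored sample. The scores and outcomes below are randomly fabricated purely for illustration; only the mechanics matter here.

```python
# Minimal sketch: a simple gains table by model decile.
# "score" and "actual" are randomly fabricated here, purely for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
scored = pd.DataFrame({"score": rng.random(10_000)})
scored["actual"] = (rng.random(10_000) < scored["score"] * 0.2).astype(int)

scored["decile"] = pd.qcut(scored["score"], 10, labels=False)  # 0 = lowest scores, 9 = highest
overall_rate = scored["actual"].mean()

gains = (
    scored.groupby("decile")["actual"].mean()
    .sort_index(ascending=False)   # top model group (highest scores) first
    .rename("response_rate")
    .to_frame()
)
gains["gain_vs_universe_pct"] = gains["response_rate"] / overall_rate * 100
print(gains)  # e.g., a value of 320 would mean 3.2 times the universe average
```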

8. Scoring Validation: Models are built and presented properly, but the job is not done until the models are applied to the universe from which the names are ranked and selected for campaigns. I have seen too many major meltdowns at this stage. Simply, it is one thing to develop models with a few hundred thousand record samples, but it is quite another to apply the algorithm to millions of records. I am not saying that the scoring job always falls onto the developers, as you may have an internal team or a separate vendor for such ongoing processes. But do not let the model developer completely leave the building until everything checks out.

The model should have been validated against the validation sample by then, but live scoring may reveal all kinds of inconsistencies. You may want to back-test the algorithms with past campaign results, as well. In short, many things go wrong “after” the modeling steps. When I hear customers complaining about models, I often find that the modeling is the only part that was done properly, and the “before” and “after” steps were all messed up. Further, even machines misunderstand each other, as any differences in platform or scripting language may cause discrepancies. Or, maybe there was no technical error, but missing values may have caused inconsistencies (refer to “Missing Data Can Be Meaningful”). Nonetheless, the model developers would have the best insight as to what could have gone wrong, so make sure that they are available for questions after models are presented and delivered.
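
One simple sanity check at this stage is to compare the score distribution of the development sample against the freshly scored live universe, decile by decile. The sketch below assumes both sets of scores are available as plain arrays; the numbers themselves are made up for illustration.

```python
# Minimal sketch: sanity-checking live scoring against the development sample.
# Large shifts in the score distribution by decile often point to data or
# platform discrepancies (e.g., missing-value handling). All numbers are made up.
import numpy as np

def decile_distribution(scores, edges=None):
    """Share of records falling into each development-sample score decile."""
    scores = np.asarray(scores)
    if edges is None:
        edges = np.quantile(scores, np.linspace(0, 1, 11))
    counts, _ = np.histogram(scores, bins=edges)
    return counts / counts.sum(), edges

dev_scores = np.random.default_rng(1).normal(0.30, 0.10, 50_000)    # development sample
live_scores = np.random.default_rng(2).normal(0.35, 0.12, 200_000)  # scored live universe

dev_dist, edges = decile_distribution(dev_scores)
live_dist, _ = decile_distribution(live_scores, edges=edges)

for i, (d, l) in enumerate(zip(dev_dist, live_dist), start=1):
    flag = "  <-- investigate" if abs(d - l) > 0.03 else ""
    print(f"decile {i}: dev {d:.1%} vs live {l:.1%}{flag}")
```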

9. Back-end Analysis: Good analytics is all about applying learnings from past campaigns—good or bad—to new iterations of efforts. We often call it “closed-loop marketing,” while many marketers often neglect to follow up. Any respectable analytics shop must be aware of it, though they may classify such work separately from modeling or other analytical projects. At the minimum, you need to check out if they even offer such services. In fact, so-called “match-back analysis” is not as simple as just matching campaign files against responders in this omnichannel environment. When many channels are employed at the same time, allocation of credit (i.e., “what worked?”) may call for all kinds of business rules or even dedicated models.
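
As a toy illustration of why match-back is more than a simple join, here is a sketch of one possible business rule (last-touch credit) applied across multiple channels. The data and the rule itself are assumptions for illustration only, not a recommendation of last-touch attribution.

```python
# Minimal sketch: "match-back" with a last-touch credit rule across channels.
# Contact and conversion records are hypothetical; real allocation rules vary widely.
import pandas as pd

contacts = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "channel": ["email", "display", "email", "direct_mail", "email", "display"],
    "contact_date": pd.to_datetime(
        ["2024-03-01", "2024-03-10", "2024-03-05", "2024-02-20", "2024-03-02", "2024-03-12"]),
})
conversions = pd.DataFrame({
    "customer_id": [1, 3],
    "conversion_date": pd.to_datetime(["2024-03-15", "2024-03-14"]),
    "revenue": [80.0, 150.0],
})

# Keep only contacts that precede the conversion, then credit the last touch
matched = contacts.merge(conversions, on="customer_id")
matched = matched[matched["contact_date"] <= matched["conversion_date"]]
last_touch = matched.sort_values("contact_date").groupby("customer_id").tail(1)

print(last_touch.groupby("channel")["revenue"].sum())
```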

While you are at it, ask for a cheaper version of “canned” reports, as well, as custom back-end analysis can be even more costly than the modeling job itself, over time. Pre-developed reports may not include all the ROI metrics that you’re looking for (e.g., open, clickthrough and conversion rates, plus revenue and orders per piece mailed, per order, per display, per email, per conversion, etc.). So ask for sample reports upfront.

If you start breaking down all these figures by data source, campaign, time series, model group, offer, creative, targeting criteria, channel, ad server, publisher, keywords, etc., it can be unwieldy really fast. So contain yourself, as no one can understand 100-page reports, anyway. See if the analysts can guide you with such planning, as well. Lastly, if you are so into ROI analysis, get ready to share the “cost” side of the equation with the selected partner. Some jobs are on the marketers.

10. Ongoing Support: Models have a finite shelf life, as all kinds of changes happen in the real world. Seasonality may be a factor, or the business model or strategy may have changed. Fluctuations in data availability and quality further complicate the matter. Basically, assumptions like “all things being equal” only hold in textbooks, so marketers must plan for periodic reviews of models and business rules.

A sure sign of trouble is decreasing effectiveness of models. When in doubt, consult the developers; they may recommend a re-fit or a complete redevelopment of the models. Quarterly reviews would be ideal, but if the cost becomes an issue, start with semi-annual or yearly reviews, and never go more than a year without any review. Some vendors may offer discounts for redevelopment, so ask for the price quote upfront.

I know this is a long list of things to check, but picking the right partner is very important, as it often becomes a long-term relationship. And you may find it strange that I didn’t even list “technical capabilities” at all. That is because:

1. Many marketers are not equipped to dig deep into the technical realm anyway, and

2. The difference between the most mathematically sound models and the ones from the opposite end of the spectrum is not nearly as critical as other factors I listed in this article.

In other words, even the worst model in the bake-off would be much better than no model, if these other business criteria are well-considered. So, happy shopping with this list, and I hope you find the right partner. Employing analytics is not an option when living in the sea of data.

Don’t Do It Just Because You Can

Don’t do it just because you can. No kidding. By the way, I could have gone with Ben Parker’s “With great power comes great responsibility” line, but I didn’t, as it has become an over-quoted cliché. Plus, I’m not much of a fan of “Spiderman.” Actually, I’m kidding this time. (Not the “Spiderman” part, as I’m more of a fan of “Thor.”) But the real reason is that any geek with moderate coding skills or any overzealous marketer with access to some data can do real damage to real human beings without any superpowers to speak of. Largely, we wouldn’t go so far as to call the damage permanent, but I must say that some marketing messages and practices are really annoying and invasive. Enough to classify them as “junk mail” or “spam.” Yeah, I said that, knowing full well that those words are forbidden in the industry in which I built my career.

All jokes aside, I received a call from my mother a few years ago asking me if this “urgent” letter that says her car warranty will expire if she does not act “right now” (along with a few exclamation marks) is something to which she must respond immediately. Many of us by now are impervious to such fake urgencies or outrageous claims (like “You’ve just won $10,000,000!!!”). But I then realized that there still are plenty of folks who would spend their hard-earned dollars based on such misleading messages. What really made me mad, other than the fact that my own mother was involved in that case, was that someone must have actually targeted her based on her age, ethnicity, housing value and, of course, the make and model of her automobile. I’ve been doing this job for too long to be unaware of the potential data variables and techniques that must have played a part in my mother receiving a series of such letters. Basically, some jerk must have created a segment that could be named “old and gullible.” Without a doubt, this is a classic example of what should not be done just because one can.

One might dismiss it as an isolated case of a questionable practice carried out by questionable individuals with questionable moral integrity, but can we honestly say that? I, who know the ins and outs of direct marketing practices quite well, have fallen into traps more than a few times, where a supposedly one-time order mysteriously turned into a continuity program without my consent, followed by an extremely cumbersome canceling process. Further, when I receive calls or emails from shady merchants with dubious offers, I can very well assume my information changed hands in very suspicious ways, if not through outright illegal routes.

Even without the criminal elements, as data become more ubiquitous and targeting techniques become more precise, an accumulation of seemingly inoffensive actions by innocuous data geeks can cause a big ripple in the offline (i.e., “real”) world. I am sure many of my fellow marketers remember the news about a reputable retail chain a few years ago: It accurately predicted pregnancy in households based on product purchase patterns and sent customized marketing messages featuring pregnancy-related products accordingly. Subsequently, it became a big controversy, as such a targeted message was how one particular head of household found out his teenage daughter was indeed pregnant. An unintended consequence? You bet.

I actually saw the presentation by the instigating statisticians at a predictive analytics conference before the whole incident hit the wire. At the time, the presenters were unaware of the consequences of their actions, so they proudly shared the employed methodologies with the audience. But when I heard what they were actually trying to predict, I immediately turned my head to look at the lead statistician on my then-analytical team sitting next to me, and saw that she had the same concerned look that I must have had on my face. And our concern was definitely not about the techniques, as we knew how to do the same when provided with similar sets of data. It was about the human consequences that such a prediction could bring, not just to the eventual targets, but also to the predictors and their fellow analysts in the industry, who would all be lumped together as evil scientists by outsiders. In predictive analytics, there is a price for being wrong; and at times, there is a price to pay for being right, too. Like I said, we shouldn’t do things just because we can.

Analysts do not have superpowers individually, but when technology and ample amounts of data are conjoined, the results can be quite influential and powerful, much like the way bombs can be built with common materials available at any hardware store. Ironically, I have been evangelizing all this time that data and technology should be wielded together to make big and dumb data smaller and smarter. But providing answers to decision-makers in ready-to-use formats, hence “humanizing” the data, may have its downside, too. Simply, “easy to use” can easily become “easy to abuse.” After all, humans are fallible creatures with ample amounts of greed and ambition. Even without any obvious bad intentions, it is sometimes very difficult to contemplate all angles, especially when it comes to those sensitive and squeamish humans.

I talked about the social consequences of the data business last month (refer to “How to Be a Good Data Scientist”), and that is why I emphasized that anyone who is about to get into this data field must possess a deep understanding of both technology and human nature. That little sensor in your stomach that tells you “Oh, I have a bad feeling about this” may not come to everyone naturally, but we all need to be equipped with those safeguards, like angels on our shoulders.

Hindsight is always 20/20, but apparently, those smart analysts who did that pregnancy prediction only thought about the techniques and the bottom line, but did not consider all the human factors. And they should have. Or, if not them, their manager should have. Or their partners in the marketing department should have. Or their public relations people should have. Heck, “someone” in their organization should have, alright? Just like we do not casually approach a woman on the street who “seems” pregnant and say “You must be pregnant.” Only socially inept people would do that.

People consider certain matters extremely private, in case some data geeks didn’t realize that. If I might add, the same goes for ailments such as erectile dysfunction or constipation, or any other personal business related to body parts that are considered private. Unless you are a doctor in an examining room, don’t say things like “You look old, so you must have a hard time having sex, right?” It is already bad enough that we can’t even watch golf tournaments on TV without those commercials that assume golf fans need help in that department. (By the way, having “two” bathtubs “outside” the house at dusk doesn’t make any sense either, when the effect of the drug can last for hours, for heaven’s sake. Maybe the man lost interest because the tubs were too damn heavy?)

While it may vary from culture to culture, we all have some understanding of social boundaries in casual settings. When you are talking to a complete stranger on a plane ride, for example, you know exactly how much information you would feel comfortable sharing with that person. And when someone crosses the line, we call that person inappropriate, or “creepy.” Unfortunately, that creepy line is set differently for each person we encounter (I am sure people like George Clooney or Scarlett Johansson have a really high threshold for what might be considered creepy), but I think we can all agree that such a shady area can be loosely defined, at the least. Therefore, when we deal with large amounts of data affecting a great many people, imagine a rather large common area of such creepiness/shadiness, and do not ever cross it. In other words, when in doubt, don’t go for it.

Now, as a lifelong database marketer, I am not siding with the over-the-top privacy zealots either, as most of them do not understand the nature of data work and can’t tell the difference between informed (and mutually beneficial) messages and Big Brother-like nosiness. This targeting business is never about looking up an individual’s record one at a time, but more about finding correlations between users and products and doing some good match-making in mass numbers. In other words, we don’t care what questionable sites anyone visits, and honest data players would not steal or abuse information with bad intent. I heard about waiters who steal credit card numbers from their customers with some swiping devices, but would you condemn the entire restaurant industry for that? Yes, there are thieves in any part of society, but not all data players are hackers, just like not all waiters are thieves. Statistically speaking, much like flying being the safest form of travel, I can even argue that handing over your physical credit card to a stranger is even more dangerous than entering the credit card number on a website. Online incidents just look much worse when things go wrong, as they affect a great many all at once, just like when a plane crashes.

Years back, I used to frequent a Japanese restaurant near my office. The owner, who doubled as the head sushi chef, was not a nosy type. So he waited for more than a year to ask me what I did for a living. He had never heard anything about database marketing, direct marketing or CRM (no “Big Data” on the horizon at that time). So I had to find a simple way to explain what I do. As a sushi chef with some local reputation, I presumed that he would know the personal preferences of many frequently visiting customers (or “high-value customers,” as marketers call them). He may know exactly who likes what kind of fish and types of cuts, who doesn’t like raw shellfish, who is allergic to what, who has less of a tolerance for wasabi or who would indulge in exotic fish roes. When I asked this question, his answer was a simple “yes.” Any diligent sushi chef would care for his or her customers that much. And I said, “Now imagine that you can provide such customized services to millions of people, with the help of computers and collected data.” He immediately understood the benefits of using data and analytics, and murmured “Ah so …”

Now let’s turn the tables for a second here. From the customer’s point of view, yes, it is very convenient for me that my favorite sushi chef knows exactly how I like my sushi. The same goes for the local coffee barista who knows how you take your coffee every morning. Such knowledge is clearly mutually beneficial. But what if those business owners or service providers start asking about my personal finances or about my grown daughter in a “creepy” way? I wouldn’t care if they carried the best yellowtail in town or served the best cup of coffee in the world. I would cease all my interaction with them immediately. Sorry, they’ve just crossed that creepy line.

Years ago, I had more than a few chances to sit closely with Lester Wunderman, widely known as “The Father of Direct Marketing,” as the venture called I-Behavior in which I participated as one of the founders actually originated from an idea on a napkin from Lester and his friends. Having previously worked in an agency that still bears his name, and having only seen him behind a podium until I was introduced to him on one cool autumn afternoon in 1999, meeting him at a small round table and exchanging ideas with the master was like an unknown guitar enthusiast having a jam session with Eric Clapton. What was most amazing was that, at the beginning of the dot.com boom, he was completely unfazed by all those new ideas that were flying around at that time, and he was precisely pointing out why most of them would not succeed at all. I do not need to recite early 21st-century history to point out that his prediction was indeed accurate. When everyone was chasing the latest bit of technology for quick bucks, he was at least a decade ahead of all of those young bucks, already thinking about the human side of the equation. Now, I would not reveal his age out of respect, but let’s just say that almost all of the people in his age group would describe the occupations of their offspring as “Oh, she just works on a computer all the time …” I can only wish that I will remain that sharp when I am his age.

One day, Wunderman very casually shared a draft of the “Consumer Bill of Rights for Online Engagement” with a small group of people who happened to be in his office. I was one of the lucky souls who heard about his idea firsthand, and I remember feeling that he was spot-on with every point, as usual. I read it again recently just as this Big Data hype is reaching its peak, just like the dot.com boom was moving with a force that could change the world back then. In many ways, such tidal waves do end up changing the world. But lest we forget, such shifts inevitably affect living, breathing human beings along the way. And for any movement guided by technology to sustain its velocity, people who are at the helm of the enabling technology must stay sensitive toward the needs of the rest of the human collective. In short, there is not much to gain by annoying and frustrating the masses.

Allow me to share Lester Wunderman’s “Consumer Bill of Rights for Online Engagement” verbatim, as it appeared in the second edition of his book “Being Direct”:

  1. Tell me clearly who you are and why you are contacting me.
  2. Tell me clearly what you are—or are not—going to do with the information I give.
  3. Don’t pretend that you know me personally. You don’t know me; you know some things about me.
  4. Don’t assume that we have a relationship.
  5. Don’t assume that I want to have a relationship with you.
  6. Make it easy for me to say “yes” and “no.”
  7. When I say “no,” accept that I mean not this, not now.
  8. Help me budget not only my money, but also my TIME.
  9. My time is valuable, don’t waste it.
  10. Make my shopping experience easier.
  11. Don’t communicate with me just because you can.
  12. If you do all of that, maybe we will then have the basis for a relationship!

So, after more than 15 years of the so-called digital revolution, how many of these are we violating almost routinely? Based on the look of my inboxes and sites that I visit, quite a lot and all the time. As I mentioned in my earlier article “The Future of Online is Offline,” I really get offended when even seasoned marketers use terms like “online person.” I do not become an online person simply because I happen to stumble onto some stupid website and forget to uncheck some pre-checked boxes. I am not some casual object at which some email division of a company can shoot to meet their top-down sales projections.

Oh, and good luck with that kind of mindless mass emailing; your base will soon be saturated and you will learn that irrelevant messages are bad for the senders, too. Proof? How is it that the conversion rate of a typical campaign did not increase dramatically during the past 40 years or so? Forget about open or click-through rate, but pay attention to the good-old conversion rate. You know, the one that measures actual sales. Don’t we have superior databases and technologies now? Why is anyone still bragging about mailing “more” in this century? Have you heard about “targeted” or “personalized” messages? Aren’t there lots and lots of toolsets for that?

As the technology advances, it becomes that much easier and faster to offend people. If the majority of data handlers continue to abuse their power, stemming from the data in their custody, the communication channels will soon run dry. Or worse, if abusive practices continue, the whole channel could be shut down by some legislation, as we have witnessed in the downfall of the outbound telemarketing channel. Unfortunately, a few bad apples will make things a lot worse a lot faster, but I see that even reputable companies do things just because they can. All the time, repeatedly.

Furthermore, in this day and age of abundant data, simply not offending people or not violating rules isn’t good enough. In fact, to paraphrase comedian Chris Rock, only losers brag about doing things that they are supposed to do in the first place. The direct marketing industry has long been bragging about the self-governing nature of its tightly knit (and often incestuous) network, but as tools get cheaper and sharper by the day, we all need to be even more careful wielding this data weaponry. Because someday soon, we as consumers will be seeing messages everywhere around us, maybe through our retinas directly, not just in our inboxes. Personal touch? Yes, in the creepiest way, if done wrong.

Visionaries like Lester Wunderman were concerned about the abusive nature of online communication from the very beginning. We should all read his words again, and think twice about social and human consequences of our actions. Google from its inception encapsulated a similar idea by simply stating its organizational objective as “Don’t be evil.” That does not mean that it will stop pursuing profit or cease to collect data. I think it means that Google will always try to be mindful about the influences of its actions on real people, who may not be in positions to control the data, but instead are on the side of being the subject of data collection.

I am not saying all of this out of some romantic altruism; rather, I am emphasizing the human side of the data business to preserve the forward momentum of the Big Data movement, even though I do not care for its name. Because I still believe, even from a consumer’s point of view, that a great amount of efficiency can be achieved by using data and technology properly. No one can deny that modern life in general is much more convenient thanks to them. We do not get lost on streets often, we can translate foreign languages on the fly, and we can talk to people on the other side of the globe while looking at their faces. We are much better informed about products and services that we care about, and we can look up and order anything we want while walking down the street. And heck, we get suggestions before we even think about what we need.

But we can think of many negative effects of data, as well. It goes without saying that the data handlers must protect the data from falling into the wrong hands, which may have criminal intentions. Absolutely. That is like banks having to protect their vaults. Going a few steps further, if marketers want to retain the privilege of having ample amounts of consumer information and use such knowledge for their benefit, do not ever cross that creepy line. If the Consumer’s Bill of Rights is too much for you to retain, just remember this one line: “Don’t be creepy.”

Data Deep Dive: The Art of Targeting

Even if you own a sniper rifle (and I’m not judging), if you aim at the wrong place, you will never hit the target. Obvious, right? But that happens all the time in the world of marketing, even when advanced analytics and predictive modeling techniques are routinely employed. How is that possible? Well, the marketing world is not like an Army shooting range where the silhouette of the target is conveniently hung at the predetermined location, but it is more like the “Twilight Zone,” where things are not what they seem. Marketers who failed to hit the real target often blame the guns, which in this case are targeting tools, such as models and segmentations. But let me ask, was the target properly defined in the first place?

In my previous columns, I talked about the importance of predictive analytics in modern marketing (refer to “Why Model?”) for various reasons, such as targeting accuracy, consistency, deeper use of data and, most importantly in the age of Big Data, the concise nature of model scores, in which tons of data are packed into ready-for-use formats. Now, even the marketers who bought into these ideas often make mistakes by relinquishing the important duty of target definition solely to analysts and statisticians, who do not necessarily possess the power to read the marketers’ minds. Targeting is often called “half-art and half-science,” and it should be looked at from multiple angles, starting with the marketer’s point of view. Therefore, even marketers who are slightly (or, in many cases, severely) allergic to mathematics should come one step closer to the world of analytics and modeling. Don’t be too scared, as I am not asking you to be a rifle designer or sniper here; I am only talking about hanging the target in the right place so that others can shoot at it.

Let us start by reviewing what statistical models are: A model is a mathematical expression of “differences” between dichotomous groups, which, in marketing, are often referred to as “targets” and “non-targets.” Let’s say a marketer wants to target “high-value customers.” To build a model to describe such targets, we need to define “non-high-value customers,” as well. In marketing, popular targets are often expressed as “repeat buyers,” “responders to certain campaigns,” “big-time spenders,” “long-term, high-value customers,” “troubled customers,” etc., for specific products and channels. Now, for all those targets, we also need to define “bizarro” or “anti-” versions of them. One may think that they are just the “remainders” of the target. But, unfortunately, it is not that simple; the definition of the whole universe should be set first to even bring up the concept of the remainders. In many cases, defining “non-buyers” is much more difficult than defining “buyers,” because lack of purchase information does not guarantee that the individual in question is indeed a non-buyer. Maybe the data collection was never complete. Maybe he used a different channel to respond. Maybe his wife bought the item for him. Maybe you don’t have access to the entire pool of names that represent the “universe.”
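
For readers who want to see what such a “mathematical expression of differences” typically looks like, one common form (logistic regression, just one of many possibilities) is sketched below; the x's are placeholder predictors and y is the dependent variable described in the next section.

```latex
% One common form of such an algorithm: logistic regression, where y = 1 for the
% target group and y = 0 for the non-target group (the "Dependent Variable").
P(y = 1 \mid x_1, \dots, x_k) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}
```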

Remember T, C, & M
That is why we need to examine the following three elements carefully when discussing statistical models with marketers who are not necessarily statisticians:

  1. Target,
  2. Comparison Universe, and
  3. Methodology.

I call them “TCM” in short, so that I don’t leave out any element in exploratory conversations. Defining the proper target is the obvious first step. Defining and obtaining data for the comparison universe is equally important, but it can be challenging; without it, you’d have nothing against which to compare the target. Again, a model is an algorithm that expresses differences between two non-overlapping groups. So, yes, you need both Superman and Bizarro-Superman (who always seems more elusive than his counterpart). And that one important variable that differentiates the target and non-target is called the “Dependent Variable” in modeling.

The third element in our discussion is the methodology. I am sure you may have heard of terms like logistic regression, stepwise regression, neural net, decision trees, CHAID analysis, genetic algorithm, etc., etc. Here is my advice to marketers and end-users:

  • State your goals and use cases clearly, and let the analyst pick the proper methodology that suits your goals.
  • Don’t be a bad patient who walks into a doctor’s office demanding a specific prescription before the doctor even examines you.

Besides, for all intents and purposes, the methodology itself matters the least in comparison with an erroneously defined target and comparison universe. Differences in methodologies are often measured in fractions. A combination of a wrong target and a wrong universe definition ends up as a shotgun, if not an artillery barrage. That doesn’t sound so precise, does it? We should be talking about a sniper rifle here.

Clear Goals Leading to Definitions of Target and Comparison
So, let’s roll up our sleeves and dig deeper into defining targets. Allow me to use an example, as you will be able to picture the process better that way. Let’s just say that, for general marketing purposes, you want to build a model targeting “frequent flyers.” One may ask whether that is for business or for pleasure, but let’s just say that such data are hard to obtain at this moment. (Finding the “reasons” is always much more difficult than counting the number of transactions.) And it was collectively decided that it would simply be beneficial to know who is more likely to be a frequent flyer, in general. Such knowledge could be very useful for many applications, not just in the travel industry, but for affiliated services, such as credit cards or publications. Plus, analytics is about making the best of what you’ve got, not waiting for some perfect dataset.

Now, here is the first challenge:

  • When it comes to flying, how frequent is frequent enough for you? Five times a year, 10 times, 20 times or even more?
  • Over how many years?
  • Would you consider actual miles traveled, or just number of issued tickets?
  • How large are the audiences in those brackets?

If you decided that five times a year is a not-so-big or not-so-small target (yes, sizes do matter) that also fits the goal of the model (you don’t want to target only super-elites, as they could be too rare or too distinct, almost like outliers), to whom are they going to be compared? Everyone who flew less than five times last year? How about people who didn’t fly at all last year?

Actually, one option is to compare people who flew more than five times against people who didn’t fly at all last year, but wouldn’t that model be too much like a plain “flyer” model? Or, will that option provide more vivid distinction among the general population? Or, one analyst may raise her hand and say “to hell with all these breaks and let’s just build a model using the number of times flown last year as the continuous target.” The crazy part is this: None of these options are right or wrong, but each combination of target and comparison will certainly yield very different-looking models.

Then what should a marketer do in a situation like this? Again, clearly state the goal and what is more important to you. If this is for general travel-related merchandising, then the goal should be more about distinguishing likely frequent flyers within the general population; therefore, comparing five-plus flyers against non-flyers—ignoring the one-to-four-time flyers—makes sense. If this project is for an airline to target potential gold or platinum members, using people who don’t even fly as the comparison makes little or no sense. Of course, in a situation like this, the analyst in charge (or data scientist, the way we refer to them these days) must come halfway and prescribe exactly what target and comparison definitions would be most effective for that particular user. That requires lots of preliminary data exploration, and it is not all science, but half art.
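
To make that concrete, here is a hedged sketch (Python/pandas, with a hypothetical field name) of the “five-plus flyers versus non-flyers, ignoring the one-to-four-time flyers” definition described above:

```python
import pandas as pd

# Hypothetical prospect base with a count of flights taken last year.
base = pd.DataFrame({"flights_last_year": [0, 2, 7, 0, 12, 3, 5, 0]})

# Target: flew five or more times last year.
target = base[base["flights_last_year"] >= 5]

# Comparison: did not fly at all. The one-to-four-time flyers are deliberately
# left out so the model draws a sharper line between the two groups.
comparison = base[base["flights_last_year"] == 0]

# From here, the two frames would be stacked with a 1/0 dependent variable,
# just like the earlier sketch. Swap either definition and you get a different model.
```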

Now, if I may provide a shortcut in defining the comparison universe, just draw a representative sample from “the pool of names that are eligible for your marketing efforts.” The key word is “eligible” here. For example, many businesses operate within certain areas with certain restrictions or predetermined targeting criteria. It would make no sense to use a U.S. population sample to build models for supermarket chains, telecommunications companies, or utility companies with designated footprints. If the business in question sells female apparel items, first eliminate the male population from the comparison universe (but I’d leave “unknown” genders in the mix, so that the model can work its magic in that shady ground). You must remember, however, that all this means you need different models when you change the prospecting universe, even if the target definition remains unchanged. Because the model algorithm is the expression of the difference between T and C, you need a new model if you swap out the C part, even if you leave the T alone.
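
As a small illustration of that eligibility filter (again Python/pandas, with an invented gender code of “F,” “M,” or “U” for unknown), the female-apparel case might look like this:

```python
import pandas as pd

# Hypothetical marketing pool with a gender code: 'F', 'M', or 'U' (unknown).
pool = pd.DataFrame({"gender": ["F", "M", "U", "F", "M", "U", "F", "U"]})

# Eligible universe: drop the known males, but keep the unknowns so the model
# can still work its magic in that shady ground.
eligible = pool[pool["gender"] != "M"]

# Draw a representative sample of the eligible pool to serve as the comparison.
comparison_sample = eligible.sample(frac=0.5, random_state=42)
```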

Multiple Targets
Sometimes it gets twisted the other way around, where the comparison universe is relatively stable (i.e., your prospecting universe is stable) but there could be multiple targets (i.e., multiple Ts, like T1, T2, etc.) in your customer base.

Let me elaborate with a real-life example. A while back, we were helping a company that sells expensive auto accessories for luxury cars. The client, following his intuition, casually told us that he only cares for big spenders whose average order sizes are more than $300. Now, the trouble with this statement is that:

  1. Such a universe could be too small to be used effectively as a target for models, and
  2. High spenders do not tend to purchase often, so we may end up leaving out the majority of the potential target buyers in the whole process.

This is exactly why some type of customer profiling must precede the actual target definition. A series of simple distribution reports clearly revealed that this particular client was dealing with a dual-universe situation, where the first group (or segment) was made of infrequent but high-dollar spenders whose average orders were even greater than $300, and the second group was made of very frequent buyers whose average order sizes were well below the $100 mark. If we had ignored this finding, or worse, neglected to run preliminary reports and just relied on our client’s wishful thinking, we would have created a “phantom” target, which is just an average of these dual universes. A model designed for such a phantom target will yield phantom results. The solution? If you find two distinct targets (as in T1 and T2), just bite the bullet and develop two separate models (T1 vs. C and T2 vs. C).
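
A minimal sketch of that fix (Python/pandas, with invented thresholds) is simply to carve out T1 and T2 explicitly, instead of letting them blur into one phantom target:

```python
import pandas as pd

# Hypothetical customer summary: order counts and average order sizes.
customers = pd.DataFrame({
    "orders": [1, 14, 2, 20, 1, 18],
    "avg_order": [450, 60, 380, 45, 520, 70],
})

# T1: infrequent but high-dollar spenders (average order above $300).
t1 = customers[(customers["avg_order"] > 300) & (customers["orders"] <= 3)]

# T2: very frequent buyers whose average orders sit well below $100.
t2 = customers[(customers["avg_order"] < 100) & (customers["orders"] >= 10)]

# Each target then gets its own model against the same comparison universe C:
# T1 vs. C and T2 vs. C, never one model against an averaged "phantom" target.
```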

Multi-step Approach
There are still other reasons why you may need multiple models. Let’s talk about the case of a “target within a target.” Some may relate this idea to a “drill-down” concept, and it can be very useful when the prospecting universe is very large and the marketer is trying to reach only the top 1 percent (which can still be very large, if the pool contains hundreds of millions of people). Correctly finding even the top 5 percent in any universe is difficult enough, let alone the top 1. So what I suggest in this case is to build two models in sequence to get to the “Best of the Best” in a stepwise fashion.

  • The first model would be more like an “elimination” model, where obviously not-so-desirable prospects would be removed from the process, and
  • The second-step model would be designed to go after the best prospects among survivors of the first step.

Again, models are expressions of differences between targets and non-targets, so if the first model eliminates the bottom 80 percent to 90 percent of the universe and leaves the rest as the new comparison universe, you need a separate model—for sure. And lots of interesting things happen at the later stage, where new variables start to show up in algorithms, or important variables from the first step lose steam in later steps. While a bit cumbersome during deployment, the multi-step approach ensures precision targeting, much like a sniper rifle at close range.
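
Here is a hedged sketch of that two-step selection (Python/pandas, with hypothetical score columns standing in for the two separately built models):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical universe carrying scores from two separately built models.
universe = pd.DataFrame({
    "elimination_score": rng.random(1_000),   # step 1: weeds out the bottom
    "best_of_best_score": rng.random(1_000),  # step 2: built on step-1 survivors
})

# Step 1: the elimination model removes, say, the bottom 80 percent.
cutoff = universe["elimination_score"].quantile(0.80)
survivors = universe[universe["elimination_score"] >= cutoff]

# Step 2: a separate model, trained against the new, smaller comparison
# universe, ranks the survivors; keep only the top slice for the campaign.
final_targets = survivors.nlargest(50, "best_of_best_score")
```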

I also suggest this type of multi-step process when clients are attempting to use the result of segmentation analysis as a selection tool. Segmentation techniques are useful as descriptive analytics. But as a targeting tool, they are just too much like a shotgun approach. It is one thing to describe groups of people such as “young working mothers,” “up-and-coming,” and “empty-nesters with big savings” and use them as references when carving out messages tailored toward them. But it is quite another to target such large groups as if the population within a particular segment is completely homogeneous in terms of susceptibility to specific offers or products. Surely, the difference between a Mercedes buyer and a Lexus buyer ain’t income and age, which may have been the main differentiator for segmentation. So, in the interest of maintaining a common theme throughout the marketing campaigns, I’d say such segments are good first steps. But for further precision targeting, you may need a model or two within each segment, depending on the size, channel to be employed and nature of offers.

Another case where the multi-step approach is useful is when the marketing and sales processes are naturally broken down into multiple steps. For typical B-to-B marketing, one may start the campaign by mass mailing or email (I’d say that step also requires modeling). And when responses start coming in, the sales team can take over and start contacting responders through more personal channels to close the deal. Such sales efforts are obviously very time-consuming, so we may build a “value” model measuring the potential value of the mail or email responders and start contacting them in a hierarchical order. Again, as the available pool of prospects gets smaller and smaller, the nature of targeting changes as well, requiring different types of models.

This type of funnel approach is also very useful in online marketing, as the natural steps involved in email or banner marketing go through lifecycles, such as blasting, delivery, impression, clickthrough, browsing, shopping, investigation, shopping basket, checkout (Yeah! Conversion!) and repeat purchases. Obviously, not all steps require aggressive or precision targeting. But I’d say, at the minimum, the initial blast, clickthrough and conversion should be looked at separately. For any lifetime value analysis, yes, the repeat purchase is a key step, which, unfortunately, is often neglected by many marketers and data collectors.

Inversely Related Targets
More complex cases are when some of these multiple response and conversion steps are “inversely” related. For example, many responders to invitation-to-apply type credit card offers are often people with not-so-great credit. Well, if one has a good credit score, would all these credit card companies have left them alone? So, in a case like that, it becomes very tricky to find good responders who are also credit-worthy in the vast pool of a prospect universe.

I wouldn’t go as far as saying that it is like finding a needle in a haystack, but it is certainly not easy. Now, I’ve met folks who go after the likely responders with potential to be approved as a single target. It really is a philosophical difference, but I much prefer building two separate models in a situation like this:

  • One model designed to measure responsiveness, and
  • Another to measure likelihood to be approved.

The major benefit of having separate models is that each model will be able to employ different types and sources of data variables. A more practical benefit for the users is that the marketers will be able to pick and choose what is more important to them at the time of campaign execution. They will obviously go to the top corner bracket, where both scores are high (i.e., potential responders who are likely to be approved). But as they dial the selection down, they will be able to test responsiveness and creditworthiness separately.
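
In practice, that pick-and-choose step can be as simple as the sketch below (Python/pandas, with invented score columns and cutoffs):

```python
import pandas as pd

# Hypothetical prospects carrying scores from the two separately built models.
prospects = pd.DataFrame({
    "response_score": [0.81, 0.40, 0.92, 0.15, 0.77],
    "approval_score": [0.70, 0.88, 0.35, 0.90, 0.82],
})

# Top corner bracket: likely responders who are also likely to be approved.
top_bracket = prospects[(prospects["response_score"] >= 0.7) &
                        (prospects["approval_score"] >= 0.7)]

# Dialing down: the two dimensions can be relaxed independently, depending on
# whether response volume or credit quality matters more for this campaign.
more_volume = prospects[(prospects["response_score"] >= 0.5) &
                        (prospects["approval_score"] >= 0.7)]
```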

Mixing Multiple Model Scores
Even when multiple models are developed with completely different intentions, mixing them up will produce very interesting results. Imagine you have access to scores for “High-Value Customer Model” and “Attrition Model.” If you cross these scores in a simple 2×2 matrix, you can easily create a useful segment in one corner called “Valuable Vulnerable” (a term that my mentor created a long time ago). Yes, one score is predicting who is likely to drop your service, but who cares if that customer shows little or no value to your business? Take care of the valuable customers first.
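
A toy version of that cross (Python/pandas, with made-up scores and a 0.5 cut between “high” and “low”) might look like this:

```python
import pandas as pd

# Hypothetical customers with two pre-built model scores.
customers = pd.DataFrame({
    "value_score": [0.9, 0.2, 0.8, 0.1],
    "attrition_score": [0.85, 0.9, 0.1, 0.2],
})

# Cut each score at 0.5 to form the 2x2 matrix.
customers["value_bucket"] = pd.cut(
    customers["value_score"], [0, 0.5, 1], labels=["low value", "high value"])
customers["attrition_bucket"] = pd.cut(
    customers["attrition_score"], [0, 0.5, 1], labels=["likely to stay", "likely to leave"])

# "Valuable Vulnerable": high value AND likely to leave, the corner
# that deserves the retention budget first.
valuable_vulnerable = customers[
    (customers["value_bucket"] == "high value")
    & (customers["attrition_bucket"] == "likely to leave")
]
```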

This type of mixing and matching becomes really interesting if you have lots of pre-developed models. During my tenure at a large data compiling company, we built more than 120 models for all kinds of consumer characteristics for general use. I remember the real fun began when we started mixing multiple models, like combining a “NASCAR Fan” model with a “College Football Fan” model; a “Leaning Conservative” model with an “NRA Donor” model; an “Organic Food” one with a “Cook for Fun” model or a “Wine Enthusiast” model; a “Foreign Vacation” model with a “Luxury Hotel” model or a “Cruise” model; a “Safety and Security Conscious” model or a “Home Improvement” model with a “Homeowner” model, etc., etc.

You see, no one is one dimensional, and we proved it with mathematics.

No One is One-dimensional
Obviously, these examples are just excerpts from a long playbook for the art of targeting. My intention is to emphasize that marketers must consider target, comparison and methodology separately; a combination of these three elements yields the most fitting solution for each challenge, well beyond what some popular toolset or new statistical methodology presented at a technical conference can accomplish. In fact, when marketers are able to define the target in a logical fashion with help from trained analysts and data scientists, the effectiveness of modeling and subsequent marketing campaigns increases dramatically. Creating and maintaining an analytics department, or hiring an outsourced analytics vendor, isn’t enough on its own.

One may be concerned about the idea of building multiple models so casually, but let me remind you that it is the reality in which we already reside, anyway. I am saying this, as I’ve seen too many marketers who try to fix everything with just one hammer, and the results weren’t ideal—to say the least.

It is a shame that we still treat people with one-dimensional tools, such as segmentations and clusters, in this age of ubiquitous and abundant data. Nobody is one-dimensional, and we must embrace that reality sooner rather than later. That calls for rapid model development and deployment, using everything we’ve got.

Arguing about how difficult it is to build one or two more models here and there is so last century.

Boost Your Website Sales: 8 Simple E-commerce Tips That Really Work

OK, so you’ve won half the battle. You’re driving traffic to your site. Now what? How can you get your visitors to convert? This is a challenge that most every website that sells a product faces. The following are some tried and true tactics that, over the years, I’ve seen make a difference. Some may seem simplistic, but they DO most definitely impact your online conversion rate.

Here are a few things you could do to boost online sales and gain loyal customers. These can be applied and refined for most any business, industry or niche:

1. Make Sure Your SSL Seal And Other Consumer-Trust Logos Are Prominent. SSL, or secure sockets layer, is a sign that the site is encrypted … that the information consumers enter, such as personal and credit card information, is protected. Most e-commerce sites must obtain an SSL certificate from vendors such as VeriSign, GoDaddy, eTrust, TRUSTe and others. It’s a good practice to display the vendor’s logo on your order page, as well as to make sure the “https” prefix or the lock icon is present in the browser window. This is a clear and comforting sign to consumers that they can order online with confidence. Other trust logos that are in plain view and anchored on each page of your website can also instill confidence in potential buyers. Some may require membership or purchase, when applicable, and may include the Better Business Bureau (“BBB”), PayPal Verified, Authorize.net Verified Merchant and virus protection software (i.e., “McAfee Secure”). Also, if you accept credit cards and have a money-back guarantee, there’s nothing more powerful than strong, eye-catching graphic icons, such as “100% Money Back Guarantee” or “We Accept All Major Credit Cards” (then show images of Visa, Mastercard, Amex and Discover).

2. Encourage Online Sales vs. Other Response Mechanisms. Offer special “Internet Only Pricing” to customers. It could be a discount of 5 percent to 10 percent if they order online versus by phone, fax or mail. This reduces potential overhead costs, such as fees for telesales or order entry personnel. These Web-only specials can be highlighted on your homepage via a banner ad, as well as on your product pages near the qualified items.

3. Offer Free Shipping. Many e-tailers already factor all or a portion of shipping into the retail price of an item as part of their COGS (cost of goods sold). Whether you are truly offering free shipping, have already factored shipping into the product’s cost, or are simply running a limited-time free shipping special—if you’re offering it, mention it, big and bold, on your home page. Free shipping offers have a huge psychological effect on consumers when they’re comparing competitors’ products and websites. In addition to product quality and value, offering free shipping can make all the difference in the final purchase decision.

4. Use Buyer Feedback To Your Advantage. Have an area on your website, or indicate next to select items, “Customer Favorite” or “Hot Item.” Also, place some glowing customer testimonials or reviews next to the product itself for potential prospects to see. Sites like Amazon, Babies”R”Us and others are pros at this strategy, as well as at using ratings and “Likes” to convey a product’s popularity. Consumers like to feel good about the item they are about to purchase, as well as see that it’s popular with the masses. Seeing a great testimonial and knowing that others purchased the product provides validation and a feeling of comfort to a consumer. In addition to helping the conversion rate, this tactic also helps reduce buyer’s remorse and product returns.

5. Advertise Products in Google Shopping (formerly Google Product Search, and before that, Froogle). http://www.google.com/shopping is a free product information platform from Google where you can post a single item or submit a data feed. Your products will appear in Google Product search and may also appear in Google.com search results, depending on the keywords used. This is a simple and easy way to increase your product’s visibility and market share.

6. Make Sure Your Product Pages are Optimized for Search Engines. Sounds obvious, but many folks overlook their catalog and product pages. After doing some keyword research on actual search behavior for your product, refine your meta description, meta keywords and title tag of your product pages. This will help consumers find your product in the organic listing of search engine results.

7. Have a Special Coupon Code “Call Out” On Your Home Page. This is a best practice among online fashion retailers, who typically have a banner ad or interstitial ad on their homepage stating something like, “Summer Blow Out Sale, Use Coupon Code 1234.” But this concept can be applied to virtually any industry. This is another great way to offer a special discount to your online customers that makes them feel good about the purchase. You can also encourage viral activity with “forward to friend” or “share” links. Make sure to have some great intro copy mentioning how customers should “pass on the great savings to friends, family and colleagues.”

8. Consider Payment Plans. For higher-ticket items, consider setting up extended payment plans that allow customers to pay for an item over a few payments. HSN.com and QVC.com have mastered this. If an item is, let’s say, $200, you might want to offer a flex pay option of “6 easy payments of $33.33” that is conveniently auto-billed to their credit card. Just be diligent when calculating your payment prices, as well as creating your return/refund policy for these items. The general rule is that your actual production costs/hard costs should be covered in the first one to three payments.
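
If you want to sanity-check the flex-pay math, a quick back-of-the-envelope sketch (Python, with a made-up hard-cost figure) looks like this:

```python
# Hypothetical flex-pay check: list price, number of installments, hard costs.
price = 200.00
installments = 6
hard_costs = 90.00  # invented production/fulfillment cost for illustration

payment = round(price / installments, 2)   # "6 easy payments of $33.33"
payments_to_cover = hard_costs / payment   # installments needed to cover costs

print(f"{installments} payments of ${payment:.2f}")
print(f"Hard costs covered after about {payments_to_cover:.1f} payments")
# Rule of thumb: that last number should fall within the first one to three payments.
```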

It’s all about being strategically creative and taking the consumer’s point of view into account regarding e-comm strategies. Remember to keep testing methods that help improve sales and drive prospects to your storefront.

Make note of when you implement new tactics; then, after a month of being live, compare sales results year-over-year to see if your efforts have made an improvement. I’m confident that you will see a positive difference in your online conversion rates.

6 Steps to Building the Perfect Landing Page

Today, I’ve decided to go back to basics. And in the world of direct response marketing, nothing is more basic than the landing page. Having worked in the industry for many years, I can tell you from firsthand knowledge that no campaign can succeed without a landing page that converts. This is an indisputable fact. Try launching an email or direct mail campaign with a kick-ass creative that sends people back to the homepage of your website and see what happens. Inevitably, almost all of your hard-fought leads will evaporate into cyberspace, lost forever, destroying any chance of achieving ROI.

Don’t believe me? Want to know how big of a difference a kick-ass landing page makes? Huge. Think about it like this: I’ve seen top-performing landing pages convert upwards of 10 percent to 20 percent of visitors into leads or sales. By contrast, a generic Contact Us page on a plain-vanilla website will typically convert anywhere from 1 percent to 3 percent. I’ll save you the time by doing the math for you: This means you’ll convert anywhere from three to 20 times more visitors. Do those numbers turn your head? If so, read on for some tips on how to build a landing page that kicks butt.

  1. KISS, or Keep It Simple Stupid—Generally, when it comes to landing pages, less is more. Essentially, keeping visitors focused on the key message is the name of the game. This means eliminating all extraneous details not directly related to the campaign at hand. Links to other pages? Delete them. Fancy and distracting design? Change it. Lots of extra content about your firm? Gone.
  2. Headline—When visitors arrive on your landing page, you’ve got at most 15 seconds (and probably a lot less) to grab their attention. And nothing grabs someone’s attention better than a catchy and hard-hitting headline. According to Jeff Ginsberg (@mktgexperiments), landing page headlines should “emphasize what the customer gets rather than does and be customer-focused.” Couldn’t agree more. If you’re new to the headline game, don’t try to reinvent the wheel. Check out successful campaigns and see what they used. Get a sense of what other marketers are doing, and remember that imitation is sometimes the sincerest form of flattery.
  3. Call-to-action—If you spent your hard-earned marketing bucks to drive someone to your landing page in the first place, bet your bottom dollar it’s because you want them to do something—express interest in your products or services by filling out a Web form, buy your product by whipping out a credit card and clicking submit on a shopping cart, etc. With that in mind, make sure your landing page contains a clear, concise and effective call-to-action that encourages the prospect to follow through and close the loop.
  4. Form—Unless you’re running a branding campaign—in which case you wouldn’t even need a landing page, right?—at the end of the user-engagement process you want the visitor to fill out some sort of Web form. Call it what you will—lead form, shopping cart and so on—but the act of filling out or not filling out this one vital page element is what will ultimately be used as a Key (if not the Key) Performance Indicator (KPI) that determines how well your campaign performed. When it comes to Web forms, the shorter the better. Fact is, nothing turns off or scares away Web visitors more than a long and imposing Web form. So make it short, sweet and to the point. Oh, and if possible, use technology such as Personalized URLs (PURLs) to pre-fill as many of the form fields as possible. Remember, the less there is to do, the greater the chance it gets filled out in the first place.
  5. Advertise security—Nobody likes to submit information on a website they don’t trust. In other words, flaunt your security credentials. If your page is secure and encrypted (SSL), make sure the security certificate is displayed prominently on the landing page. And if there are other security features your firm follows, darn right you should display them, too.
  6. Build credibility—Similar to the last point, prospects fill out forms on landing pages because they trust the vendor. This means that it’s your job to tell your brand’s story in a clear, concise and compelling manner. The trick is that, because we’re talking about a landing page, you don’t have much real estate in which to tell your story. In other words, talk about what makes your firm and its products unique, but don’t waste too much space or verbiage doing so. If you want to include a customer testimonial or two, make them short and to the point.

Okay, I guess those are my best tips for landing pages. So go out and build some good ones. Trust me, you won’t regret it.

Best Online Marketing Practices For A ‘Bionic’ Business: Part III

My last two posts, part one and part two, focused on real-life questions I’ve gotten from business owners, as well as my responses. Topics that were covered included free online press release distribution best practices and social marketing secrets for stronger visibility.

This final post in the series will share some powerful, yet easy, ideas to help build your list and boost website performance.

Enjoy!

Question: What can I do to start building a list of qualified leads?
Answer: Creating free content is a great way to give something and get something in return. You’re offering free, powerful editorial content. And, in return, you’re asking for an email address from the reader. Creating this type of content isn’t just good for acquisition efforts, it’s also good for branding and establishing you as an expert within your niche. You can then leverage your free content to build your list (prospect database). Your list is your key to future sales. Growing and cultivating your list through editorial is a proven business model from top online publishers. It’s a great way to bond with … and cross-sell to … your readers. And this helps create a loyal following. And, from there, the sky is the limit!

Question: What are some tips to boost sales and eCommerce performance?
Answer: No matter what you’re selling, whether it’s products or a service (e.g., copywriting, freelancing, consulting), you should always have a variety of price points for customers at every level. Offering front-end products and back-end products gives you room to bring in a customer at a low level and up-sell them. As far as eComm ideas go:

  • Make Sure Your SSL Seal is Prominent. This is a sign that the site is encrypted … that the information consumers enter, such as personal and credit card information, is protected. Most eCommerce sites must obtain an SSL certificate from vendors such as VeriSign, GoDaddy, eTrust, TRUSTe, etc. It’s a good practice to display the vendor’s logo on your order page, as well as to make sure the “https” prefix or the lock icon is present in the browser window. This is a clear and comforting sign to consumers that they can order online with confidence.
  • Encourage Online Sales vs. Other Order Mechanisms. Offer special “Internet Only Pricing” to customers. It could be a discount of 5 percent to 10 percent. This reduces any potential overhead costs for staffing fees such as telesales or order entry personnel.
  • Offer Free Shipping. Many eTailers already factor shipping into their published price, so when there’s a big, flashing banner next to the item saying “free shipping” it gives consumers that extra little push to move forward with the transaction. It boils down to basic psychology. Everyone likes to feel like they’re getting something for free.
  • Use Buyer Feedback To Your Advantage. Have an area on your website, or next to select items, that says “Customer Favorite” or “Hot Item.” Also, place some glowing customer testimonials next to the product. Consumers like to feel good about the item they are about to purchase. Seeing a great testimonial and knowing that others purchased the product provides validation and a comforting feeling. In addition to helping the conversion rate, this tactic also helps reduce buyer’s remorse and product returns.
  • Make Sure Your Product Pages are Optimized for Search Engines. After doing some keyword research on actual search behavior for your product, refine your meta description, meta keywords and title tag of your product pages. This will help consumers find your product in the organic listing of search engine results.
  • Have a Special Coupon Code Banner on Your Home Page. Something like, “Summer Blow Out Sale, Use Coupon Code 1234.” This makes consumers feel good about the purchase. In addition, encourage viral activity by having a “forward to friend” text link that opens an Outlook email window with the coupon or coupon code. Make sure to have some great promotional copy mentioning how customers should “pass on the great savings to friends, family, and colleagues.”
  • Consider Payment Plans. For your higher ticket items, consider setting up extended payment plans that allow customers to pay for an item over a few payments. If an item is $200, you might want to offer a flex pay of “6 easy payments of $33.33” that is conveniently auto-billed to their credit card. Just be diligent when calculating your payment prices, as well as creating your return/refund policy for these items. The general rule is that your actual production costs/hard costs should be covered in the first one to three payments.