Best Practices Exist for a Reason, Part 3: Email Results Analysis

For many marketers, the bulk of their time is taken up with list selection, subject lines, email design and ensuring the email links to integrated landing pages. But not enough time is spent analyzing the results of those efforts in order to learn from them and apply those lessons to the next campaign.

Whenever clients ask us to help them with new creative, our very first question is “Can we see what you’ve done before and the results associated with it?” You’d be amazed how many don’t have this information — let alone know where they can find it.

And on many occasions when they do provide us with it, they are unsure how to interpret the results or use them to influence the next campaign effort. So here are a few best practices worth considering:

  • Maintain a Historical Record: You should have one digital file that contains the creative for every email you’ve blasted. Organize them by target audience (existing customers, warm prospects, cold prospects). This makes them much easier to find when you’re thinking about your next campaign by audience type.
  • Target Audience: Ideally your notes should include the parameters you used to select your target from your house file (e.g., customers who haven’t purchased in 60 days from X/XX/XX, or inquirers who downloaded whitepaper “ABC” between X/XX/XX and X/XX/XX). If you rented or purchased an outside list, include additional information like the company you rented from, the name of the list, the parameters you provided to them, and the price you paid (you’ll want this to measure your ROI).
  • Blast Quantity: While this is important, it’s NOT the metric you should be using to calculate your open rates. You’ll also want the number of hard and soft bounces, and whether or not your email system is automatically re-blasting to soft bounces at another time. You want to start measuring results based on how many recipients actually received your email, which requires you to subtract hard and soft bounces from your gross blast quantity (see the calculation sketch after this list).
  • Open Rates: While this may seem like a no-brainer, emails can get opened up to 10 days after you blasted, so be sure to take a “final” tally a few weeks after your initial blast date instead of creating your only results report within a few days of the blast.
  • Clickthrough Rates: This number should be calculated as a percentage of the number of unique individuals who opened the email. I’ve seen lots of email companies report clickthrough rates as clicks divided by the number blasted—and I just find that irritating. Your audience can’t click unless it opens the email, so the most important stat to understand is how many of those who opened actually clicked.
  • Conversion to Sale: What was the objective of the campaign? To sell a product? To download a whitepaper? Don’t assume that just because your target clicked on the link, they took the next step. Using Google Analytics (if you don’t have any other source of intel) will let you see which links a web visitor clicked on, including the “download” or “shopping cart” button. If your email is the only source driving traffic to this page, then you can match this rate back to your email campaign. If your email simply dumps the recipient onto your website home page instead of a dedicated landing page, shame on you. Read Part 2 of this series about Best Practices for Landing Pages.
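
To make these calculations concrete, here is a minimal sketch in Python; the field names and sample figures are hypothetical, not pulled from any particular email platform:

```python
def email_metrics(sent, hard_bounces, soft_bounces, unique_opens, unique_clicks, conversions):
    """Compute the core email KPIs described above.

    Rates are based on delivered (sent minus bounces), unique opens and
    unique clicks -- not on the gross blast quantity.
    """
    delivered = sent - hard_bounces - soft_bounces  # recipients who actually received it
    return {
        "delivered": delivered,
        "open_rate": unique_opens / delivered,                # opens as a share of delivered
        "click_to_open_rate": unique_clicks / unique_opens,   # clicks as a share of opens
        "conversion_rate": conversions / unique_clicks,       # conversions as a share of clicks
    }

# Hypothetical example: 50,000 sent, 1,200 hard and 800 soft bounces,
# 9,600 unique opens, 1,440 unique clicks, 216 conversions.
print(email_metrics(50_000, 1_200, 800, 9_600, 1_440, 216))
```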

If you achieve a high open rate (there are plenty of recent industry stats available for comparison), then your problem is not your subject line.

If you achieve a high clickthrough rate, then you’ve designed great email creative with a great offer (and may want to examine/test your subject line to see if you can get better exposure for your message).

If you achieve a low conversion rate, reexamine your landing page. Does it match your email offer? Is it working as hard as possible to lead the visitor to the desired next step? Is your button obvious, and is it clear what the recipient will get when they click?

The truth is, you won’t really know if YOUR email campaign is working until you establish some benchmark metrics, and then begin to compare additional email efforts against that benchmark (while always keeping an eye on industry averages).

How to Outsource Analytics

In this series, I have been emphasizing the importance of statistical modeling in almost every article. While there are plenty of benefits of using statistical models in a more traditional sense (refer to “Why Model?”), in the days when “too much” data is the main challenge, I would dare to say that the most important function of statistical models is that they summarize complex data into simple-to-use “scores.”

The next important feature would be that models fill in the gaps, transforming “unknowns” to “potentials.” You see, even in the age of ubiquitous data, no one will ever know everything about everybody. For instance, out of 100,000 people you have permission to contact, only a fraction will be “known” wine enthusiasts. With modeling, we can assign scores for “likelihood of being a wine enthusiast” to everyone in the base. Sure, models are not 100 percent accurate, but I’ll take “70 percent chance of afternoon shower” over not knowing the weather forecast for the day of the company picnic.

I’ve already explained other benefits of modeling in detail earlier in this series, but if I may cut it really short, models will help marketers:

1. In deciding whom to engage, as they cannot afford to spam the world and annoy everyone who can read, and

2. In determining what to offer once they decide to engage someone, as consumers are savvier than ever and they will ignore and discard any irrelevant message, no matter how good it may look.

OK, then. I hope you are sold on this idea by now. The next question is, who is going to do all that mathematical work? In a country where jocks rule over geeks, it is clear to me that many folks are more afraid of mathematics than public speaking, which, in its own right, ranks higher than death in terms of the fear factor for many people. If I may paraphrase “Seinfeld,” many folks are figuratively more afraid of giving a eulogy than being in the coffin at a funeral. And thanks to a sub-par math education in the U.S. (and I am not joking about this, having graduated high school on foreign soil), yes, the fear of math tops them all. Scary, eh?

But that’s OK. This is a big world, and there are plenty of people who are really good at mathematics and statistics. That is why I purposefully never got into the mechanics of modeling techniques and related programming issues in this series. Instead, I have been emphasizing how to formulate questions, how to express business goals in a more logical fashion and where to invest to create analytics-ready environments. Then the next question is, “How will you find the right math geeks who can make all your dreams come true?”

If you have a plan to create an internal analytics team, there are a few things to consider before committing to that idea. Too many organizations just hire one or two statisticians, dump all the raw data onto them, and hope to God that they will somehow figure out ways to make money with the data. Good luck with that idea, as:

1. I’ve seen so many failed attempts like that (frankly, I’d be shocked if it actually worked), and

2. I am sure God doesn’t micromanage statistical units.

(Similarly, I am almost certain that she doesn’t care much for football or baseball scores of certain teams, either. You don’t think God cares more for the Red Sox than the Yankees, do ya?)

The first challenge is locating good candidates. If you post any online ad for “Statistical Analysts,” you will receive a few hundred resumes per day. But the hiring process is not that simple, as you should ask the right questions to figure out who is the real deal, and who is a poser (and there are many posers out there). Even among qualified candidates with ample statistical knowledge, there are differences between the “Doers” and the “Vendor Managers.” Depending on your organizational goal, you must differentiate the two.

Then the next challenge is keeping the team intact. In general, mathematicians and statisticians are not solely motivated by money; they also want constant challenges. Like any smart and creative folks, they will simply pack up and leave, if “they” determine that the job is boring. Just a couple of modeling projects a year with some rudimentary sets of data? Meh. Boring! Promises of upward mobility only work for a fraction of them, as the majority would rather deal with numbers and figures, showing no interest in managing other human beings. So, coming up with interesting and challenging projects, which will also benefit the whole organization, becomes a job in itself. If there are not enough challenges, smart ones will quit on you first. Then they need constant mentoring, as even the smartest statisticians will not know everything about challenges associated with marketing, target audiences and the business world, in general. (If you stumble into a statistician who is even remotely curious about how her salary is paid for, start with her.)

Further, you would need to invest to set up an analytical environment, as well. That includes software, hardware and other supporting staff. Toolsets are becoming much cheaper, but they are not exactly free yet. In fact, some famous statistical software, such as SAS, could be quite expensive year after year, although there are plenty of alternatives now. And they need an “analytics-ready” data environment, as I emphasized countless times in this series (refer to “Chicken or the Egg? Data or Analytics?” and “Marketing and IT; Cats and Dogs”). Such data preparation work is not for statisticians, and most of them are not even good at cleaning up dirty data, anyway. That means you will need different types of developers/programmers on the analytics team. I pointed out that analytical projects call for a cohesive team, not some super-duper analyst who can do it all (refer to “How to Be a Good Data Scientist”).

By now you would say “Jeez Louise, enough already,” as all this is just too much to manage to build just a few models. Suddenly, outsourcing may sound like a great idea. Then you would realize there are many things to consider when outsourcing analytical work.

First, where would you go? Everyone in the data industry and their cousins claim that they can take care of analytics. But in reality, it is a scary place where many who have “analytics” in their taglines do not even touch “predictive analytics.”

Analytics is a word that is abused as much as “Big Data,” so we really need to differentiate them. “Analytics” may mean:

  • Business Intelligence (BI) Reporting: This is mostly about the present, such as the display of key success metrics and dashboard reporting. While it is very important to know about the current state of business, much of so-called “analytics” unfortunately stops right here. Yes, it is good to have a dashboard in your car now, but do you know where you should be going?
  • Descriptive Analytics: This is about how the targets “look.” Common techniques such as profiling, segmentation and clustering fall under this category. These techniques are mainly for describing the target audience to enhance and optimize messages to them. But using these segments as a selection mechanism is not recommended, while many dare to do exactly that (more on this subject in future articles).
  • Predictive Modeling: This is about answering questions about the future. Who would be more likely to behave certain ways? What communication channels will be most effective for whom? How much is the potential spending level of a prospect? Who is more likely to be a loyal and profitable customer? What are their preferences? Response models, various types of cloning models, value models, revenue models, attrition models, etc. all fall under this category, and they require hardcore statistical skills. Plus, as I emphasized earlier, these model scores compact large amounts of complex data into nice bite-size packages.
  • Optimization: This is mostly about budget allocation and attribution. Marketing agencies (or media buyers) generally deal with channel optimization and spending analysis, at times using econometrics models. This type of statistical work calls for different types of expertise, but many still insist on calling it simply “analytics.”

Let’s say that for the purpose of customer-level targeting and personalization, we decided to outsource the “predictive” modeling projects. What are our options?

We may consider:

  • Individual Consultants: In-house consultants are dedicated to your business for the duration of the contract, guaranteeing full access like an employee. But they are there for you only temporarily, with one foot out the door all the time. And when they do leave, all the knowledge walks away with them. Depending on the rate, the costs can add up.
  • Standalone Analytical Service Providers: Analytical work is all they do, so you get focused professionals with broad technical and institutional knowledge. Many of them are entrepreneurs, but that may work against you, as they can often be understaffed and stretched thin. They also tend to charge for every little step, with not many freebies. They are generally open to using any type of data, but the majority of them do not have secure sources of third-party data, which could be essential for certain types of analytics involving prospecting.
  • Database Service Providers: Almost all data compilers and brokers have statistical units, as they need to fill in the gap within their data assets with statistical techniques. (You didn’t think that they knew everyone’s income or age, did you?) For that reason, they have deep knowledge in all types of data, as well as in many industry verticals. They provide a one-stop shop environment with deep resource pools and a variety of data processing capabilities. However, they may not be as agile as smaller analytical shops, and analytics units may be tucked away somewhere within large and complex organizations. They also tend to emphasize the use of their own data, as after all, their main cash cows are their data assets.
  • Direct Marketing Agencies: Agencies are very strategic, as they touch all aspects of marketing and control creative processes through segmentation. Many large agencies boast full-scale analytical units, capable of all types of analytics that I explained earlier. But some agencies have very small teams, stretched really thin—just barely handling the reporting aspect, not any advanced analytics. Some just admit that predictive analytics is not part of their core competencies, and they may outsource such projects (not that it is a bad thing).

As you can see here, there is no clear-cut answer to “with whom you should work.” Basically, you will need to check out all types of analysts and service providers to determine the partner best suited to your long- and short-term business purposes, not just your analytical goals. Often, many marketers just go with the lowest bidder. But pricing is just one of many elements to be considered. Here, allow me to introduce “10 Essential Items to Consider When Outsourcing Analytics.”

1. Consulting Capabilities: I put this on the top of the list, as being a translator between the marketing and the technology world is the most important differentiator (refer to “How to Be a Good Data Scientist”). They must understand the business goals and marketing needs, prescribe suitable solutions, convert such goals into mathematical expressions and define targets, making the best of available data. If they lack strategic vision to set up the data roadmap, statistical knowledge alone will not be enough to achieve the goals. And such business goals vary greatly depending on the industry, channel usage and related success metrics. Good consultants always ask questions first, while sub-par ones will try to force-fit marketers’ goals into their toolsets and methodologies.

Translating marketing goals into specific courses of action is a skill in itself. A good analytical partner should be capable of building a data roadmap (not just statistical steps) with a deep understanding of the business impact of resultant models. They should be able to break down larger goals into smaller steps, creating proper phased approaches. The plan may call for multiple models, all kinds of pre- and post-selection rules, or even external data acquisition, while remaining sensitive to overall costs.

The target definition is the core of all these considerations, which requires years of experience and industry knowledge. Simply, the wrong or inadequate targeting decision leads to disastrous results, no matter how sound the mathematical work is (refer to “Art of Targeting”).

Another important quality of a good analytical partner is the ability to create usefulness out of seemingly chaotic and unstructured data environments. Modeling is not about waiting for the perfect set of data, but about making the best of available data. In many modeling bake-offs, the winners are often decided by the creative usage of provided data, not just statistical techniques.

Finally, the consultative approach is important, as models do not exist in a vacuum; they have to fit into the marketing engine. Beware of the ones who want to change the world around their precious algorithms, as they are geeks, not strategists. And the ones who understand the entire marketing cycle will give advice on what the next phase should be, as marketing efforts must be perpetual, not transient.

So, how will you find consultants? Ask the following questions:

  • Are they “listening” to you?
  • Can they repeat “your” goals in their own words?
  • Do their roadmaps cover both short- and long-term goals?
  • Are they confident enough to correct you?
  • Do they understand “non-statistical” elements in marketing?
  • Have they “been there, done that” for real, or just in theories?

2. Data Processing Capabilities: I know that some people look down upon the word “processing.” But data manipulation is the most important key step “before” any type of advanced analytics even begins. Simply, “garbage-in, garbage out.” And unfortunately, most datasets are completely unsuitable for analytics and modeling. In general, easily more than 80 percent of model development time goes into “fixing” the data, as most are unstructured and unrefined. I have been repeatedly emphasizing the importance of a “model-ready” (or “analytics-ready”) environment for that reason.

However, the reality dictates that the majority of databases are indeed NOT model-ready, and most of them are not even close to it. Well, someone has to clean up the mess. And in this data business, the last one who touches the dataset becomes responsible for all the errors and mistakes made to it thus far. I know it is not fair, but that is why we need to look at the potential partner’s ability to handle large and really messy data, not just the statistical savviness displayed in glossy presentations.

Yes, that dirty work includes data conversion, edit/hygiene, categorization/tagging, data summarization and variable creation, encompassing all kinds of numeric, character and freeform data (refer to “Beyond RFM Data” and “Freeform Data Aren’t Exactly Free”). It is not the most glorious part of this business, but data consistency is the key to successful implementation of any advanced analytics. So, if a model-ready environment is not available, someone had better know how to make the best of whatever is given. I have seen too many meltdowns in “before” and “after” modeling steps due to inconsistencies in databases.
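
As a concrete, deliberately simplified illustration of what “data summarization and variable creation” can look like in practice, here is a sketch that rolls raw transaction rows up into one analytics-ready row per customer. The column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical raw transaction file: one row per purchase
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2024-01-05", "2024-03-20", "2023-11-02", "2024-02-14", "2024-04-01", "2024-04-10"]),
    "amount": [120.0, 80.0, 35.0, 60.0, 45.0, 300.0],
})

as_of = pd.Timestamp("2024-05-01")

# Summarize to one row per customer: classic RFM-style model variables
customer_level = transactions.groupby("customer_id").agg(
    recency_days=("order_date", lambda d: (as_of - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
    avg_order_value=("amount", "mean"),
).reset_index()

print(customer_level)
```

Real variable creation goes far beyond this, of course, spanning categorization, freeform text and channel-level summaries, but even this tiny example shows why consistency in the “before” steps matters so much.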

So, grill the candidates with the following questions:

  • Do they support file conversions, edit, categorization and summarization?
  • How big a dataset is too big, and how many files/tables are too many for them?
  • How much freeform data is too much for them?
  • Can they show sample model variables that they have created in the past?

3. Track Records in the Industry: It can be argued that industry knowledge is even more crucial for success than statistical know-how, as nuances are often “Lost in Translation” without relevant industry experience. In fact, some may not even be able to carry on a proper conversation with a client without it, leading to all kinds of wrong assumptions. I have seen a case where “real” rocket scientists messed up models for credit card campaigns.

The No. 1 reason why industry experience is important is that everyone’s success metrics are unique. Just to name a few, financial services (banking, credit card, insurance, investment, etc.), travel and hospitality, entertainment, packaged goods, online and offline retail, catalogs, publication, telecommunications/utilities, non-profit and political organizations all call for different types of analytics and models, as their business models and the way they interact with target audiences are vastly different. For example, building a model (or a database, for that matter) for businesses that hand over merchandise “before” they collect money is fundamentally different from building one for businesses where the exchange happens simultaneously. Even a simple concept like payment date or transaction date cannot be treated the same way. For retailers, recent dates could be better for business, but for a subscription business, older dates may carry more weight. And these are just some examples with “dates,” before touching any dollar figures or other fun stuff.

Then the job gets even more complicated, if we further divide all of these industries by B-to-B vs. B-to-C, where available data do not even look similar. On top of that, divisional ROI metrics may be completely different, and even terminology and culture may play a role in all of this. When you are a consultant, you really don’t want to stop the flow of a meeting to clarify some unfamiliar acronyms, as you are supposed to know them all.

So, always demand specific industry references and examine client rosters, if allowed. (Many clients specifically ask vendors not to use their names as references.) Basically, watch out for the ones who push one-size-fits-all, cookie-cutter solutions. You deserve way more than that.

4. Types of Models Supported: Speaking of cookie-cutter stuff, we need to be concerned with types of models that the outsourcing partner would support. Sure, nobody employs every technique, and no one can be good at everything. But we need to watch out for the “One-trick Ponies.”

This could be a tricky issue, as we are going into a more technical domain. Plus, marketers should not self-prescribe specific techniques; instead, they should clearly state their business goals (refer to “Marketing and IT; Cats and Dogs”). Some of the modeling goals are:

  • Rank and select prospect names
  • Lead scoring
  • Cross-sell/upsell
  • Segment the universe for messaging strategy
  • Pinpoint the attrition point
  • Assign lifetime values for prospects and customers
  • Optimize media/channel spending
  • Create new product packages
  • Detect fraud
  • Etc.

Unless you have successfully dealt with the outsourcing partner in the past (or you have a degree in statistics), do not blurt out words like Neural-net, CHAID, Cluster Analysis, Multiple Regression, Discriminant Function Analysis, etc. That would be like demanding specific medication before your new doctor even asks about your symptoms. The key is meeting your business goals, not fulfilling buzzwords. Let them present their methodology “after” the goal discussion. Nevertheless, see if the potential partner is pushing one or two specific techniques or solutions all the time.

5. Speed of Execution: In modern marketing, speed to action is king. Speed wins, and speed gains respect. However, when it comes to modeling or other advanced analytics, you may be shocked by the wide range of time estimates provided by each outsourcing vendor. To be fair, they are covering themselves, mainly because they have no idea what kind of messy data they will receive. As I mentioned earlier, pre-model data preparation and manipulation are critical components, and they are the most time-consuming part of all, especially when available data are in bad shape. Post-model scoring, audit and usage support may elongate the timeline. The key is to differentiate such pre- and post-modeling processes in the time estimate.

Even for pure modeling elements, time estimates vary greatly, depending on the complexity of assignments. Surely, a simple cloning model with basic demographic data would be much easier to execute than the ones that involve ample amounts of transaction- and event-level data, coming from all types of channels. If time-series elements are added, it will definitely be more complex. Typical clustering work is known to take longer than regression models with clear target definitions. If multiple models are required for the project, it will obviously take more time to finish the whole job.

Now, the interesting thing about building a model is that analysts don’t really finish it, but they just run out of time—much like the way marketers work on PowerPoint presentations. The commonality is that we can basically tweak models or decks forever, but we have to stop at some point.

However, with all kinds of automated tools and macros, model development time has decreased dramatically in past decades. We really came a long way since the first application of statistical techniques to marketing, and no one should be quoting a 1980s timeline in this century. But some still do. I know vendors are trained to follow the guideline “always under-promise and over-deliver,” but still.

An interesting aspect of this dilemma is that we can negotiate the timeline by asking for simpler and less sophisticated versions with diminished accuracy. If, hypothetically, it takes a week to be 98 percent accurate, but it only takes a day to be 90 percent accurate, what would you pick? That should be the business decision.

So, what is a general guideline? Again, it really depends on many factors, but allow me to share a version of it:

  • Pre-modeling Processing
    – Data Conversions: from half a day to weeks
    – Data Append/Enhancement: between overnight and two days
    – Data Edit and Summarization: data-dependent
  • Modeling: ranges from half a day to weeks
    – Depends on the type, number and complexity of models
  • Scoring: from half a day to one week
    – Mainly depends on the number of records and the state of the database to be scored

I know these are wide ranges, but watch out for the ones that routinely quote 30 days or more for simple clone models. They may not know what they are doing, or worse, they may be some mathematical perfectionists who don’t understand the marketing needs.

6. Pricing Structure: Some marketers would put this at the top of the checklist, or worse, use the pricing factor as the only criterion. Obviously, I disagree. (Full disclosure: I have been on the service side of the fence during my entire career.) Yes, every project must make economic sense in the end, but the budget should not and cannot be the sole deciding factor in choosing an outsourcing partner. There are many specialists under famous brand names who command top dollar, and then there are many data vendors who throw in “free” models, disrupting the ecosystem. Either way, one should not jump to conclusions too fast, as there is no free lunch, after all. In any case, I strongly recommend that no one start the meeting with pricing questions (hence, this article). When you get to the pricing part, ask what the price includes, as the analytical journey can be a series of long and winding roads. Some of the biggest factors that need to be considered are:

  • Multiple Model Discounts—Less for second or third models within a project?
  • Pre-developed (off-the-shelf) Models—These can be “much” cheaper than custom models, while not custom-fitted.
  • Acquisition vs. CRM—Employing client-specific variables certainly increases the cost.
  • Regression Models vs. Other Types—At times, types of techniques may affect the price.
  • Clustering and Segmentations—They are generally priced much higher than target-specific models.

Again, it really depends on the complexity factor more than anything else, and the pre- and post-modeling process must be estimated and priced separately. Non-modeling charges often add up fast, and you should ask for unit prices and minimum charges for each step.

Scoring charges can also add up over time, so negotiate discounts for routine scoring of the same models. Some may offer all-inclusive package pricing for everything. The important thing is that you must be consistent with the checklist when shopping around among multiple candidates.

7. Documentation: When you pay for a custom model (not pre-developed, off-the-shelf ones), you get to own the algorithm. Because algorithms are not tangible items, the knowledge must be captured in model documents. Beware of the ones who offer “black-box” solutions with comments like, “Oh, it will work, so trust us.”

Good model documents must include the following, at the minimum:

  • Target and Comparison Universe Definitions: What was the target variable (or “dependent” variable) and how was it defined? How was the comparison universe defined? Was there any “pre-selection” for either of the universes? These are the most important factors in any model—even more than the mechanics of the model itself.
  • List of Variables: What are the “independent” variables? How were they transformed or binned? From where did they originate? Often, these model variables describe the nature of the model, and they should make intuitive sense.
  • Model Algorithms: What is the actual algorithm? What are the assigned weights for each independent variable?
  • Gains Chart: We need to examine the potential effectiveness of the model. What are the “gains” for each model group, from top to bottom (e.g., a 320 percent gain at the top model group in comparison to the whole universe)? How fast do such gains decrease as we move down the scale? How do the gains factors compare against the validation sample? A graphic representation would be nice, too. (A minimal sketch of the gains calculation follows this list.)
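
For reference, here is one way the gains figures described above could be computed from a scored validation file. This is a minimal sketch; the column names are hypothetical, and vendors will have their own conventions:

```python
import pandas as pd

def gains_table(scored, score_col="model_score", response_col="responded", groups=10):
    """Build a simple gains table: response rate and gain index per model group (1 = best)."""
    df = scored.copy()
    # Rank records into equal-sized model groups, highest scores in group 1
    df["model_group"] = pd.qcut(
        df[score_col].rank(method="first", ascending=False),
        groups, labels=range(1, groups + 1))
    overall_rate = df[response_col].mean()
    table = df.groupby("model_group", observed=True)[response_col].agg(
        count="size", responders="sum", response_rate="mean")
    # Gain index: 320 means the group responds at 3.2 times the overall rate
    table["gain"] = (table["response_rate"] / overall_rate * 100).round(0)
    return table
```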

For custom models, it is customary to have a formal model presentation, full documentation and scoring script in designated programming languages. In addition, if client files are provided, ask for a waterfall report that details input and output counts of each step. After the model scoring, it is also customary for the vendor to provide a scored universe count by model group. You will be shocked to find out that many so-called analytical vendors do not provide thorough documentation. Therefore, it is recommended to ask for sample documents upfront.

8. Scoring Validation: Models are built and presented properly, but the job is not done until the models are applied to the universe from which the names are ranked and selected for campaigns. I have seen too many major meltdowns at this stage. Simply, it is one thing to develop models with a few hundred thousand record samples, but it is quite another to apply the algorithm to millions of records. I am not saying that the scoring job always falls onto the developers, as you may have an internal team or a separate vendor for such ongoing processes. But do not let the model developer completely leave the building until everything checks out.

The model should have been validated against the validation sample by then, but live scoring may reveal all kinds of inconsistencies. You may also want to back-test the algorithms with past campaign results, as well. In short, many things go wrong “after” the modeling steps. When I hear customers complaining about models, I often find that the modeling is the only part that was done properly, and “before” and “after” steps were all messed up. Further, even machines misunderstand each other, as any differences in platform or scripting language may cause discrepancies. Or, maybe there was no technical error, but missing values may have caused inconsistencies (refer to “Missing Data Can Be Meaningful”). Nonetheless, the model developers would have the best insight as to what could have gone wrong, so make sure that they are available for questions after models are presented and delivered.

9. Back-end Analysis: Good analytics is all about applying learnings from past campaigns—good or bad—to new iterations of efforts. We often call it “closed-loop” marketing, yet many marketers neglect to follow up. Any respectable analytics shop must be aware of it, though they may classify such work separately from modeling or other analytical projects. At the minimum, you need to check whether they even offer such services. In fact, so-called “match-back analysis” is not as simple as just matching campaign files against responders in this omnichannel environment. When many channels are employed at the same time, allocation of credit (i.e., “what worked?”) may call for all kinds of business rules or even dedicated models.

While you are at it, ask for a cheaper version of “canned” reports as well, as custom back-end analysis can, over time, be even more costly than the modeling job itself. Pre-developed reports may not include all the ROI metrics that you’re looking for (e.g., open, clickthrough and conversion rates, plus revenue and orders per piece mailed, per order, per display, per email, per conversion, etc.). So ask for sample reports upfront.

If you start breaking down all these figures by data source, campaign, time series, model group, offer, creative, targeting criteria, channel, ad server, publisher, keywords, etc., it can be unwieldy really fast. So contain yourself, as no one can understand 100-page reports, anyway. See if the analysts can guide you with such planning, as well. Lastly, if you are so into ROI analysis, get ready to share the “cost” side of the equation with the selected partner. Some jobs are on the marketers.

10. Ongoing Support: Models have a finite shelf life, as all kinds of changes happen in the real world. Seasonality may be a factor, or the business model or strategy may have changed. Fluctuations in data availability and quality further complicate the matter. Basically assumptions like “all things being equal” only happen in textbooks, so marketers must plan for periodic review of models and business rules.

A sure sign of trouble is decreasing effectiveness of models. When in doubt, consult the developers; they may recommend a re-fit or a complete redevelopment of the models. Quarterly reviews would be ideal, but if cost becomes an issue, start with six-month or yearly reviews, and never go more than a year without any review. Some vendors may offer discounts for redevelopment, so ask for the price quote upfront.

I know this is a long list of things to check, but picking the right partner is very important, as it often becomes a long-term relationship. And you may find it strange that I didn’t even list “technical capabilities” at all. That is because:

1. Many marketers are not equipped to dig deep into the technical realm anyway, and

2. The difference between the most mathematically sound models and the ones from the opposite end of the spectrum is not nearly as critical as other factors I listed in this article.

In other words, even the worst model in the bake-off would be much better than no model, if these other business criteria are well-considered. So, happy shopping with this list, and I hope you find the right partner. Employing analytics is not an option when you are living in a sea of data.

Email Creative That Demands an “I Do”

So my big sister is getting married in July! I’m the maid of honor, which means somewhere along the way I signed up for one wedding planner website or another. Obviously, this means my personal inbox has been overflowing for the last six months with everything from florist promotions to caterers to personalized cufflinks.

Promotions for pretty much any object or service you could ever even vaguely imagine at a wedding have made their way into my inbox. Half the time I’m too lazy to unsubscribe from all these lists, but it turned out for the best because now I get to focus a blog entry on it!

Thought I’d just give you a little taste of a few bridal-centric emails I thought really caught the bouquet. In the spirit of wedding themes, I’m using an age-old nuptials adage as a guide. To be honest it’s probably going to take a little creative fudging to fit my choices into these categories. Work with me here, planning a wedding is stressful.

SOMETHING OLD
From: MyGatsby.com
Subj.: 10 Days Only: Enjoy 40% OFF Wedding Invitations
Why it’s an “I Do”: Subject line cuts to the chase and gives a clear time frame to act within. 40% is an appealing offer for anyone shopping around for invitations which, like all things matrimonial, can lighten your purse quite a bit. The email itself has a nice, clean, elegant look as well.

SOMETHING NEW
From: J. Crew
Subj.: Exciting wedding news…
Why it’s an “I Do”: Big fan of the ellipsis in subject lines, I fall for them regularly, just can’t handle the suspense. Beautiful HTML design that makes it easy to quickly find your fashions—the full email had a section for the groom and groomsmen too.

SOMETHING BORROWED
From: www.BellaPictures.com (From line: Nicole Reilly)
Subj.: Touching base about your wedding
Why it’s an “I Do”: First off, this one’s a Something Borrowed because apparently I’ve borrowed my sister’s name and her role in the wedding. News to me! So they get a few points docked for not looking at the “I am the __________ in this wedding” response on whichever form got me added to this list. That said, the subject line, text-based copy, and conversational tone were all on point. It really does feel like this is someone you’ve already chatted with, making their studio more appealing. The casual question about venue in the beginning is a nice personal touch.

SOMETHING BLUE
From: David’s Bridal
Subj.: Bridesmaids, Get Your Lace On!
Why it’s an “I Do”: When it comes to fashion, sometimes it’s best to let the image do the talking, as is the case here. The few lines of text above and below are just brief and easy-breezy enough to give the extra nudge—plus, of course, a free shipping offer. Not to get all color-psychological on you, but it may be worth noting that a pale blue like the shade they chose for this email is by and large considered pleasing and calming to the human eye.

Actually, that last one gives me an idea for testing the same copy and creative with varying color schemes. Look out for that in a future post, maybe. As always, feel free to comment if you have more to add about these picks, or if something landed in your inbox you’d like to share.

In the meantime, I’m knee-deep in shower planning and the wedding isn’t until July but I’m already having nightmares about writing my speech. I’m not sure any amount of email marketing analysis can help me there. Wish me luck!

Measuring Customer Engagement: It’s Not Easy and It Takes Time

Here’s what’s easy: Measuring the effect of individual engagements like Web page views, email opens, paid and organic search clicks, call center interactions, Facebook likes, Twitter follows, tweets, retweets, referrals, etc.

Here’s what’s hard: Understanding the combined effect of your promotions across all those channels.

Many marketers turn to online attribution methods to assign credit for all or part of an individual order across multiple online channels. Digital marketing guru Avinash Kaushik points out the strengths and weaknesses of various methods on his blog, Occam’s Razor, in “Multichannel Attribution: Definitions, Models and a Reality Check,” and concludes that none are perfect and many are far from it.

But online attribution models look to give credit to an individual tactic rather than measuring the combined effects of your entire promotion mix. Here’s a different approach to getting a holistic view of your entire promotion mix. It’s similar to the methodology I discussed in the post “Use Market Research to Tie Brand Awareness and Purchase Intent to Sales,” and like that methodology, it’s not something you’re going to be able to do overnight. It’s an iterative process that will take some time.

Start by assigning a point value to every consumer touch and every consumer action to create an engagement score for each customer. This process will be different for every marketer and will vary according to your customer base and your promotion mix. For illustration’s sake, consider an arbitrary set of assignments like the one sketched below.
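
Here is what such an assignment might look like in code. The touch types and point values are purely illustrative, not a recommended weighting:

```python
# Hypothetical, arbitrary point values per touch/action type
ENGAGEMENT_POINTS = {
    "email_open": 1,
    "email_click": 3,
    "web_page_view": 1,
    "paid_search_click": 2,
    "organic_search_click": 2,
    "facebook_like": 2,
    "twitter_follow": 2,
    "retweet": 3,
    "call_center_contact": 5,
    "referral": 10,
}

def engagement_score(touches):
    """Sum the points for one customer's touches, e.g. {'email_open': 4, 'email_click': 1}."""
    return sum(ENGAGEMENT_POINTS.get(touch, 0) * count for touch, count in touches.items())

# 4 opens, 1 click and 1 Facebook like -> 4*1 + 1*3 + 1*2 = 9 points
print(engagement_score({"email_open": 4, "email_click": 1, "facebook_like": 1}))
```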

Next, perform this preliminary analysis:

  1. Rank your customers on sales volume for different time periods
    —previous month, quarter, year, etc.
  2. Rank your customers on their engagement score for the same periods
  3. Examine the correlation between sales and engagement
    —How much is each point of engagement worth in sales $$$?

After you’ve done this preliminary scoring, do your best to isolate customers who were not exposed to specific elements of the promotion mix into control groups, i.e., they didn’t engage on Facebook or they didn’t receive email. Compare their revenue against the rest of the file to see how well you’ve weighted that particular element. With several iterations of this process over time, you will be able to place a dollar value on each point of engagement and plan your promotion mix accordingly.

How you assign your point values may seem arbitrary at first, but you will need to work through this iteratively, looking at control cells wherever you can isolate them. For a more scientific approach, run a regression analysis on the customer file with revenue as the dependent variable and the number and types of touches as the independent variables. The more complete your customer contact data is, the lower your p value and the more descriptive the regression will be in identifying the contribution of each element.
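
As a starting point for that regression, a sketch along these lines could work. It assumes a customer-level file with revenue and touch counts already summarized; the data frame, column names and figures are hypothetical toy values:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical customer-level summary: revenue plus counts of each touch type
customers = pd.DataFrame({
    "revenue":      [120, 0, 300, 45, 80, 0, 510, 60],
    "email_opens":  [4, 1, 9, 2, 3, 0, 12, 2],
    "email_clicks": [1, 0, 3, 0, 1, 0, 5, 1],
    "web_visits":   [6, 2, 14, 3, 4, 1, 20, 3],
    "fb_likes":     [0, 0, 2, 0, 1, 0, 3, 0],
})

# Quick correlation check: how does each touch type track with revenue?
print(customers.corr()["revenue"])

# Regression: revenue as the dependent variable, touch counts as the independents.
# The fitted coefficients estimate the dollar contribution of each touch type.
X = sm.add_constant(customers.drop(columns="revenue"))
model = sm.OLS(customers["revenue"], X).fit()
print(model.summary())
```

With only a handful of rows this is just a toy; on a real customer file, the coefficient estimates become the evidence for how much each point of engagement is worth.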

As with any methodology, this one is only as good as the data you’re able to put into it, but don’t be discouraged if your data is not perfect or complete. Even in an imperfect world, this exercise will get you closer to a holistic view of customer engagement.

Channel Collaboration or Web Cannibalization?

Multichannel marketers experience the frequent concern that online is competing with, or “cannibalizing,” sales in other channels. It seems like a reasonable problem for those responsible, for instance, for the P&L of the retail business to consider; same goes for the general managers responsible for the store-level P&L.

I like to do something that we “digital natives” (professionals whose career has only been digitally driven) miss all too often. We talk to retail people and customers in the stores, store managers, general managers, sales and service staff. Imagine that … left-brain dominant Data Athletes who want to talk to people! Actually, a true Data Athlete will always engage the stakeholders to inform their analysis with tacit knowledge.

Every time we do this, we learn something about the customer that we quite frankly could not have gleaned from website analytics, transactional data or third-party data alone. We learn how different kinds of customers engage with the product, and what their experiences are like in an environment that, to this day, is far more immersive than anything we can create online. It’s nothing short of fascinating for the left-brainers. Moreover, access to and connection with those field interactions does something powerful when we turn back to mining the data mass that grows daily. It creates context that inspires better analysis and greater performance.

This best practice may seem obvious, but it is missed so often. It is just too easy for a left-brain-dominant analyst to get “sucked into the data” first. The same thing happens in an online-only environment. I can’t count how many times I have sat with and coached truly brilliant Web analysts inside organizations who were talking through a data-backed hypothesis built from Web analytics data, observing and measuring behaviors and drawing inferences … and they hadn’t looked at the specific screens and treatments on the website or mobile app where those experiences were happening. They were disconnected from the consumer experience. If you look in your organization, odds are you’ll find examples of this kind of disconnect.

So Does The Web Compete with Retail Stores? Well, that depends.
While many businesses are seeing the same shift to digital consumption and engagement, especially on mobile devices, the evidence is clear that it’s a mistake to assume that you have a definitive answer. In fact, it is virtually always a nuanced answer that informs strategy and can help better-focus your investments in online and omnichannel marketing approaches.

In order to answer this question you need a singular view of a customer. Sounds easy, I know. So here’s the first test if you are ready to answer that question:

How many customers do you have?

If you don’t know with precision, you’re not ready to determine if the Web is competing or “cannibalizing” retail sales.

More often than not, what you’ll hear is the number of transactions, the number of visitors (from Web analytics) or the number of email addresses or postal addresses on file—or some other “proxy” that’s considered relevant.

The challenge is that these proxy values for customer count mask a deeper problem. Without a well-thought-out data blending approach that converts transaction files into an actionable customer profile, we can’t begin to tell who bought what, and how many times.

Once we have this covered, we’re now able to begin constructing metrics and developing counts of orders by customer, over time periods.

Summarization is Key
If you want to act on the data, you’ll likely need to develop a summarization routine—one that breaks out order counts and order values by channel and time period. This isn’t trivial, and leaving this step out creates a material amount of work slicing the data later. (A minimal sketch follows the list below.)

A few good examples of how you would summarize the data to answer the question by channel include totals:

  • by month
  • by quarter
  • by year
  • last year
  • prior quarter
  • by customer lifetime
  • and many more
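
As a minimal sketch of that summarization step, assuming a blended file with one row per order and hypothetical column names:

```python
import pandas as pd

# Hypothetical blended order file: one row per order, already matched to a customer
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 3],
    "channel": ["web", "store", "store", "store", "web", "web", "store"],
    "order_date": pd.to_datetime(["2024-01-05", "2024-02-10", "2023-12-01",
                                  "2024-03-03", "2024-01-20", "2024-02-25", "2024-04-02"]),
    "order_value": [80.0, 45.0, 120.0, 60.0, 35.0, 50.0, 200.0],
})

# Order counts and order values by customer, channel and quarter
summary = (orders
           .assign(quarter=orders["order_date"].dt.to_period("Q"))
           .groupby(["customer_id", "channel", "quarter"])
           .agg(orders=("order_value", "count"), revenue=("order_value", "sum"))
           .reset_index())
print(summary)

# The overlap population: customers who bought in more than one channel
channels_per_customer = orders.groupby("customer_id")["channel"].nunique()
print("Multichannel customers:", list(channels_per_customer[channels_per_customer > 1].index))
```

Re-running a snapshot like this for each period is what lets you see whether channel preference is shifting over time.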

Here’s The Key Takeaway: It’s not just one or the other.
Your customers buy across multiple channels. Across many brands and many datasets, we’ve always seen different pictures of the breakout between and across online and retail store transactions.

But you’re actually measuring the overlap and should focus your analysis on that overlap population. To go further, you’ll require summarization “snapshots” of the data so you can determine if the channel preference has changed over time.

The Bottom Line
While no one can say that the Web does or doesn’t definitively “cannibalize sales,” the evidence is overwhelming that buyers want to use the channel that is best for them for the specific product or service, at the time that works for them.

This being the case, it is almost inevitable that you will see omnichannel behaviors once your data is prepared and organized effectively enough to reveal that shift in behavior.

Oftentimes, that shift can effectively equate to buyers spending more across channels, as specific products may sell better in person. It’s hard to feel the silky qualities of a cashmere scarf online, but you might reorder razor blades only online.

The analysis should hardly stop at channel shift and channel preference. Layering in promotion consumption can tell you how a buyer waits for the promotion online, or is more likely to buy “full-price” in a retail store. We’ve seen both of these frequently, but not always. Every data set is different.

Start by creating the most actionable customer file you can, integrating transactional, behavioral and lifestyle data; the depth at which you can understand how customers choose among the channels you offer then becomes increasingly rich and actionable. Most of all, remember: it’s better to shift the sale to an alternative channel the customer prefers than to lose it to a competitor who did a better job.

Sex and the Schoolboy: Predictive Modeling – Who’s Doing It? Who’s Doing it Right?

Forgive the borrowed interest, but predictive modeling is to marketers as sex is to schoolboys.

They’re all talking about it, but few are doing it. And among those who are, fewer are doing it right.

In customer relationship marketing (CRM), predictive modeling uses data to predict the likelihood of a customer taking a specific action. It’s a three-step process:

1. Examine the characteristics of the customers who took a desired action

2. Compare them against the characteristics of customers who didn’t take that action

3. Determine which characteristics are most predictive of the customer taking the action and the value or degree to which each variable is predictive

Predictive modeling is useful in allocating CRM resources efficiently. If a model predicts that certain customers are less likely to respond to a specific offer, then fewer resources can be allocated to those customers, allowing more resources to be allocated to those who are more likely to respond.
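
As a minimal sketch of the three-step process above, here is what a simple response model might look like using logistic regression in scikit-learn. The input file, column names and variables are hypothetical; a real project would involve far more candidate variables and far more data preparation:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical modeling file: past responders (1) and non-responders (0),
# with a few candidate predictor variables already appended.
df = pd.read_csv("campaign_history.csv")  # columns: age, income, purchases_12mo, responded

X = df[["age", "income", "purchases_12mo"]]
y = df["responded"]

# Fit the model and inspect which characteristics are predictive, and in which direction
model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.4f}")

# Score every customer: predicted likelihood of responding to the next offer,
# which can then drive how much of the CRM budget each customer receives.
df["response_score"] = model.predict_proba(X)[:, 1]
```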

Data Inputs
A predictive model will only be as good as the input data that’s used in the modeling process. You need the data that define the dependent variable; that is, the outcome the model is trying to predict (such as response to a particular offer). You’ll also need the data that define the independent variables, or the characteristics that will be predictive of the desired outcome (such as age, income, purchase history, etc.). Attitudinal and behavioral data may also be predictive, such as an expressed interest in weight loss, fitness, healthy eating, etc.

The more variables that are fed into the model at the beginning, the more likely the modeling process will identify relevant predictors. Modeling is an iterative process, and those variables that are not at all predictive will fall out in the early iterations, leaving those that are most predictive for more precise analysis in later iterations. The danger in not having enough independent variables to model is that the resultant model will only explain a portion of the desired outcome.

For example, a predictive model created to determine the factors affecting physician prescribing of a particular brand was inconclusive, because there weren’t enough independent variables to explain the outcome fully. In a standard regression analysis, the number of RXs written in a specific timeframe was set as the dependent variable. There were only three independent variables available: sales calls, physician samples and direct mail promotions to physicians. And while each of the three variables turned out to have a positive effect on prescriptions written, the R-squared value of the regression equation was only 0.44, meaning that these variables explained just 44 percent of the variance in RXs. The other 56 percent of the variance came from factors that were not included in the model input.

Sample Size
Larger samples will produce more robust models than smaller ones. Some modelers recommend a minimum data set of 10,000 records, 500 of those with the desired outcome. Others report acceptable results with as few as 100 records with the desired outcome. But in general, size matters.

Regardless, it is important to hold out a validation sample from the modeling process. That allows the model to be applied to the hold-out sample to validate its ability to predict the desired outcome.
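
A minimal sketch of that hold-out step, continuing the same hypothetical file used earlier:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("campaign_history.csv")  # hypothetical: age, income, purchases_12mo, responded
X, y = df[["age", "income", "purchases_12mo"]], df["responded"]

# Hold out a validation sample before any modeling is done
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Apply the model to the hold-out sample to confirm it still ranks responders well
holdout_scores = model.predict_proba(X_holdout)[:, 1]
print("Hold-out AUC:", roc_auc_score(y_holdout, holdout_scores))
```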

Important First Steps

1. Define Your Outcome. What do you want the model to do for your business? Predict likelihood to opt-in? Predict likelihood to respond to a particular offer? Your objective will drive the data set that you need to define the dependent variable. For example, if you’re looking to predict likelihood to respond to a particular offer, you’ll need to have prospects who responded and prospects who didn’t in order to discriminate between them.

2. Gather the Data to Model. This requires tapping into several data sources, including your CRM database, as well as external sources where you can get data appended (see below).

3. Set the Timeframe. Determine the time period for the data you will analyze. For example, if you’re looking to model likelihood to respond, the start and end points for the data should be far enough in the past that you have a sufficient sample of responders and non-responders.

4. Examine Variables Individually. Some variables will not be correlated with the outcome, and these can be eliminated prior to building the model.

Data Sources
Independent variable data may include:

  • In-house database fields
  • Data overlays (demographics, HH income, lifestyle interests, presence of children, marital status, etc.) from a data provider such as Experian, Epsilon or Acxiom.

Don’t Try This at Home
While you can do regression analysis in Microsoft Excel, if you’re going to invest a lot of promotion budget in the outcome, you should definitely leave the number crunching to the professionals. Expert modelers know how to analyze modeling results and make adjustments where necessary.

Creating a One-Word Brand Statement

What do your customers think of when they see your organization’s name and logo? Your public image is important and should be kept up-to-date and fresh, especially during times of swift technological change, cultural shifts, and new generations of customers. Every organization should go through a periodic review of how it is viewed and how it wants to be viewed by customers, donors and prospects.

While I was sitting in an organization’s board of directors meeting last month, the topic of creating a new logo came up. The logo had last been updated in the 1990s, and even then, it still had visual remnants of a decidedly 1970s feel. It was agreed a new logo should be developed, but it was also agreed that before going too far, a branding statement should be created to guide the process more efficiently and produce a better outcome.

If you’re like many organizations, you might not have a branding statement. This isn’t to be confused with a mission statement (which can too often be filled with empty language that rings hollow to customers and staff).

A branding statement is a marketing tool. It reflects your organization’s reputation: what you are known for, or would like to be known for. It articulates how you stand apart from competitors. A branding statement is often written by individuals to define and enhance their own careers. If that’s of interest to you, adapt these steps and you can be on your way to creating your personal branding statement.

Today we launch into steps you can take to freshen your organization’s brand and image. This first installment will lay out five research and brainstorming steps to distill your image down to a single word. My next blog post, published in a couple of weeks, will focus on how to succinctly state your logical and emotional promise, both of which must be formulated in order to create a hard-working branding statement for your organization.

  1. Audience Research:
    Are you confident you accurately know the demographics, psychographics, and purchase behavior of your audience? If you’ve recently profiled or modeled your customers, then you probably have a good grasp of who they are. But if it’s been a year or longer, a profile is affordable and will yield a tremendous wealth of information about your customers. Demographics (age, income, education, etc.) are a good foundation. Knowing psychographics (personality, values, opinions, attitudes, interests, and lifestyles) takes you further. And knowing categories of purchase behavior enables you to drill down even further.
  2. Competitive Analysis:
    You can’t completely construct your own brand identity without understanding how your competitors position themselves. A competitive analysis can be conducted along two lines of inquiry: offline materials, such as direct mail and other print pieces, and what you can learn online. If you have print samples, you can discern much about a competitor’s marketing message. But you may not be able to pin down demographics, psychographics, and purchase behavior by looking at a direct mail package. There are a number of online tools you can use to deliver insights about your competition. Here are a few:
    • Compete.com offers detailed traffic data so you can compare your site to other sites. You can also get keyword data, demographics, and more.
    • Alexa.com provides SEO audits, engagement, reputation metrics, demographics, and more.
    • Quantcast.com enables you to compare the demographics of who comes to your site versus your competitors. You’ll be shown an index of how a website performs compared to the internet average. You’ll get statistics on attributes such as age, presence of children, income, education, and ethnicity.
  3. Interpretation and Insight:
    Now that you’ve conducted research, you’re positioned to interpret the data to create your own insights. This is where creativity needs to kick in and where you need to consider the type of individual who will embrace and advocate for your organization. You may want to involve a few people from your team in brainstorming, or perhaps you’ll want to bring in someone from outside your organization who can objectively look at your data. What’s key is that you peer below the surface of the numbers and reports. Transform facts into insights through interpretation. Use comparison charts and create personas. Then create statements describing who your best customers are.
  4. One-Word Description:
    Now the challenging work begins. Distill your interpretation and insight into one word that personifies your organization. Then think deeply about that word. Does it capture the essence of who you are (or want to become) and what your customer desires? For example, a technology company might use a word like “innovative,” “cutting-edge,” or “intuitive.” Car manufacturers might use a one-word description like “sleek,” “utilitarian,” or “safe” to describe their brand and what they want their customers to feel when they hear a brand’s name. You might think that by allowing only one word, you are short-changing everything about your organization’s image. You aren’t. Finding the one word that describes your organization’s image will force you to focus.
  5. Reality Check:
    So now you’ve identified a word to describe your organization’s brand and image that resonates with both your team and your customers. It’s time for a reality check. Can your organization or product actually support that word? Or if it’s aspirational—that is, a word that you’d like your image to reflect in the future—is it achievable? And if it’s aspirational, what plans are in place to take it to reality?

My next blog will extend the important foundational work you’ve done working through these five steps. It will discuss how to look at your brand as it appeals to both logic and emotion, along with credibility and uniqueness, and it will close with an example branding statement you can use with your team. Watch for it in two weeks.

As always, your comments, questions, and challenges are welcome.

Exciting New Tools for B-to-B Prospecting

Finding new customers is a lot easier these days, what with innovative, digitally based ways to capture and collect data. Early examples of this exciting new trend in prospecting were Jigsaw, a business card swapping tool that allowed salespeople to trade contacts, and ZoomInfo, which scrapes corporate websites for information about businesspeople and merges the information into a vast pool of data for analysis and lead generation campaigns. New ways to find prospects continue to come on the scene, seemingly every day.

One big new development is the trend away from static name/address lists, and towards dynamic sourcing of prospect names complete with valuable indicators of buying readiness culled from their actual behavior online. Companies such as InsideView and Leadspace are developing solutions in this area. Leadspace’s process begins with constructing an ideal buyer persona by analyzing the marketer’s best customers, which can be executed by uploading a few hundred records of name, company name and email address. Then, Leadspace scours the Internet, social networks and scores of contact databases for look-alikes and immediately delivers prospect names, fresh contact information and additional data about their professional activities.

Another dynamic data sourcing supplier with a new approach is Lattice, which also analyzes current customer data to build predictive models for prospecting, cross-sell and churn prevention. The difference from Leadspace is that Lattice builds the client models using their own massive “data cloud” of B-to-B buyer behavior, fed by 35 data sources like LexisNexis, Infogroup, D&B, and the US Government Patent Office. CMO Brian Kardon says Lattice has identified some interesting variables that are useful in prospecting, for example:

  • Juniper Networks found that a company that has recently “signed a lease for a new building” is likely to need new networks and routers.
  • American Express’s foreign exchange software division identified that “opened an office in a foreign country” suggests a need for foreign exchange help.
  • Autodesk searches for companies who post job descriptions online that seek “design engineers with CAD/CAM experience.”

Lattice faces competition from Mintigo and Infer, which are also offering prospect scoring models—more evidence of the growing opportunity for marketers to take advantage of new data sources and applications.

Another new approach is using so-called business signals to identify opportunity. As described by Avention’s Hank Weghorst, business signals can be any variable that characterizes a business. Are they growing? Near an airport? Unionized? Minority owned? Susceptible to hurricane damage? The data points are available today, and can be harnessed for what Weghorst calls “hyper segmentation.” Avention’s database, fed by 70 suppliers and overlaid with data analytics services, is designed to identify targets for sales, marketing and research.

Social networks, especially LinkedIn, are rapidly becoming a source of marketing data. For years, marketers have mined LinkedIn data by hand, often using low-cost offshore resources to gather targets in niche categories. Recently, a gaggle of new companies—like eGrabber and Social123—are experimenting with ways to bring social media data into CRM systems and marketing databases, to populate and enhance customer and prospect records.

Then there’s 6Sense, which identifies prospective accounts that are likely to be in the market for particular products, based on the online behavior of their employees, anonymous or identifiable. 6Sense analyzes billions of rows of 3rd party data, from trade publishers, blogs and forums, looking for indications of purchase intent. If Cisco is looking to promote networking hardware, for example, 6Sense will come back with a set of accounts that are demonstrating an interest in that category, and identify where they are in their buying process, from awareness to purchase. The account data will be populated with contacts, indicating their likely role in the purchase decision, and an estimate of the likely deal size. The data is delivered in real-time to whatever CRM or marketing automation system the client wants, according to CEO and founder Amanda Kahlow.

Just to whet your appetite further, have a look at CrowdFlower, a start-up company in San Francisco, which sends your customer and prospect records to a network of over five million individual contributors in 90 countries, to analyze, clean or collect the information at scale. Crowd sourcing can be very useful for adding information to, and checking on the validity and accuracy of, your data. CrowdFlower has developed an application that lets you manage the data enrichment or validity exercises yourself. This means that you can develop programs to acquire new fields whenever your business changes and still take advantage of their worldwide network of individuals who actually look at each record.

The world of B-to-B data is changing quickly, with exciting new technologies and data sources becoming available at a record pace. Marketers can expect plenty of new opportunities for reaching customers and prospects efficiently.

A version of this article appeared in Biznology, the digital marketing blog.

Not All Databases Are Created Equal

Not all databases are created equal. No kidding. That is like saying that not all cars are the same, or not all buildings are the same. But somehow, “judging” databases isn’t so easy. First off, there is no tangible “tire” that you can kick when evaluating databases or data sources. Actually, kicking the tire is quite useless, even when you are inspecting an automobile. Can you really gauge the car’s handling, balance, fuel efficiency, comfort, speed, capacity or reliability based on how it feels when you kick “one” of the tires? I can guarantee that your toes will hurt if you kick it hard enough, and even then you won’t be able to tell the tire pressure within 20 psi. If you really want to evaluate an automobile, you will have to sign some papers and take it out for a spin (well, more than one spin, but you know what I mean). Then, how do we take a database out for a spin? That’s when the tool sets come into play.

However, even when the database in question is attached to analytical, visualization, CRM or drill-down tools, it is not so easy to evaluate it completely, as such practice reveals only a few aspects of a database, hardly all of them. That is because such tools are like window treatments of a building, through which you may look into the database. Imagine a building inspector inspecting a building without ever entering it. Would you respect the opinion of the inspector who just parks his car outside the building, looks into the building through one or two windows, and says, “Hey, we’re good to go”? No way, no sir. No one should judge a book by its cover.

In the age of Big Data (you should know by now that I am not too fond of that term), everything digitized is considered data. And data reside in databases. And databases are supposed to be designed to serve specific purposes, just like buildings and cars are. Although many modern databases are just mindless piles of accumulated data, provided the database design is decent and functional we can still imagine many different types of databases, depending on their purposes and contents.

Now, most of the Big Data discussions these days are about the platform, environment, or tool sets. I’m sure you heard or read enough about those, so let me boldly skip all that and their related techie words, such as Hadoop, MongoDB, Pig, Python, MapReduce, Java, SQL, PHP, C++, SAS or anything related to that elusive “cloud.” Instead, allow me to show you the way to evaluate databases—or data sources—from a business point of view.

For businesspeople and decision-makers, it is not about NoSQL vs. RDB; it is just about the usefulness of the data. And the usefulness comes from the overall content and database management practices, not just platforms, tool sets and buzzwords. Yes, tool sets are important, but concert-goers do not care much about the types and brands of musical instruments that are being used; they just care if the music is entertaining or not. Would you be impressed with a mediocre guitarist just because he uses the same brand of guitar that his guitar hero uses? Nope. Likewise, the usefulness of a database is not about the tool sets.

In my past column, titled “Big Data Must Get Smaller,” I explained that there are three major types of data, with which marketers can holistically describe their target audience: (1) Descriptive Data, (2) Transaction/Behavioral Data, and (3) Attitudinal Data. In short, if you have access to all three dimensions of the data spectrum, you will have a more complete portrait of customers and prospects. Because I already went through that subject in-depth, let me just say that such types of data are not the basis of database evaluation here, though the contents should be on top of the checklist to meet business objectives.

In addition, throughout this series, I have been repeatedly emphasizing that the database and analytics management philosophy must originate from business goals. Basically, the business objective must dictate the course for analytics, and databases must be designed and optimized to support such analytical activities. Decision-makers—and all involved parties, for that matter—suffer a great deal when that hierarchy is reversed. And unfortunately, that is the case in many organizations today. Therefore, let me emphasize that the evaluation criteria that I am about to introduce here are all about usefulness for decision-making processes and supporting analytical activities, including predictive analytics.

Let’s start digging into key evaluation criteria for databases. This list would be quite useful when examining internal and external data sources. Even databases managed by professional compilers can be examined through these criteria. The checklist could also be applicable to investors who are about to acquire a company with data assets (as in, “Kick the tire before you buy it.”).

1. Depth
Let’s start with the most obvious one. What kind of information is stored and maintained in the database? What are the dominant data variables in the database, and what is so unique about them? Variety of information matters for sure, and uniqueness is often related to specific business purposes for which databases are designed and created, along the lines of business data, international data, specific types of behavioral data like mobile data, categorical purchase data, lifestyle data, survey data, movement data, etc. Then again, mindless compilation of random data may not be useful for any business, regardless of the size.

Generally, data dictionaries (the lack of one is a sure sign of trouble) reveal the depth of the database, but we need to dig deeper, as transaction and behavioral data are much more potent predictors and harder to manage in comparison to demographic and firmographic data, which are very much commoditized already. Likewise, lifestyle variables that are derived from surveys that may have been conducted a long time ago are far less valuable than actual purchase history data, as what people say they do and what they actually do are two completely different things. (For more details on the types of data, refer to the second half of “Big Data Must Get Smaller.”)

Innovative ideas should not be overlooked, as data packaging is often very important in the age of information overflow. If someone or some company transformed many data points into user-friendly formats using modeling or other statistical techniques (imagine pre-developed categorical models targeting a variety of human behaviors, or pre-packaged segmentation or clustering tools), such effort deserves extra points, for sure. As I emphasized numerous times in this series, data must be refined to provide answers to decision-makers. That is why the sheer size of the database isn’t so impressive, and the depth of the database is not just about the length of the variable list and the number of bytes that go along with it. So, data collectors, impress us—because we’ve seen a lot.

2. Width
No matter how deep the information goes, if the coverage is not wide enough, the database becomes useless. Imagine well-organized, buyer-level POS (Point of Sale) data coming from actual stores in “real-time” (though I am sick of this word, as it is also overused). The data go down to SKU-level details and payment methods. Now imagine that the data in question are collected in only two stores: one in Michigan, and the other in Delaware. This, by the way, is not a completely made-up story, and I faced similar cases in the past. Needless to say, we had to make many assumptions that we didn’t want to make in order to make the data useful, somehow. And I must say that it was far from ideal.

Even in the age when data are collected everywhere by every device, no dataset is ever complete (refer to “Missing Data Can Be Meaningful“). The limitations are everywhere. It could be about brand, business footprint, consumer privacy, data ownership, collection methods, technical limitations, distribution of collection devices, and the list goes on. Yes, Apple Pay is making a big splash in the news these days. But would you believe that the data collected only through Apple iPhone can really show the overall consumer trend in the country? Maybe in the future, but not yet. If you can pick only one credit card type to analyze, such as American Express for example, would you think that the result of the study is free from any bias? No siree. We can easily assume that such analysis would skew toward the more affluent population. I am not saying that such analyses are useless. And in fact, they can be quite useful if we understand the limitations of data collection and the nature of the bias. But the point is that the coverage matters.

Further, even within multisource databases in the market, the coverage should be examined variable by variable, simply because some data points are really difficult to obtain even by professional data compilers. For example, any information that crosses between the business and the consumer world is sparsely populated in many cases, and the “occupation” variable remains mostly blank or unknown on the consumer side. Similarly, any data related to young children is difficult or even forbidden to collect, so a seemingly simple variable, such as “number of children,” is left unknown for many households. Automobile data used to be abundant on a household level in the past, but a series of laws made sure that the access to such data is forbidden for many users. Again, don’t be impressed with the existence of some variables in the data menu, but look into it to see “how much” is available.
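
One practical way to check “how much” is available is a simple fill-rate tally, variable by variable. The sketch below assumes a small hypothetical file of appended records; the variable names mirror the examples above, but the values are invented.

```python
# A minimal sketch: fill rate per variable, so coverage is judged column by column.
import numpy as np
import pandas as pd

# Hypothetical appended file: some variables are well populated, others are sparse.
appended = pd.DataFrame({
    "hh_income":    [85000, 62000, np.nan, 74000, np.nan, 91000],
    "occupation":   [np.nan, np.nan, "Teacher", np.nan, np.nan, np.nan],
    "num_children": [np.nan, 2, np.nan, np.nan, 1, np.nan],
    "auto_make":    [np.nan, np.nan, np.nan, np.nan, np.nan, "Honda"],
})

fill_rate = appended.notna().mean().sort_values(ascending=False) * 100
print(fill_rate.round(1))  # percent of records populated, per variable

# A variable that appears on the data menu but is populated for only a sliver
# of records (occupation, children, auto data above) may add little real value.
```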

3. Accuracy
In any scientific analysis, a “false positive” is a dangerous enemy. In fact, false positives are worse than not having the information at all. Many folks just assume that any data coming out of a computer is accurate (as in, “Hey, the computer says so!”). But data are not completely free from human errors.

Sheer accuracy of information is hard to measure, especially when the data sources are unique and rare. And the errors can happen in any stage, from data collection to imputation. If there are other known sources, comparing data from multiple sources is one way to ensure accuracy. Watching out for fluctuations in distributions of important variables from update to update is another good practice.

Nonetheless, the overall quality of the data is not just up to the person or department who manages the database. Yes, in this business, the last person who touches the data is responsible for all the mistakes that were made to it up to that point. However, when the garbage goes in, the garbage comes out. So, when there are errors, everyone who touched the database at any point must share in the burden of guilt.

Recently, I was part of a project that involved data collected from retail stores. We ran all kinds of reports and tallies to check the data, and edited many data values out when we encountered obvious errors. The funniest one that I saw was the first name “Asian” and the last name “Tourist.” As an openly Asian-American person, I was semi-glad that they didn’t put in “Oriental Tourist” (though I still can’t figure out who decided that word is for objects, but not people). We also found names like “No info” or “Not given.” Heck, I saw in the news that a refugee from Afghanistan (he was a translator for U.S. troops) was given the new first name “Fnu” when he was granted an entry visa; it is short for “First Name Unknown,” and it now appears as the first name in his new passport. Welcome to America, Fnu. Compared to that, “Andolini” becoming “Corleone” on Ellis Island is almost cute.

Data entry errors are everywhere. When I used to deal with data files from banks, I found that many last names were “Ira.” Well, it turned out that it wasn’t really the customers’ last names, but they all happened to have opened “IRA” accounts. Similarly, movie phone numbers like 777-555-1234 are very common. And fictitious names, such as “Mickey Mouse,” or profanities that are not fit to print are abundant, as well. At least fake email addresses can be tested and eliminated easily, and erroneous addresses can be corrected by time-tested routines, too. So, yes, maintaining a clean database is not so easy when people freely enter whatever they feel like. But it is not an impossible task, either.

We can also train employees regarding data entry principles, to a certain degree. (As in, “Do not enter your own email address,” “Do not use bad words,” etc.). But what about user-generated data? Search and kill is the only way to do it, and the job would never end. And the meta-table for fictitious names would grow longer and longer. Maybe we should just add “Thor” and “Sponge Bob” to that Mickey Mouse list, while we’re at it. Yet, dealing with this type of “text” data is the easy part. If the database manager in charge is not lazy, and if there is a bit of a budget allowed for data hygiene routines, one can avoid sending emails to “Dear Asian Tourist.”
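
A rough sketch of that “search and kill” routine: keep a running list of fictitious names and placeholders, and flag any record that matches it. The names, patterns and field names below are illustrative assumptions, not a production-grade list.

```python
# A minimal sketch: flag records whose names match a running list of known junk.
import pandas as pd

FICTITIOUS   = {"mickey mouse", "sponge bob", "thor odinson", "asian tourist"}
PLACEHOLDERS = {"no info", "not given", "none none", "unknown unknown"}

customers = pd.DataFrame({
    "first_name": ["Mickey", "Jane", "No",   "Asian"],
    "last_name":  ["Mouse",  "Kim",  "Info", "Tourist"],
})

full_name = (customers["first_name"] + " " + customers["last_name"]).str.lower()
customers["suspect"] = full_name.isin(FICTITIOUS | PLACEHOLDERS)
print(customers)

# Suspect records get routed to review or suppression instead of receiving a
# "Dear Asian Tourist" email; the lookup sets grow as new junk names turn up.
```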

Numeric errors are much harder to catch, as numbers do not look wrong to human eyes. That is when comparison to other known sources becomes important. If such examination is not possible on a granular level, then median value and distribution curves should be checked against historical transaction data or known public data sources, such as U.S. Census Data in the case of demographic information.

When it comes to your company’s own data, follow your instincts and get rid of data that look too good or too bad to be true. We all can afford to lose a few records in our databases, and there is nothing wrong with deleting the “outliers” with extreme values. Erroneous names, like “No Information,” may be attached to a seven-figure lifetime spending sum, and you know that can’t be right.
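
For the “too good or too bad to be true” check, a simple percentile screen is often enough to surface records worth a second look. The values below are hypothetical.

```python
# A minimal sketch: flag extreme lifetime-spend values for review before analysis.
import pandas as pd

lifetime_spend = pd.Series([120, 340, 95, 410, 275, 9_750_000, 180, 60, 320, 0])

low, high = lifetime_spend.quantile([0.01, 0.99])
outliers = lifetime_spend[(lifetime_spend < low) | (lifetime_spend > high)]
print(outliers)

# A seven-figure "lifetime spend" sitting next to values in the hundreds is a
# signal to investigate (or drop) the record, not to trust the report it sums into.
```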

The main takeaways are: (1) Never trust the data just because someone bothered to store them in computers, and (2) Constantly look for bad data in reports and listings, at times using old-fashioned eye-balling methods. Computers do not know what is “bad,” until we specifically tell them what bad data are. So, don’t give up, and keep at it. And if it’s about someone else’s data, insist on data tallies and data hygiene stats.

4. Recency
Outdated data are really bad for prediction or analysis, and that is a different kind of badness. Many call it a “Data Atrophy” issue, as no matter how fresh and accurate a data point may be today, it will surely deteriorate over time. Yes, data have a finite shelf-life, too. Let’s say that you obtained a piece of information called “Golf Interest” on an individual level. That information could be coming from a survey conducted a long time ago, or some golf equipment purchase data from a while ago. In any case, someone who is attached to that flag may have stopped shopping for new golf equipment, as he doesn’t play much anymore. Without a proper database update and a constant feed of fresh data, irrelevant data will continue to drive our decisions.

The crazy thing is that, the harder it is to obtain certain types of data—such as transaction or behavioral data—the faster they will deteriorate. By nature, transaction or behavioral data are time-sensitive. That is why it is important to install time parameters in databases for behavioral data. If someone purchased a new golf driver, when did he do that? Surely, having bought a golf driver in 2009 (“Hey, time for a new driver!”) is different from having purchased it last May.
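
Installing a time parameter can be as simple as storing the transaction date and deriving a recency measure from it. Here is a sketch with hypothetical purchase dates and an assumed one-year “recent buyer” window.

```python
# A minimal sketch: derive recency (in days) from a stored purchase date.
import pandas as pd

purchases = pd.DataFrame({
    "customer_id":    [101, 102, 103],
    "golf_driver_dt": pd.to_datetime(["2009-06-15", "2024-05-10", "2023-11-02"]),
})

as_of = pd.Timestamp("2024-09-01")  # the "as of" date of the analysis
purchases["days_since"] = (as_of - purchases["golf_driver_dt"]).dt.days
purchases["recent_buyer"] = purchases["days_since"] <= 365
print(purchases)

# The 2009 purchase and the recent one carry the same "golf driver" flag, but
# the recency field tells a very different story for targeting.
```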

So-called “Hot Line Names” literally cease to be hot after two to three months, or in some cases much sooner. The evaporation period may be different for different product types, as one may stay in the market longer for an automobile than for a new printer. Part of the job of a data scientist is to defer the expiration date of data, finding leads or prospects who are still “warm,” or even “lukewarm,” with available valid data. But no matter how much statistical work goes into making the data “look” fresh, eventually the models will cease to be effective.

For decision-makers who do not make real-time decisions, a real-time database update could be an expensive solution. But databases must still be updated regularly (daily, weekly, monthly or even quarterly). Otherwise, someone will eventually end up making a wrong decision based on outdated data.

5. Consistency
No matter how much effort goes into keeping the database fresh, not all data variables will be updated or filled in consistently. And that is the reality. The interesting thing is that, especially when using them for advanced analytics, we can still provide decent predictions if the data are consistent. It may sound crazy, but even not-so-accurate data can be used in predictive analytics, if they are “consistently” wrong. Modeling is developing an algorithm that differentiates targets and non-targets, and if the descriptive variables are “consistently” off (or outdated, like census data from five years ago) on both sides, the model can still perform.

Conversely, if there is a huge influx of a new type of data, or any drastic change in data collection or in a business model that supports such data collection, all bets are off. We may end up predicting such changes in business models or in methodologies, not the differences in consumer behavior. And that is one of the worst kinds of errors in the predictive business.

Last month, I talked about dealing with missing data (refer to “Missing Data Can Be Meaningful“), and I mentioned that data can be inferred via various statistical techniques. And such data imputation is OK, as long as it returns consistent values. I have seen so many so-called professionals messing up popular models, like “Household Income,” from update to update. If the inferred values jump dramatically due to changes in the source data, there is no amount of effort that can save the targeting models that employed such variables, short of re-developing them.

That is why a time-series comparison of important variables in databases is so important. Any changes of more than 5 percent in distribution of variables when compared to the previous update should be investigated immediately. If you are dealing with external data vendors, insist on having a distribution report of key variables for every update. Consistency of data is more important in predictive analytics than sheer accuracy of data.
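
A sketch of that time-series check: compare the distribution of a key variable between the previous update and the current one, and flag any category whose share moved by more than five points. The variable values and the exact threshold here are assumptions for illustration.

```python
# A minimal sketch: flag categories of a key variable whose share of records
# shifted by more than 5 percentage points between two database updates.
import pandas as pd

previous = pd.Series(["A", "A", "B", "B", "B", "C", "C", "D"] * 100)
current  = pd.Series(["A", "A", "A", "A", "B", "C", "C", "D"] * 100)

prev_dist = previous.value_counts(normalize=True)
curr_dist = current.value_counts(normalize=True)

shift = (curr_dist - prev_dist).abs() * 100     # percentage-point change
flagged = shift[shift > 5].sort_values(ascending=False)
print(flagged.round(1))

# An income band jumping from 25% to 50% of records between updates points to a
# change in the source data or methodology, not in the customers themselves.
```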

6. Connectivity
As I mentioned earlier, there are many types of data. And the predictive power of data multiplies as different types of data get to be used together. For instance, demographic data, which is quite commoditized, still plays an important role in predictive modeling, even when dominant predictors are behavioral data. It is partly because no one dataset is complete, and because different types of data play different roles in algorithms.

The trouble is that many modern datasets do not share any common matching keys. On the demographic side, we can easily imagine using PII (Personally Identifiable Information), such as name, address, phone number or email address for matching. Now, if we want to add some transaction data to the mix, we would need some match “key” (or a magic decoder ring) by which we can link it to the base records. Unfortunately, many modern databases completely lack PII, right from the data collection stage. The result is that such a data source would remain in a silo. It is not like all is lost in such a situation, as they can still be used for trend analysis. But to employ multisource data for one-to-one targeting, we really need to establish the connection among various data worlds.
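
As a simple illustration of what a match “key” buys you, here is a sketch joining a transaction file to a demographic file on a normalized email address. The data are hypothetical, and real matching usually also involves hashed PII and name-and-address routines.

```python
# A minimal sketch: link two data sources on a normalized email match key.
import pandas as pd

demographics = pd.DataFrame({
    "email":     ["Jane.Doe@Example.com", "ken@sample.org"],
    "hh_income": [88000, 61000],
})
transactions = pd.DataFrame({
    "email":  ["jane.doe@example.com", "pat@nowhere.net"],
    "amount": [149.00, 35.00],
})

# Normalize the key on both sides before matching.
for df in (demographics, transactions):
    df["match_key"] = df["email"].str.strip().str.lower()

linked = transactions.merge(demographics, on="match_key",
                            how="left", suffixes=("_txn", "_demo"))
print(linked[["match_key", "amount", "hh_income"]])

# Without some shared key (email, address, or at least ZIP), the transaction
# source stays in its silo and is only good for aggregate trend analysis.
```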

Even if the connection cannot be made to household, individual or email levels, I would not give up entirely, as we can still target based on IP addresses, which may lead us to some geographic denominations, such as ZIP codes. I’d take ZIP-level targeting anytime over no targeting at all, even though there are many analytical and summarization steps required for that (more on that subject in future articles).

Not having PII or any hard matchkey is not a complete deal-breaker, but the maneuvering space for analysts and marketers decreases significantly without it. That is why the existence of PII, or even ZIP codes, is the first thing that I check when looking into a new data source. I would like to free them from isolation.

7. Delivery Mechanisms
Users judge databases based on the visualization or reporting tool sets that are attached to them. As I mentioned earlier, that is like judging an entire building based just on the window treatments. But for many users, that is the reality. After all, how would a casual user without a programming or statistical background even “see” the data? Through tool sets, of course.

But that is only one end of it. There are so many types of platforms and devices, and the data must flow through them all. The important point is that data are useless if they are not in the hands of decision-makers, through the device of their choice, at the right time. Such flow can be actualized via API feeds, FTP, or good, old-fashioned batch installments, and no database should stay too far away from the decision-makers. In my earlier column, I emphasized that data players must be good at (1) Collection, (2) Refinement, and (3) Delivery (refer to “Big Data is Like Mining Gold for a Watch—Gold Can’t Tell Time“). Delivering the answers to inquirers properly closes one iteration of the information flow. And the answers must continue to flow to the users.

8. User-Friendliness
Even when state-of-the-art (I apologize for using this cliché) visualization, reporting or drill-down tool sets are attached to the database, if the data variables are too complicated or not intuitive, users will get frustrated and eventually move away from it. If that happens after pouring a sick amount of money into a data initiative, that would be a shame. But it happens all the time. In fact, I am not going to name names here, but I saw a ridiculously hard-to-understand data dictionary from a major data broker in the U.S.; it looked like the data layout was designed for robots, by robots. Please. Data scientists must try to humanize the data.

This whole Big Data movement has momentum now, and in the interest of not killing it, data players must make every aspect of this data business easier for the users, not harder. Simpler data fields, intuitive variable names, meaningful value sets, pre-packaged variables in the form of answers, and a complete data dictionary are not too much to ask after the hard work of developing and maintaining the database.

This is why I insist that data scientists and professionals must be businesspeople first. The developers should never forget that end-users are not trained data experts. And guess what? Even professional analysts would appreciate intuitive variable sets and complete data dictionaries. So, pretty please, with sugar on top, make things easy and simple.

9. Cost
I saved this important item for last for a good reason. Yes, the dollar sign is a very important factor in all business decisions, but it should not be the sole deciding factor when it comes to databases. That means CFOs should not dictate the decisions regarding data or databases without considering the input from CMOs, CTOs, CIOs or CDOs who should be, in turn, concerned about all the other criteria listed in this article.

Playing with the data costs money. And, at times, a lot of money. When you add up all the costs for hardware, software, platforms, tool sets, maintenance and, most importantly, the man-hours for database development and maintenance, the sum becomes very large very fast, even in the age of the open-source environment and cloud computing. That is why many companies outsource the database work to share the financial burden of having to create infrastructures. But even in that case, the quality of the database should be evaluated based on all criteria, not just the price tag. In other words, don’t just pick the lowest bidder and hope to God that it will be alright.

When you purchase external data, you can also apply these evaluation criteria. A test-match job with a data vendor will reveal many of the details listed here; metrics such as match rate and variable fill rate, along with a complete data dictionary, should be carefully examined. In short, what good is a lower unit price per 1,000 records if the match rate is horrendous and even the matched data are filled with missing or sub-par inferred values? Also consider that, once you commit to an external vendor and start building models and an analytical framework around their data, it becomes very difficult to switch vendors later on.

When shopping for external data, consider the following when it comes to pricing options:

  • Number of variables to be acquired: Don’t just go for the full option. Pick the ones that you need (involve analysts), unless you get a fantastic deal for an all-inclusive option. Generally, most vendors provide multiple-packaging options.
  • Number of records: Processed vs. Matched. Some vendors charge based on “processed” records, not just matched records. Depending on the match rate, that distinction can make a big difference in total cost (a quick comparison is sketched after this list).
  • Installment/update frequency: Real-time, weekly, monthly, quarterly, etc. Think carefully about how often you would need to refresh “demographic” data, which doesn’t change as rapidly as transaction data, and how big the incremental universe would be for each update. Obviously, a real-time API feed can be costly.
  • Delivery method: API vs. Batch Delivery, for example. Price, as well as the data menu, changes quite a bit based on the delivery options.
  • Availability of a full-licensing option: When the internal database becomes really big, full installment becomes a good option. But you would need internal capability for a match-and-append process that involves “soft-match,” using similar names and addresses (imagine good old name-and-address merge routines). It becomes a bit of a commitment, as the match-and-append becomes a part of the internal database update process.
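
To make the processed-versus-matched point concrete, here is a quick back-of-the-envelope comparison. The volumes, match rate and unit prices are hypothetical.

```python
# A minimal sketch: hypothetical cost comparison of "processed" vs. "matched" pricing.
records_sent = 1_000_000        # records submitted to the vendor
match_rate   = 0.55             # 55 percent come back matched

price_per_m_processed = 18.00   # $ per 1,000 records processed (hypothetical)
price_per_m_matched   = 30.00   # $ per 1,000 records matched (hypothetical)

cost_processed = records_sent / 1_000 * price_per_m_processed
cost_matched   = records_sent * match_rate / 1_000 * price_per_m_matched

print(f"Processed pricing: ${cost_processed:,.0f}")  # $18,000 for 1M processed
print(f"Matched pricing:   ${cost_matched:,.0f}")    # $16,500 for 550K matches

# The "cheaper" unit price per 1,000 processed records costs more here, because
# you also pay for the 450,000 records that never matched.
```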

Business First
Evaluating a database is a project in itself, and these nine evaluation criteria will serve as a good guideline. Depending on the business, of course, more conditions could be added to the list. And that leads to the final point, which I did not even include in the list: the database (or any data, for that matter) should be useful for meeting the business goals.

I have been saying that “Big Data Must Get Smaller,” and this whole Big Data movement should be about (1) Cutting down on the noise, and (2) Providing answers to decision-makers. If the data sources in question do not serve the business goals, cut them out of the plan, or cut loose the vendor if they are from external sources. It would be an easy decision if you “know” that the database in question is filled with dirty, sporadic and outdated data that cost lots of money to maintain.

But if that database is needed for your business to grow, clean it, update it, expand it and restructure it to harness better answers from it. Just like the way you’d maintain your cherished automobile to get more mileage out of it. Not all databases are created equal for sure, and some are definitely more equal than others. You just have to open your eyes to see the differences.

5 Proven Ways to Create a Blockbuster Unique Selling Proposition

You’ve heard of USP. A strong unique selling proposition can produce more sales because it works to engrain new long-term memory. A proven way to differentiate yourself from your competitors is through repositioning your copy and design. If you haven’t examined your USP lately, there’s a good chance you’re not leveraging your unique strengths as strategically as you could. Here are five proven ideas to help you refine your USP and create a blockbuster campaign.

Over the years, I’ve come to appreciate what repositioning a USP can do to skyrocket response. For client Collin Street Bakery, a number of years ago, we repositioned the product from what is widely called fruitcake to a “Native Texas Pecan Cake.” Sales increased 60 percent over the control in prospecting direct mail with a repositioned USP. For client Assurity Life Insurance, repositioning the beneficiary of the product, through analysis of data, increased response 35 percent, and for another Assurity campaign, response increased 60 percent (read the Assurity case study here).

First, it may be helpful to clarify what a Unique Selling Proposition isn’t:

  • Customer service: Great customer service doesn’t qualify because your customer expects you’ll provide great customer service and support in the first place.
  • Quality: Same thing as customer service. It’s expected.
  • Price: You can never win if you think your USP is price and price-cutting (or if you assume that a high price alone will signal better quality).

A strong USP boosts the brain’s ability to absorb a new memory because you’ll be seen as distinct from competitors.

Identifying your position, or repositioning an existing product or service, is a process. Most organizations should periodically reposition their products or services (or in the case of a non-profit, reposition why someone may be moved to contribute to your cause).

Here are five approaches I’ve used to better understand buyers, and create a repositioned USP to deliver blockbuster results:

  1. Interview customers and prospects. Talk directly with customers about why they have purchased from or supported your organization. And for contrast, talk directly with prospects about why they didn’t act. You can interview by phone, but a better approach, in my experience, is a focus group setting. Focus groups are an investment, so make sure you have two things in order: first, a completely considered and planned discussion guide of questions; and second, an interviewer who can probe deeply with questions. Key word: “deeply.” Superficial questions aren’t likely to get you what you want. Ask why a question was answered in a specific way, then ask “why?” again and again. Your moderator must be able to continually peel back the onion, so to speak, to get to a deeper why. Knowing the deeper why can be transformational for all concerned.
  2. Review customer data. Profile your customer list. A profile can be obtained from many data bureaus and goes beyond basic demographics to help you more deeply understand your customers’ interests and behaviors. You need to understand what your customers do in their spare time, for example, what they read and, to the degree possible, what they think. Getting a profile report is usually affordable, but the real cost may be in retaining someone from outside your organization to interpret the data on your behalf, drawing inferences and conclusions, transforming raw numbers into charts and graphics, and imagining the possibilities. If you have someone on your staff who can lead that charge, another option is to have open discussions with your team as you review the data and to commit to describing the persona of your best customer. Make this an ongoing process. You’re not going to completely imagine and profile your customer in a one-hour meeting.
  3. Analyze only your best customers. As a subset of the prior point, consider analyzing only your very top customers. You’ve heard of the Pareto Principle, where 80% of your business comes from 20% of your customers. Over the years, I’ve conducted many customer analyses. I have yet to find exactly an “80/20” balance, but I have found, at the “flattest,” a 60/40 weighting, that is, 60% of a company’s revenue coming from 40% of its customers (this for a business-to-consumer marketer). At the other extreme, for a business-to-business corporation, the weighting was 90/10, where 90% of business came from just 10% of customers. Knowing this balance is essential to creating your position (a quick way to compute the split is sketched after this list). If you were the organization that derived 90% of its business from just 10% of customers, chances are you’d listen very closely to those 10% as you evaluate your position. In this instance, if you were to reposition your organization, you would have to ask yourself at what risk. Conversely, in the 60/40 weighted organization, repositioning most likely doesn’t carry the same level of exposure.
  4. Review prospect modeled data. If you are using modeled mailing lists, make sure you look at the subset of data you’re mailing for the common characteristics of your best prospect. Like the profile of customers (mentioned in the previous point), you need to transform the data into charts and graphs, to reveal trends and insights. Then have a discussion and arrive at your interpretation of results.
  5. Conduct a competitive analysis. Examine a competitor’s product or service and compare it to your offer. Be harsh on yourself. While conducting focus groups, you might allocate some of your discussion to your competitors and find out who buys from whom. As you look at your competitor’s products, make sure you analyze their positioning in the market. Much can be learned from analysis of a competitor’s online presence.
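
If you want to run the revenue-concentration check yourself, here is a minimal sketch with hypothetical customer revenue figures; swap in your own file and thresholds.

```python
# A minimal sketch: what share of revenue comes from the top X percent of customers?
import pandas as pd

# Hypothetical annual revenue per customer.
revenue = pd.Series([52000, 18000, 9500, 7200, 4100, 2600, 1900, 1200, 800, 300])

revenue = revenue.sort_values(ascending=False)
cum_share = revenue.cumsum() / revenue.sum()

top_pct = 0.20                                    # try 0.10 or 0.40 as well
top_n = max(1, int(round(len(revenue) * top_pct)))
share_from_top = cum_share.iloc[top_n - 1]

print(f"Top {top_pct:.0%} of customers produce {share_from_top:.0%} of revenue")

# Whether your own balance looks more like 90/10 or 60/40 shapes whose voice
# should carry the most weight as you evaluate your position.
```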

Follow these steps to smartly reposition your USP, and you’ll be on your way to repositioning your own product or service and delivering a new blockbuster campaign.