How to Outsource Analytics

In this series, I have been emphasizing the importance of statistical modeling in almost every article. While there are plenty of benefits of using statistical models in a more traditional sense (refer to “Why Model?”), in the days when “too much” data is the main challenge, I would dare to say that the most important function of statistical models is that they summarize complex data into simple-to-use “scores.”

The next important feature would be that models fill in the gaps, transforming “unknowns” to “potentials.” You see, even in the age of ubiquitous data, no one will ever know everything about everybody. For instance, out of 100,000 people you have permission to contact, only a fraction will be “known” wine enthusiasts. With modeling, we can assign scores for “likelihood of being a wine enthusiast” to everyone in the base. Sure, models are not 100 percent accurate, but I’ll take “70 percent chance of afternoon shower” over not knowing the weather forecast for the day of the company picnic.
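
To make the idea of “scores” concrete, here is a minimal sketch of how such a propensity score could be assigned to an entire contact base. Everything in it (the file, the column names, the choice of scikit-learn’s logistic regression) is an illustrative assumption, not a prescription of any particular method.

```python
# Minimal propensity-scoring sketch. The file, column names and the choice of
# logistic regression are illustrative assumptions, not a prescription.
import pandas as pd
from sklearn.linear_model import LogisticRegression

base = pd.read_csv("contact_base.csv")          # hypothetical contact base
features = ["age", "income", "past_purchases"]  # assumed predictors

# Fit on the fraction of people whose status is "known" ...
known = base[base["wine_enthusiast"].notna()]
model = LogisticRegression(max_iter=1000)
model.fit(known[features], known["wine_enthusiast"].astype(int))

# ... then score everyone, turning "unknowns" into "potentials."
base["wine_score"] = model.predict_proba(base[features])[:, 1]
```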

I’ve already explained other benefits of modeling in detail earlier in this series, but if I may cut it really short, models will help marketers:

1. In deciding whom to engage, as they cannot afford to spam the world and annoy everyone who can read, and

2. In determining what to offer once they decide to engage someone, as consumers are savvier than ever and they will ignore and discard any irrelevant message, no matter how good it may look.

OK, then. I hope you are sold on this idea by now. The next question is, who is going to do all that mathematical work? In a country where jocks rule over geeks, it is clear to me that many folks are more afraid of mathematics than public speaking, which, in its own right, ranks higher than death in terms of the fear factor for many people. If I may paraphrase “Seinfeld,” many folks are figuratively more afraid of giving a eulogy than being in the coffin at a funeral. And thanks to a sub-par math education in the U.S. (and I am not joking about this, having graduated high school on foreign soil), yes, the fear of math tops them all. Scary, huh?

But that’s OK. This is a big world, and there are plenty of people who are really good at mathematics and statistics. That is why I purposefully never got into the mechanics of modeling techniques and related programming issues in this series. Instead, I have been emphasizing how to formulate questions, how to express business goals in a more logical fashion and where to invest to create analytics-ready environments. Then the next question is, “How will you find the right math geeks who can make all your dreams come true?”

If you have a plan to create an internal analytics team, there are a few things to consider before committing to that idea. Too many organizations just hire one or two statisticians, dump all the raw data onto them, and hope to God that they will figure out some way to make money with the data. Good luck with that idea, as:

1. I’ve seen so many failed attempts like that (in fact, I’d be shocked if it actually worked), and

2. I am sure God doesn’t micromanage statistical units.

(Similarly, I am almost certain that she doesn’t care much for football or baseball scores of certain teams, either. You don’t think God cares more for the Red Sox than the Yankees, do ya?)

The first challenge is locating good candidates. If you post any online ad for “Statistical Analysts,” you will receive a few hundred resumes per day. But the hiring process is not that simple, as you should ask the right questions to figure out who is the real deal and who is a poser (and there are many posers out there). Even among qualified candidates with ample statistical knowledge, there are differences between the “Doers” and “Vendor Managers.” Depending on your organizational goal, you must differentiate the two.

Then the next challenge is keeping the team intact. In general, mathematicians and statisticians are not solely motivated by money; they also want constant challenges. Like any smart and creative folks, they will simply pack up and leave if “they” determine that the job is boring. Just a couple of modeling projects a year with some rudimentary sets of data? Meh. Boring! Promises of upward mobility only work for a fraction of them, as the majority would rather deal with numbers and figures, showing no interest in managing other human beings. So, coming up with interesting and challenging projects, which will also benefit the whole organization, becomes a job in itself. If there are not enough challenges, smart ones will quit on you first. They also need constant mentoring, as even the smartest statisticians will not know everything about the challenges associated with marketing, target audiences and the business world, in general. (If you stumble upon a statistician who is even remotely curious about how her salary is paid, start with her.)

Further, you would need to invest in setting up an analytical environment, as well. That includes software, hardware and other supporting staff. Toolsets are becoming much cheaper, but they are not exactly free yet. In fact, some famous statistical software, such as SAS, can be quite expensive year after year, although there are plenty of alternatives now. And analysts need an “analytics-ready” data environment, as I have emphasized countless times in this series (refer to “Chicken or the Egg? Data or Analytics?” and “Marketing and IT; Cats and Dogs”). Such data preparation work is not for statisticians, and most of them are not even good at cleaning up dirty data, anyway. That means you will need different types of developers/programmers on the analytics team. I pointed out that analytical projects call for a cohesive team, not some super-duper analyst who can do it all (refer to “How to Be a Good Data Scientist”).

By now you would say “Jeez Louise, enough already,” as all this is just too much to manage to build just a few models. Suddenly, outsourcing may sound like a great idea. Then you would realize there are many things to consider when outsourcing analytical work.

First, where would you go? Everyone in the data industry and their cousins claim that they can take care of analytics. But in reality, it is a scary place where many who have “analytics” in their taglines do not even touch “predictive analytics.”

Analytics is a word that is abused as much as “Big Data,” so we really need to differentiate its meanings. “Analytics” may mean:

  • Business Intelligence (BI) Reporting: This is mostly about the present, such as the display of key success metrics and dashboard reporting. While it is very important to know about the current state of business, much of so-called “analytics” unfortunately stops right here. Yes, it is good to have a dashboard in your car now, but do you know where you should be going?
  • Descriptive Analytics: This is about how the targets “look.” Common techniques such as profiling, segmentation and clustering fall under this category. These techniques are mainly for describing the target audience to enhance and optimize messages to them. But using these segments as a selection mechanism is not recommended, though many dare to do exactly that (more on this subject in future articles).
  • Predictive Modeling: This is about answering questions about the future. Who would be more likely to behave certain ways? What communication channels will be most effective for whom? How much is the potential spending level of a prospect? Who is more likely to be a loyal and profitable customer? What are their preferences? Response models, various types of cloning models, value models, revenue models, attrition models, etc. all fall under this category, and they require hardcore statistical skills. Plus, as I emphasized earlier, these model scores compact large amounts of complex data into nice bite-size packages.
  • Optimization: This is mostly about budget allocation and attribution. Marketing agencies (or media buyers) generally deal with channel optimization and spending analysis, at times using econometrics models. This type of statistical work calls for different types of expertise, but many still insist on calling it simply “analytics.”

Let’s say that for the purpose of customer-level targeting and personalization, we decided to outsource the “predictive” modeling projects. What are our options?

We may consider:

  • Individual Consultants: In-house consultants are dedicated to your business for the duration of the contract, guaranteeing full access like an employee. But they are there for you only temporarily, with one foot out the door all the time. And when they do leave, all the knowledge walks away with them. Depending on the rate, the costs can add up.
  • Standalone Analytical Service Providers: Analytical work is all they do, so you get focused professionals with broad technical and institutional knowledge. Many of them are entrepreneurs, but that may work against you, as they could often be understaffed and stretched thin. They also tend to charge for every little step, with not many freebies. They are generally open to using any type of data, but the majority of them do not have secure sources of third-party data, which could be essential for certain types of analytics involving prospecting.
  • Database Service Providers: Almost all data compilers and brokers have statistical units, as they need to fill in the gap within their data assets with statistical techniques. (You didn’t think that they knew everyone’s income or age, did you?) For that reason, they have deep knowledge in all types of data, as well as in many industry verticals. They provide a one-stop shop environment with deep resource pools and a variety of data processing capabilities. However, they may not be as agile as smaller analytical shops, and analytics units may be tucked away somewhere within large and complex organizations. They also tend to emphasize the use of their own data, as after all, their main cash cows are their data assets.
  • Direct Marketing Agencies: Agencies are very strategic, as they touch all aspects of marketing and control creative processes through segmentation. Many large agencies boast full-scale analytical units, capable of all types of analytics that I explained earlier. But some agencies have very small teams, stretched really thin—just barely handling the reporting aspect, not any advanced analytics. Some just admit that predictive analytics is not part of their core competencies, and they may outsource such projects (not that it is a bad thing).

As you can see here, there is no clear-cut answer to “with whom you should work.” Basically, you will need to check out all types of analysts and service providers to determine the partner best suited to your long- and short-term business purposes, not just analytical goals. Often, many marketers just go with the lowest bidder. But pricing is just one of many elements to be considered. Here, allow me to introduce “10 Essential Items to Consider When Outsourcing Analytics.”

1. Consulting Capabilities: I put this at the top of the list, as being a translator between the marketing and the technology world is the most important differentiator (refer to “How to Be a Good Data Scientist”). They must understand the business goals and marketing needs, prescribe suitable solutions, convert such goals into mathematical expressions and define targets, making the best of available data. If they lack the strategic vision to set up the data roadmap, statistical knowledge alone will not be enough to achieve the goals. And such business goals vary greatly depending on the industry, channel usage and related success metrics. Good consultants always ask questions first, while sub-par ones will try to force-fit marketers’ goals into their toolsets and methodologies.

Translating marketing goals into specific courses of action is a skill in itself. A good analytical partner should be capable of building a data roadmap (not just statistical steps) with a deep understanding of the business impact of resultant models. They should be able to break down larger goals into smaller steps, creating proper phased approaches. The plan may call for multiple models, all kinds of pre- and post-selection rules, or even external data acquisition, while remaining sensitive to overall costs.

The target definition is the core of all these considerations, which requires years of experience and industry knowledge. Simply, the wrong or inadequate targeting decision leads to disastrous results, no matter how sound the mathematical work is (refer to “Art of Targeting”).

Another important quality of a good analytical partner is the ability to create usefulness out of seemingly chaotic and unstructured data environments. Modeling is not about waiting for the perfect set of data, but about making the best of available data. In many modeling bake-offs, the winners are often decided by the creative usage of provided data, not just statistical techniques.

Finally, the consultative approach is important, as models do not exist in a vacuum; they have to fit into the marketing engine. Beware of the ones who want to change the world around their precious algorithms, as they are geeks, not strategists. And the ones who understand the entire marketing cycle will give advice on what the next phase should be, as marketing efforts must be perpetual, not transient.

So, how will you find consultants? Ask the following questions:

  • Are they “listening” to you?
  • Can they repeat “your” goals in their own words?
  • Do their roadmaps cover both short- and long-term goals?
  • Are they confident enough to correct you?
  • Do they understand “non-statistical” elements in marketing?
  • Have they “been there, done that” for real, or just in theories?

2. Data Processing Capabilities: I know that some people look down upon the word “processing.” But data manipulation is the key step “before” any type of advanced analytics even begins. Simply put, “garbage in, garbage out.” And unfortunately, most datasets are completely unsuitable for analytics and modeling. In general, easily more than 80 percent of model development time goes into “fixing” the data, as most are unstructured and unrefined. I have been repeatedly emphasizing the importance of a “model-ready” (or “analytics-ready”) environment for that reason.

However, the reality dictates that the majority of databases are indeed NOT model-ready, and most of them are not even close to it. Well, someone has to clean up the mess. And in this data business, the last one who touches the dataset becomes responsible for all the errors and mistakes made to it thus far. I know it is not fair, but that is why we need to look at the potential partner’s ability to handle large and really messy data, not just the statistical savviness displayed in glossy presentations.

Yes, that dirty work includes data conversion, edit/hygiene, categorization/tagging, data summarization and variable creation, encompassing all kinds of numeric, character and freeform data (refer to “Beyond RFM Data” and “Freeform Data Aren’t Exactly Free”). It is not the most glorious part of this business, but data consistency is the key to successful implementation of any advanced analytics. So, if a model-ready environment is not available, someone had better know how to make the best of whatever is given. I have seen too many meltdowns in “before” and “after” modeling steps due to inconsistencies in databases.
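
As a small, hedged illustration of that “dirty work,” the pandas sketch below runs a few hygiene edits, summarizes raw transaction lines to one row per customer, and derives a couple of model variables. All file and field names, and the business rules, are hypothetical.

```python
# Illustrative pre-modeling prep: hygiene, summarization and variable creation.
# File names, field names and business rules are hypothetical examples.
import pandas as pd

tx = pd.read_csv("transactions.csv", parse_dates=["order_date"])

# Edit/hygiene: drop unusable rows and standardize a categorical field.
tx = tx.dropna(subset=["customer_id", "amount"])
tx["channel"] = tx["channel"].str.strip().str.lower().fillna("unknown")

# Summarization: collapse transaction lines to one row per customer.
summary = tx.groupby("customer_id").agg(
    total_spend=("amount", "sum"),
    order_count=("order_date", "nunique"),
    last_order=("order_date", "max"),
)

# Variable creation: derive model-friendly variables from the summary.
snapshot = tx["order_date"].max()
summary["days_since_last_order"] = (snapshot - summary["last_order"]).dt.days
summary["avg_order_value"] = summary["total_spend"] / summary["order_count"]
```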

So, grill the candidates with the following questions:

  • Do they support file conversions, edits, categorization and summarization?
  • How big a dataset is too big, and how many files/tables are too many for them?
  • How much freeform data is too much for them?
  • Can they share sample model variables that they have created in the past?

3. Track Records in the Industry: It can be argued that industry knowledge is even more crucial to success than statistical know-how, as nuances are often “Lost in Translation” without relevant industry experience. In fact, some may not even be able to carry on a proper conversation with a client without it, leading to all kinds of wrong assumptions. I have seen a case where “real” rocket scientists messed up models for credit card campaigns.

The No. 1 reason why industry experience is important is that everyone’s success metrics are unique. Just to name a few, financial services (banking, credit card, insurance, investment, etc.), travel and hospitality, entertainment, packaged goods, online and offline retail, catalogs, publication, telecommunications/utilities, non-profit and political organizations all call for different types of analytics and models, as their business models and the way they interact with target audiences are vastly different. For example, building a model (or a database, for that matter) for businesses that hand over merchandise “before” they collect money is fundamentally different from the ones where the exchange happens simultaneously. Even a simple concept of payment date or transaction date cannot be treated the same way. For retailers, recent dates could be better for business, but for subscription businesses, older dates may carry more weight. And these are just some examples with “dates,” before touching any dollar figures or other fun stuff.

Then the job gets even more complicated if we further divide all of these industries by B-to-B vs. B-to-C, where available data do not even look similar. On top of that, divisional ROI metrics may be completely different, and even terminology and culture may play a role in all of this. When you are a consultant, you really don’t want to stop the flow of a meeting to clarify some unfamiliar acronyms, as you are supposed to know them all.

So, always demand specific industry references and examine client rosters, if allowed. (Many clients specifically ask vendors not to use their names as references.) Basically, watch out for the ones who push one-size-fits-all cookie-cutter solutions. You deserve way more than that.

4. Types of Models Supported: Speaking of cookie-cutter stuff, we need to be concerned with types of models that the outsourcing partner would support. Sure, nobody employs every technique, and no one can be good at everything. But we need to watch out for the “One-trick Ponies.”

This could be a tricky issue, as we are going into a more technical domain. Plus, marketers should not self-prescribe specific techniques instead of clearly stating their business goals (refer to “Marketing and IT; Cats and Dogs”). Some of the modeling goals are:

  • Ranking and selecting prospect names
  • Lead scoring
  • Cross-sell/upsell
  • Segmenting the universe for messaging strategy
  • Pinpointing the attrition point
  • Assigning lifetime values to prospects and customers
  • Optimizing media/channel spending
  • Creating new product packages
  • Detecting fraud
  • Etc.

Unless you have successfully dealt with the outsourcing partner in the past (or you have a degree in statistics), do not blurt out words like Neural-net, CHAID, Cluster Analysis, Multiple Regression, Discriminant Function Analysis, etc. That would be like demanding specific medication before your new doctor even asks about your symptoms. The key is meeting your business goals, not fulfilling buzzwords. Let them present their methodology “after” the goal discussion. Nevertheless, see if the potential partner is pushing one or two specific techniques or solutions all the time.

5. Speed of Execution: In modern marketing, speed to action is king. Speed wins, and speed gains respect. However, when it comes to modeling or other advanced analytics, you may be shocked by the wide range of time estimates provided by each outsourcing vendor. To be fair, they are covering themselves, mainly because they have no idea what kind of messy data they will receive. As I mentioned earlier, pre-model data preparation and manipulation are critical components, and they are the most time-consuming part of all, especially when available data are in bad shape. Post-model scoring, audit and usage support may elongate the timeline further. The key is to differentiate such pre- and post-modeling processes in the time estimate.

Even for pure modeling elements, time estimates vary greatly, depending on the complexity of assignments. Surely, a simple cloning model with basic demographic data would be much easier to execute than the ones that involve ample amounts of transaction- and event-level data, coming from all types of channels. If time-series elements are added, it will definitely be more complex. Typical clustering work is known to take longer than regression models with clear target definitions. If multiple models are required for the project, it will obviously take more time to finish the whole job.

Now, the interesting thing about building a model is that analysts don’t really finish it, but they just run out of time—much like the way marketers work on PowerPoint presentations. The commonality is that we can basically tweak models or decks forever, but we have to stop at some point.

However, with all kinds of automated tools and macros, model development time has decreased dramatically over the past few decades. We have come a long way since the first application of statistical techniques to marketing, and no one should be quoting a 1980s timeline in this century. But some still do. I know vendors are trained to follow the guideline “always under-promise and over-deliver,” but still.

An interesting aspect of this dilemma is that we can negotiate the timeline by asking for simpler and less sophisticated versions with diminished accuracy. If, hypothetically, it takes a week to be 98 percent accurate, but only a day to be 90 percent accurate, which would you pick? That should be a business decision.

So, what is a general guideline? Again, it really depends on many factors, but allow me to share a version of it:

  • Pre-modeling Processing
    – Data Conversions: from half a day to weeks
    – Data Append/Enhancement: between overnight and two days
    – Data Edit and Summarization: data-dependent
  • Modeling: from half a day to weeks
    – Depends on the type, number and complexity of models
  • Scoring: from half a day to one week
    – Mainly depends on the number of records and the state of the database to be scored

I know these are wide ranges, but watch out for the ones that routinely quote 30 days or more for simple clone models. They may not know what they are doing, or worse, they may be some mathematical perfectionists who don’t understand the marketing needs.

6. Pricing Structure: Some marketers would put this at the top of the checklist, or worse, use the pricing factor as the only criterion. Obviously, I disagree. (Full disclosure: I have been on the service side of the fence during my entire career.) Yes, every project must make economic sense in the end, but the budget should not and cannot be the sole deciding factor in choosing an outsourcing partner. There are many specialists under famous brand names who command top dollars, and then there are many data vendors who throw in “free” models, disrupting the ecosystem. Either way, one should not jump to conclusions too fast, as there is no free lunch, after all. In any case, I strongly recommend that no one start the meeting with pricing questions (hence, this article). When you get to the pricing part, ask what the price includes, as the analytical journey could be a series of long and winding roads. Some of the biggest factors that need to be considered are:

  • Multiple Model Discounts—Less for the second or third model within a project?
  • Pre-developed (off-the-shelf) Models—These can be “much” cheaper than custom models, though not custom-fitted.
  • Acquisition vs. CRM—Employing client-specific variables certainly increases the cost.
  • Regression Models vs. Other Types—At times, the type of technique may affect the price.
  • Clustering and Segmentation—These are generally priced much higher than target-specific models.

Again, it really depends on the complexity factor more than anything else, and the pre- and post-modeling process must be estimated and priced separately. Non-modeling charges often add up fast, and you should ask for unit prices and minimum charges for each step.

Scoring charges can add up over time, too, so negotiate discounts for routine scoring of the same models. Some may offer all-inclusive package pricing for everything. The important thing is to be consistent with the checklist when shopping around with multiple candidates.

7. Documentation: When you pay for a custom model (not pre-developed, off-the-shelf ones), you get to own the algorithm. Because algorithms are not tangible items, the knowledge must be transferred through model documents. Beware of the ones who offer “black-box” solutions with comments like, “Oh, it will work, so trust us.”

Good model documents must include the following, at the minimum:

  • Target and Comparison Universe Definitions: What was the target variable (or “dependent” variable) and how was it defined? How was the comparison universe defined? Was there any “pre-selection” for either of the universes? These are the most important factors in any model—even more than the mechanics of the model itself.
  • List of Variables: What are the “independent” variables? How were they transformed or binned? From where did they originate? Often, these model variables describe the nature of the model, and they should make intuitive sense.
  • Model Algorithms: What is the actual algorithm? What are the assigned weights for each independent variable?
  • Gains Chart: We need to examine the potential effectiveness of the model. What are the “gains” for each model group, from top to bottom (e.g., a 320 percent gain at the top model group in comparison to the whole universe)? How fast do such gains decrease as we move down the scale? How do the gains factors compare against the validation sample? A graphic representation would be nice, too (a minimal computation sketch follows this list).
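
For readers who want to see how such a gains table comes together, here is a minimal sketch that ranks a scored validation file into ten model groups and compares each group’s response rate to the overall rate. The file and column names (“score,” “responded”) are assumptions for illustration.

```python
# Minimal gains-chart sketch: rank a scored file into ten model groups and
# compare each group's response rate to the overall rate.
# File and column names ("score", "responded") are illustrative assumptions.
import pandas as pd

scored = pd.read_csv("scored_validation.csv")
overall_rate = scored["responded"].mean()

# qcut labels the lowest scores 0; flip so group 1 holds the highest scores.
deciles = pd.qcut(scored["score"], 10, labels=False, duplicates="drop")
scored["model_group"] = deciles.max() - deciles + 1

gains = scored.groupby("model_group")["responded"].agg(["count", "mean"])
gains["gain_pct"] = gains["mean"] / overall_rate * 100  # 320 = 3.2x overall
print(gains)
```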

For custom models, it is customary to have a formal model presentation, full documentation and scoring script in designated programming languages. In addition, if client files are provided, ask for a waterfall report that details input and output counts of each step. After the model scoring, it is also customary for the vendor to provide a scored universe count by model group. You will be shocked to find out that many so-called analytical vendors do not provide thorough documentation. Therefore, it is recommended to ask for sample documents upfront.

8. Scoring Validation: Models are built and presented properly, but the job is not done until the models are applied to the universe from which the names are ranked and selected for campaigns. I have seen too many major meltdowns at this stage. Simply, it is one thing to develop models with a few hundred thousand record samples, but it is quite another to apply the algorithm to millions of records. I am not saying that the scoring job always falls onto the developers, as you may have an internal team or a separate vendor for such ongoing processes. But do not let the model developer completely leave the building until everything checks out.

The model should have been validated against the validation sample by then, but live scoring may reveal all kinds of inconsistencies. You may also want to back-test the algorithms with past campaign results. In short, many things go wrong “after” the modeling steps. When I hear customers complaining about models, I often find that the modeling is the only part that was done properly, and the “before” and “after” steps were all messed up. Further, even machines misunderstand each other, as any difference in platform or scripting language may cause discrepancies. Or, maybe there was no technical error, but missing values may have caused inconsistencies (refer to “Missing Data Can Be Meaningful”). Nonetheless, the model developers will have the best insight into what could have gone wrong, so make sure that they are available for questions after models are presented and delivered.
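
One simple audit that catches many such post-modeling inconsistencies is comparing the score distribution of the development sample against the live scored universe; the population stability index (PSI) is one common yardstick. The sketch below uses synthetic data standing in for real files, and the 0.10/0.25 thresholds are conventional rules of thumb, not a prescription.

```python
# Hedged score-stability audit: compare the development sample's score
# distribution to the live scored base with a population stability index (PSI).
# Rules of thumb (not gospel): < 0.10 stable, > 0.25 worth investigating.
import numpy as np

def psi(dev_scores: np.ndarray, live_scores: np.ndarray, bins: int = 10) -> float:
    cuts = np.percentile(dev_scores, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf                  # catch out-of-range scores
    dev_pct = np.histogram(dev_scores, cuts)[0] / len(dev_scores)
    live_pct = np.histogram(live_scores, cuts)[0] / len(live_scores)
    dev_pct, live_pct = dev_pct + 1e-6, live_pct + 1e-6  # avoid log(0)
    return float(np.sum((live_pct - dev_pct) * np.log(live_pct / dev_pct)))

# Synthetic example: a slightly drifted live population raises the PSI.
rng = np.random.default_rng(0)
dev = rng.beta(2, 5, 100_000)     # development-sample scores
live = rng.beta(2, 6, 1_000_000)  # live scored universe, drifted
print(f"PSI = {psi(dev, live):.3f}")
```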

9. Back-end Analysis: Good analytics is all about applying learnings from past campaigns—good or bad—to new iterations of efforts. We often call it “closed-loop” marketing, yet many marketers neglect to follow up. Any respectable analytics shop must be aware of it, though they may classify such work separately from modeling or other analytical projects. At the minimum, you need to check out if they even offer such services. In fact, so-called “match-back analysis” is not as simple as just matching campaign files against responders in this omnichannel environment. When many channels are employed at the same time, allocation of credit (i.e., “what worked?”) may call for all kinds of business rules or even dedicated models, as the sketch below illustrates.
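
To show why omnichannel match-back is more than a simple merge, here is a hedged sketch of one possible credit-allocation rule: the last touch before the response wins. The file and field names, and the last-touch rule itself, are illustrative assumptions; real programs often need far richer business rules or dedicated models.

```python
# Hedged match-back sketch: join responders to campaign touches, then allocate
# credit with a simple last-touch-before-response rule.
# File names, field names and the rule itself are illustrative assumptions.
import pandas as pd

touches = pd.read_csv("campaign_touches.csv", parse_dates=["touch_date"])
responses = pd.read_csv("responses.csv", parse_dates=["response_date"])

# Keep only touches that happened on or before each response.
matched = touches.merge(responses, on="customer_id")
matched = matched[matched["touch_date"] <= matched["response_date"]]

# The last touch before the response gets the credit ("what worked?").
last_touch = (matched.sort_values("touch_date")
                     .groupby(["customer_id", "response_date"])
                     .tail(1))
print(last_touch["channel"].value_counts())
```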

While you are at it, ask for a cheaper version of “canned” reports, as well, as custom back-end analysis can be even more costly than the modeling job itself, over time. Pre-developed reports may not include all the ROI metrics that you’re looking for (e.g., open, clickthrough and conversion rates, plus revenue and orders per mailed piece, per order, per display, per email, per conversion, etc.). So ask for sample reports upfront.

If you start breaking down all these figures by data source, campaign, time series, model group, offer, creative, targeting criteria, channel, ad server, publisher, keywords, etc., it can become unwieldy really fast. So contain yourself, as no one can understand 100-page reports, anyway. See if the analysts can guide you with such planning, as well. Lastly, if you are so into ROI analysis, get ready to share the “cost” side of the equation with the selected partner. Some jobs are on the marketers.

10. Ongoing Support: Models have a finite shelf life, as all kinds of changes happen in the real world. Seasonality may be a factor, or the business model or strategy may have changed. Fluctuations in data availability and quality further complicate the matter. Basically, assumptions like “all things being equal” only happen in textbooks, so marketers must plan for periodic reviews of models and business rules.

A sure sign of trouble is decreasing effectiveness of models. When in doubt, consult the developers; they may recommend a re-fit or a complete redevelopment of the models. Quarterly reviews would be ideal, but if the cost becomes an issue, start with six-month or yearly reviews, and never go more than a year without any review. Some vendors may offer discounts for redevelopment, so ask for the price quote upfront.

I know this is a long list of things to check, but picking the right partner is very important, as it often becomes a long-term relationship. And you may find it strange that I didn’t even list “technical capabilities” at all. That is because:

1. Many marketers are not equipped to dig deep into the technical realm anyway, and

2. The difference between the most mathematically sound models and the ones from the opposite end of the spectrum is not nearly as critical as other factors I listed in this article.

In other words, even the worst model in the bake-off would be much better than no model, if these other business criteria are well-considered. So, happy shopping with this list, and I hope you find the right partner. Employing analytics is not an option when living in the sea of data.

5 Positioning Ideas When Leading With Price

What works best? Selling product benefits, then revealing the price? Or revealing the price, followed by selling benefits? There are rarely absolute answers, and statistically valid A/B testing in a direct marketing environment will give you the answer that works for your situation. Still, findings of a new study suggest five ways for direct marketers to reveal price.

Neuroscientists and professors from Harvard Business School and Stanford University conducted a study to see if considering price first changed the way the brain coded the value of the product.

The focus of the research was on brain activity when the participant saw the price and product presented together. The researchers were most interested in the area of the brain that deals with estimating decision value (the medial prefrontal cortex), and the area of the brain that has been called the pleasure center, whose activity is correlated with whether a product is viscerally desirable. This pleasure center is called the nucleus accumbens.

Fundamentally, the research indicates there are differences in how a person codes information, based on whether the product has a greater emotional attachment or is more practical.

They found that brain activity did vary depending on whether the product or the price came first. A conclusion of the report is that when the product came first, the decision question seemed to be “Do I like it?” and when the price came first, the question seemed to be “Is it worth it?” Three other points made in the research suggest that showing price first can make a difference:

  • The order of price or product presentation doesn’t matter when the product is desirable, easily understood and consumed (e.g., movies, clothes, electronics), and fulfills an emotional need. If the product is affordable in this instance, then it’s an easy decision no matter how the price is presented.
  • When a product is on sale or bargain-priced, showing price first can positively influence the sale.
  • When the product is practical or useful (more than emotional), showing price first made participants significantly more likely to purchase.

“The question isn’t whether the price makes a product seem better, it’s whether a product is worth its price,” said Uma R. Karmarkar, one of the research authors. “Putting the price first just tightens the link between the benefit you get from the price and the benefit you get from the product itself.”

For direct marketing, copywriting formulas often dictate that the price comes toward the end of a sales message, after the product has been presented—particularly in letters and longer-form copy. This study suggests an A/B test of revealing price first is in order.

If you are going to test revealing price first, here are five positioning ideas:

  1. When a product is on sale, prominently show the price. Use dollars, not percentages. Percentages aren’t easily calculated in the mind (or worse, they are miscalculated in the mind and you risk losing a sale).
  2. Incrementally break down the price. Show it as the cost per day, cost per use, or some other practical way to reveal increments of the price.
  3. Compare the price to an everyday item. One of my most successful direct mail packages included a letter with a headline that said, “For about the cost of a cup of coffee a day, you can have …”
  4. Compare to your competition. If you have a price advantage, show it. If you don’t, then compare at a different level that includes longer product life, more convenience, or other benefits.
  5. Position the price presentation as a cost of not buying now. In other words, show how the price could increase in the future, or the loss that can happen by waiting. This positioning also creates urgency.

It’s important to acknowledge that the research didn’t study emotion-based long copy with storytelling and a unique selling proposition for the product. Using emotion, story and a strong USP before revealing price in a direct marketing environment may be more effective for selling your product. Every situation is different. The only way to conclusively know whether revealing price first will generate a higher response than presenting the sales message first is to A/B test.

Should You Make Your Site Secure for Improved SEO Results?

Just this past month Google confirmed that in the future, its search algorithm would be giving a rankings boost to secure sites. This confirms rumors that have rippled through the search marketing industry for several months. This recent change is part of Google’s continuing efforts toward a more secure Web. Like so many pronouncements from Google, this has forced many site owners to reconsider whether to make their sites secure. Site owners need to carefully evaluate the pros and cons of going secure. It may not be either prudent or cost effective at this time.

When Google made all searches secure and stopped providing site owners with the keywords searchers used to reach their sites, the search giant gave a clear indication of its path toward ensuring a safer, more secure Web environment. Google reasoned that it is protecting the identity of the searcher by not providing the keyword referrer. Some find this claim a bit disingenuous, given that the keyword referrer is still available to users of paid search.

The Pros and Cons—A Short Primer
The single-largest benefit gained by making your site secure is a minor algorithmic boost in Google results. This benefit must be weighed against a number of potential negatives and some steep costs. Secure sites run slower than unsecure sites—all that encryption takes more effort than just delivering an unsecure site. Several years ago, Google announced that site speed was going to figure into the rankings formula. At this time, it is unclear whether the rankings boost from having a secure site will be larger than the penalty for slowness. Google does not reveal the weights of its ranking factors, except for declaring some of them minor. Unless you have made your unsecure site fast and have in place protocols for continuously monitoring and testing your site’s speed, don’t even consider going secure. It would be like adding another brake to an already slow car. Your users and your Google rankings will be negatively impacted. A perceived need to go secure in the future should be the impetus to address existing site speed issues.

Then there is the potential for additional penalties for duplicate content, should redirection and canonicalization schemes prove incomplete. Shifting and redirecting a very large site into a secure environment is a large task that may require remapping thousands of URLs. No matter how good your team is, you should expect leaks and misses; they are practically built into such projects. If your site is well-mapped and redirection and canonicalization are automated, then you may be ready to go secure. If this is not the case, tap the brakes on going secure. You may be creating huge headaches for only minor payback potential.

Did I mention that there are added costs? SSL certificates must be bought and maintained. How often have you gotten a message that a site’s certificate is out of date? You can be sure that Google will take a dim view of sites with expired certificates. Another unnecessary hit! Then, there are the operating costs. Many small businesses rely on payment gateways and do not manage a secure environment, even though they take payments. If your business already has a secure environment in place and you have fully prepared your entire operation for this change, then and only then should you implement a completely secure site. If you are not ready, consider what steps you should take to get ready and begin the process, for we can expect others to follow Google’s lead in making the Web safer and more secure.

Linger Longer: A Branding Imperative

“Summer afternoon—summer afternoon; to me those have always been the two most beautiful words in the English language,” wrote Henry James. I couldn’t agree more. I just love summer. Summer is the time for a new speed. For sauntering and slowing down. For purposefully stretching those extra long afternoons into all sorts of pleasurable outdoor activities like gardening or grilling or just unscheduled hammock time. For three- or four-day long weekends spent with family and friends or just catching up with yourself. For easy everything.

I think brands have a lesson to learn from this time of the year. Summer is the season that encourages lingering. Brands that consciously create space and time for customers to linger within their brand experience win their hearts. Granted, sometimes you want to dash into a store (or website), hunt down your purchase and leave promptly. Other times, a store, a site, an atmosphere is so compelling you want to linger and linger and linger some more.

Terrain is one of those kinds of places. It’s part of the Urban Outfitters family of creative retailers whose stated goal is “to offer a product assortment and an environment so compelling and distinctive that the customer feels an empathetic connection to the brand and is persuaded to buy.”

Terrain was designed purposefully for leisurely strolls through all its “mini-terrains”—eclectic little rooms and areas that beckon customers with all sorts of indoor-outdoor lifestyle products the company hopes you’ll find irresistible. The merchant has waved its magic fairy dust over everything: meals, merchandise assortments and even Web copy to create a menagerie you want to somehow recreate in your own life.

Terrain has elevated lingering to an art form with experiential pauses built into its brand DNA. Both stores have delicious “farm-to-table” restaurants that encourage spontaneous long lunches and Sunday brunches, as well as scheduled events and workshops. Here’s the invitation the Terrain restaurant in Glen Mills, Pa. puts forth:

Share our local, organic meals with close family and friends as you create lasting memories in our charming antique greenhouse. Taking your personal style, interpreting it by our talented culinary team, and presenting it all in our horticultural setting, we’ll create a truly unique experience for you and your guests. We work tirelessly to craft an environment that aesthetically and gastronomically reflects the cycle of the seasons.

President Wendy McDevitt shared this in a Bloomberg interview: “Customers typically spend 1.5 hours browsing Terrain and that can double to three hours if they’re visiting the café and shopping between glasses of wine or lunch. The one thing you can’t get in the cyberworld is the tactile experience and that won’t go away.”

Lingering happens online as well, as you stroll through the site’s three main categories with simple teasers like Garden + Outdoor, House + Home, Jewelry + Accessories. Spend time on Terrain’s site and you’ll want to know more about Branches + Bunches, or what’s in The Reading Room, or what Wanderlust is all about. You are enticed by the plus, and you aren’t disappointed. The Bulletin, Terrain’s eclectic, informative blog, is like a gardening class, cooking class, landscaping class and artist date all rolled into one lovely scroll you can’t help but linger on.

Does your overall product experience invite lingering? Is it a sensory, tactile experience? What unusual product assortment combinations might you create to entice your customers to linger longer within your brand?

Chicken or the Egg? Data or Analytics?

I just saw an online discussion about the role of a chief data officer, whether it should be more about data or analytics. My initial response to that question is “neither.” A chief data officer must represent the business first. And I had the same answer when such a title didn’t even exist and CTOs or other types of executives covered that role in data-rich environments. As soon as an executive with a seemingly technical title starts representing the technology, that business is doomed. (Unless, of course, the business itself is about having fun with the technology. How nice!)

Nonetheless, if I really have to pick just one out of the two choices, I would definitely pick the analytics over data, as that is the key to providing answers to business questions. Data and databases must be supporting that critical role of analytics, not the other way around. Unfortunately, many organizations are completely backward about it, where analysts are confined within the limitations of database structures and affiliated technologies, and the business owners and decision-makers are dictated to by the analysts and analytical tool sets. It should be the business first, then the analytics. And all databases—especially marketing databases—should be optimized for analytical activities.

In my previous columns, I talked about the importance of marketing databases and statistical modeling in the age of Big Data; not all depositories of information are necessarily marketing databases, and statistical modeling is the best way to harness marketing answers out of mounds of accumulated data. That begs for the next question: Is your marketing database model-ready?

When I talk about the benefits of statistical modeling in data-rich environments (refer to my previous column titled “Why Model?”), I often encounter folks who list reasons why they do not employ modeling as part of their normal marketing activities. If I may share a few examples here:

  • Target universe is too small: Depending on the industry, the prospect universe and customer base are sometimes very small in size, so one may decide to engage everyone in the target group. But do you know what to offer to each of your prospects? Customized offers should be based on some serious analytics.
  • Predictive data not available: This may have been true years back, but not in this day and age. Either there is a major failure in data collection, or collected data are too unstructured to yield any meaningful answers. Aren’t we living in the age of Big Data? Surely we should all dig deeper.
  • 1-to-1 marketing channels not in plan: As I repeatedly said in my previous columns, “every” channel is, or soon will be, a 1-to-1 channel. Every audience is secretly screaming, “Entertain us!” And customized customer engagement efforts should be based on modeling, segmentation and profiling.
  • Budget doesn’t allow modeling: If the budget is too tight, a marketer may opt for some software solution instead of hiring a team of statisticians. Remember that cookie-cutter models out of software packages are still better than someone’s intuitive selection rules (i.e., someone’s “gut” feeling).
  • The whole modeling process is just too painful: Hmm, I hear you. The whole process could be long and difficult. Now, why do you think it is so painful?

Like a good doctor, a consultant should be able to identify root causes based on pain points. So let’s hear some complaints:

  • It is not easy to find “best” customers for targeting
  • Modelers are fixing data all the time
  • Models end up relying on a few popular variables, anyway
  • Analysts are asking for more data all the time
  • It takes too long to develop and implement models
  • There are serious inconsistencies when models are applied to the database
  • Results are disappointing
  • Etc., etc…

I often get called in when model-based marketing efforts yield disappointing results. More often than not, the opening statement in such meetings is that “The model did not work.” Really? What is interesting is that in more than nine out of 10 such cases, the models are the only elements that seem to have been done properly. Everything else—from pre-modeling steps, such as data hygiene, conversion, categorization and summarization; to post-modeling steps, such as score application and validation—often turns out to be the root cause of all the troubles, resulting in the pain points listed here.

When I speak at marketing conferences about this subject of the “model-ready” environment, I always ask if there are statisticians and analysts in the audience. Then I ask what percentage of their time goes into non-statistical activities, such as data preparation and remedying data errors. The absolute majority of them say they spend 80 percent to 90 percent of their time fixing the data, devoting the rest to the model development work. You don’t need me to tell you that something is terribly wrong with this picture. And I am pretty sure that none of those analysts got their PhDs and master’s degrees in statistics to spend most of their waking hours fixing the data. Yeah, I know from experience that, in this data business, the last guy who happens to touch the dataset always ends up being responsible for all errors made to the file thus far, but still. No wonder it is often quoted that one of the key elements of being a successful data scientist is programming skill.

When you provide datasets filled with unstructured, incomplete and/or missing data, diligent analysts will devote their time to remedying the situation and making the best out of what they have received. I myself often tell newcomers that analytics is really about making the best of what you’ve got. The trouble is that such data preparation work calls for a different set of skills that have nothing to do with statistics or analytics, and most analysts are not that great at programming, nor are they trained for it.

Even if they were able to create a set of sensible variables to play with, here comes the bigger trouble: what they have just fixed is merely a “sample” of the database, when the models must be applied to the whole thing later. Modern databases often contain hundreds of millions of records, and no analyst in his or her right mind uses the whole base to develop any models. Even if the sample is as large as a few million records (overkill, for sure), that would hardly be the entire picture. The real trouble is that no model is useful unless the resultant model scores are available on every record in the database. It is one thing to fix a sample of a few hundred thousand records. Now try to apply that model algorithm to 200 million entries. You see all those interesting variables that analysts created and fixed in the sample universe? All of that must be redone in the real database with hundreds of millions of lines.

Sure, it is not impossible to include all the instructions of variable conversion, reformat, edit and summarization in the model-scoring program. But such a practice is the No. 1 cause of errors, inconsistencies and serious delays. Yes, it is not impossible to steer a car with your knees while texting with your hands, but I wouldn’t call that the best practice.

That is why marketing databases must be model-ready, where sampling and scoring become a routine with minimal data transformation. When I design a marketing database, I always put the analysts on top of the user list. Sure, non-statistical types will still be able to run queries and reports out of it, but those activities should be secondary as they are lower-level functions (i.e., simpler and easier) compared to being “model-ready.”

Here is a list of prerequisites for being model-ready (which will be explained in detail in my future columns):

  • All tables linked or merged properly and consistently
  • Data summarized to consistent levels such as individuals, households, email entries or products (depending on the ranking priority by the users)
  • All numeric fields standardized, where missing data and zero values are separated
  • All categorical data edited and categorized according to preset business rules
  • Missing data imputed by standardized set of rules
  • All external data variables appended properly

Basically, the whole database should be as pristine as the sample datasets that analysts play with. That way, sampling should take only a few seconds, and applying the resultant model algorithms to the whole base would simply be the computer’s job, not some nerve-racking, nail-biting, all-night babysitting suspense for every update cycle.
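
As a small illustration of those prerequisites, the sketch below separates missing values from legitimate zeros before imputation and edits a categorical field to preset rules. The file and field names, and the rules themselves, are hypothetical examples of this kind of standardization, not a recipe.

```python
# Illustrative "model-ready" standardization.
# File names, field names and business rules are hypothetical examples.
import numpy as np
import pandas as pd

db = pd.read_csv("marketing_db.csv")

# Numeric fields: keep missing data distinct from legitimate zeros by
# flagging missingness before imputing a standardized value.
db["spend_missing"] = db["annual_spend"].isna().astype(int)
db["annual_spend"] = db["annual_spend"].fillna(db["annual_spend"].median())

# Categorical data edited according to preset business rules,
# with a catch-all bucket for anything outside the approved list.
valid_channels = {"mail", "email", "web", "phone"}
db["channel"] = np.where(db["channel"].isin(valid_channels), db["channel"], "other")
```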

In my co-op database days, we designed and implemented the core database with this model-ready philosophy, where all samples were presented to the analysts on silver platters, with absolutely no need for fixing the data any further. Analysts devoted their time to pondering target definitions and statistical methodologies. This way, each analyst was able to build about eight to 10 “custom” models—not cookie-cutter models—per “day,” and all models were applied to the entire database of more than 200 million individuals at the end of each day (I hear that they are even more efficient these days). Now, for the folks who are accustomed to a 30-day model implementation cycle (I’ve seen cycles as long as six months), this may sound like total science fiction. And I am not even saying that all companies need to build and implement that many models every day, as that would hardly be a core business for them, anyway.

In any case, this type of practice has been in use since way before the words “Big Data” were even uttered by anyone, and I would say that such discipline is required even more desperately now. Everyone is screaming for immediate answers to their questions, and the questions should be answered in the form of model scores, which are the most effective and concise summations of all available data. This so-called “in-database” modeling and scoring practice starts with a “model-ready” database structure. In the upcoming issues, I will share the detailed ways to get there.

So, here is the answer for the chicken-or-the-egg question. It is the business posing the questions first and foremost, then the analytics providing answers to those questions, where databases are optimized to support such analytical activities including predictive modeling. For the chicken example, with the ultimate goal of all living creatures being procreation of their species, I’d say eggs are just a means to that end. Therefore, for a business-minded chicken, yeah, definitely the chicken before the egg. Not that I’ve seen too many logical chickens.

3 Things You Can Do Now to Make an ‘Earthly’ Difference

Readers of my blog know my distaste for financial service companies, utilities and other brands that admonish me in my mailbox to switch to digital statements “to help save the environment,” “save trees,” “pay it green” and other marketing hyperbole with absolutely no scientific backing.

I’m waiting for three things.

First, I’d love some examples—and you may post them in the comments section—of brands that are more honest and forthcoming about why they want their customers to switch to digital. It saves the organizations behind these brands money—money that either gets returned to the customer in lower prices or better service (right?), or (more likely) goes to the bottom line to improve margins. (Sorry if I’m too cynical here; it must be the prolonged winter-like weather.)

Second, I look forward to the Federal Trade Commission presenting an enforcement action that helps to educate businesses (and consumers) that the “print vs. digital” positioning of “being green” is misleading, if not deceptive or untruthful. Such a case would underscore the latest version (2012) of the FTC Green Guides and its substantiation requirement for any and all environmental marketing claims.

Third, I look forward to an independent apples-to-apples, cross-channel, life-cycle analysis of the “average” mail and digital communication in the United States. It may yet happen, but until then, we are left with helpful, but limited, research on paper, print, mail and electronics life-cycle inventories and analyses. Each of them has its own set of assumptions, scope and qualifications.

We don’t need the third event to happen, however, to take some helpful action on the mail side of the equation … right now. Here are three steps to consider:

  1. Educate yourself and follow the DMA “Green 15.” These 15 principles and practices apply to data hygiene and management, mail design and production, paper procurement, packaging and fulfillment, and recycling collection. I understand from contacts that a “digital” version may be in the works! Stay tuned.
  2. Label mail, catalogs, inserts and paper packaging to encourage recycling collection. That “junk mail” moniker is so yesterday. Discarded mail—after the consumer has used it—should be recycled. Close to two-thirds of municipalities in the United States now offer local recycling options for “mixed paper”—the threshold at which the FTC allows recycling collection labels and “recyclable” claims. By using the DMA’s “Recycle Please” logo, mail marketers can help increase consumer awareness of, and participation in, these programs without hurting response. Visit www.recycleplease.org for more information and to download the latest version of the logo (which is available to DMA-member agencies, brands and organizations only).
  3. Use the FTC Green Guides—newly revised in 2012—to guide any environmental claims you may make.
  4. Extra Credit! Enter the 2013 DMA International ECHO Awards competition and its Green Marketing Award. The campaign does not need to be about an environmental product or cause—it only needs to demonstrate adherence to the DMA Green 15 in business action! The DMA Green 15 and Green ECHO are not about Earth Day and environmentalism—they’re about everyday marketing planning and decision-making that show efficiency and effectiveness in marketing: strategy, creative and response. The deadline is May 3—and agencies and brands may enter here: http://dma-echo.org/enter.jsp.

Now, if I only knew the carbon footprint of my blog. Hopefully, some of the information conveyed here will help mitigate the impact!

Get Ready for 2013: Customer Acquisition Emails

Acquiring long-term platinum customers is much harder today than it was even a decade ago. The globalization of the marketplace created an environment where people have access to multiple choices for every product or service they want to buy; a simple search on Google for any item or service will reveal a multitude of choices at a variety of prices.

This availability has created an environment where long-term customer loyalty has been replaced by hit-and-run shoppers. The only way to offset this is to create a relationship with your customers that makes them want to stay with your company even when the competition offers lower prices and faster service.

Relationships begin at the first contact point. Prospects who sign up for your email list have different expectations than your customers. Sending them the same promotional emails may convert a few, but it will not create the foundation for a long-term lasting relationship. People need to know they’re valued. The best way to communicate that is by creating customized emails designed to woo prospects into becoming customers. The same technological advances that increased your competition also make it easier and more economical to connect with people.

Every email marketing strategy needs a triggered, systematic campaign designed to convert prospects into customers. Most companies have a welcome email that triggers automatically when someone subscribes to their email list, but very few businesses follow up with additional emails that communicate information about the company’s products and services. It’s as if they presume that everyone knows everything there is to know about their company.
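To make “triggered and systematic” concrete, here is a minimal sketch of what such a campaign schedule might look like, with hypothetical template names and an illustrative cadence; the actual timing and content should come out of your own testing:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Step:
    days_after_signup: int  # when this email triggers, relative to signup
    template: str           # hypothetical template name in your email platform

# An illustrative four-step welcome series, not a recommended cadence.
WELCOME_SERIES = [
    Step(0, "welcome_and_preferences"),  # thank subscribers, ask preferences
    Step(3, "products_and_services"),    # introduce what the company offers
    Step(7, "social_proof"),             # testimonials and product reviews
    Step(14, "first_purchase_offer"),    # nudge prospects toward conversion
]

def emails_due(signup_date: date, today: date) -> list[str]:
    """Return the templates that should trigger for a subscriber today."""
    return [
        step.template
        for step in WELCOME_SERIES
        if signup_date + timedelta(days=step.days_after_signup) == today
    ]

# Example: a subscriber who signed up three days ago gets the second email.
print(emails_due(date(2013, 1, 1), date(2013, 1, 4)))  # ['products_and_services']
```

In practice, a job like this would run daily against your subscriber list and hand each due template to whatever sending service you use.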

People subscribe to email lists for a variety of reasons. Some are simply looking for discount offers, others want to learn more about the products and services. Failure to take advantage of the opportunity to share information with people who have indicated they want to know more is a waste. The cost is minimal. The potential return is huge. If you do not have a customized prospect conversion strategy, you are squandering an opportunity to build a foundation for long-term customer loyalty.

It’s almost impossible to identify the prospects with long-term customer potential. The only information you have available is the original source and what people choose to share. Requiring additional information to better qualify subscribers is counterproductive—long sign-up forms yield fewer subscribers. The objective of your sign-up form is to gain permission to email prospects. The triggered emails following subscription can be used to gather additional information as well as to convert subscribers.

Start with a welcome email that thanks people for subscribing. Ask if they will share their preferences so you only send relevant emails. Be very careful with this: Do not ask what subscribers want if you are not going to honor their wishes, as that will alienate your prospects. If you choose to ask questions, limit them to five. Keep them on one page, above the fold, with the save button in clear view. People’s eyes start to glaze over when they see a long list of questions.

The emails following the welcome letter need to build trust, provide relevant information and match the preferences indicated earlier. Don’t presume your prospects know about your top-notch service, liberal return policy or special promotions. If they do, the emails will serve as a reminder. If they don’t, providing the information is a service. Including customer testimonials and product reviews provides social proof and helps establish trust.

Here are some do’s and don’ts for creating a triggered welcome email campaign:

  • Do include an added bonus in every email. This can be as simple as providing tips. For example, a B-to-C business selling cookware could offer recipes and cooking tips. A B-to-B company selling office supplies could offer productivity tips.
  • Don’t overwhelm new subscribers by bombarding them with emails. Test different delivery times and spacing to find the best strategy.
  • Do provide links to your website and additional information in every email. Always give people a place to go, and an easy way to get there, if they want more.
  • Don’t include icons for social media sites without providing a call to action. Give people a reason to connect with you on the other channels.
  • Do test everything. What works for your competitor may fail for you and vice versa.
  • Don’t think of your welcome email campaign as “set it and forget it” marketing. Strive for continuous improvement to maximize your return.

3 Ways Rank-and-File Marketers Matter to the C-Suite in a Brave New Marketing World

A couple weeks ago in my post titled “Wanted: Data-Driven, Digital CMOs,” I wrote about the enormous pressure CMOs are finding themselves under as the world digitizes, requiring a new type of leader, one who understands and feels comfortable in the digital space. The result of this changing dynamic has been a dramatic shortening of your average CMO’s tenure.

I’m not the first to observe this trend—it’s been covered in many places over the past few months, including this great article from Fast Company. In response to this post, however, many colleagues have asked me “What does this mean for the rank-and-file marketer?” I thought this was an excellent question; one I’ve not seen discussed elsewhere.

By any standard, it’s certainly not an easy time to be a marketer. Over the past decade, nearly everything we know has changed, as new technologies have arrived in dizzying fashion, upending the established order. The result for most firms has ranged from confusion to clarity, from paralysis to paroxysm—very frequently all at the same time! Working in an environment like this is definitely no picnic, as firms flail around like hurt animals trying to figure out what to do: reducing head count, hiring, outsourcing, in-sourcing, you name it.

It may not be an easy time to be a marketer, but I think it’s a good time. The reason is that marketing has evolved in three very important ways:

1. Marketing has become data driven—in the digital age, information is power. Contemporary marketing requires learning about who your customers are, what they look like, what attributes and affinities they share, and so on. Success means becoming fluent in the new language of the digital age—understanding what terms like “impressions,” “clicks,” “likes” and “followers” mean. But that’s not all: Success requires a deep understanding of and familiarity with campaign analytics, what they mean and signify, and how to interpret and improve upon them.

2. Marketing is technology-focused—it’s no secret that a large portion of marketers’ budgets are now being allocated to digital. Anyone who’s worked in the digital marketing arena knows that success in the space means understanding the new technology ecosystem. The other major technology trend is the fragmentation of the IT infrastructure as the SaaS/Cloud model gains traction. In this new service model, it’s marketing that’s mostly responsible for buying, using and maintaining these new tools.

3. Marketing is highly operational in nature—unlike the brand strategists of yesteryear, today’s marketing department is almost entirely focused on operations, with a heavy emphasis being placed on creating, testing and launching, tracking and optimizing numerous marketing campaigns across various channels using different tools.

In this new environment, the DNA of the rank-and-file marketer has changed radically, morphing from that of a brand steward into, well, something else entirely. Any way you look at it, today’s marketers are highly trained and qualified specialists, possessing a wide range of skills and knowledge, which can take months, if not years, to master.

Moreover, success in any given marketing role requires a deep understanding of various marketing program details and familiarity with the firm’s marketing technology, systems and tools, not to mention the prevailing corporate culture. All in all, it’s a tall order.

Over the years, I’ve consulted with dozens of large firms, and I can tell you firsthand that most marketing leadership stakeholders are not digital people. In other words, the only people in the firm who really “get” what the firm’s marketing department is actually doing are the marketers themselves. Interesting, huh?

So what does this all mean? Well, in coming years I foresee a shift in the balance of power as the old generation of marketers gives way to a new generation of younger digital specialists. Now, of course, one generation passing the mantle to the next is the natural order of things. But, based on what’s going on, I see this trend accelerating dramatically in coming months and years, as those who don’t get it are replaced by those who do.

If you’re a marketer, all of this is undoubtedly good news, meaning you’re not only much more important than you think, but your trip up the proverbial corporate ladder is that much shorter. So go forth, young man (or woman), it’s a brave new world!

Any questions or feedback? As usual, I’d love to hear it.

—Rio

How Evolving Mobile Behaviors are Raising the Stakes for Marketers

While none would argue that 2011 was the year of the mobile app, marketers have been hearing more noise about the mobile web as a cross-device alternative to apps that are downloaded and installed. The reality isn’t so clear-cut.

If anything, the division of the mobile smartphone space into iOS and Android, as well as demographic and usage patterns on these platforms, means that targeting and developing effective mobile experiences just got a whole lot harder. But this is translating into more options for mobile marketers in 2012.

When you look at actual user behavior on smartphones, you might wonder how the mobile web would effectively fit in at all. The focus for both iOS and Android users is on app usage versus mobile web access. Apps have become so successful that they’re moving us away from the web in general. The reasons are rather straightforward:

1. Curated content apps have become primary experiences. Whether public or ad supported, curated content sources (e.g., NPR and The Wall Street Journal) have found the niche within application environments that move users away from the web and directly toward branded experiences they trust as either primary or authoritative sources of information.

2. Excerpted content typically satisfies curiosity. Even more popular apps don’t necessarily translate to more mobile web activity. This has always been the fear with content syndication in general, but combine it with a preference for a more focused and curated experience and you get a further erosion of mobile web traffic.

3. The ease of use of, and established reliance on, app stores. The effectiveness of the app store model, which now extends beyond mobile into desktop environments, further reinforces the shift away from web search as the first stop for finding functionality.

Websites are driving traffic to apps instead of presenting mobile-optimized versions of themselves. Many sites could take advantage of a visit from a mobile device to optimize the web experience. Instead, consider driving those visitors to download apps that provide a specific, focused subset of content and functionality, and focus on creating a controlled and curated environment for experiencing content.

Further complicating matters are the differences in demographics and behavior between iOS and Android users. Android users tend to be heavier app users than iOS users (by a significant percentage), according to recent Fiksu research.

According to a recent Hunch.com survey, gender balances, income levels, age ranges and other important segmenting criteria also differ significantly between audiences. Certainly there’s enough to merit taking a closer look at these considerations when designing mobile experiences for these platforms. Android adoption rates make it clear that supporting Android isn’t an option; it’s a requirement in order to reach as broad a mobile and tablet audience as possible.

Tablets are an important area where the mobile web, and the higher percentage of mobile web usage among iOS users, comes into play. Tablets offer a superior web browsing experience. In addition, differing usage patterns and behaviors mean that tablet-based experiences can be deeper and richer than mobile-optimized executions and will track close to desktop browsing.

What does all of this mean for mobile marketers and advertisers in 2012? Android’s broader audience and superior mobile ad performance will make it a focus for mobile display advertising efforts. Apple’s advertising formats are of primary interest within the context of specific applications where their inclusion and application usage merit the investment. In-app advertisement effectiveness becomes even more critical to understand and measure in this context, as those investments tend to be higher than broader mobile ad network buys.

Social platform mobile integration efforts need to be watched closely. Emerging apps and potential ad integration capabilities are key focal points for marketers already heavily invested in social platforms or for those looking to leverage location-enabled social networks more heavily.

Tablet and touch-optimized experiences via the mobile web will be critical to support the heavier skew of browser usage among tablet owners. Give specific consideration to the ability to leverage touch-enabled HTML5 implementations and the superior browsers offered by these platforms.

2012 will certainly be the year when marketers’ attention is firmly focused on mobile, but in reality “mobile” represents separate and, to some extent, distinct experiences — e.g., mobile apps, mobile websites and tablet-optimized versions of both.

Virtual Worlds Marketing Is Kids’ Stuff

Remember virtual worlds? You know, those 3-D computer environments where users are represented on screen as themselves or as made-up characters and interact in real time with other users?

A few years ago, these online “other worlds” were the place to be for brand marketers. You couldn’t get through the day without reading about how such big brands as Cisco, Dell, Starwood Hotels and Toyota were plunking down a big percentage of their marketing budgets to be a part of the buzz — and hopefully get some returns.

These companies ran campaigns in these online worlds to build their brand names, test products and in some cases even sell digital merchandise.

The buzz around virtual worlds marketing has died down for sure. Many of these companies didn’t get the results they were looking for. The virtual worlds didn’t either. Second Life, for example, has even switched its business focus to training, promoting itself as a place where companies can hold meetings, conduct training, build product prototypes or simulate business situations “in a safe learning environment,” according to its Web site.

But despite the changes, virtual worlds marketing should not be ignored. Know why? Kids are now visiting these sites regularly, albeit not Second Life.

In 2008, eight million children and teens in the U.S. visited virtual worlds on a regular basis, according to a recent eMarketer article. What’s more, the online research firm projects that number will surpass 15 million by 2013. The article references an eMarketer report, Kids and Teens: Growing Up Virtual, which provides some more noteworthy findings.

The article estimates 37 percent of children ages 3 to 11 use virtual worlds at least once a month. By 2013, it projects that 54 percent will. In addition, 18 percent of teens will visit virtual worlds on at least a monthly basis this year; by 2013, that figure will rise to 25 percent.

What’s more, the article cited research from Virtual Worlds Management, which found that as of January, 112 virtual worlds aimed at children younger than 18 were already up and running worldwide, while another 81 were in development.

As a result, virtual worlds still offer tremendous opportunities for engagement, the article points out, such as offering marketers the ability to gain new insights into how consumers perceive and interact with their brands.

So, if you’re marketing to kids, why not give virtual worlds a try — especially those targeted to kids — either again or for the first time? You’ll be able to reach a captive audience with a unique marketing approach. You may even get a real ROI this time.