Politics Aside, At Least Folks Are Focused on the USPS – Let’s Keep It Up

The United States Postal Service (USPS) is a vital institution in our economy, democracy, and history – and future. It provides for confidential communication in a timely and affordable manner, paid for entirely by ratepayers rather than taxpayers. And, while we were on summer vacation, the ugly state of today’s politics brought it to the top of the news cycle.

Well, maybe that’s a good thing.

The U.S. mail stream also is a vehicle for millions of properly cast votes during primary and general elections, a process that even President Trump’s campaign knows is true, and in the frenzy of this moment, that reality must be promoted and protected. And although the 2020 U.S. Census is primarily an online event this year, mail notices have gone to all residential addresses to drive populations to the counting website. Earlier this month, the Census reported that self-reporting has accounted for an estimated 63.3% of all U.S. households thus far – so now field operations are underway to count the rest before Sept. 30.

As citizens, being counted in the Census ranks up there with voting and serving on juries. As non-citizens seeking citizenship, being counted may be the only voice one has at all. Many of us in direct and data marketing also know how crucial Census commercial products are to business. For all the billions spent on targeted advertising, and billions more on general advertising, understanding Census statistical areas provides valuable insights and informs strategies.

All of this only underscores the role USPS plays in executing these functions. If dirty politics is what it takes to call attention to USPS operations and “fix” what needs fixing at the Postal Service, then so be it. Floating loan guarantees is a crucial start, in my humble opinion. A reinvigorated attempt during the next Congress at a postal reform bill might help, too, to soften the blow from the 2006 law – with its outrageous healthcare pre-funding mandates, for one.

It’s wrong to summarily dismiss the Postmaster General or his intentions. If his goal is to increase USPS efficiencies, then all parties can rally around that objective – as long as service levels are maintained. Privatization, however, is likely a non-starter, and may even require Constitutional changes. If the goal – as some critics maintain – is to throw an election, let’s uncover the truth of it. At the very least, many states have been conducting elections by mail for years with integrity – as Oregon’s Republican Secretary of State maintains. For now, the Postmaster General has halted mail processing cuts, with his stated goal of long-term sustainability, until after the November election.

Direct Mail – With Integrity

So what does all this mean to direct mailers? I love John Miglautsch’s message: “Direct mail ain’t dead.” Miglautsch says too many marketers are still prone to “digital delusional” thinking that digital can replace direct mail altogether. (Please, folks, test first – you’ll see the mail moment is real.) The Winterberry Group in January predicted a small uptick in direct mail spending in 2020 to $41.6 billion, but reported in June a Q2 drop in USPS mail volume of 33%. It’s clear that at least temporarily, marketers slashed direct mail budgets much more than their digital counterparts.

Yet direct mail has supreme advantages: It’s personalized, and free from identity challenges that still exist in digital. (See the latest Winterberry Research on data spending on digital identity management.) It’s secure and confidential. Direct mail also is a direct relationship – there are no intermediary infrastructures where audience, measurement, and attribution data can be unavailable to the advertiser. In many, many ways, direct marketers hope for an addressable digital media future that matches the offline addressable direct mail realities of today. We’re making progress in addressable media across all channels, but we’re not there yet.

From a direct mail perspective, perhaps the best contribution of digital is that (1) it has taught more U.S. households to shop direct; and (2) it has lessened competition in the mailbox. The two media work in tandem powerfully. Less clutter in the physical mailbox opens the opportunity for increased response. All this assumes, however, that direct mail delivery can be predicted in-home reliably. That’s why we cannot monkey around with USPS service standards.

So fill out your Census form, if you haven’t already. Vote in the November election. And make sure USPS (and direct mail advertising) is getting the attention – and protection – commensurate to its powerful contribution to our nation.

Models Are Built, But the Job Isn’t Done Yet

In my line of business – data and analytics consulting and coaching – I often recommend some modeling work when confronted with complex targeting challenges. Through this series, I’ve shared many reasons why modeling becomes a necessity in data-rich environments (refer to “Why Model?”).

The history of model-based targeting goes back to the 1960s, but what is the number one reason to employ modeling techniques these days? We often have too much information, way beyond the cognitive and arithmetical capacities of our brains. Most of us mortals cannot effectively consider more than two or three variables at a time. Conversely, machines don’t have such limitations when it comes to recognizing patterns among countless data variables. Subsequent marketing automation is just an added bonus.

We operate under a basic assumption that model-based targeting (with deep data) should outperform man-made rules (built on a handful of data points). At times, however, I get calls when campaign results prove otherwise. Sometimes campaign segments selected by models show worse response rates than randomly selected test groups do.

When such disappointing results happen, most decision makers casually say, “The model did not work.” That may be true, but more often than not, I find that something went wrong “before” or “after” the modeling process. (Refer to “Know What to Automate With Machine Learning”, where I list major steps concerning the “before” of model-based targeting).

If the model is developed in an “analytics-ready” environment where most input errors are eradicated, then here are some common mishaps in post-modeling stages to consider.

Mishap #1: The Model Is Applied to the Wrong Universe

A model algorithm is nothing but a mathematical expression of the differences between target and comparison universes. Yes, setting up the right target is key to success in any modeling, but defining a proper comparison universe is equally important. And the comparison group must represent the campaign universe to which the resultant model is applied.

Sometimes such universes are defined by a series of pre-selection rules before the modeling even begins. For example, the campaign universes may be set by region (or business footprint), gender of the target, availability of email address or digital ID, income level, home ownership, etc. Once set, the rules must be enforced throughout the campaign execution.

What if the rules that define the modeling universe are even slightly different from the actual campaign universe? The project may be doomed from the get-go.

For example, do not expect that models developed within a well-established business footprint will be equally effective in relatively new prospecting areas. Such expansion calls for yet another set of models, as target prospects are indeed in a different world.

If there are multiple distinct segments in the customer base, we often develop separate models within each key segment. Don’t even think about applying a model developed in one specific segment to another, just because they may look similar on the surface. And if you do something like that, don’t blame the modeler later.

Mishap #2: The Model Is Used Outside Design Specification

Even in the same modeling universe, we may develop multiple types of models for different purposes. Some models may be designed to predict future lifetime value of customers, while others are to estimate campaign responsiveness. In this example, customer value and campaign responsiveness may actually be inversely related (e.g., potential high value customers less likely to be responsive to email campaigns).

If multiple response models are built for specific channels, do not use them interchangeably. Each model should be describing distinct channel behaviors, not just general responsiveness to given offers or products.

I’ve seen a case where a cruise ship company used an affinity model specifically designed for a seasonal European line for general purposes in the name of cost savings. The result? It would have been far more cost-effective to develop another model than to deal with the fallout from ineffective campaigns. Modeling cost is often a small slice of the whole pie of campaign expenses. Don’t get stingy on analytics, and call for help when in doubt.

Mishap #3: There Are Scoring Errors

Applying a model algorithm to a validation sample is relatively simple, as such samples are not really large. Now, try to apply the same algorithm to over 100 million potential targets. You may encounter all kinds of performance issues caused by the sheer volume of data.

Then there are more fundamental errors stemming from the database structure itself. What if the main database structure is different from that of the development sample? That type of discrepancy – which is very common – often leads to disasters.

Always check whether anything is different between the development samples and the main database (a minimal consistency-check sketch follows this list):

  • Database Structure: There are so many types of database platforms, and the way they store simple transaction data may be vastly different. In general, to rank individuals, each data record must be scored on an individual level, not transaction or event levels. It is strongly recommended that data consolidation, summarization, and variable creation be done in an analytics-friendly environment “before” any modeling begins. Structural consistency eliminates many potential errors.
  • Variable List/Names: When you have hundreds, or even thousands, of variables in the database, there will be similar-sounding names. I’ve seen many different variable names that may represent “Total Individual Dollar Amount Past 12-month,” for example. It is a common mistake to use the wrong data field in the scoring process.
  • Variable Values: Not all similar-sounding variables have similar values in them. For example, the ever-so-popular “Household Income” may contain dollar values in thousand-dollar increments, or pre-coded values that look like letters. What if someone changed the grouping definition of such binned variables? It would be a miracle if the model scores came out correctly.
  • Imputation Assumptions: There are many ways to treat missing values (refer to “Missing Data Can Be Meaningful”). Depending on how they were transformed and stored, even missing values can be predictable in models. If missing values are substituted with imputed values, it is absolutely important to maintain their consistency throughout the process. Mistreatment of missing values is often the main cause for scoring errors.
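To make these checks concrete, here is a minimal sketch in Python with pandas. It assumes both the development sample and the scoring universe are available as DataFrames; the column names, model variable list, and imputation defaults are purely illustrative.

```python
import numpy as np
import pandas as pd

def check_scoring_consistency(dev: pd.DataFrame, prod: pd.DataFrame,
                              model_vars: list, impute_defaults: dict) -> list:
    """Compare the development sample against the scoring database
    before applying a model algorithm. Returns a list of warnings."""
    warnings = []

    # Variable list/names: every model input must exist in the scoring database.
    missing = [v for v in model_vars if v not in prod.columns]
    if missing:
        warnings.append(f"Variables missing from scoring database: {missing}")

    for v in model_vars:
        if v in missing:
            continue
        # Variable values: types and observed ranges should roughly agree.
        if dev[v].dtype != prod[v].dtype:
            warnings.append(f"{v}: dtype {dev[v].dtype} (dev) vs {prod[v].dtype} (prod)")
        if np.issubdtype(dev[v].dtype, np.number) and np.issubdtype(prod[v].dtype, np.number):
            if prod[v].min() < dev[v].min() or prod[v].max() > dev[v].max():
                warnings.append(f"{v}: production values fall outside the development range")
        # Imputation assumptions: the same missing-value default must be applied everywhere.
        if v in impute_defaults and prod[v].isna().any():
            warnings.append(f"{v}: unimputed missing values found; expected default "
                            f"{impute_defaults[v]}")
    return warnings

# Toy example with made-up fields.
dev = pd.DataFrame({"income_k": [45, 60, 120], "tenure_months": [3, 24, 60]})
prod = pd.DataFrame({"income_k": [50, 999], "tenure_months": [12, None]})
for w in check_scoring_consistency(dev, prod,
                                   model_vars=["income_k", "tenure_months"],
                                   impute_defaults={"tenure_months": 0}):
    print("WARNING:", w)
```

A report like this, run as a gate before any scoring job touches the full database, catches most of the discrepancies listed above while they are still cheap to fix.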

Mishap #4: The Nature of the Data Has Shifted Significantly

Data values change over time due to outside factors. For instance, if there is a major shift in the business model (e.g., business moving to a subscription model), or a significant change in data collection methods or vendors, consider that all the previous models are now rendered useless. Models should be predictors of customer behaviors, not reflections of changes in your business.

Mishap #5: Scores Are Tampered With After the Fact

This one really breaks my heart, but it happens. I once saw a user in a major financial institution unilaterally change the ranges of model decile groups after observing significant fluctuations in model group counts. As you can imagine by now, uneven model group counts are indeed revealing serious inconsistencies caused by any of the factors that I mentioned thus far. You cannot tape over a major wound — just bite the bullet and commission a new model when you see uneven or inconsistent model decile counts.

Mishap #6: There Are Selection Errors

When campaign targets are selected based on model scores, users must be fully aware of how those scores work. If the score is grouped into model groups 1 through 10, is the ideal target “1” or “10”?

I’ve seen cases where the campaign selection was completely off the mark, as someone sorted the raw score in ascending order rather than descending order, pushing the worst prospects to the top. But I’ve also seen errors in documentation or judgment, as it can be genuinely confusing to figure out which group is “better.”

I tend to put things on a 0-9 scale when designing a series of personas or affinity models to avoid confusion. If score groups range from 0 to 9, the user is much less likely to assume that “zero” is the best score. Without a doubt, a reversed score is far worse than not using the model at all.
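As a minimal illustration (Python with pandas), here is how an explicit 0-9 grouping keeps the selection honest, and how the ascending-sort mishap quietly flips the audience. The score values and the “higher raw score is better” convention are hypothetical.

```python
import pandas as pd

# Toy scored file; by stated convention, a higher raw score means a better prospect.
scored = pd.DataFrame({
    "customer_id": range(1, 11),
    "score": [0.91, 0.15, 0.77, 0.42, 0.88, 0.05, 0.63, 0.29, 0.51, 0.70],
})

# Group scores on a 0-9 scale where 9 is the best group, so no one is
# tempted to assume that "zero" means the top of the list.
scored["group_0_to_9"] = pd.qcut(
    scored["score"].rank(method="first"), q=10, labels=range(10)
).astype(int)

# Selecting the "top three groups" now means groups 9, 8, and 7.
selection = scored[scored["group_0_to_9"] >= 7]
print(selection.sort_values("score", ascending=False))

# The classic mishap: sorting the raw score ascending and taking the head of the
# file would push the worst prospects to the top of the mail tape.
wrong = scored.sort_values("score", ascending=True).head(3)
print("Wrong selection:", wrong["customer_id"].tolist())
```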

Final Thoughts

After all, the model algorithm itself can be wrong, too. Not all modelers are equally competent, and machine-learning is only as good as the analyst who originally set it up. Of course, you must turn that stone when investigating bad results. But you should trace all pre- and post-modeling steps, as well. After years of such detective work, my bet is firmly on errors outside the modeling processes, unless the model validation smells fishy.

In any case, do not entirely give up on modeling just because you’ve had a few bad results. There are many things to be checked and tweaked, and model-based targeting is a long series of iterative adjustments. Be mindful that even a mediocre model is still better than someone’s gut feelings, if it is applied to campaigns properly.

Competition: Another Big DC Week for Tech (Where Do We Go From Here?)

When the leaders of Amazon, Apple, Facebook, and Google come to Washington, you know there’s going to be a lot of posturing – and it’s usually not (just) from the witnesses.

The focus this past week was the House Judiciary Subcommittee on Antitrust Law rather than privacy, security, and foreign influence – topics of previous high-profile hearings. Yet the outsized attention on these leading executives and companies – all of them U.S.-based – is actually a testament, in my humble opinion, to the power of data, information, and innovation at work advancing the American and global economy. Has this exercise and accumulation of power been benign, beneficial… or harmful?

I’ve not been shy to tout the conveniences and benefits that we’ve accrued and enjoyed as a result of responsible data use. Yet I do not dismiss an investigation of harm, unintended or otherwise. Simply, I ask that in our zeal to rein in questionable practices, let’s flash a sign to policymakers: “Handle with Care.”

The world has embraced the Information Economy. It just so happens, not by accident, that the United States has both many global leaders (four of them visiting DC) and – it must be said – a long tail of innovative companies that want to grow, prosper, and potentially join the ranks of the next big, successful data-driven entities.

As Americans, we should do all we can to recognize our own advantage, and to encourage such business ingenuity – for a better world. Transparency, control, and civil liberties must be protected… that’s all.

There’s a part of me – with my direct marketing heritage – that’s utterly in awe of what these companies have achieved, each of them forging their own paths to business success, and doing so in a way that has cultivated and curated data – marketing and otherwise – to create in each a global powerhouse. Digital has always been “direct marketing on steroids” (please let me know who coined this phrase), and many of these companies achieved their success through a fervor for measurability and accountability.

But the question of the day – antitrust – is a very serious charge. 

Practically every business revolution in the age of capitalism – oil, banking, computing, communications, digital, among others – has had to grapple with the question: How much power is too much? What constitutes “too big” in the Information Economy? Though no one has gone there yet, could there ever be such a concept in the digital world as Wall Street’s “too big to fail” – the phrase used in reference to our banking giants?

I myself don’t have these answers, but I do think it’s worth looking (again) to our digital and direct marketing heritage for some guidance. Certainly any new federal laws and regulation, such as for privacy, ought to be pragmatic in their approach – rather than overly prescriptive. We have a blueprint for a federal privacy law in Privacy for America, for example, which seeks to discern reasonable from unreasonable data uses.

Some consideration, please.

  • What if we held out that data collected for marketing use should be used for marketing purposes only? What non-marketing uses – product development and design possibly – might also be acceptable?
  • Should personally identifiable data collected for marketing use ever or always be anonymized for non-marketing use? Certainly, let’s make sure we can recognize consumers as they jump from device to device and across digital and offline platforms, if for no other reason than marketing or fraud prevention purposes. These aims grow the economy, serve consumers, and finance vital social aims such as news reporting.
  • Under what circumstances should private-sector data be handed over to government sources? What legal protections should govern such handovers – subpoenas and otherwise? It’s a borderless world. What access should foreign governments have to such data, about U.S. citizens or from other jurisdictions? It’s a fine line – or even a fuzzy blur – between anti-terrorism and unwanted surveillance of ordinary people.
  • And of course, there’s anti-competition. Data enablement and data sharing should grow the economy, foster competition, and serve consumers. Laws – whether anti-competition or privacy – should seek the same, and not undermine innovation. For example, the current demonization of third-party data feeds a frenzy that concentrates first-party data collection and power in “walled gardens” – where knowledge about customers’ marketing preferences often becomes incomplete and clouded. Could policymakers use their pen unwittingly to diminish the long tail of ad tech to detrimental effects? Even (some) Europeans have questioned what they’ve done.

As far as bias is concerned, add my voice to those who wish to do our utmost to minimize and eliminate protected-class discrimination in our algorithms and artificial intelligence – gender, race, religion, sexual preference – as we practice the art and science of commerce.

All the same, I have deep sympathy for this same task regarding political free speech: when and how we would ever attempt to define and remove political bias is dangerous territory. What is a lie? What is hate speech? What is a conservative or liberal bias?

There are no easy answers here. But I look forward to this public investigation, all the same. We need to understand fully where the Information Economy may overstep, overreach, restrict free speech, or undermine competition – even if these grievances are found to be remote.

A Map or a Matrix? Identity Management Is More Complex By the Day

A newly published white paper on how advertisers and brands can recognize unique customers across marketing platforms underscores just how tough this important job is for data-driven marketers.

As technologists and policymakers weigh in themselves on the data universe – often without understanding the full ramifications of what they do (or worse, knowing so but proceeding anyway) – data flows on the Internet and on mobile platforms are being dammed, diverted, denuded, and divided.

In my opinion, these developments are not decidedly good for advertising – which relies on such data to deliver relevance in messaging, as well as attribution and measurement. There is a troubling anti-competition mood in the air. It needs to be reckoned with.

Consider these recent developments:

  • Last week, the European Court of Justice rendered a decision that overturned “Privacy Shield” – the safe harbor program that upward of 5,000 companies rely upon to move data securely between the European Union and the United States. Perhaps we can blame U.S. government surveillance practices made known by Edward Snowden, but the impact will undermine hugely practical, beneficial, and benign uses of data – including for such laudable aims as identity management, and associated advertising and marketing uses.
  • Apple announced it will mandate an “opt-in” for mobile identification data used for advertising and marketing beginning with iOS 14. Apple may report this is about privacy, but it is also a business decision to keep Apple user data from other large digital companies. How can effective cross-app advertising survive (and be measured) when opt-in rates are tiny? What about the long-tail and diversity of content that such advertising finances?
  • Google’s announcement that it plans to end support for third-party cookies – as Safari and Mozilla have already done – in two years’ time (six months and ticking) is another erosion of data monetization used for advertising. At least Google is making a full-on attempt to work with industry stakeholders (Privacy Sandbox) to replace cookies with something else yet to be formulated. All the same, ad tech is getting nervous.
  • California’s Attorney General – in promulgating regulation in conjunction with the enforcement of the California Consumer Privacy Act (in itself an upset of a uniform national market for data flows, and an undermining of interstate commerce) – came forth with a new obligation that is absent from the law, but asked for by privacy advocates: Companies will be required to honor a browser’s global default signals for data collection used for advertising, potentially interfering with a consumer’s own choice in the matter. It’s the Do Not Track debate all over again, with a decision by fiat.

These external realities for identity are only part of the complexity. Mind you, I haven’t even explored here the volume, variety, and velocity of data that make data collection, integration, analysis, and application by advertisers both vital and difficult to do. As consumers engage with brands on a seemingly ever-widening number of media channels and data platforms, there’s nothing simple about it. No wonder Scott Brinker’s MarTech landscape artwork is becoming more and more an exercise in pointillism.

Searching for a Post-Cookie Blueprint

So it is in this flurry (or fury) of policy developments that the Winterberry Group issued its most recent paper, “Identity Outlook 2020: The Evolution of Identity in a Privacy-First, Post-Cookie World.”

Its authors take a more positive view of recent trends – reflecting perhaps a resolve that the private sector will seize the moment:

“We believe that regulation and cookie deprecation are a positive for the future health and next stage of growth for the advertising and marketing industry as they are appropriate catalysts for change in an increasingly privacy-aware consumer environment,” write authors Bruce Biegel, Charles Ping, and Michael Harrison, all of whom are with the Winterberry Group.

The researchers report five emerging identity management processes, each with its own regulatory risk. Brands may pursue any one or combination of these methodologies:

  • “A proprietary ID based on authenticated first-party data where the brand or media owner has established a unique ID for use on their owned properties and for matching with partners either directly or through privacy safe environments (e.g.: Facebook, Google, Amazon).
  • “A common ID based on a first-party data match to a PII- [personally identifiable information] based reference data set in order to enable scale across media providers while maintaining high levels of accuracy.
  • “A common ID based on a first-party data match to a third-party, PII-based reference data set in order to enable scale across media providers while maintaining high levels of accuracy; leverages a deterministic approach, with probabilistic matching to increase reach.
  • “A second-party data environment based on clean environments with anonymous ID linking to allow privacy safe data partnerships to be created.
  • “A household ID based on IP address and geographic match.”

The authors offer a chart that highlights some of the regulatory risks with each approach.

“As a result of the diversity of requirements across the three ecosystems (personalization, programmatic and ATV [advanced television]) the conclusion that Winterberry Group draws from the market is that multiple identity solutions will be required and continue to evolve in parallel. To achieve the goals of consumer engagement and customer acquisition marketers will seek to apply a blend of approaches based on the availability of privacy-compliant identifiers and the suitability of the approach for specific channels and touchpoints.”

A blend of approaches? Looks like I’ll need a navigator as well as the map. As one of the six key takeaways, the report authors write:

“Talent gaps, not tech gaps: One of the issues holding the market back is the lack of focus in the brand/agency model that is dedicated to understanding the variety of privacy-compliant identity options. We expect that the increased market complexity in identity will require Chief Data Officers to expand their roles and place themselves at the center of efforts to reduce the media silos that separate paid, earned and owned use cases. The development of talent that overlaps marketing/advertising strategy, data/data science and data privacy will be more critical in the post-cookie, privacy-regulated market than ever before.”

There’s much more in the research to explore than one blog post – so do your data prowess a favor and download the full report here.

And let’s keep the competition concerns open and continuing. There’s more at stake here than simply a broken customer identity or the receipt of an irrelevant ad.

Know What to Automate With Machine Learning

There are many posers in the data and analytics industry. Unfortunately, some of them occupy managerial positions, making critical decisions based on superficial knowledge and limited experiences. I’ve seen companies wasting loads of money and resources on projects with no substantial value — all because posers in high places bought into buzzwords or false promises. As if buzzwords have some magical power to get things done “auto-magically.”

I’ve written articles about how to identify posers and why buzzwords suck. But allow me to add a few more thoughts, as the phrase “Machine Learning” is rapidly gaining that magical power in many circles. You’d think that machines could read our minds and deliver results on their own. Sorry to break it to you, but even in the world of Star Trek, computers still wouldn’t understand illogical requests.

Beware of people who try to employ machine learning and no other technique. Generally, such people don’t even understand what they are trying to automate, only caring about the cost reduction part. But the price that others end up paying for such a bad decision could be far greater than any savings. The worst-case scenario is automating inadequate practices, which leads to wrong places really fast. How can anyone create a shortcut if he doesn’t know how to get to the destination in the first place, or worse, where the destination is supposed to be?

The goal of any data project should never be employing machine learning for the sake of it. After all, you wouldn’t respect a guitarist who can’t play a simple lick, just because he has a $5,000 custom guitar on his shoulder.

Then, what is the right way to approach this machine learning hype? First, you must recognize that there are multiple steps in predictive modeling. Allow me to illustrate some major steps and questions to ask:

  1. Planning: This critical step is often the most difficult one. What are you trying to achieve through data and analytics? Building the most elegant model can’t be the sole purpose outside academia. Converting business goals into tangible solution sets is a project in itself. What kind of analytics should be employed? What would be the outcome? How will those model scores be applied to actual marketing campaigns? How will the results be measured? Prescribing proper solutions to business challenges within the limitations of systems, toolsets, and the budget is one of the most coveted skill sets. And it has nothing to do with tools like machine learning, yet.
  2. Data Audit: Before we chart a long analytics journey, let’s put the horse before the cart, as data is the fuel for an engine called machine learning. I’ve seen too many cases where the cart is firmly mounted before the horse. What data are we going to use? From what sources? Do we have enough data to perform the task? How far back in time do the datasets go? Are they merged in one place? Are they in usable forms? Too many datasets are disconnected, unstructured, uncategorized, and unclean. Even for the machines.
  3. Data Transformation: Preparing available data for advanced analytics is also a project in itself. Be mindful that you don’t have to clean everything; just deal with the elements that are essential for required analytics to meet pre-determined business goals. At this stage, you may employ machine learning to categorize, group, or reformat data variables. But note that such modules are quite different from the ones for predictions.
  4. Target Definition: Setting up proper model targets is half-art/half-science. If the target is hung on the wrong spot, the resultant model will never render any value. For instance, if you are targeting so-called “High Value” customers, how would you express it in mathematical terms? It could be defined by any combination of value, frequency, recency, and product categories. The targets are to be set after a long series of assumptions, profiling, and testing (a minimal sketch of this step follows the list). No matter what modeling methodology eventually gets employed, you do NOT want targets to be unilaterally determined by a machine. Even with a simple navigator, which provides driving directions through machine-based algorithms, the user must provide the destination first. A machine cannot determine where you need to go (at least not yet).
  5. Universe Definition: In what universe will the resultant model be applied and used? The model comparison universe is as important as the target itself, as a model score is a mathematical expression of differences between two dichotomous universes (e.g., buyers vs. non-buyers). Even with the same target, switching the comparison universe would render completely different algorithms. On top of that, you may want to put extra filters by region, gender, customer type, user segment, etc. A machine may determine distinct sets of universes that require separate models, but don’t relinquish all control to machines, either. A machine may not be aware of where you will apply the model.
  6. Modeling: This statistical work is comprised of sub-steps such as variable selection, variable transformation, binning, outlier exclusion, algorithm creation, and validation, all in multiple iterations. It is indeed laborious work, and “some” parts may be done by the machines to save time. You may have heard of terms such as Deep Learning, Neural Net, logistic regression, stepwise regression, Random Forest, CHAID analysis, tree analysis, etc. Some are to be done by machines, and some by human analysts. All those techniques are basically to create algorithms. In any case, some human touch is inevitable regardless of employed methodology, as nothing should be released without continuous testing, validation, and tweaking. Don’t blindly subscribe to terms like “unsupervised learning.”
  7. Application: An algorithm may have been created in a test environment, but to be useful, the model score must be applied to the entire universe. Some toolsets provide “in-database-scoring”, which is great for automation. Let me remind you that most errors happen before or after the modeling step. Again, humans should not be out of the loop until everything becomes a routine, all the way to campaign execution and attribution.
  8. Maintenance: Models deteriorate and require scheduled reviews. Even self-perpetuating algorithms should be examined periodically, as business environments, data quality, and assumptions may take drastic turns. The auto-pilot switch shouldn’t stay on forever.
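For step 4 in particular, here is a minimal sketch (Python with pandas) of turning a vague “high-value customer” goal into an explicit, testable target flag. The thresholds and column names are illustrative assumptions, not a prescription; the point is that a human states the definition before any algorithm sees it.

```python
import pandas as pd

# Toy customer summary, already consolidated to one row per individual.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "total_spend_12m": [1200.0, 80.0, 560.0, 40.0, 900.0],
    "orders_12m": [6, 1, 4, 1, 2],
    "days_since_last_order": [20, 300, 45, 400, 150],
})

# One explicit, human-chosen definition of "high value": top-quartile spend,
# at least three orders, and active within the last 90 days.
spend_cutoff = customers["total_spend_12m"].quantile(0.75)
customers["high_value_target"] = (
    (customers["total_spend_12m"] >= spend_cutoff)
    & (customers["orders_12m"] >= 3)
    & (customers["days_since_last_order"] <= 90)
).astype(int)

print(customers[["customer_id", "high_value_target"]])
```

The quartile, the order count, and the recency window are all business decisions to be profiled and tested, which is exactly the part that should not be delegated to a machine.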

So, out of this outline for a simple target modeling (for 1:1 marketing applications), which parts do you think can fully be automated without any human intervention? I’d say some parts of data transformation, maybe all of modeling, and some application steps could go on the hands-free route.

The most critical step of all, of course, is the planning and goal-setting part. Humans must breathe their intention into any project. Once things are running smoothly, then sure, we can carve out the parts that can be automated in a step-wise fashion (i.e., never in one shot).

Now, would you still believe sales pitches that claim all your marketing dreams will come true if you just purchase some commercial machine-learning modules? Even if decent toolsets are tuned up properly, don’t forget that you are supposed to be the one who puts them in motion, just like self-driving cars.

How the Impact of COVID-19 Is Changing Marketing

Well, it’s not as if we can start 2020 all over again — we’re already halfway through this year thus far. Yet, we can say one thing: COVID-19 and its recessionary impacts may be hanging around awhile. How may this have changed marketing mid-year, and possibly changed it permanently?

Such prognostications have kept The Winterberry Group, a marketing research consultancy, plenty busy since March: reading the tea leaves of government data, industry interviews, marketing dashboards, econometric algorithms, and the like. Principal Bruce Biegel told a Direct Marketing Club of New York audience this past week that indeed June has been better than May, which was better than April — when the U.S. (and much of the global) economy was in free fall.

So what’s underway and what’s in store for us midyear? Have we turned a corner?

Our Comeback Will Not Be a U-Turn — ‘Swoosh!’

When unemployment shoots up to 17.1%, and 40 million American jobs are either furloughed or gone, there’s going to be a lag effect. The “wallet” recession is upon us, as consumers hang onto their savings, or eat through them, so there’s not going to be the same level of demand that drives upward of two-thirds of the U.S. economy.

New York City is a COVID-19 epicenter — and the commercial real estate market may take five to 10 years to recover, reports The Economist (subscription required). Knowledge workers will return, eventually. But densely populated urban centers, where innovations accelerate the economy, may look and feel different for some time, and that in and of itself could hamper national and global growth. Can other innovation clusters stave off the virus to protect collaboration?

And then there’s our world of advertising. Biegel sees digital being a “winner,” as traditional media continues to take a drubbing. Linear TV spending dropped by a quarter this quarter, and direct mail by half. Experiential and sponsorship spending has been slashed by 75%, as concerts, live sports, conferences, and festivals all took a public health-ordered hiatus. Yet, even in digital categories, Q2 has yelled “ouch.”

Email is the only channel to have held its own, though pricing pressure has cut margins. Social, search, and digital display all have posted drops from 25% to 40% during the quarter — and though all our eyes were home watching Disney+, Netflix, and the like, even OTT/addressable TV ad spending was down by 5%. With the Newfronts coming this week, it will be interesting to see what types of digital media may post gains.

So if June’s “recovery” in media spend is any indication, Q3 (sans Olympics) and Q4 (yes, we’re still having an Election, last time I checked) should be solid though not buoyant. Biegel says it may be a “swoosh” recovery — think Nike’s logo — down fast, but up again slowly, steadily, and resiliently. Which raises the question: Can ad businesses, business models, and brands cope with a new reality?

The “new normal” is about coming out of the COVID-19 crisis — and half of executives surveyed by The Winterberry Group aren’t expecting miracles:

[Chart: Medium-Term Budget Cuts. Source: IAB-Winterberry Group State of Data (2020)]

[Chart: Q3 Will Start a Recovery … of Sorts. Source: Advertiser Perceptions, Pivotal Research Group (2020), as reported by Winterberry Group]

And, Biegel reported, it may indeed take until 2024 — with COVID-19 firmly in the rear-view mirror — for a recovery to be complete, according to IPG Mediabrands Magna. It is predicting a 4.4% ad spend contraction this year, a 4% recovery next year, and “subdued” results thereafter until mid-decade.

So How Have We Changed — and Will These New Behaviors Stick?

Some effects, though, may indeed have permanence in how Americans consume media — perhaps hastening trends already underway, or creating a whole rethink of how we act as consumers. Consider these impacts:

  • Streaming to TVs more than doubled during the COVID-19 crisis. Have we rewired our video consumption habits away from scheduled programming for good?
  • Mobile data traffic surged 380% in March alone. Consumers have taken to their smartphones everywhere — so how has mobile viewing altered consumers’ screen habits across devices, and will it stick?
  • DTC brands and catalogs know all about remote selling — and so do millions of consumers who have now come to love shopping this way.
  • Video game use is up 60% — opening the door to more in-game advertising opportunities. This may change the mix of brands seeking to engage consumers there.
  • In January there were 280,000 posted job openings in data analytics. There are 21,000 today. More than half of marketers expect predictive modeling and segmentation to occupy their marketing strategy concerns for the balance of 2020.
  • Tangible value matters. Consumers will be demanding more pricing benefits from brand loyalty, and fewer VIP experiences. We may be getting tired of lockdowns, but we remain steadfast in a recessionary, savings-conscious mindset.
  • Business travel – yes, your clients may be returning to the office, but do they really want to see YOU? What can B2B marketers and sellers achieve virtually?

It’s ironic, Biegel said, that privacy laws and the crumbling cookie are making customer recognition harder in the addressable media ecosystem, just as consumers expect and demand to be recognized. Identity resolution platforms will evolve to cope with these new marketplace realities — both of which are independent of COVID-19 – but the solutions will bring forth a blend of technologies, processes, and people yet to be fully formulated. These are still open and important marketplace issues.

So, assuming we stay healthy, we have some challenges ahead in ad land. I’m glad to have some guideposts in this unprecedented time.

3 Resource Allocation Questions to Ask for Better Returns

These are obviously times of great uncertainty and change. Smart business people know that with change comes new opportunities. Somewhere, entrepreneurial spirits are already making bets and shifting strategies. There is another powerful axiom, however, which rarely gets enough airtime during times of change: In times of uncertainty, focus on what is certain. One certainty in business is that resources can always be better reallocated to achieve better returns.

Unless you are one of the lucky businesses booming in these times, there will be budget cuts. This is the perfect time to reevaluate resource allocations using an agile, data-driven picture of your business. Considering that there are few industries untouched by COVID-19, agile decisions will need to be made based on sparse but recent data.

Here are three questions data-driven marketers and those in customer-focused functions need to ask in order to evaluate their resource allocation.

1. Do I know who my best customers are, and are they okay? Your best customers should be identified based on current sales and lifetime value. Yes, your best customers today are important. However, most businesses survive on the 20% to 30% of customers who are consistently loyal and profitable over many years. Once you have identified the most important customers, you should evaluate whether their buying behaviors are changing, and why. How can you reallocate resources to better serve this segment?

2. Do I know the channels where most of my business comes from, and are they under threat? The first step to answer this question should involve a data-driven accounting of your marketing and sales channels. However, some of your most influential channels may be the most difficult to track. Therefore, it is important that you establish or refresh your multi-touch attribution models so that you can better allocate sales to channels. Right now, it might be very tempting to simply rely on direct attribution or easily measurable channels. After all, this approach feels more certain, but it is rarely the right answer (a small illustration follows these questions).

3. Do I have the data I need to make quick decisions? If your data was messy and hard to work with before COVID-19, then it will be even less helpful now. This might be the right time to think about the minimal data needed to make agile decisions. The word minimal is critical here as the more data you collect, the more complex the solutions become, and agility diminishes. Do you know what measures are most important? Do you need to spend resources on agile data-driven capabilities?
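To make question 2 concrete, here is a minimal sketch in Python contrasting last-touch attribution with a simple linear multi-touch split for the same conversion path. The channels, journey, and revenue figure are hypothetical, and this is only one of many possible attribution methods.

```python
from collections import defaultdict

# One converted customer's journey, oldest touch first (made-up data).
touchpoints = ["paid_search", "email", "social", "direct_mail"]
revenue = 100.0

# Last-touch: the easiest-to-measure channel gets all of the credit.
last_touch = {touchpoints[-1]: revenue}

# Linear multi-touch: credit is split evenly across every touch in the path.
linear = defaultdict(float)
for channel in touchpoints:
    linear[channel] += revenue / len(touchpoints)

print("Last touch:", last_touch)
print("Linear:    ", dict(linear))
```

Run over thousands of journeys, the two methods can tell very different stories about which channels deserve budget, which is why relying only on the easily measurable ones is rarely the right answer.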

Beware of One-Size-Fits-All Customer Data Solutions

In the data business, the ability to fine-tune database structure and toolsets to meet unique business requirements is key to success, not just flashy features and functionalities. Beware of technology providers who insist on a “one-size-fits-all” customer data solution, unless the price of entry is extremely low. Always check the tech provider’s exception management skills and their determination to finish the last mile. Too often, many just freeze at the thought of any customization.

The goal of any data project is to create monetary value out of available data. Whether it is about increasing revenue or reducing cost, data activities through various types of basic and advanced analytics must yield tangible results. Marketers are not doing all this data-related work to entertain geeks and nerds (no offense); no one is paying for data infrastructure, analytics toolsets, and, most importantly, the human cost just to support the intellectual curiosity of a bunch of specialists.

Therefore, when it comes to evaluating any data play, the criteria that CEOs and CFOs bring to the table matter the most. Yes, I shared a long list of CDP evaluation criteria from the users’ and technical points of view last month, but let me emphasize that, like any business activity, data work is ultimately about the bottom line.

That means we have to maintain balance between the cost of doing business and usability of data assets. Unfortunately, these two important factors are inversely related. In other words, to make customer data more useful, one must put more time and money into it. Most datasets are unstructured, unrefined, uncategorized, and plain dirty. And the messiness level is not uniform.

Start With the Basics

Now, there are many commoditized toolsets out in the market to clean the data and weave them together to create a coveted Customer-360 view. In fact, if a service provider or a toolset isn’t even equipped to do the basic part, I suggest working with someone who can.

For example, a service provider must know the definition of dirty data. They may have to ask the client to gauge the tolerance level (for messy data), but basic parameters must be in place already.

What is a good email address, for instance? It should have all the proper components like @ signs and .com, .net, .org, etc. at the end. Permission flags must be attached properly. Primary and secondary email must be set by predetermined rules. They must be tagged properly if delivery fails, even once. The list goes on. I can think of similar sets of rules when it comes to name, address, company name, phone number, and other basic data fields.
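As a minimal sketch of what “basic parameters in place” might look like, here is a Python fragment that classifies email records by a few structural and permission rules. The rules, field names, and status labels are illustrative assumptions; a production system would lean on a proper validation library, bounce feeds, and suppression lists.

```python
import re

# Deliberately simple structural check: one "@", a non-empty local part,
# and a domain ending in a recognizable top-level domain.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.(com|net|org|edu|gov|io|co)$", re.IGNORECASE)

def classify_email(record: dict) -> str:
    """Return a hygiene status for a single email record."""
    email = (record.get("email") or "").strip().lower()
    if not EMAIL_PATTERN.match(email):
        return "invalid_format"
    if record.get("hard_bounced"):   # tagged the moment delivery fails, even once
        return "undeliverable"
    if not record.get("opt_in"):     # permission flag must be attached properly
        return "no_permission"
    return "deliverable"

records = [
    {"email": "jane.doe@example.com", "opt_in": True, "hard_bounced": False},
    {"email": "not-an-email", "opt_in": True, "hard_bounced": False},
    {"email": "john@example.org", "opt_in": False, "hard_bounced": False},
]
for r in records:
    print(r["email"], "->", classify_email(r))
```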

Why are these important? Because it is not possible to create that Customer-360 view without properly cleaned and standardized Personally Identifiable Information (PII). Anyone who is in this game must have mastered that. The ability to clean basic information and to match seemingly unmatchable entities is just a prerequisite in this game.

Even Basic Data Hygiene and Matching Routines Must Be Tweaked

Even with basic match routines, users must be able to dictate tightness and looseness of matching logics. If the goal of customer communication involves legal notifications (as for banking and investment industries), one should not merge any two entities just because they look similar. If the goal is mainly to maximize campaign effectiveness, one may merge similar looking entities using various “fuzzy” matching techniques, employing Soundex, nickname tables, and abbreviated or hashed match keys. If the database is filled with business entities for B2B marketing, then so-called commoditized merge rules become more complicated.
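Here is a minimal sketch (Python) of what “dictating the tightness of the matching logic” can look like in practice: a strict key for legally sensitive merges, and a looser key built on a nickname table and Soundex for campaign-level consolidation. The tiny Soundex implementation and nickname table are simplified assumptions.

```python
def soundex(name: str) -> str:
    """A very small American Soundex implementation, for illustration only."""
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    encoded, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            encoded += code
        if ch not in "HW":           # H and W do not reset the previous code
            prev = code
    return (encoded + "000")[:4]

NICKNAMES = {"bill": "william", "bob": "robert", "liz": "elizabeth"}  # tiny sample table

def strict_key(rec: dict) -> tuple:
    """Tight key: exact name, address, and email must all agree (legal notices)."""
    return (rec["first"].lower(), rec["last"].lower(),
            rec["address"].lower(), rec["email"].lower())

def fuzzy_key(rec: dict) -> tuple:
    """Loose key: nickname-normalized first name, Soundex of last name, ZIP only."""
    first = NICKNAMES.get(rec["first"].lower(), rec["first"].lower())
    return (first, soundex(rec["last"]), rec["zip"])

a = {"first": "Bob", "last": "Smith", "address": "12 Main St",
     "email": "bob@example.com", "zip": "10001"}
b = {"first": "Robert", "last": "Smyth", "address": "12 Main Street",
     "email": "rsmith@example.com", "zip": "10001"}

print("Strict match:", strict_key(a) == strict_key(b))  # False: safe for legal mailings
print("Fuzzy match: ", fuzzy_key(a) == fuzzy_key(b))    # True: acceptable for campaigns
```

The point is that the tightness is a parameter the user sets per use case, not a single rule baked into the platform.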

The first sign of trouble often becomes visible at this basic stage. Beware of providers that insist on “one-size-fits-all” rules in the name of some universal matching routine. There was no such thing even in the age of direct marketing (i.e., the really old days). How are we going to navigate a complex omnichannel marketing environment with just a few hard-set rules that can’t be modified?

Simple matching logic only with name, address, and email becomes much more complex when you add new online and offline channels, as they all come with different types of match keys. Just in the offline world, the quality of customer names collected in physical stores vastly differs from that of self-entered information from a website along with shipping addresses. For example, I have seen countless invalid names like “Mickey Mouse,” “Asian Tourist,” or “No Name Provided.” Conversely, no one who wants to receive the merchandise at their address would create an entry “First Name: Asian” and “Last Name: Tourist.”

Sure, I’m providing simple examples to illustrate the fallacy of “one-size-fits-all” rules. But by definition, a CDP is an amalgamation of vastly different data sources, online and offline. Exceptions are the rules.

Dissecting Transaction Elements

Up to this point, we are still in the realm of “basic” stuff, which is mostly commoditized in the technology market. Now, let’s get into more challenging parts.

Once data weaving is done through PII fields and various proxies of individuals across networks and platforms, then behavioral, demographic, geo-location, and movement data must be consolidated around each individual. Now, demographic data from commercial data compilers are already standardized (one would hope), regardless of their data sources. Every other customer data type varies depending on your business.

The simplest form of transaction records would be from retail businesses, where you sell widgets for set prices through certain channels. And what is a transaction record in that sense? “Who” bought “what,” “when,” for “how much,” through “what channel.” Even from such a simplified viewpoint, things are not so uniform.

Let’s start with an easy one, such as a common date/time stamp. Is it in the form of a UTC time code? That would be simple. Do we need to know the day-part of the transaction? Eventually, but by what standard? Do we need to convert the stamps into the local time of the transaction? Yes, because we need to tell evening buyers and daytime buyers apart, and we can’t use Coordinated Universal Time for that (unless you only operate in the U.K.).
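A minimal sketch of that conversion in Python (3.9+ for zoneinfo); the store-to-time-zone mapping and the day-part cutoffs are hypothetical:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical mapping from store or channel code to its local time zone.
STORE_TIMEZONES = {"NYC_FLAGSHIP": "America/New_York", "LONDON_WEB": "Europe/London"}

def day_part(utc_timestamp: str, store_code: str) -> str:
    """Convert a UTC transaction time to store-local time and bucket it."""
    utc_dt = datetime.fromisoformat(utc_timestamp).replace(tzinfo=timezone.utc)
    local_dt = utc_dt.astimezone(ZoneInfo(STORE_TIMEZONES[store_code]))
    if 6 <= local_dt.hour < 12:
        return "morning"
    if 12 <= local_dt.hour < 18:
        return "afternoon"
    return "evening"  # everything else, including overnight purchases

# The same UTC stamp lands in different local day-parts depending on the store.
print(day_part("2020-07-15T17:30:00", "NYC_FLAGSHIP"))  # afternoon (13:30 local)
print(day_part("2020-07-15T17:30:00", "LONDON_WEB"))    # evening (18:30 local)
```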

“How much” isn’t so bad. It is made of net price, tax, shipping, discount, coupon redemption, and finally, total paid amount (for completed transactions). Sounds easy? Let’s just say that out of thousands of transaction files that I’ve encountered in my lifetime, I couldn’t find any “one rule” that governs how merchants would handle returns, refunds, or coupon redemptions.

Some create multiple entries for each action, with or without a common transaction ID (crazy, right?). Many customer data sources contain mathematical errors all over. Inevitable file cutoff dates create orphan records, where only return transactions are found without any linkage to the original transaction record. Granted, we are not building an accounting system out of a marketing database, but no one should count canceled and returned transactions as valid transactions for any analytics. “One-size-fits-all?” I laugh at that notion.
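As a minimal sketch (Python with pandas), here is the kind of normalization that keeps canceled and returned transactions out of downstream analytics while flagging orphan returns for review. The merchant conventions used here, returns carrying a parent transaction ID and negative amounts, are assumptions, since as noted above there is no one rule.

```python
import pandas as pd

transactions = pd.DataFrame({
    "txn_id":    ["T1", "T2", "T3", "T4"],
    "parent_id": [None, None, "T1", "T9"],  # returns point back to the original sale
    "type":      ["sale", "sale", "return", "return"],
    "amount":    [120.0, 45.0, -120.0, -30.0],
})

# Only completed sales count as valid transactions for analytics.
valid_sales = transactions[transactions["type"] == "sale"].copy()

# Net out returns that can be linked back to a sale in the same file.
returns = transactions[transactions["type"] == "return"]
linked = returns[returns["parent_id"].isin(valid_sales["txn_id"])]
refund_by_parent = linked.groupby("parent_id")["amount"].sum()
valid_sales["net_amount"] = (valid_sales["amount"]
                             + valid_sales["txn_id"].map(refund_by_parent).fillna(0))

# Orphan returns (the original sale fell outside the file cutoff) are flagged
# for review rather than silently counted as negative "purchases."
orphans = returns[~returns["parent_id"].isin(valid_sales["txn_id"])]
print(valid_sales[["txn_id", "amount", "net_amount"]])
print("Orphan returns needing review:", orphans["txn_id"].tolist())
```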

“Channel” may not be so bad. But at what level? What if the client has over 1,000 retail store locations all over the world? Should there be a subcategory under “Retail” as a channel? What about multiple websites with different brand names? How would we organize all that? If this type of basic – but essential – data isn’t organized properly, you won’t even be able to share store level reports with the marketing and sales teams, who wouldn’t care for a minute about “why” such basic reports are so hard to obtain.

The “what” part can be really complicated. Or, very simple if product SKUs are well-organized with proper product descriptions, and more importantly, predetermined product categories. A good sign would be the presence of a multi-level product category table, where you see entries like an apparel category broken down into Men, Women, Children, etc., and Women’s Apparel is broken down further into Formalwear, Sportswear, Casualwear, Underwear, Lingerie, Beachwear, Fashion, Accessories, etc.

For merchants with vast arrays of products, three to five levels of subcategories may be necessary even for simple BI reports, or further, advanced modeling and segmentation. But I’ve seen too many cases of incongruous and inconsistent categories (totally useless), recycled category names (really?), and weird categories such as “Summer Sales” or “Gift” (which are clearly for promotional events, not products).
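A minimal sketch of such a multi-level category table and its use (Python with pandas; the SKUs and category names are invented for illustration):

```python
import pandas as pd

# A small product reference table with explicit category levels,
# joined to transactions by SKU before any reporting or modeling.
product_categories = pd.DataFrame({
    "sku":     ["W-1001", "W-1002", "M-2001", "C-3001"],
    "level_1": ["Apparel", "Apparel", "Apparel", "Apparel"],
    "level_2": ["Women", "Women", "Men", "Children"],
    "level_3": ["Sportswear", "Formalwear", "Casualwear", "Sportswear"],
})

transactions = pd.DataFrame({
    "txn_id": ["T1", "T2", "T3"],
    "sku":    ["W-1001", "C-3001", "X-9999"],  # X-9999 has no category yet
    "amount": [80.0, 35.0, 20.0],
})

enriched = transactions.merge(product_categories, on="sku", how="left")

# Uncategorized SKUs surface immediately, instead of exploding a report into
# one row per SKU; they go back to whoever owns the product file to be fixed.
print(enriched)
print("Uncategorized SKUs:", enriched.loc[enriched["level_1"].isna(), "sku"].tolist())
```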

All these items must be fixed and categorized properly, if they are not adequate for analytics. Otherwise, the gatekeepers of information are just dumping the hard work on poor end-users and analysts. Good luck creating any usable reports or models out of uncategorized product information. You might as well leave it as an unknown field, as product reports will have as many rows as the number of SKUs in the system. It will be a challenge finding any insights out of that kind of messy report.

Behavioral Data Are Complex and Unique to Your Business

Now, all this was about the relatively simple “transaction” part. Shall we get into online behavioral data? Oh, it gets much dirtier, as any “tag” data are only as good as the person or department that tagged the web pages in question. Let’s just say I’ve seen all kinds of variations of one channel (or “Source”) called “Facebook.” Not from one place, either, as they show up in “Medium” or “Device” fields. Who is going to clean up the mess?
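A minimal sketch of that cleanup (Python); the tag variants are typical of what inconsistent tagging produces, and the canonical channel names are assumptions:

```python
import re

# Map the many ways one channel gets tagged to a single canonical value.
SOURCE_ALIASES = {
    "facebook": "Facebook", "fb": "Facebook", "face book": "Facebook",
    "facebook.com": "Facebook", "m.facebook.com": "Facebook",
    "google": "Google", "adwords": "Google", "google ads": "Google",
}

def canonical_source(raw: str) -> str:
    """Normalize a raw source/medium tag; unknown values are flagged, not guessed."""
    key = re.sub(r"\s+", " ", (raw or "").strip().lower())
    return SOURCE_ALIASES.get(key, f"UNMAPPED({raw})")

for raw in ["Facebook", "FB", " face book ", "m.facebook.com", "Insta"]:
    print(repr(raw), "->", canonical_source(raw))
```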

I don’t mean to scare you, but these are just common examples in the retail industry. If you are in any subscription, continuity, travel, hospitality, or credit business, things get much more complicated.

For example, there isn’t any one “transaction date” in the travel industry. There would be Reservation Date, Booking Confirmation Date, Payment Date, Travel Date, Travel Duration, Cancellation Date, Modification Date, etc., and all these dates matter if you want to figure out what the traveler is about. If you get all these down properly and calculate distances from one another, you may be able to tell if the individual is traveling for business or for leisure. But only if all these data are in usable forms.
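A minimal sketch (Python) of turning those dates into distances that hint at trip purpose; the field names and the business-versus-leisure heuristic are illustrative assumptions:

```python
from datetime import date, timedelta

booking = {
    "reservation_date": date(2020, 7, 1),
    "travel_date":      date(2020, 7, 6),
    "return_date":      date(2020, 7, 8),
}

# Distances between the raw dates become features in their own right.
booking_lead_days = (booking["travel_date"] - booking["reservation_date"]).days
trip_length_days = (booking["return_date"] - booking["travel_date"]).days
includes_weekend = any(
    (booking["travel_date"] + timedelta(days=offset)).weekday() >= 5  # Sat=5, Sun=6
    for offset in range(trip_length_days + 1)
)

# A short-lead, short, weekday-only trip reads more like business travel;
# a long-lead trip spanning a weekend reads more like leisure.
looks_like_business = (booking_lead_days <= 7
                       and trip_length_days <= 3
                       and not includes_weekend)
print(booking_lead_days, trip_length_days, includes_weekend, looks_like_business)
```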

Always Consider Exception Management Skills

Some of you may be in businesses where turn-key solutions may be sufficient. And there are plenty of companies that provide automated, but simpler and cheaper options. The proper way to evaluate your situation would be to start with specific objectives and prioritize them. What are the functionalities you can’t live without, and what is the main goal of the data project? (Hopefully not hoarding the customer data.)

Once you set the organizational goals, try not to deviate from them so casually in the name of cost savings and automation. Your bosses and colleagues (i.e., mostly the “bottom line” folks) may not care much about the limitations of toolsets and technologies (i.e., geeky concerns).

Omnichannel marketing that requires a CDP is already complicated. So, beware of sales pitches like “All your dreams will come true with our CDP solution!” Ask some hard questions, and see if they balk at the word “customization.” Your success may depend more on their ability to handle exceptions than on their executing some commoditized functions that they acquired a long time ago. Unless you really believe that you will safely get to your destination in “autopilot” mode.


Understanding What a Customer Data Platform Needs to Be

Modern-day marketers try to achieve holistic personalization through all conceivable channels in order to stand out among countless marketing messages hitting targeted individuals every day, if not every hour. If the message is not clearly about the target recipient, it will be quickly dismissed.

So, how can marketers achieve such an advanced level of personalization? First, we have to figure out who each target individual is, which requires data collection: what they clicked, rejected, browsed, purchased, returned, repeated, recommended, or complained about, what they look like, etc. Pretty much every breath they take, every move they make (without being creepy). Let’s say that you achieved that level of data collection. Will it be enough?

Enter “Customer-360,” or “360-degree View of a Customer,” or “Customer-Centric Portrait,” or “Single View of a Customer.” You get the idea. Collected data must be consolidated around each individual to get a glimpse — never the whole picture — of who the targeted individual is.

You may say, “That’s cool, we just procured technology (or a vendor) that does all that.” Considering there is no CRM database or CDP (Customer Data Platform) company that does not say one of the terms I listed above, buyers of technology often buy into the marketing pitch.

Unfortunately, the 360-degree view of a customer is just a good start in this game, and a prerequisite, not the end goal of any marketing effort. The goal of any data project should never be just putting all available data in one place. It must support a great many complex and laborious functions during the course of planning, analysis, modeling, targeting, messaging, campaigning, and attribution.

So, in the interest of marketers, allow me to share the essentials of what a CDP needs to be and do, and what the common elements of useful marketing databases are.

A CDP Must Cover Omnichannel Sources

By definition, a CDP must support all touchpoints in an omnichannel marketing environment. No modern consumer lingers in just one channel. Nor can the holistic view be achieved by looking at past transaction history alone (even though past purchase behavior still remains the most powerful predictor of future behavior).

Nor do marketers have time to wait until someone buys something through a particular channel before taking action. All movements and indicators — as much as possible — through every conceivable channel should be included in a CDP.

Yes, some data evaporates faster than others — such as browsing history — but we are talking about a game of inches here.  Besides, data atrophy can be delayed with proper use of modeling techniques.

Beware of vendors who want to stay in their comfort zone in terms of channels. No buyer is just an online or an offline person.

Data Must Be Connected on an Individual Level

Since buyers go through all kinds of online and offline channels during the course of their journey, collected data must be stitched together to reveal their true nature. Unfortunately, in this channel-centric world, characteristics of collected data are vastly different depending on sources.

Privacy concerns and regulations regarding Personally Identifiable Information (PII) greatly vary among channels. Even if PII is allowed to be collected, there may not be any common match key, such as address, email, phone number, cookie ID, device ID, etc.

There are third-party vendors who specialize in such data weaving work. But remember that no vendor is good with all types of data. You may have to procure different techniques depending on available channel data. I’ve seen cases where great technology companies that specialized in online data were clueless about “soft-match” techniques used by direct marketers for ages.

Remember, without an accurate and consistent individual ID system, one cannot even start building a true Customer-360 view.
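
For illustration only, here is a bare-bones sketch of the “soft-match” idea: normalize names and addresses so that records differing only in punctuation, case, or common abbreviations collapse to the same key. Real identity resolution involves nickname tables, householding, and survivorship rules, so treat this strictly as a toy version:

```python
import re

# Common street-type abbreviations; a real standardization table is much longer.
ABBREVIATIONS = {"street": "st", "avenue": "ave", "road": "rd", "apartment": "apt"}

def normalize(text: str) -> str:
    text = re.sub(r"[^a-z0-9 ]", "", text.lower())            # lowercase, strip punctuation
    words = [ABBREVIATIONS.get(w, w) for w in text.split()]   # standardize abbreviations
    return " ".join(words)

def soft_key(first: str, last: str, address: str, zip5: str) -> str:
    # First initial + normalized last name + normalized address + ZIP is a common
    # compromise between over-matching and under-matching.
    return "|".join([first.strip().lower()[:1], normalize(last), normalize(address), zip5])

print(soft_key("Robert", "O'Neill", "123 Main Street, Apt 4", "10001"))
print(soft_key("Bob",    "ONeill",  "123 Main St Apt 4",      "10001"))
# The last names and addresses collapse to the same form; only the first-name
# token differs ("r" vs. "b"), which is why nickname tables are usually part
# of the recipe as well.
```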

Data Must Be Clean and Reliable

You may think that I am stating the obvious, but you must assume that most data sources are dirty. There is no pristine dataset without a serious amount of data refinement work. And when I say dirty, I mean that databases are filled with inaccurate, inconsistent, uncategorized, and unstructured data. To be useful, data must be properly corrected, purged, standardized, and categorized.

Even simple time-stamps can be immensely inconsistent. What are the date-time formats, and what time zones are they in? Dollars aren’t just dollars, either. What are the net price, tax, shipping, discount, coupon, and paid amounts? No, the breakdown doesn’t have to be as precise as for an accounting system, but how would you identify habitual discount seekers without dissecting the data up front?
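
A minimal sketch of what that up-front dissection looks like, assuming a hypothetical feed where each source has its own date format, time zone, and amount breakdown:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical raw feed: each source uses its own date format, time zone,
# and amount breakdown.
RAW_ORDERS = [
    {"ts": "03/15/2020 14:02", "fmt": "%m/%d/%Y %H:%M", "utc_offset": -5,
     "list_price": 100.00, "discount": 25.00, "shipping": 5.99, "tax": 6.56},
    {"ts": "2020-03-15T19:02:00", "fmt": "%Y-%m-%dT%H:%M:%S", "utc_offset": 0,
     "list_price": 40.00, "discount": 0.00, "shipping": 0.00, "tax": 3.50},
]

def standardize(order: dict) -> dict:
    # Parse the source-specific format, then pin the timestamp to UTC.
    local = datetime.strptime(order["ts"], order["fmt"])
    utc_ts = local.replace(
        tzinfo=timezone(timedelta(hours=order["utc_offset"]))
    ).astimezone(timezone.utc)
    net = order["list_price"] - order["discount"]
    return {
        "utc_ts": utc_ts.isoformat(),
        "net_price": net,
        "paid_amount": round(net + order["shipping"] + order["tax"], 2),
        "discount_rate": order["discount"] / order["list_price"] if order["list_price"] else 0.0,
    }

for order in RAW_ORDERS:
    print(standardize(order))
# Both timestamps turn out to describe the same moment once pinned to UTC, and a
# customer whose discount_rate stays high across orders is your habitual discount seeker.
```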

When it comes to free-form data, things get even more complicated. Let’s just say that most non-numeric data are not very useful without proper categorization, through strict rules along with text mining. And such work should all be done up front. If you don’t do it, you are simply deferring more tedious work to poor analysts, or worse, to the end-users.
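
For free-form data, a rules-first pass might look like the sketch below. The categories and keywords are invented purely for illustration; anything the rules miss is left for text mining or manual review:

```python
import re

# Rules-first categorization of free-form text (e.g., call-center notes).
# In practice the rules come from domain experts and are then supplemented
# with text mining for whatever they miss.
CATEGORY_RULES = {
    "shipping_issue":  r"\b(late|lost|delayed|never arrived|tracking)\b",
    "billing_issue":   r"\b(overcharge|refund|double charged|invoice)\b",
    "product_quality": r"\b(broken|defective|stopped working|damaged)\b",
}

def categorize(note: str) -> list[str]:
    note = note.lower()
    matches = [cat for cat, pattern in CATEGORY_RULES.items() if re.search(pattern, note)]
    return matches or ["uncategorized"]   # leftovers go to text mining or manual review

print(categorize("Package never arrived and tracking shows no movement"))
# ['shipping_issue']
print(categorize("Item was damaged and I was double charged for it"))
# ['billing_issue', 'product_quality']
```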

Beware of vendors who think that loading the raw data onto some table is good enough. It never is, unless the goal is to hoard data.

Data Must Be Up-to-Date

“Real-time update” is one of the most abused terms in this business. And I don’t casually recommend it, unless decisions must be made in real time. Why? Because, generally speaking, more frequent updates mean higher maintenance costs.

Nevertheless, real-time updates are a must if we are getting into fully automated real-time personalization. It is entirely possible to rely on trigger data for reactive personalization outside the realm of the CDP environment, but such patchwork will lead to regrets most of the time. For one, how would you figure out what elements really worked?

Even if a database is not updated in real time, most source data must remain as fresh as they can be. For instance, it is generally not recommended to append third-party demographic data in real time (except for “hot-line” data, of course). But that doesn’t mean you can use old data indefinitely.

When it comes to behavioral data, time really is of the essence. Click data must be updated at least daily, if not in real time. Transaction data may be updated weekly, but don’t go over a month without updating the base, as even simple measurements like “days since last purchase” can be way off. You all know the importance of the good old recency factor in any metric.

Data Must Be Analytics-Ready

Just because the data in question are clean and error-free doesn’t mean they are ready for advanced analytics. Data must be carefully summarized to an individual level, in order to convert “event-level information” into “descriptors of individuals.” The presence of summary variables is a good indicator of a true Customer-360.

You may have all the click, view, and conversion data, but those are all descriptors of events, not people. For personalization, you need to know individual-level affinities (you may call them “personas”). For planning and messaging, you may need to group target individuals into segments or cohorts. All of those analytics run much faster and more effectively with analytics-ready data.
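
Here is a small sketch, using pandas and made-up column names, of what “analytics-ready” means in practice: event-level transactions rolled up into per-customer descriptors such as recency, frequency, spend, and category affinity:

```python
import pandas as pd

# Hypothetical event-level data: one row per transaction.
events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "order_date": pd.to_datetime(
        ["2020-01-05", "2020-02-10", "2020-03-01", "2019-11-20", "2020-02-28"]),
    "amount": [55.0, 20.0, 80.0, 300.0, 45.0],
    "category": ["shoes", "shoes", "apparel", "electronics", "shoes"],
})
as_of = pd.Timestamp("2020-03-15")   # snapshot date the summaries are "as of"

# Roll event-level rows up to one row per customer (descriptors of individuals).
per_customer = events.groupby("customer_id").agg(
    days_since_last_purchase=("order_date", lambda d: (as_of - d.max()).days),
    purchase_count=("amount", "size"),
    total_spend=("amount", "sum"),
    avg_order_value=("amount", "mean"),
)

# Category affinity: each customer's share of spend by category, as columns.
affinity = events.pivot_table(index="customer_id", columns="category",
                              values="amount", aggfunc="sum", fill_value=0)
affinity = affinity.div(affinity.sum(axis=1), axis=0).add_prefix("share_")

customer_360 = per_customer.join(affinity)
print(customer_360)
```

Models, clusters, and personas can then be built directly on `customer_360`, instead of every analyst re-deriving the same summaries from raw events.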

If not, even simple modeling or clustering work may take a very long time, even with a decent data platform in place. It is routinely quoted that over 80% of analysts’ time goes into data preparation work — how about cutting that down to zero?

Most modern toolsets come with some analytics functions, such as KPI dashboards, basic queries, and even segmentation and modeling. However, for advanced level targeting and messaging, built-in tools may not be enough. You must ask how the system would support professional statisticians with data extraction, sampling, and scoring (on the backend). Don’t forget that most analytics work fails before or after the modeling steps. And when any meltdown happens, do not habitually blame the analysts, but dig deeper into the CDP ecosystem.

Also, remember that even automated modeling tools work much better with refined data on a proper level (i.e., Individual level data for individual level modeling).

CDP Must Be Campaign-Ready

For campaign execution, selected data may have to leave the CDP environment. Sometimes data may end up in a totally different system. A CDP must never be the bottleneck in data extraction and exchange. But in many cases, it is.

Beware of technology providers that only allow built-in campaign toolsets for campaign execution. You never know what new channels or technologies will spring up in the future. While you’re at it, check how many different data exchange protocols are supported. Data going out is as important as data coming in.

CDP Must Support Omnichannel Attribution

Speaking of data coming in and out, CDPs must be able to collect campaign result data seamlessly, from all employed channels.  The very definition of “closed-loop” marketing is that we must continuously learn from past endeavors and improve effectiveness of targeting, messaging, and channel usage.

Omnichannel attribution is simply not possible without data coming from all marketing channels. And if you do not finish the backend analyses and attribution, how would you know what really worked?

The sad reality is that a great majority of marketers fly blind, even with a so-called CDP of their own. If I may be harsh here, you are not a database marketer if you are not measuring the results properly. A CDP must make complex backend reporting and attribution easier, not harder.

Final Thoughts

For a database system to be called a CDP, it must satisfy most — if not all — of these requirements. It may be daunting for some to read through this, but doing your homework in advance will make it easier for you in the long run.

And one last thing: Do not work with any technology providers that are stingy about custom modifications. Your business is unique, and you will have to tweak some features to satisfy your unique needs. I call that the “last-mile” service. Most data projects that are labeled as failures ended up that way due to a lack of custom fitting.

Conversely, what we call “good” service providers are the ones who are really good at that last-mile service. Unless you are comfortable with a one-size-fits-all, pre-made — but cheaper — toolset, always insist on customizable solutions.

You didn’t think that this whole omnichannel marketing was that simple, did you?

 

Brands Need to Keep Engaging – Don’t Just Stop Because of Crisis

Among thousands of businesses these past two-plus weeks, many of us have effectively handed our marketing decisions over to finance and accounting. Which means, if you’re not producing an immediate revenue gain, you’re probably being cost-reduced to the bone, if not entirely out of work. Such is the illiquid, flash-frozen effect of COVID-19 on our economy. We’ve lost more U.S. jobs in three weeks than we did over the entire span of the Great Recession.

Cash is in a crunch, and though the Fed may be doing everything possible to keep our economy afloat (will it work?), we likely will remain in limbo until a public health victory is apparent. That could be months. It may take even longer to resume growth – and who knows how business and consumer behavior may have changed by then? We are in extraordinary times – and it’s only prudent to recognize this.

It’s time to take stock of what we do on behalf of our brands and clients, to immediate effect. There is much work to do.

Marketing Must Continue … With Prudence

  • Every pharmacy, drug store, food store, and big-box retailer – and the agencies that support them – should proactively communicate store safety measures, and elevate “conveniences” such as shop-online-and-pick-up-in-store to the preferred method of distribution. This is an opportunity to build consumer and brand trust.
  • For financial marketers, the need to connect with consumers right now regarding savings, budgeting tools, and capital preservation should be a high priority. Make it happen.
  • On television, I’ve seen the messages of optimism from the likes of Walmart, Toyota, and Ford. (Post your inspired ad in the comments section below to share, please.) We need these messages right now. Beyond our own mortality, we will emerge on the other side of this. Brands need to be megaphones for hope and empathy. And certainly not insensitivity.
  • Perhaps TV spending is too steep for many brands’ budgets. In my email inbox, my favorite restaurants offer meals-to-go, my coffee house enables virtual tips for unemployed baristas and healthcare workers, and nonprofit organizations are postponing their live fundraising events with an online ask for the here and now. Needs don’t stop; in fact, the chronic has become acute. For those of us who can afford to help, there’s a collective mood to give. There are reasons to keep relevant communication appropriately flowing to audiences.
  • My previous post addressed data quality. Let me repeat: all those mobile and data visitors to your sites right now must not go unrecognized. Ensure you have a data and tech plan to identify (perhaps in the form of free registration), analyze, and engage accordingly.
  • Respond to the Census. Yes, do it for democracy. But we in the marketing business also know how invaluable Census data is to the economy, and the strategies we map for our brands.

So, yes, we’re all facing a flash freeze. And marketing-as-normal needs to be re-calibrated. So let’s re-calibrate … show our CFOs the likely payback, and let’s get going.