Subject Lines in Sheep’s Clothing: A Go or a No?

I’m sure you’ve seen it, if not used it yourself: Marketing emails wearing a friendly disguise, boasting “RE:” or “FW:” in their subject lines, usually with a real person’s name in the from line rather than a publication or company name.

Obviously, the objective is to give the recipient a sense of familiarity, or curiosity about whether this is a correspondence they were previously involved in, thus hopefully prompting an open.

I can tell you that in my three years copywriting for the Target Marketing Group’s marketing department, I’ve used subject lines like these several times, as have most of my colleagues—and to be perfectly honest, we’ve seen impressive results in terms of response and conversion rates.

But many marketers feel strongly that this method is simply too dishonest, erring on the devious rather than the clever side of crafty. Integrity and ethics are never negligible factors in what we do, even when a high open rate seems like the most important goal.

After some consideration, our marketing department decided to stash away the “RE”s and “FW”s for a while. Still, I thought I’d check out the stats for a few of these emails, to see if it was at all possible that the benefits outweighed the risks. Here’s what I found at a glance:

Subject line                                                Registrants   Open rate   Unsubs   Spam Complaints
1. Re: Your Direct Marketing Day @ Your Desk Registration           340         28%      372                 6
2. Re: 2014 email marketing plans                                   336         18%      309                 7
3. FW: Reasons to register                                           15         21%       90                 4

The first two examples were used in promotions for free virtual conferences, while the third promoted a paid workshop. You can see that the open rates were rather good, especially the first of the three. You wouldn’t know from the table, but I can tell you that these registration numbers were among the highest of any email in these events’ respective campaigns.

Now for the bad news: Example No. 2 had the highest number of unsubscribes and spam complaints in its campaign by far. Nos. 1 and 3 were not the “winners” in this respect, but they were certainly too close to the top to be in the clear. We also received a small handful of, shall we say, colorfully phrased (so colorful they’d have been bleeped on network television) criticisms from offended readers.

So, what’s the conclusion? Does the fact that all of these emails were huge successes purely in terms of conversion mean that a large majority of recipients were fans, or at least not bothered by the tactic? Or are those unsubs, spam complaints, or simply the principle of the thing too significant to handwave?
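
One way to think about it is to put rough numbers on both sides of the ledger. Here’s a minimal sketch that nets an assumed value per registrant against assumed costs per unsub and per spam complaint; every dollar figure is a hypothetical placeholder, not a number from our campaigns.

```python
# Hedged sketch: net-value scoring for "RE:/FW:" subject lines.
# The per-registrant value and per-unsub/per-complaint costs are
# hypothetical placeholders -- plug in your own program's numbers.

subjects = [
    {"name": "Re: Your Direct Marketing Day @ Your Desk Registration",
     "registrants": 340, "unsubs": 372, "complaints": 6},
    {"name": "Re: 2014 email marketing plans",
     "registrants": 336, "unsubs": 309, "complaints": 7},
    {"name": "FW: Reasons to register",
     "registrants": 15, "unsubs": 90, "complaints": 4},
]

VALUE_PER_REGISTRANT = 5.00   # assumed value of one registration
COST_PER_UNSUB = 0.50         # assumed lifetime value lost per unsubscribe
COST_PER_COMPLAINT = 25.00    # assumed deliverability cost per complaint

for s in subjects:
    net = (s["registrants"] * VALUE_PER_REGISTRANT
           - s["unsubs"] * COST_PER_UNSUB
           - s["complaints"] * COST_PER_COMPLAINT)
    print(f'{s["name"]}: net value ${net:,.2f}')
```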

As of now, I treat them as I treat wasabi: Use sparingly and with extreme caution. I’d love to hear what you think, or if you’ve done some testing with it yourself!

The Problem With A/B Testing

This week we set up an elaborate A/B test on subject lines. I liked “How 1.75 Billion Mobile Users See Your Website” and my client manager liked “Business Cards are No Longer the First Impression.” We learned long ago not to be a focus group of two, but our testing also proves something else I’ve been saying for years—A/B tests do not stand alone.

For our Mobile Users campaign, we dropped in an actual screenshot of every recipient’s website as viewed on an iPhone 6, because we knew this level of personalization could add a sizeable bump to engagement. It’s one thing to tell a recipient their website looks awful on a mobile device; it’s another thing to show them.

At the end of the campaign, we will have sent under 10,000 emails, but before we sent the balance, we felt it was important to know which of the two subject lines would perform better. All of us want the very best chance of success, so this was a necessary step: ensuring our subject line would foster a higher open rate.

For our initial test, we sent 600 emails, half with each subject line. One subject line performed better on opens; the other performed better on clicks to the form. That means we now have a new question: Is it better to get more people to open and see the message, or to get fewer people to open, but to have accurately set their expectations about what was inside so they would click?

The open rate differed by more than 10 percent, and the CTR by about 2 percent.

Should I stop my analysis here and answer the only question I started with (which subject line should we use), or would it be better to take a look at other factors and try to improve the overall success in any way we can? For me, the problem I see with many marketers’ A/B tests is they ask one question, answer it, and then move on. In fact, many email automation systems are set up in precisely this manner: send an A/B test of two subject lines, and whichever performs better, use it to send the balance. What about the open rate and the CTR combined? Isn’t that far more important in this case (and many others)? Let’s take it one step further: what about the open rate, CTR and form completion rate combined? Now we’re on to something.
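
To make that concrete, here’s a minimal sketch of what judging on the combined funnel might look like; the counts below are made-up illustrations, not our test’s actual numbers.

```python
# Hedged sketch: judging an A/B test on the whole funnel, not opens alone.
# All counts are hypothetical illustrations, not our campaign's numbers.

variants = {
    "How 1.75 Billion Mobile Users See Your Website":
        {"sent": 300, "opens": 120, "clicks": 18, "forms": 6},
    "Business Cards are No Longer the First Impression":
        {"sent": 300, "opens": 90, "clicks": 21, "forms": 9},
}

for subject, v in variants.items():
    open_rate = v["opens"] / v["sent"]
    ctr = v["clicks"] / v["sent"]          # clicks per email sent
    completion = v["forms"] / v["sent"]    # forms completed per email sent
    print(f"{subject!r}")
    print(f"  open rate {open_rate:.1%} | CTR {ctr:.1%} | "
          f"form completion {completion:.1%}")

# Picking the winner on end-to-end completions can reverse the verdict
# a simple open-rate test would give.
best = max(variants, key=lambda s: variants[s]["forms"] / variants[s]["sent"])
print("Winner by funnel completion:", best)
```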

There are many factors at work here: time of day, past engagement, lifecycle and more. The subject line is a good place to start, but I can’t afford to ignore what we’ve gleaned from other campaigns.

This then becomes the hardest part of testing—be that A/B or multivariate—isolating what we’ve actually learned, and that usually means I cannot analyze just this one campaign. It must be an aggregate.

For our campaign, I took our test results and put those into a spreadsheet of 2014 campaign results and started to look for patterns. We’ve all read Thursday mornings are good (as an example), but does that hold true for my list? Were my open rates affected by time of day, by date, by day, by business type, by B-to-C vs. B-to-B? These are all analytics we track because we’ve found each of these does, in fact, influence open rate.
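
If you keep a campaign log like that spreadsheet, something like this sketch is all the pattern-hunting takes; the file name and column names are assumptions about how you’d record your own results.

```python
# Hedged sketch: looking for open-rate patterns across many campaigns.
# Assumes a campaign log ("campaigns_2014.csv" here) with one row per send
# and columns like send_day, send_hour, business_type, and audience.
import pandas as pd

df = pd.read_csv("campaigns_2014.csv")
df["open_rate"] = df["opens"] / df["delivered"]

# Does "Thursday mornings are good" hold for *this* list?
by_day = df.groupby("send_day")["open_rate"].mean().sort_values(ascending=False)
print(by_day)

# Slice by every factor we track, since each has shifted open rate for us.
for factor in ["send_hour", "business_type", "audience"]:  # audience: B-to-B vs. B-to-C
    print(df.groupby(factor)["open_rate"].mean())
```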

So, yes, we did learn which of the two subject lines performed better for opens, but what we also learned is that a repeat of the test to another 600 recipients on Tuesday morning instead of Thursday morning resulted in almost exactly opposite performance.

A/B tests can be hard. If they were easy, everyone would do them. Our simple one-time test was not enough information to make decisions about our campaign. It took more testing to either prove or disprove our theories, and it took aggregating the data with other results to paint the full picture.

We did find a winner: an email with a good subject line to get it opened and a good presentation of supporting information inside that led recipients to a form they actually completed, all sent on the right day, at the right time, from the right sender.

While you’re not privy to all of the data we have, based on the subject lines alone, which do you prefer?

Are Autoresponders Killing Email Marketing?

Two events in the same week have triggered a flurry of email unsubscribing on my part. First, a change in my spam filtering provider is permitting more unwanted emails than usual to leak through. And second, a conversation with a long-time colleague and regular reader of my blog, who wondered if marketing automation software is being abused to the point where we’re drowning in email and ignoring it more than before.

A smart strategy used by many direct marketers is the invitation to opt in for emails. Often there is a carrot dangled in front of prospects to opt in, such as a few dollars off an order, a free report, or the promise of being the first to be informed; other subscribers opt in as part of a purchase transaction. Of course, legit direct marketers always assure privacy and provide a link in their emails to unsubscribe.

As an outcome of this strategy, marketing automation software companies report impressive stats about autoresponder welcome email performance:

  • The average open rate for welcome emails is a whopping 50 percent, making them significantly more effective than email newsletters.
  • Welcome messages typically have four times the open rate and five times the clickthrough rate of other bulk mailings.
  • Subscribers who receive a welcome email show more long-term engagement with a brand.

What these stats don’t reveal is the long-term effect of high-frequency autoresponder emails sent by marketing automation software.

Of course, opens, clicks and unsubscribe rates are good early warnings if you’re emailing too much. If your unsubscribe rate is 0.5 percent, according to various email deployment firms, your performance is great. Even 1 percent is good. Some email providers suggest industry unsubscribe norms are acceptable at 2 percent.
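
The arithmetic behind that gut check is simple: unsubscribes divided by messages delivered, graded against those norms. A quick sketch, with made-up counts:

```python
# Hedged sketch: grading an unsubscribe rate against the norms cited above.
def grade_unsub_rate(unsubs, delivered):
    rate = unsubs / delivered
    if rate <= 0.005:
        verdict = "great (<= 0.5%)"
    elif rate <= 0.01:
        verdict = "good (<= 1%)"
    elif rate <= 0.02:
        verdict = "within industry norms (<= 2%)"
    else:
        verdict = "too high -- rethink frequency or content"
    return rate, verdict

rate, verdict = grade_unsub_rate(unsubs=180, delivered=25_000)  # made-up counts
print(f"unsub rate {rate:.2%}: {verdict}")
```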

But I wonder how many of us have given up on the step to unsubscribe and simply delete. Is there a tipping point where enough is enough?

One day last week I made an inquiry for a direct mail list from the automated website of a mailing list organization. I gave them my email (a fair trade for quickly accessing counts). Obviously, the organization’s automated system knew I had run some counts. I didn’t order that day, but suggested to a client that they place an order. An hour later, an autoresponder asked if I needed help with my unfulfilled order.

Smart, I thought.

But then the next day, another autoresponder email arrived. While I was a bit annoyed to see yet another email not even a full 24 hours after I didn’t purchase, this one presented an offer of 15 percent off my order.

Smarter, I thought.

Until I realized that, had I ordered the day before, I would have paid full price (and would never have known, because no doubt the marketing automation software would have placed me in a totally different sequence of follow-up messages). Such is a marketer’s challenge with autoresponders: Annoy me by sending them repeatedly, or too soon; surprise me with a 15 percent discount, but tick me off when I realize I would have paid more than needed had I ordered on the spot. Oh, and embarrass me when I contact the client to say “hold off on ordering!” And we wonder why shopping carts go abandoned. Marketers have trained people not to order on the spot because, if we wait, there may be a better deal.

Poor email content, little purpose and too-high email frequency aren’t the fault of marketing automation software. They’re the fault of the marketers who are abusing a program that regularly, and systematically, automates the email marketing contact cycle.
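
If the abuse lives in the configuration, so does the fix. Here’s a minimal sketch of the kind of guard I have in mind: a frequency cap an autoresponder checks before firing. The function and thresholds are hypothetical, not any particular vendor’s API.

```python
# Hedged sketch: a frequency-cap guard an autoresponder could check before
# sending. Names and thresholds are hypothetical, not a real vendor's API.
from datetime import datetime, timedelta

MIN_GAP = timedelta(hours=24)      # never follow up sooner than this
MAX_PER_WEEK = 3                   # hard weekly ceiling per recipient

def may_send(send_log, now=None):
    """send_log: list of datetimes of prior sends to this recipient."""
    now = now or datetime.now()
    if send_log and now - max(send_log) < MIN_GAP:
        return False  # too soon after the last touch
    last_week = [t for t in send_log if now - t <= timedelta(days=7)]
    return len(last_week) < MAX_PER_WEEK

log = [datetime.now() - timedelta(hours=3)]  # a follow-up sent 3 hours ago
print(may_send(log))  # False: respects the 24-hour minimum gap
```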

What do you think? Too many email autoresponders? Poor email content and reason to email? Or are marketers sending email at what seems to be a reasonable pace?

Monitoring clicks, opens and unsubscribes reveals the true answer to these questions. But sometimes one wonders if the relatively inexpensive cost of email marketing is encouraging some marketers to abuse sending email, and that they’re not paying attention to their email marketing metrics.

The Demotion of the Open Rate

For years, marketers have been tracking open rates and using this stat for everything from choosing the best time to send to validating the deliverability of a particular email-automation vendor; and well, everything in between. With more and more email being opened on mobile devices, Gmail caching images, and fewer recipients choosing to download images (perhaps accounting for as much as 40 percent of your audience), the open rate simply isn’t what it used to be—not that it was ever all that accurate.

Charting a high open rate does not necessarily equate to clicks or conversions, but this has always been true. You might have written the most fabulous or enticing subject line and enjoyed a very high open rate, only to have failed to deliver on the message and lost out in the long run.

The Mobile Effect
Mobile devices are lowering the dependability of the open rate for some analysis, too. Most people scan emails on their devices and save only those they wish to read or act upon later. Emails that don’t answer an immediate need, or that are not relevant, may be deleted prematurely and without much recipient consideration. Even with responsive designs, the recipient is less likely to take advantage of an offer on a smartphone than on a tablet or desktop device; it’s simply easier to engage on a bigger screen.

Open Rate Increases
Gmail’s new image caching system automatically downloads images and, for those recipients using Gmail or Google Apps, this can further affect your open rate tracking: your open rate will likely increase. The first open will be tracked correctly by most ESPs, but subsequent (repeat) opens by the same recipient will likely go uncounted, because the cached image is served by Google rather than fetched from your server. Unique opens, unlike total opens, will become more accurate.

As with Gmail and Google Apps, iPhone and iPad devices download images by default. If you’re tracking your stats year over year, this increase in open rates by Gmail and iOS users will affect your ability to accurately assess your campaigns.

You may find that your open rates increase, but click-through rates do not, resulting in lower click-to-open rates.
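
The arithmetic makes the point quickly; the numbers below are made up:

```python
# Hedged sketch, made-up numbers: cached-image "opens" inflate the
# denominator and depress click-to-open rate even when clicks are flat.
opens_before, clicks = 2_000, 300
opens_after = 2_600  # same campaign size, but image caching adds "opens"

print(f"CTOR before caching: {clicks / opens_before:.1%}")  # 15.0%
print(f"CTOR after caching:  {clicks / opens_after:.1%}")   # 11.5%
```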

Best Time to Send
Some email automation systems, such as Variant4, are able to send messages at the same time as the last open from the recipient, and this can be useful, but determining the right time to send based upon open rates alone will be misleading for the reasons stated earlier. When possible, opt for the time of the previous engagement instead: if the open occurred on a mobile device, the click or conversion may have taken place later from a desktop device, and that later moment actually represents the better time for future sends.
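
In code, that preference is just an ordering: take the most recent conversion if you have one, then the most recent click, and fall back to opens last. A minimal sketch, with an assumed event structure:

```python
# Hedged sketch: pick the next send time from the deepest engagement we
# have, falling back to opens only when nothing better exists. The event
# records here are assumptions, not any vendor's data model.
from datetime import datetime

events = [
    {"type": "open", "at": datetime(2014, 6, 5, 7, 40)},    # phone, in transit
    {"type": "click", "at": datetime(2014, 6, 5, 13, 15)},  # back at a desktop
]

PRIORITY = ["conversion", "click", "open"]

def best_send_time(events):
    for kind in PRIORITY:
        matches = [e["at"] for e in events if e["type"] == kind]
        if matches:
            return max(matches)  # most recent event of the best kind
    return None

print(best_send_time(events))  # 2014-06-05 13:15 -- the click, not the open
```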

Ensuring your content is on the mark is more important than ever, as content is the driving force behind clicks and conversions (not opens). Getting your audience to engage will gain you future priority placement in the inbox rather than continued relegation to Gmail’s Promotions tab.

Still Some Value
As undependable as the open rate has become, it still has some value, especially for segmentation and A/B testing of subject lines. Use caution when basing your conclusions on open rate alone, and take the necessary steps to validate your findings through other supporting metrics.

Design Wins
As more email providers download images by default, we as marketers score a major win in the design arena. No longer will we have to design around blocked images and forfeit brand recognition. Our emails will be displayed in the manner we intended all along.

Email’s No. 1 Misunderstood Metric

So you’re sitting around a conference table discussing your company’s email marketing and someone starts talking about the program’s open rate. To the uninitiated, common sense says “open rate” refers to the average percentage of emails that get opened.

But that’s not what it means at all.

An “open” is recorded when the receiving machine calls for graphics from the sender.

With most email inbox providers blocking images by default these days, a lot of email is getting opened and not registering as such.

At the same time, email landing in inboxes using so-called preview panes—those small windows that allow users a glimpse into their email’s content—will register as having been opened whether the receiver read it or not.
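
For the curious, “calls for graphics” usually means fetching a tiny tracking image at a per-recipient URL. Here’s a toy sketch of such an endpoint; it illustrates the mechanism, not any ESP’s actual implementation. Both problems above fall out of it: blocked images mean no fetch and no recorded open, while a preview pane fetches the image whether anyone reads the message or not.

```python
# Hedged sketch: a toy open-tracking endpoint. An "open" is logged when the
# mail client fetches a 1x1 GIF at a per-recipient URL -- which is exactly
# why blocked images hide real opens and preview panes record false ones.
from http.server import BaseHTTPRequestHandler, HTTPServer

PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")  # a valid 1x1 transparent GIF

class OpenTracker(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /open.gif?recipient=abc123 -- log it as an "open"
        print("open recorded:", self.path)
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8080), OpenTracker).serve_forever()  # toy server
```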

So the open rate is useless, right?

Well, not really. While the open rate has been widely criticized—including, at some points, by me—it can be useful as long as it’s used correctly.

While it can’t be measured with even close to 100 percent accuracy, the open rate can serve as a barometric measure.

For example, it can indicate how engaged recipients are with a marketer’s brand. A high open rate means people are making the effort to turn on the graphics in the company’s messages, indicating they’re highly engaged.

Not surprisingly, though, open rates can be misleading. A newsletter I once edited, Magilla Marketing, had a low open rate, but at least one advertiser determined I had a highly engaged readership based on its ad activity.

The issue? Our designers had designed the newsletter so well it was highly readable without the graphics turned on.

In any case, what’s an average open rate? According to email service provider Epsilon, the average open rate across all the industries it tracks for the first quarter of 2011 was 23.3 percent.

However, open rates in the report varied widely from industry to industry, from a low of 14.1 percent for retail apparel to a high of 37.4 percent for financial services.

And like response rates in direct mail, open rates will vary widely from marketer to marketer even within the same industries based on many variables, such as how the list was built, how much email the firm sends, the types of messages it sends and the types of offers.

Nonetheless, industry benchmarks can serve to manage expectations.

Where an email program’s open rate can really be useful, though, is when it changes.

If it’s going up, it means the sender is doing something right and recipients are getting more engaged with the brand.

If it’s plummeting, it means the marketer has probably begun doing something wrong. For example, maybe the marketer just added purchased names to the file (a big no-no) and email inbox providers have begun treating the marketer’s messages as spam.

Also, if opens begin plummeting in addresses managed by a specific ISP, say, Gmail, it means something has happened on Gmail’s end that needs to be investigated.
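
Watching for that kind of ISP-specific drop is easy to automate. A minimal sketch, assuming you log each send with the recipient’s mailbox provider and whether an open was recorded (the records below are made up):

```python
# Hedged sketch: flag a mailbox provider whose open rate falls well below
# the list-wide average -- the "something happened on Gmail's end" signal.
from collections import defaultdict

sends = [  # made-up records: (provider, opened?)
    ("gmail.com", False), ("gmail.com", False), ("gmail.com", False),
    ("yahoo.com", True), ("yahoo.com", False),
    ("outlook.com", True), ("outlook.com", True),
]

totals, opens = defaultdict(int), defaultdict(int)
for provider, opened in sends:
    totals[provider] += 1
    opens[provider] += opened

overall = sum(opens.values()) / len(sends)
for provider in totals:
    rate = opens[provider] / totals[provider]
    flag = "  <-- investigate" if rate < 0.5 * overall else ""
    print(f"{provider}: {rate:.0%} vs. overall {overall:.0%}{flag}")
```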

The open rate can be quite useful. But it needs to be understood, first.