Email marketing statistics: six misinterpretations

The reports you get back after sending out emails contain piles of gorgeous numbers, charts and tables. Those statistics tell you plenty about the performance of your emails.

There are many articles that offer guidance on what to measure and analyse (and how), and a swathe of literature dealing with the ins, outs, convolutions and myths associated with open rates.

But here are a few of the less well-known statistical traps that might cause you to draw false conclusions about your email program...

Received/delivered emails are neither

If you check your reports, chances are that the first thing they tell you is how many emails went out the door and how many were "delivered," "received," or some other word that suggests arrival in the recipient's inbox. The difference is the number that were "returned to sender," i.e. the bounces.

Unfortunately, that "delivered" number is misleading. It does not take account of those emails that are deleted silently by various anti-spam mechanisms in place around the Internet.

Nor does it account for those emails that get delivered to the recipient, but diverted into their spam or junk folder.

If you want a better understanding of your real delivery rate to inboxes, you'll need to use some of the delivery monitoring services out there.

Trumpeting low unsubscribe rates

You often see case studies where a minuscule unsubscribe rate is held aloft as evidence of a successful email strategy. Maybe it is. Maybe it isn't. The assumption is that people who no longer want our emails use the unsubscribe link to stop them from arriving, so low unsubscribe rates imply people want the emails.

Unfortunately, many recipients happily use the "this is spam" or "this is junk" button to ban an email from their inbox. Especially where they don't trust the sender to honor an unsubscribe request.

In fact, I could make an argument that says an unsubscribe request is a sign of trust. Is your unsubscribe rate low because people love your email? Or is it because they're using delete and junk buttons to get rid of you?

Don't treat low unsubscribes as a measure of recipient interest; use more active metrics like opens, clicks or other actions instead.

Inactive subscribers that aren't

A good piece of email marketing advice is to pull out those people on your list who never open your emails and send them a different type of email to try and get them interacting again. Identifying, then re-engaging or eliminating, inactive subscribers is a useful task that can help your bottom-line success.

An inactive subscriber is commonly regarded as one that never opens an email over some appropriate period of time. Knowing that, you have to be careful how you label people.

We know from how open rates are measured that people blocking the display of images in emails, or seeing text-only emails, do not register an open.

Better, then, to extend our definition of inactive to those who never open or click. Even then, if some links in your email (for example, in the text version) are not tracked, you will still get some people tagged as inactive who aren't: those who still read your emails, but don't register an open and never follow a tracked link.
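As a minimal sketch of that extended definition, here's how you might flag subscribers with neither a tracked open nor a tracked click inside some window. The event log structure and addresses are hypothetical; real ESP exports will differ.

```python
from datetime import datetime

# Hypothetical event log: subscriber -> list of (event_type, timestamp).
events = {
    "alice@example.com": [("open", datetime(2024, 5, 2)), ("click", datetime(2024, 5, 2))],
    "bob@example.com":   [("click", datetime(2024, 4, 20))],  # text-only reader: clicks, never "opens"
    "carol@example.com": [],                                  # no tracked activity at all
}

def inactive_subscribers(events, since):
    """Return subscribers with neither an open nor a click on or after `since`."""
    inactive = []
    for subscriber, log in events.items():
        active = any(kind in ("open", "click") and when >= since for kind, when in log)
        if not active:
            inactive.append(subscriber)
    return inactive

cutoff = datetime(2024, 3, 1)
print(inactive_subscribers(events, cutoff))  # only carol: bob's click keeps him "active"
```

Note how counting clicks rescues the text-only reader ("bob") who would be wrongly tagged inactive under an opens-only definition.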

So don't assume everyone tagged as inactive is genuinely uninterested in your emails. That's why it never does to simply delete inactive email addresses from your list.

You must give them a chance to (re)confirm their interest through one or more emails deliberately designed to get them to take some measurable action (like signing up to the list again, or clicking on a tracked link unique to that recipient).

Unimpressive forward rates

Your reports may include a number for the percentage of recipients who forwarded your email to others. It feels great when people consider your emails valuable or interesting enough to forward around. But that number is usually depressingly low.

That number is misleading, though. For many services and software, forwards are only measured when a recipient uses the formal "forward to a friend" feature that might be built into your emails. If recipients simply hit the forward button in their email client, it won't be measured. Yet that's how most people actually forward email messages.

So chances are your reports underestimate the real number of forwards. Check and see how your system or software measures a forward before despairing over the results.

Averages masking important variation

Most reports present numbers as percentage averages for the entire list. Unless you dig deeper, these averages can mask some important variations in behavior between different parts of your list. Let's say you send out an email to a list made up of three equally sized groups of subscribers (we'll define those groups later). Here are the open rate results...

Group 1: 80%
Group 2: 0%
Group 3: 30%

Your average open rate is about 37%. That's not a number likely to set any alarm bells ringing. But if you didn't explore the open rate by group, you'd never know there was a clear problem with Group 2.

Suppose Group 2 was made up of all subscribers with a Yahoo email address. Bingo! By not relying on averages, we discover a delivery problem at Yahoo. Suppose Group 2 was "email addresses obtained by a co-registration deal at Site X." Bingo! We discover that the quality of addresses they give us is pretty bad.

And we can ask why Group 1 responded so well. If we think we know why, we can repeat the trick in a later mailing to that group alone.
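To make the arithmetic above concrete, here's a sketch with hypothetical, equally sized groups showing how a reasonable-looking overall average coexists with a dead segment:

```python
# Hypothetical per-group send and open counts matching the example above.
groups = {
    "Group 1": {"sent": 1000, "opened": 800},  # 80%
    "Group 2": {"sent": 1000, "opened": 0},    # 0% -- the hidden problem
    "Group 3": {"sent": 1000, "opened": 300},  # 30%
}

total_sent = sum(g["sent"] for g in groups.values())
total_opened = sum(g["opened"] for g in groups.values())

# The list-wide average looks unremarkable...
print(f"Overall open rate: {total_opened / total_sent:.0%}")  # 37%

# ...but the per-group breakdown exposes Group 2 immediately.
for name, g in groups.items():
    print(f"{name}: {g['opened'] / g['sent']:.0%}")
```

The same breakdown works for any segmentation your reports support: by domain, by acquisition source, by signup date.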

Underestimating active reader numbers

If you review the open rates of a sequence of emails, they probably look something like this:

25%, 24%, 26%, 22%, 25%, 27%, 26%...etc.

You might conclude you have an "active" readership of about 25%. That might even be true. But more often than not, it's not the same people opening each email. Instead of just looking at open rates on an email-by-email basis, ask a different question.

Ask, "how many people opened at least one email in the past month?" (Use whatever timeframe makes sense for your sending schedule.) You may be startled by the results. When I did this for my newsletter, I discovered an average open rate of just over 40% masked the fact that 65% of subscribers opened at least one of the last four issues.

Some of that difference is just the dynamics of life. People are busy, on vacation, short of time this week (but not next) to read your email. But some of it is due to variation in your audience's wants and needs, and can give you vital clues as to how you might split up your list to better target this audience.

If one bunch of addresses only ever opens when you promote red widgets and another only when you promote blue widgets, then it's time to put both groups into separate lists and only send them the content you know generates a response.

It's a question of context

You can avoid all these misunderstandings (and others) simply by applying a critical eye to the numbers presented to you in reports, asking of each one: what does this number actually measure, and what does it leave out?

Related reading:

Email chicken and eggs
Culling your email lists

Need more email marketing guidance? Try the email newsletter.

First published: Aug 2007