Email software popularity: 5 lessons for your list
Who cares what software people use to read your emails?
If you have a “safe” email design, you know each message displays gracefully whether viewed in Outlook 2007 or Gmail.
The only exception is when people use a mobile device, but you can get around that by assuming they’ll save your mail to view on a desktop later.
But there is value to knowing exactly where your emails are viewed.
Getting the stats
You may be wondering how on earth you can tell whether people are viewing your messages on AOL or Apple Mail. It’s only recently that the right tools have become available.
Here’s a snapshot summary of the 50 or so different clients and webmail services my list used to view a recent newsletter issue:
83% viewed the email using desktop software (like Outlook), 15% using a webmail service (like Yahoo! Mail) and some 2% using a mobile device.
1. Compare with benchmarks
The first thing you can do is compare the numbers with benchmarks to see how your list differs and then think about why that might be so:
- Email client popularity stats by Campaign Monitor (June 2009)
- Email client statistics for B2B and B2C by Fingerprint (Sept 2008)
- Recipient platform preferences for B2B and B2C by Pivotal Veracity (Oct 2009)
It seems I have an unusually large number of people viewing on Outlook 2007 (even for a B2B list), Apple Mail and Gmail.
This suggests the list is attracting a business audience that updates its software faster than most, one that is perhaps a little more first-mover and tech-savvy than average, with a significant minority of design/creative individuals.
Supporting evidence for that interpretation comes from the data provided by MailboxIQ on the browsers used to view my messages in webmail environments:
Both Firefox and Chrome are more popular with subscribers than you’d expect given broader stats on browser market share.
Such knowledge might influence my content strategy going forward. It also worries me that my email’s design is pretty simplistic: what must those cutting-edge and creative folk think?
2. Compare with your list
Software stats are only recorded when an open/render is recorded (more on that later). Your standard campaign reports should also tell you which subscribers “open” an email.
So you can calculate the percentage of recorded opens associated with a webmail address (i.e. how many of your gmail.com addresses recorded an open) and then compare this with the results from your software/webmail distribution stats.
Let’s take the newsletter issue used to produce the software stats above.
7.9% of opens recorded by my ESP were from subscribers with a gmail.com address, 4.2% from subscribers with a yahoo.com address and 3.2% from subscribers with a hotmail.com address.
All these numbers are higher than the equivalent figures produced by MailboxIQ, suggesting that people are signing up with a webmail address but then downloading that mail into another viewing environment (such as a desktop client).
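The domain-share side of that comparison is simple to compute from your ESP's open report. Here's a rough sketch in Python; the addresses and the `webmail_open_shares` helper are illustrative, not from any real export:

```python
from collections import Counter

# Hypothetical export: addresses that registered an open for one campaign
opens = ["a@gmail.com", "b@gmail.com", "c@yahoo.com",
         "d@corp.example", "e@corp.example", "f@corp.example",
         "g@hotmail.com", "h@corp.example", "i@corp.example", "j@corp.example"]

def webmail_open_shares(opened_addresses,
                        domains=("gmail.com", "yahoo.com", "hotmail.com")):
    """Fraction of recorded opens coming from each webmail domain."""
    total = len(opened_addresses)
    by_domain = Counter(addr.rsplit("@", 1)[-1].lower()
                        for addr in opened_addresses)
    return {d: by_domain[d] / total for d in domains}

shares = webmail_open_shares(opens)
# Compare each figure (e.g. shares["gmail.com"]) with the share your
# rendering-analytics tool reports for that webmail service
```

You'd then set each domain's share alongside the corresponding figure from your software/webmail distribution stats.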
If the results were reversed, it would suggest many people are signing up with business domain addresses but actually viewing the mail in a consumer webmail environment: either directly, or because their email applications are actually powered by Gmail and the like.
[Gmail recently announced that over 20 million users do this.]
In reality, both these activities are happening. Your stats simply tell you which is happening more often.
The clear message is this: you can’t make assumptions about viewing environments based on the domain name of the email address.
B2B marketers may also be surprised at the volume of webmail users on their list (15% in my case), suggesting they need to pay just as much attention to webmail deliverability issues (particularly sender reputation) as their B2C counterparts.
3. Trend spotting and mobile strategies
If you follow your stats through time, you can pick up software trends that perhaps reflect changes in the makeup of your list. Most importantly, this data helps you decide on whether (and how) to tackle the issue of mobile email.
I have a small list and only 2% use mobile devices to view my emails. For now, it makes little sense to develop a fully-fledged mobile email strategy, with mobile-ready landing pages etc. But what if that number were 10%?
4. Design testing
Of course, if you haven’t got a “safe” email design, information on subscriber software use lets you know exactly what display environments you should be testing.
Perhaps you have a B2C list and never worried too much about Outlook 2007? Or a B2B list and ignored Windows Live Hotmail? Now you know if you were right to do so.
Even if your design is “safe”, there are things to learn. Most design testing tools do not include software/browser combinations. In other words, you get a single screenshot of how your design looks in Gmail. And maybe it looks just great.
But does it look great in Gmail when viewed in IE8, Firefox, Chrome and Safari? For example, I never really bothered to worry about Google’s Chrome browser. But now that I see 17% of my Gmail users also use Chrome, maybe it’s worth investigating.
5. Customer-level design
What if you could associate a particular email software or webmail service with an individual email address? Could you then begin sending emails optimized for that particular display environment?
The possibilities are many. For example,
- Including detailed “add to address list” instructions that are a perfect match in terms of vocabulary and instructional steps
- Changing subject lines to fit the likely available space (especially for webmail users)
- Streamlined versions for mobile users
- Dumping inline CSS for an external stylesheet or “CSS in head” approach for those environments that support it (saving bandwidth costs)
The danger here is that people sometimes switch between software or webmail services. But you could build in rules: if someone records the same software over a certain time period, then you can feel safe sending them future emails customized for that software.
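A minimal sketch of such a rule, assuming you can export per-open client data as (timestamp, client) pairs; the thresholds and client names here are my own illustrative choices:

```python
from datetime import datetime, timedelta

def stable_client(open_events, min_opens=3, window_days=90):
    """Return the client name if every open recorded in the recent
    window used the same software; otherwise None (send the default).

    open_events: (timestamp, client_name) pairs, e.g. exported from
    a rendering-analytics tool.
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [client for ts, client in open_events if ts >= cutoff]
    if len(recent) >= min_opens and len(set(recent)) == 1:
        return recent[0]
    return None
```

A subscriber who keeps flipping between, say, Outlook and the Gmail web interface would return None and simply get your safe default design.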
Theoretically you could do this anyway for the webmail domains on your subscriber list. But as we’ve already seen, the domain name in the email address does not necessarily tell you where the subscriber actually reads their email.
Obviously, you’d need to test to see if creating such customized versions was justified by the results. But the potential is clear, especially when combined with other targeting technologies, such as trigger emails.
A note on measurement issues
As mentioned, a recipient needs to “open” an email for the tools to capture data on the software that recipient is using. So no data is recorded where no open is registered.
If we assume that people’s propensity to activate images is independent of the software they use, then this technical problem is irrelevant: everyone is equally underrepresented. But there are still issues to take into account when interpreting software distribution numbers:
1. The assumption isn’t necessarily true. Recent data from MailChimp, for example, showed that Gmail users tend to engage more with email than other webmail users (i.e. they are more likely to open email). So maybe the stats overestimate Gmail use.
2. Some display environments don’t have the facility to display (tracking) images at all. So, for example, certain mobile devices will be heavily underrepresented in the stats.
3. One-off deliverability problems can skew the results. If you trigger a block at Yahoo.com, well, the number of people viewing your email using Yahoo.com is…um…likely to be low.
If you trigger a block at Postini, then corporate users see less of your email than webmail users. The result: a false impression of how many people use software like Outlook.
Keep those issues in mind. In particular, you might want to average numbers over several campaigns so that short-term or one-off delivery problems don’t bias the stats too much.
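That averaging is trivial to do once you have per-campaign figures. A sketch, assuming each campaign's report gives you a dict of client shares (the structure is my assumption, not a specific tool's output):

```python
def average_shares(campaign_shares):
    """Average each client's share across several campaigns, so a
    one-off block at a single provider doesn't dominate the picture.

    campaign_shares: one dict per campaign, mapping client -> share.
    """
    clients = {c for shares in campaign_shares for c in shares}
    n = len(campaign_shares)
    return {c: sum(s.get(c, 0.0) for s in campaign_shares) / n
            for c in clients}
```

A campaign where, say, a Yahoo! block drove that client's share to near zero would then be diluted by the campaigns around it.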