So you’re sitting there lamenting the somewhat lethargic results of recent email campaigns and wondering if a little tweak to your email or landing page would improve things. Maybe change up the subject line – add or remove the organization’s name. Maybe add a photo or two to the message. Maybe change the placement of a link or form or call to action on a landing page. Would that get more conversions, you wonder? There has to be an easy way to bump this up, you think to yourself.
So you post a question to a helpful email list made up largely of folks doing similar work, asking whether a subject line change would help. The feedback is extensive but largely anecdotal. Hardly anyone offers up actual data, and most of the stories are second- or third-hand… “well, a group I used to work with put the name of the organization in the subject line and it helped a little, I think.”
And you think, “well, that’s good, but it’s not exactly the same situation I’m dealing with here. It’s a good story but it doesn’t quite apply to my list.”
My god, man… then why not test it on your list!
The thing is, testing on your own list and pages is pretty darn easy (though we can make it quite complicated and involved, sometimes for good reason and sometimes not), yet it’s rarely done.
Okay, so the ease of testing depends on the tools at hand. If your email system/online CRM is pretty unwieldy, or you just don’t know how to use it, then little tweaks here and there can be massive potholes in the road, not small bumps. If you don’t know how to move things around on your site – or don’t have the staff to do so – then little changes can be tough.
Yet what I’ve found more often is a lack of interest or curiosity about testing. Folks are resigned to the results they get or simply don’t know how to proceed. What to test? How to set it up? Is it worth the time?
Valid questions all. But I think the lack of a learning culture is the bigger culprit. More on that in a future post, but first… what to test.
What to Test
Smashing Magazine ran a great article on A/B testing a while back. It looks mostly at web page testing – landing pages, home pages and the like – and lists some good options for basic testing (a bare-bones sketch of how a variant split works follows the list)…
- The call to action’s (i.e. the button’s) wording, size, color and placement,
- Headline or product description,
- Form’s length and types of fields,
- Layout and style of website,
- Product pricing and promotional offers,
- Images on landing and product pages,
- Amount of text on the page (short vs. long).
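For the mechanics of a split like this, here’s a minimal sketch in Python, assuming a server-side page handler. The `visitor_id`, variant names, and button wording are all made up for illustration, not pulled from any particular tool:

```python
import hashlib

# Hypothetical call-to-action variants for a landing page test.
VARIANTS = {"A": "Donate Now", "B": "Give Today"}

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor ID (a cookie value, for instance) keeps the
    assignment stable, so the same person sees the same version of
    the page every time they come back during the test.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: decide which button text one visitor should see.
variant = assign_variant("cookie-1234")
print(variant, VARIANTS[variant])
```

Most email tools and testing services handle this assignment for you; the point is simply that each visitor gets one version at random and keeps it for the life of the test.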
The Smashing Magazine post goes on to cover a number of case studies and is a great resource in itself. But what about email testing?
Page Testing Statistics and Testing Email
Frankly, a lot of web page tests can take a while to provide statistically significant results; it depends on the amount of traffic coming to the page. Nonprofit pages usually don’t see heavy traffic, so the smaller the tweak being tested, the longer the wait. Depending on the test, though, a statistically significant change can show up within a few days or a week.
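If you want a rough way to check whether a difference in conversion rates is real or just noise, a simple two-proportion z-test covers most basic page tests. Here’s a minimal sketch with made-up traffic numbers, using only Python’s standard library:

```python
from math import erf, sqrt

def two_proportion_z_test(conversions_a, visits_a, conversions_b, visits_b):
    """Compare conversion rates for two page variants.

    Returns the z statistic and a two-sided p-value using the normal
    approximation; a p-value under 0.05 is the usual (if arbitrary)
    bar for treating the difference as real rather than noise.
    """
    rate_a = conversions_a / visits_a
    rate_b = conversions_b / visits_b
    pooled = (conversions_a + conversions_b) / (visits_a + visits_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_a - rate_b) / std_err
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: 40 conversions from 1,000 visits on the current page
# versus 62 conversions from 1,000 visits on the tweaked one.
z, p = two_proportion_z_test(40, 1000, 62, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.025 here, under the 0.05 bar
```

The math isn’t the hard part; getting enough visits on each variant is, which is why traffic matters so much for page tests.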
Email testing is likely to have a quicker/bigger bang for the buck for many nonprofits. There also seems to have been more effort put toward testing and documenting email metrics over the past several years. M+R, among others, does a good bit of testing with client accounts and has been good about documenting results.
Options for email testing run the gamut. Likely places to start include:
- Subject line variations,
- Images in the body of email,
- Font colors and sizes,
- Video in the email – well, an image link to a video player,
- Image-only emails (I’m not a fan of them, but done well they’re worth testing).
Be sure to look at both open rates and click-through rates. One change might raise open rates but do little for click-through rates – and an unchanged click-through rate does not better conversion make.
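As a concrete (and entirely hypothetical) illustration, the sketch below splits a list in half for a subject-line test and then computes open rate, click-through rate, and click-to-open rate for each half. The send counts and results are invented, but they show the pattern to watch for: a better open rate with no movement in clicks.

```python
import random

def split_list(addresses, seed=42):
    """Randomly split an email list into two equal-sized test groups."""
    shuffled = list(addresses)
    random.Random(seed).shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

def email_metrics(sent, opens, clicks):
    """Open rate, click-through rate, and click-to-open rate for one send."""
    return {
        "open_rate": opens / sent,
        "click_through_rate": clicks / sent,
        "click_to_open_rate": clicks / opens if opens else 0.0,
    }

# Invented results: variant A (plain subject line) vs. variant B (org name added).
a = email_metrics(sent=5000, opens=750, clicks=90)
b = email_metrics(sent=5000, opens=900, clicks=92)
print("A:", a)
print("B:", b)
# B opens better (18% vs. 15%) but clicks through at nearly the same rate,
# which is exactly the kind of "win" that deserves a second look.
```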
A recent post on A/B testing that caught my eye was by randfish over at seomoz.org. He did a good job making the case that it doesn’t necessarily pay to test minutiae – the sort of tiny tweaks and bits and pieces described above. The timeline for statistically significant results can be weeks if not longer, and the bounce may be marginal. Small tests are still worthwhile as a learning opportunity, but, as he documents, testing significant changes could pay off sooner and bigger.
Testing: Part of the Whole, Not an Extra
A/B and other testing is often seen as optional: something nice to do if time and resources allow. Staff time is, of course, always a consideration. But that’s exactly what makes testing inherent to the work, not an add-on. If you want to make the best, most efficient use of your time, why not better understand what makes the program – and your time – most effective? Build testing and analysis into the program; make it part of the culture. You’re guaranteed to benefit, and it might just become a bit addictive.