One of the most famous examples of the Obama campaign’s reliance on analytics was its A/B testing of fundraising emails. In one case, by extrapolating the gifts generated by each version of an email across the full list (there were actually 18 versions), the campaign determined that the most-effective version raised $2.6 million more than the least-effective version. It’s a perfect illustration of why A/B testing is so valuable.
So how can you set up your own A/B tests on a budget? Some email tools, such as MailChimp and Emma, come with a built-in feature for split testing. But regardless of email platform, here are the basic steps to set up an A/B test quickly:
- Export the email list from the existing email platform and open it in Excel.
- Sort the list alphabetically by email address. This in effect randomizes the list.
- Scroll to the bottom to find how many email addresses are in the list.
- Divide that number in half, then take the second half of the list and cut and paste it into a new worksheet.
- There should now be two worksheets, each with half of the original list. Save these as separate files in whatever format can be used to import into the email platform.
- Create two new lists in the email platform: Test A and Test B.
- Import the first list so it is added to Test A. Import the second list so it is added to Test B.
- The original email list is now split randomly between two separate test lists.
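For those comfortable with a little scripting, the same sort-and-split routine can be done outside of Excel. Here is a minimal sketch in Python; the `split_list` helper, the sample addresses, and the `test_a.csv`/`test_b.csv` filenames are all hypothetical stand-ins for your own exported list:

```python
import csv

def split_list(emails):
    """Sort addresses alphabetically (the rough stand-in for
    shuffling described above), then cut the list in half."""
    ordered = sorted(emails)
    midpoint = len(ordered) // 2
    return ordered[:midpoint], ordered[midpoint:]

# Sample addresses stand in for the exported master list.
master = ["zoe@example.com", "amy@example.com",
          "ben@example.com", "cal@example.com"]
test_a, test_b = split_list(master)

# Write each half to its own file for import into the email platform.
for filename, half in (("test_a.csv", test_a), ("test_b.csv", test_b)):
    with open(filename, "w", newline="") as f:
        csv.writer(f).writerows([addr] for addr in half)
```

The two CSV files can then be imported into the Test A and Test B lists exactly as in the steps above.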
Now that the test lists are in place, the next step is to determine what test to run on the email. Example tests include:
- Subject lines (infinite possibilities here)
- Call to action (“Donate now” vs. “Contribute”)
- Personalization (“Dear Judy” vs. “Friends”)
- Day sent
- Time sent
- Sender name (An individual vs. a brand)
- Frequency (Daily vs. weekly vs. every two weeks vs. monthly)
These tests can become a full-time job, so be realistic about the amount of time available for maintaining test lists, creating different versions, and analyzing the results.
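When it comes to analyzing the results, a simple way to check whether a difference between Test A and Test B is real or just noise is a two-proportion z-score. This is a generic statistical sketch, not anything specific to a particular email platform, and the donation numbers below are made up for illustration:

```python
from math import sqrt

def compare_rates(conversions_a, sent_a, conversions_b, sent_b):
    """Compare two conversion rates with a two-proportion z-score.
    An absolute z above roughly 1.96 suggests the difference is
    unlikely to be chance at the usual 95% confidence level."""
    rate_a = conversions_a / sent_a
    rate_b = conversions_b / sent_b
    pooled = (conversions_a + conversions_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / se
    return rate_a, rate_b, z

# Hypothetical results: 120 donations from 5,000 sends of version A
# vs. 90 donations from 5,000 sends of version B.
rate_a, rate_b, z = compare_rates(120, 5000, 90, 5000)
```

A quick check like this helps avoid declaring a winner from a difference that would disappear on the next send.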
As A/B tests become more valuable, you can look at expanding to three separate lists, or even more if you have a large enough master list.
What do you think?
What are some lessons you’ve learned from running A/B tests? What other ideas do you have for testing?