How To Use Email A/B Testing To Improve Conversions

Brands that run A/B tests for every email they send report a 37% higher email marketing ROI than brands that don’t run any A/B tests.

Whether it’s a B2B company sending nurturing drip email campaigns to its leads or an eCommerce brand sending promotional offers, adding experimentation in the form of A/B tests to an email marketing mix is one sure way to achieve more conversions.

But effective email experimentation is intentional. Simply Googling “email A/B testing ideas” and working off that list won’t cut it; you need to do more. So let’s look at a few ways you can get started with email A/B tests that set you up for repeatable email marketing success.

Choosing the right performance metrics 

When it comes to A/B testing emails, most email marketers start thinking in terms of what could get them more opens and clicks. But these top-of-the-funnel engagement metrics rarely reflect the real success of an email campaign.

For example, even if an email experiment (say, on a subject line) gets more opens, that alone doesn’t tell you whether the campaign ended up driving more leads, revenue, or business.

To understand the actual business an email campaign generates, you need to study the entire email interaction funnel. You need to be able to monitor and understand the on-site behavior of your email subscribers once they open and click through from your emails, and then report on the “real” business metrics, like trial signups or demo requests, that those opens and clicks resulted in.

Essentially, you need to see how engagement metrics like opens or clicks from the experiment map to the real performance metrics (such as the conversion rate of the landing page the email click eventually leads to). Merely monitoring the opens and clicks in an email experiment and declaring the version that gets more of them the winner isn’t the right way, as it leaves real conversions out of the experimentation.

Chad S. White (author of the top email marketing book “Email Marketing Rules”) explains this really well:

“Also, who cares if subject line A generates more opens than subject line B if the latter generates more conversions? And who cares if email content A generates more clicks than email content B if the latter produces more conversions? We guarantee that your boss will prefer more conversions.” 

So how do you observe your email subscribers’ on-site behavior to determine the “real” winners from your email A/B testing experiments?

One simple way to do it is via Google Analytics, using UTM parameters. With UTM parameters, you can reasonably establish whether your email was the final touchpoint before the conversion or whether it assisted the conversion. Let’s take an example.

Suppose you run an email experiment where you test two versions of content (each carrying only one link to your landing page). Let’s assume that one version (version A) takes the fear-of-missing-out approach with a limited-time free trial, and the other (version B) takes the social proof approach (customer testimonials, badges, reviews, etc.).

Using UTM parameters for this experiment, you’d create two distinct links for your call to action.

One could be:
convert.com?utm_source=newsletter&utm_medium=email&utm_campaign=q2&utm_content=fomo

And the other could be:
convert.com?utm_source=newsletter&utm_medium=email&utm_campaign=q2&utm_content=socialproof

As subscribers click on your links and visit your website, Google Analytics will capture the data from your UTM-powered links and be able to show you which content theme worked the best.
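If you build these links in code rather than by hand, a small helper keeps the UTM tags consistent across versions. Here’s a minimal sketch in Python; the base URL and parameter values simply mirror the example links above, so treat them as placeholders for your own campaign:

```python
from urllib.parse import urlencode

# Illustrative only: the base URL and parameter values mirror the example links above.
BASE_URL = "https://convert.com"

def utm_link(content_variant: str) -> str:
    """Build a CTA link tagged so Google Analytics can attribute clicks to one email version."""
    params = {
        "utm_source": "newsletter",
        "utm_medium": "email",
        "utm_campaign": "q2",
        "utm_content": content_variant,  # distinguishes version A from version B
    }
    return f"{BASE_URL}?{urlencode(params)}"

print(utm_link("fomo"))         # link for the fear-of-missing-out version
print(utm_link("socialproof"))  # link for the social proof version
```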

You might be surprised to find that although the fear-based email inspired more clicks, it was actually the social-proof-laden email that got more signups. You get the idea, right?

Some email marketing service providers support such tracking out of the box, while others let you build integrations so you can get a fuller picture and accurately track your email campaign’s response.

Forming a good hypothesis

As with any experiment, an email experiment must begin with a strong hypothesis. A hypothesis is always guided by a problem and essentially establishes why you want to run the experiment in the first place.

Writing a hypothesis for your email experiment forces you to look at data (or the “problem” you’re currently struggling with), analyze why you think your experiment will have a positive impact on the conversion rate, and also list out the metrics that will define success.

Conversion optimization expert Craig Sullivan shares a simple hypothesis generation kit in his Medium post:

Here’s another handy online hypothesis generation tool. Simply fill out your details and your data-backed hypothesis should be ready. 

As you can see, simply writing a hypothesis helps you set a solid foundation for an email experiment by stopping you from testing random changes, while at the same time prompting you to test changes based on your email channel’s goals.
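If you’d rather capture each hypothesis in a structured record than in free text, even something as small as the sketch below does the job. (The field names here are purely illustrative, not a standard format; adapt them to whichever template you use.)

```python
from dataclasses import dataclass

@dataclass
class EmailTestHypothesis:
    observation: str      # the data or "problem" that prompted the test
    proposed_change: str  # what you will change in the email
    expected_impact: str  # why you believe the change will lift conversions
    success_metric: str   # the metric that defines the winner

# Hypothetical example, loosely based on the FOMO vs. social proof scenario above
hypothesis = EmailTestHypothesis(
    observation="Trial-offer emails get clicks, but landing-page signups stay flat",
    proposed_change="Swap the FOMO copy for customer testimonials and review badges",
    expected_impact="Social proof should reduce hesitation at the signup step",
    success_metric="Trial signups attributed to the email via UTM-tagged links",
)
```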

Choosing the type of email A/B test to run

I don’t want to get into the specific elements you can test in your experiment, because you can test everything from the sender field and subject line to the copy and layout. Rather, I’d like to discuss what type of element you should test based on your goals.

Essentially, tests of small tweaks, like a different CTA button color or a different image, only help you go from a conversion rate of X to X.2.

But if you routinely see poor conversions from your email campaigns, then going from X to X.2 won’t help. Instead, you should be looking for conversion optimization opportunities that take you from X to 2X. These happen only when you run radical experiments.

The content experiment from the section above, for instance, is a radical experiment: you’re exposing your subscribers to a completely different messaging style from what you currently use. Such radical experiments help you discover your “global maximum”: an approach that’s entirely new to you but can win massive conversions.

Based on your email conversion goals, you can either go with a series of experiments testing small tweaks, or start with a radical experiment and then build on it with a series of smaller tweaks to push the improved conversion rate even higher.

Getting the logistics right

Once you’ve formed the hypothesis for your email A/B test, it’s time to determine the sample size, the duration of the experiment, and how you’re going to split your subscriber base.

When it comes to email A/B testing split types, the most popular one is the 50/50 split. Here you send version A to 50% of your subscribers and version B to the remaining 50%.

Alternatively, you can send version A to 25% of your subscribers and version B to another 25%, then send the winning version (based on the opens or clicks generated) to the remaining 50% of your subscribers.

Some email marketers don’t include the entire subscriber base in their test. For example, if you have a 10-email drip campaign and decide to run an A/B test with an additional 11th email, you could experiment with only 90% of your subscribers, while the remaining 10% aren’t exposed to your experiment at all. Such holdout testing helps gauge the overall effectiveness of your experiment; the optimization team at Pinterest, for instance, often uses a “1 percent holdout group” for its experiments.
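To make the split concrete, here’s a rough sketch that combines the ideas above: it carves out a holdout group first, then assigns the rest to version A, version B, and a remainder that would later receive the winner. The percentages and email addresses are purely illustrative.

```python
import random

def split_subscribers(subscribers, holdout_pct=0.10, variant_pct=0.25, seed=42):
    """Split a list into holdout, version A, version B, and a 'send the winner later' group."""
    shuffled = subscribers[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed so the split is reproducible

    n = len(shuffled)
    n_holdout = int(n * holdout_pct)
    n_variant = int(n * variant_pct)

    holdout   = shuffled[:n_holdout]                                       # never sees the test
    version_a = shuffled[n_holdout:n_holdout + n_variant]                  # gets version A
    version_b = shuffled[n_holdout + n_variant:n_holdout + 2 * n_variant]  # gets version B
    remainder = shuffled[n_holdout + 2 * n_variant:]                       # later gets the winner
    return holdout, version_a, version_b, remainder

groups = split_subscribers([f"user{i}@example.com" for i in range(10_000)])
print([len(g) for g in groups])  # [1000, 2500, 2500, 4000]
```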

In addition to finalizing the split, you should also think about the sample size you’re going to test (this is a good resource about A/B testing sample sizes, by the way). 

In general, most email marketing service providers suggest that you can run winning A/B tests even with small subscriber bases. According to MailChimp, for instance, if you have 5,000 subscribers to test for each of your versions (10,000 in total across versions A and B), you should be good. HubSpot, on the other hand, considers even a subscriber base of 1,000 contacts decent enough to run A/B experiments.

However, if you want a more accurate sample size for your experiment, check out the sample size calculators from these conversion optimization tools.
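If you want a quick back-of-the-envelope figure before reaching for a proper calculator, the standard two-proportion sample size formula works. Here’s a rough sketch at 95% confidence and 80% power; the baseline and expected conversion rates are made-up numbers:

```python
import math

def sample_size_per_variant(baseline_rate, expected_rate, z_alpha=1.96, z_beta=0.84):
    """Rough subscribers needed per version to detect a lift from baseline_rate to expected_rate."""
    variance = baseline_rate * (1 - baseline_rate) + expected_rate * (1 - expected_rate)
    effect = abs(expected_rate - baseline_rate)
    return math.ceil(((z_alpha + z_beta) ** 2) * variance / effect ** 2)

# e.g. a 2% click-to-signup baseline, hoping to detect a lift to 3%
print(sample_size_per_variant(0.02, 0.03))  # 3819 -- roughly 3,800 subscribers per version
```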

Once you know how you’re going to split your subscriber base and how many contacts you’re going to use to run your email A/B experiments, it’s time to identify the stopping point for your test.

Depending on your test’s goals (opens or clicks), your stopping point will differ. According to MailChimp, a test optimizing for opens can find a winner in around 2 hours, while for tests optimizing click rate, it found the ideal duration to be about 12 hours.

Note that your winner based on opens or clicks might or might not be the “final winner,” because, as we saw above, the final winner is the one that drives more conversions, and an email version that gets more engagement doesn’t guarantee higher conversions. In most cases, you’ll need a few days to determine the final winner based on your subscribers’ interaction with your email and their activity on your website.
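Once those conversion numbers are in, a simple two-proportion z-test can tell you whether the difference between the versions is likely real or just noise. A minimal sketch, using made-up signup counts for the FOMO and social proof versions from earlier:

```python
import math

def conversion_z_test(conv_a, sent_a, conv_b, sent_b):
    """Two-proportion z-test on conversions (not clicks) for versions A and B."""
    p_a, p_b = conv_a / sent_a, conv_b / sent_b
    p_pool = (conv_a + conv_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided, normal approx.
    return z, p_value

# Hypothetical results: version A (FOMO) vs. version B (social proof), measured in signups
z, p = conversion_z_test(conv_a=95, sent_a=5000, conv_b=140, sent_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value means B's lift is unlikely to be noise
```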

Building a robust email testing framework

When starting out with email A/B testing, it can be very tempting to test things like the button color or the effectiveness of personalization. Often, such small tweaks move the metrics because of the novelty effect, with subscribers responding to the “newness” in their emails.

But while such instant wins help with generating interest in email experimentation, they do little for long-term email marketing success.

For that, you need to develop an email testing framework that lets you go after your specific email marketing goals instead of spending your testing bandwidth on random changes.

Building an email experimentation framework also helps you document all the email tests you run and their results. The results of your earlier experiments will guide you in planning future email tests. For instance, if you find in one of your email A/B tests that pushing your email CTA button above the fold gets more clicks, you might want to test another color for that CTA button in a follow-up experiment to see if it improves the conversion rate further.

By investing in such a framework, each time you want to test emails (which you should do frequently for higher success), you’ll have a bunch of insights that you’d have already learned from earlier experiments.

This might sound like a lot of work, but even a simple Google spreadsheet would do the job.
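And if a spreadsheet feels too manual, a plain CSV log maintained from a script works just as well. A minimal sketch (the column names, file name, and example values are just suggestions):

```python
import csv
import os
from datetime import date

FIELDS = ["date", "campaign", "hypothesis", "variant_a", "variant_b",
          "primary_metric", "winner", "learning"]

def log_test(path, row):
    """Append one completed experiment to the log, writing a header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_test("email_tests.csv", {
    "date": date.today().isoformat(),
    "campaign": "q2-trial-nurture",  # hypothetical campaign name
    "hypothesis": "Social proof beats FOMO for trial signups",
    "variant_a": "FOMO copy",
    "variant_b": "Testimonials + badges",
    "primary_metric": "trial signups (UTM-attributed)",
    "winner": "B",
    "learning": "More clicks on A, more signups on B; test CTA color on B next",
})
```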

Wrapping it up

So that’s pretty much all you need to know to start running meaningful email A/B tests that actually impact your business’s bottom line. Over to you: do you A/B test emails? If so, how do you approach your hypothesis and find the real winners? Tell us in the comments!
