How to plan a good A/B test
Performing A/B and multivariate testing online is nothing new. The practice has been around for decades: Google engineers reportedly ran A/B tests as early as 2000 to decide on the correct number of search results to display.
So why are companies still ignoring this technique in 2021, arguing that they don’t have enough data or time for a test? Advances in technology combined with changes in online consumer behavior over the past 18 months make A/B testing even more essential than before.
An overview of planning split testing
An A/B test is a methodology for comparing the responses to a control item and a test item. The control item is the way your existing media – whether it is a web page, application page, or page element – currently displays. The test item is the proposed modification of that media you are exploring, such as a different image, different page text, or a combination of changes.
Confusion sometimes arises around test results. An A/B test divides the displayed media among a given set of people: some see the control version while others see the test version. For this reason, marketers sometimes read A/B test results as an absolute verdict that one item is better than the other. In practice, an A/B test shows whether the difference between the choices tested is statistically significant. For example, given a sample of people who were shown a control layout and a test layout, did one layout generate a meaningfully better conversion rate than the other?
Answering this question is why you plan your test through the lens of a hypothesis. A test hypothesis is a statement that, given a normal distribution of data, your change to one control item – the test version – will result in a significant change in customer behavior. The null hypothesis states that there is no significant difference between the control and test items.
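To make the hypothesis concrete, the comparison between control and test can be evaluated with a standard two-proportion z-test: if the p-value falls below your significance threshold (commonly 0.05), you reject the null hypothesis. The conversion counts below are hypothetical, and the helper function is a minimal sketch using only Python's standard library, not the method of any particular testing platform.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether the control (A) and test (B) conversion rates differ
    significantly. Returns the z statistic and a two-sided p-value."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: control converted 120 of 2,400 visitors,
# the test layout converted 160 of 2,400
z, p = two_proportion_z_test(120, 2400, 160, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Here the p-value comes in under 0.05, so the sketch would reject the null hypothesis and treat the test layout's lift as statistically significant.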
Creating a hypothesis also helps you frame the test results in terms of business goals, so marketers can make assumptions and decisions with a clear focus on customer impact. As mentioned in a previous article on analysis errors, managers may be tempted to compare too many UI elements at once. Avoid this mistake, as it results in wasted time and unnecessary costs for little or no value.
Related article: 10 Mistakes To Avoid When Rethinking Your Analytics Strategy
What is a good item for an A/B test?
The items most likely to influence conversion rate optimization are usually good choices for testing. A web page can be tested for copy, or for a combination of a call-to-action button and copy (as long as the control and test items are clearly different).
Email campaigns are also well suited for A/B and multivariate testing. Longer campaigns allow a test to gain enough data points to validate the results: a customer segment that opens emails over a long enough period can generate the data needed to show whether a change to a subject line or an element in the email body performs better.
Digital ads targeted at a specific audience are also good candidates. Adjustments to images or ad copy can be tested, as can the landing page for a given ad.
Related article: How Google Optimize Testing Can Help Improve Customer Conversions
Factors behind good A/B testing
Testing, whether A/B or multivariate, cannot solve every conversion problem, so knowing the potential pitfalls ahead of time will help you decide what a test can and cannot answer.
One factor is the amount of data required for the test to be reliable. It is possible to calculate the minimum amount of test data required. As a rule of thumb, your test sample should equal 10% of your population size: for an email distribution of 7,500 people, 750 would be your test sample. Online calculators are available to help you here.
This rule of thumb assumes an equal split between test and control samples and a normal distribution of data. More advanced formulas based on statistics such as standard deviation are also available to calculate a more precise estimate and to account for other data distribution issues.
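As one sketch of what such a formula looks like, the function below applies the standard two-proportion sample-size calculation at a 95% confidence level (z = 1.96) and 80% statistical power (z = 0.84). The baseline and target conversion rates are hypothetical placeholders; an online calculator encodes essentially the same math.

```python
import math

def sample_size_per_variant(p1, p2, alpha_z=1.96, power_z=0.84):
    """Minimum sample size per variant needed to detect a change in
    conversion rate from p1 (control) to p2 (test), using the standard
    two-proportion formula at 95% confidence and 80% power."""
    p_bar = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical goal: detect a lift from a 5% to a 7% conversion rate
print(sample_size_per_variant(0.05, 0.07))  # → 2210 per variant
```

Note how quickly the requirement grows: the smaller the lift you want to detect, the larger the sample each variant needs, which is exactly the trade-off behind the practical questions in the next paragraph.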
After establishing your test sample size, weigh it against the practical considerations of gathering enough samples. How long does an ad campaign need to run to reach that number? Will enough people see the test email or web page?
The test audience should also be representative of your larger target audience. The analytics tools built into testing platforms, such as Google Optimize’s integration with Google Analytics, can help monitor test quality.
Related article: 7 Factors That Determine Email Deliverability
Test platforms available
A number of test platforms are available. I covered Google Optimize in a previous article. Another commonly used platform is Adobe Target, a direct competitor to Google Optimize 360 at the enterprise level. Like Google, Adobe initially offered Adobe Target within an analytics suite, then later offered it as a stand-alone platform. HubSpot also offers an A/B testing platform through a free software application called A/B Testing Kit; users can download the kit, which includes a statistical significance calculator to estimate sample size. Crazy Egg is a popular webpage A/B testing tool that includes a heat map to display results.
Beyond testing platforms, split test analysis can be performed in an open source language such as R or Python. This approach has the advantage in cases where the sample is divided unevenly or the exposure data is not normally distributed. Both languages are supported by a wide range of advanced statistical libraries. The downside is that the setup requires some planning with a developer, as opposed to the self-service nature of platforms like Google Optimize and Adobe Target.
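As one illustration of that flexibility, a permutation test makes no normality assumption and handles uneven group sizes. The sketch below uses only Python's standard library, with hypothetical conversion data rather than output from any particular platform.

```python
import random

def permutation_test(control, test, n_resamples=2000, seed=42):
    """Two-sided permutation test on the difference in conversion rates.
    Works with unequal group sizes and makes no normality assumption."""
    rng = random.Random(seed)
    observed = sum(test) / len(test) - sum(control) / len(control)
    pooled = control + test
    hits = 0
    for _ in range(n_resamples):
        # Reshuffle the pooled outcomes and re-split into the same group sizes
        rng.shuffle(pooled)
        diff = (sum(pooled[:len(test)]) / len(test)
                - sum(pooled[len(test):]) / len(control))
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_resamples  # share of resamples at least as extreme

# Hypothetical, unevenly split conversion data (1 = converted, 0 = not):
# control saw 20 conversions out of 500, test saw 30 out of 300
control = [1] * 20 + [0] * 480
test = [1] * 30 + [0] * 270
print(permutation_test(control, test))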
Whichever approach you choose, you should develop a split testing procedure that frames the elements of your website or app according to your business needs. Once you’ve established a regular testing routine, you’ll see how your media improvements can lead to stronger customer engagement.
Pierre DeBois is the founder of Zimana, a digital analytics consulting firm for small businesses. He examines data from web analytics and social media dashboard solutions, then provides web development recommendations and actions that improve business marketing strategy and profitability.