What Most E-Commerce Brands Get Wrong About A/B Testing — And How You Can Get It Right
Most retailers are familiar with how A/B testing can help them optimize elements of their website to increase engagement, conversions and, ultimately, revenue. From product recommendations to banner images to discount levels, there is an almost infinite number of elements that can be tested to improve the experience of each site visitor.
However, just as the art and science of e-commerce are becoming more sophisticated, best-practice thinking and the technology that supports testing are evolving too. To stay on top of this evolution, here are three ways e-commerce retailers can get the greatest impact from their A/B testing efforts:
1. Test complete customer experiences together, not just isolated elements.
At its core, and in the context of e-commerce, A/B testing gives retailers the ability to pit two variations of a single element (an image, a personalized product recommendation, a call to action) against each other. For example, A/B testing makes it possible for a retailer to identify which product recommendation variation achieves the most conversions on a specific page.
What's less common, and increasingly useful for e-commerce marketers, is the ability to run more complex, multivariate tests that assess complete experiences. This strategy compares multiple elements across one or more pages of a site to analyze how those elements jointly influence a desired outcome. It helps retailers understand how to lead consumers on a personalized journey, optimizing each element they're shown along the way. Capabilities like these are fast becoming table stakes for e-commerce testing and personalization platforms.
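Before layering on multivariate designs, it helps to see the statistics a simple A/B comparison rests on. Here is a minimal Python sketch, with invented traffic and conversion numbers, of the classic two-proportion z-test for deciding whether two variants' conversion rates genuinely differ; real testing platforms build multivariate designs and sequential corrections on top of this basic idea:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: variant A converted 120 of 2,400 visitors,
# variant B converted 150 of 2,400.
z, p = two_proportion_z_test(120, 2400, 150, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be noise; here the result is borderline, which is exactly the kind of outcome that motivates the deeper segmentation discussed next.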
2. Use segmentation capabilities to analyze test results at a deeper level; don’t just pick a test winner.
It's no longer enough in e-commerce to see only the overall outcomes of A/B or multivariate tests. E-commerce teams should be able to explore how their different customer segments or affinity groups performed on any test. Without this detail, it's easy to miss critical, counterintuitive insights.
For example, it's easy to assume a test of two banner variations was inconclusive because no clear winner emerged across all site traffic. Segment the audience further, however, and you might find that returning visitors who have made at least two previous purchases, perhaps a small portion of the traffic, respond at a much higher rate to a particular message and are more likely to make a repeat purchase. Without the right e-commerce testing approach and platform, that insight is missed.
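To make the idea concrete, here is a small Python sketch using entirely invented numbers: the two banner variants tie when all traffic is pooled, but segmenting by visitor type reveals that returning visitors respond far better to variant B:

```python
# Hypothetical aggregated results of a banner test.
# Key: (variant, segment) -> (conversions, visitors)
results = {
    ("A", "new"):       (420, 8000),
    ("B", "new"):       (384, 8000),
    ("A", "returning"): (60,  2000),
    ("B", "returning"): (96,  2000),
}

# Pooled across all traffic, the variants look identical...
for variant in ("A", "B"):
    conv = sum(c for (v, _), (c, _) in results.items() if v == variant)
    n = sum(n for (v, _), (_, n) in results.items() if v == variant)
    print(f"{variant} overall: {conv / n:.2%}")

# ...but per-segment rates tell a different story.
for (variant, segment), (conv, n) in sorted(results.items()):
    print(f"{variant} / {segment}: {conv / n:.2%}")
```

Both variants convert 4.80% overall, yet returning visitors convert 4.8% on B versus 3.0% on A: a clear winner for that segment that a pooled readout would hide.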
Similarly, it's important to have the testing tool of choice fully and automatically integrated with your entire product catalog to help pinpoint how variations in content elements directly affect the sales of individual products and brands.
This is when A/B testing for e-commerce can really come into its own. For example, if visitors see a promotion for a particular brand, such as Nike, it's natural to assume they're going to buy more products from that brand. However, that isn’t always the case. Your tests should go beyond tracking changes in conversion rates or average order values to surface the specific products and brands customers are buying as a result of the tested content.
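A catalog-integrated analysis like this can be as simple as grouping attributed order lines by variant and brand. The following Python sketch uses made-up order data to show the shape of such a breakdown, surfacing which brands each variant actually drove sales for:

```python
from collections import Counter

# Hypothetical order lines attributed to each test variant:
# (variant shown, brand purchased, revenue)
orders = [
    ("A", "Nike",   90.0), ("A", "Adidas", 70.0), ("A", "Nike", 120.0),
    ("B", "Nike",   90.0), ("B", "Puma",   60.0),
    ("B", "Adidas", 70.0), ("B", "Adidas", 85.0),
]

# Revenue per brand, per variant: this shows *what* customers bought
# as a result of each variant, not just whether they converted.
revenue = Counter()
for variant, brand, amount in orders:
    revenue[(variant, brand)] += amount

for (variant, brand), total in sorted(revenue.items()):
    print(f"{variant} / {brand}: ${total:.2f}")
```

In this invented data, the Nike promotion (variant A) does lift Nike revenue, but variant B quietly sells more Adidas, exactly the kind of cross-brand effect a conversion-rate-only readout would miss.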
3. Reduce revenue risk from testing — don’t let underperforming variants slow you down.
A longstanding concern about A/B testing among retailers is "burn to learn": the risk of revenue loss during the testing process. That's because a traditional A/B test requires showing both the winning and the losing variants to fixed proportions of visitors for the full duration of the test.
However, sophisticated machine learning algorithms can now minimize this risk automatically, shifting site traffic toward the highest-performing test variations in real time as soon as a winner starts to emerge. As a result, low-performing variations have minimal negative impact on your business.
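One common family of algorithms used for this kind of adaptive allocation is the multi-armed bandit. The Python sketch below uses Thompson sampling with two invented "true" conversion rates to show how traffic drifts toward the stronger variant as evidence accumulates; it is an illustration of the general technique, not any particular platform's implementation:

```python
import random

# Invented underlying conversion rates for two variants.
true_rates = {"A": 0.05, "B": 0.065}

# Beta(1, 1) priors: one pseudo-success and one pseudo-failure each.
successes = {v: 1 for v in true_rates}
failures = {v: 1 for v in true_rates}

random.seed(42)
shown = {v: 0 for v in true_rates}
for _ in range(20000):
    # Sample a plausible conversion rate for each variant from its
    # posterior, and show whichever variant samples highest.
    choice = max(true_rates,
                 key=lambda v: random.betavariate(successes[v], failures[v]))
    shown[choice] += 1
    # Simulate whether that visitor converted, and update the posterior.
    if random.random() < true_rates[choice]:
        successes[choice] += 1
    else:
        failures[choice] += 1

print(shown)  # most of the simulated traffic should end up on variant B
```

Early on, traffic splits roughly evenly; as the posteriors sharpen, the weaker variant is shown less and less, which is precisely how revenue exposure to a losing idea is capped.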
This also gives retailers more freedom to explore adventurous ideas. For example, say a retailer has two similar variations of product detail page recommendations, plus a third, unconventional variation with no historical data to suggest it will have any impact. Knowing that revenue loss is automatically minimized makes teams far more willing to test that lesser-known variation. After all, success sometimes comes from being brave enough to try something different.
With online competitors increasing daily, every opportunity to optimize the shopper experience and increase the bottom line counts. This means getting the most from A/B testing is only going to increase in importance.
Matthew Levin is the global head of marketing at Nosto, a leader in e-commerce personalization and artificial intelligence for digital commerce.