A/B tests can improve conversions on an ecommerce site. There are many testing platforms — search Google for “A/B tests.”
To run a test, identify a key conversion element, such as a product description. Half of your traffic sees version A (typically the unchanged control group) and half sees version B. The version with the higher conversion rate is the winning element to adopt for all traffic.
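The traffic split can be sketched in a few lines. This is a minimal, hypothetical example; the `visitor_id` identifier and hash-based bucketing are assumptions, not any specific platform's method:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    # Hash the visitor ID so the same visitor always sees the same version.
    # An even digest gets control A; an odd digest gets test B (~50/50 split).
    digest = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16)
    return "A" if digest % 2 == 0 else "B"
```

Hashing, rather than a random coin flip on each page view, keeps the assignment stable across a visitor's session so each shopper sees a consistent experience.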
A/B tests are typically easy to conduct. Depending on your traffic, you can see results in as little as a few hours or days. There are a few challenges, however, including knowing what to test and deciding when the data is conclusive.
What to Test?
Ecommerce companies, in my experience, can test button colors, fonts, product descriptions, images, promotions, and many other elements. Never test two elements at the same time unless they are a combination. For example, if you test button colors, do not also test a new product description, as you would not know which change drove the results. However, you could test the button color together with different wording on the button.
Develop an overall matrix for your tests. Here is an example.
- Brainstorm all potential elements to test.
- Determine the type of test, such as a design test, a messaging test, or a promotion test.
- Identify the pages. Figure out where to perform the test, such as the home page, product page, category page, or contact us page.
- Create benchmarks. Pull the average daily traffic and the average conversion rate before the test to serve as benchmarks.
- Prioritize the tests. Using the benchmarks, consider running different tests on different pages, assuming there is enough traffic.
|Elements to Test|Type of Test|Pages|Benchmark Traffic|Benchmark Conversion Rate|
|---|---|---|---|---|
|Button Color|Design|Product Page|5,000|1.20%|
|Feature Product A|Product|Home Page|20,000|0.85%|
|Feature Product B|Product|Home Page|20,000|0.85%|
|New Category|Category|Home Page|20,000|0.85%|
|Price Discount|Promotion|Category Page|15,000|1.00%|
|Return Policy Banner|Design|Product Page|5,000|1.05%|
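The prioritization step can also be sketched in code. Here is a hypothetical representation of a few rows from the matrix above, ranked by expected daily sales (benchmark traffic times benchmark conversion rate):

```python
# A few rows of the test matrix, as a list of dicts (hypothetical data).
matrix = [
    {"element": "Button Color", "page": "Product Page",
     "traffic": 5_000, "conv_rate": 0.0120},
    {"element": "Feature Product A", "page": "Home Page",
     "traffic": 20_000, "conv_rate": 0.0085},
    {"element": "Price Discount", "page": "Category Page",
     "traffic": 15_000, "conv_rate": 0.0100},
]

# Rank tests by expected daily sales on the target page, highest first.
ranked = sorted(matrix, key=lambda t: t["traffic"] * t["conv_rate"],
                reverse=True)
for test in ranked:
    print(test["element"], round(test["traffic"] * test["conv_rate"]))
```

Ranking by expected sales volume is only one reasonable heuristic; a merchant might instead prioritize by ease of implementation or by strategic importance.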
Once you’ve developed a matrix and started the test, make sure you have enough data for a decision. This can be tricky. The first step is to identify the necessary amount of traffic. To do this, multiply your conversion rate by the number of sessions you receive per day, week, or month. This provides your typical number of sales. That number should be large enough to compare the A and B versions meaningfully.
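The arithmetic above can be sketched as:

```python
def expected_sales(conversion_rate: float, sessions: int) -> float:
    # Typical sales for the period: conversion rate × number of sessions.
    return conversion_rate * sessions

# Example: a 1.2% conversion rate on 5,000 daily sessions yields about
# 60 sales per day.
print(expected_sales(0.012, 5_000))
```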
A winning test that produces a sales lift of at least 10 percent is generally sufficient to switch from the control to the test. But for some tests, an increase of even 5 percent can be enough, especially for high-volume businesses.
Here are a few examples.
Test 1: A small amount of traffic with a large difference in results. A week of testing with 2,000 sessions produces these findings:
- Control group A: 1.5 percent conversion rate.
- Test group B: 1.75 percent conversion rate.
- Difference in conversion rates: 16.7 percent, representing five sales.
- Conclusion: Enough data to decide to change to test group B.
Test 2: A high-traffic website with a small difference in results. A day of testing with 50,000 sessions produces the following results:
- Control group A: 1.25 percent conversion rate.
- Test group B: 1.30 percent conversion rate.
- Difference in conversion rates: 4 percent, representing 25 sales.
- Conclusion: While the percentage change is small, the high traffic makes it worth making the switch.
Test 3: Only 1,000 sessions in the test, with the following results.
- Control group A: 1.5 percent conversion rate.
- Test group B: 1.7 percent conversion rate.
- Difference in conversion rates: 13.3 percent, representing two sales.
- Conclusion: While the difference is sizable, it is risky to rely on given the small amount of traffic. Continue running the test, or discontinue it.
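The lift figures in these examples can be reproduced with a small helper, assuming the quoted session counts apply to each group:

```python
def lift(control_rate: float, test_rate: float, sessions_per_group: int):
    # Relative lift of the test over the control, plus the absolute
    # difference in sales that the lift represents.
    relative = (test_rate - control_rate) / control_rate
    extra_sales = (test_rate - control_rate) * sessions_per_group
    return relative, extra_sales

# Test 1: 1.5% vs. 1.75% with 2,000 sessions per group → ~16.7% lift, ~5 sales.
print(lift(0.015, 0.0175, 2_000))
# Test 3: 1.5% vs. 1.7% with 1,000 sessions per group → ~13.3% lift, ~2 sales.
print(lift(0.015, 0.017, 1_000))
```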
There is a calculation to determine required sample sizes. It relies on "statistical power": the likelihood that a test will detect an effect if there is an effect. Testing platforms use the calculation to inform sample sizes. For example, see Optimizely's "A/B Test Sample Size Calculator." However, a bit of gut feeling and analysis is enough for most companies, in my experience.
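For readers who want the underlying math, here is the standard two-proportion sample-size formula. This is a generic statistical sketch, not the method of any particular calculator:

```python
import math

def sample_size_per_group(p1: float, p2: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    # Sessions needed in each group to detect a change in conversion rate
    # from p1 to p2 at ~95% confidence with ~80% statistical power.
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a small lift (1.25% → 1.30%) requires a very large sample,
# well over 700,000 sessions per group.
print(sample_size_per_group(0.0125, 0.0130))
```

The takeaway matches the examples above: the smaller the lift you want to detect, the more traffic you need, which is why low-volume stores should focus on tests likely to produce large differences.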
Large ecommerce stores can test many elements daily. Smaller retailers with low traffic can still benefit by testing on multiple pages, testing for longer periods, and prioritizing what to test. (Testing on multiple pages can speed up a decision because you do not have to wait for visitors to reach, say, a certain product page or category page.)
Reviewing offers by competitors can help determine priorities. For example, if five out of seven competitors display a “free shipping” banner, consider testing it on your site.
In short, testing can improve conversion rates. Most leading ecommerce merchants continually test and tweak to drive sales. The work involves choosing the elements, running the tests, and analyzing the results.