Analyzing your test results to see how your pricing, images or copy performed is possibly the most exciting moment of an A/B test, and also the most important. The decisions you make from A/B tests have a significant impact on your business, sales, profit and/or conversion rate - whatever the goal of your test is.
When a test completes, your Splitly account will show four charts like the ones below:
The charts are:
- Average daily sessions
- Average daily sales
- Average conversion rate
- Average daily profit
The results are displayed as average daily values because this is how Amazon groups its data for each of your variants.
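As a concrete illustration, each variant's chart value is just the mean of its daily figures from those Amazon reports. The short sketch below shows one way such averages could be computed; the column names and numbers are made up for illustration and are not Splitly's actual data format.

```python
import pandas as pd

# Hypothetical daily report for two variants (made-up numbers, not real Splitly data).
daily = pd.DataFrame({
    "variant":  ["A", "A", "A", "B", "B", "B"],
    "sessions": [40, 55, 47, 52, 60, 49],
    "sales":    [3, 5, 4, 6, 7, 5],
    "profit":   [21.0, 35.0, 28.0, 54.0, 63.0, 45.0],
})

# Conversion rate for each day, then the average of every daily metric per variant.
daily["conversion_rate"] = daily["sales"] / daily["sessions"]
averages = daily.groupby("variant")[["sessions", "sales", "conversion_rate", "profit"]].mean()
print(averages)
```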
In the example above, the goal was to boost the average daily profit, so the corresponding chart is highlighted in green. Whatever your test's goal, the green highlight indicates what you are testing and gives you the overall result for that goal.
However, it is still important to check the other charts too. A sudden increase in sessions does not necessarily lead to an increase in sales, so it is always worth running through all of the charts to make sure everything is in order.
The statistical significance values are displayed alongside each variant. Statistical significance here is the probability that a variant is over- or under-performing the original. We usually recommend running a test until it reaches >90% statistical significance; the remaining 10% buffer allows for some degree of uncertainty, but also means a test can be ended more quickly.
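If you are curious where a number like that comes from, the sketch below estimates the probability that a variant's conversion rate beats the original using a simple Bayesian comparison. It is only an illustration with made-up session and sales counts; Splitly's own calculation may differ.

```python
import numpy as np

def prob_variant_beats_original(sessions_a, sales_a, sessions_b, sales_b, draws=100_000):
    """Estimate the probability that variant B's conversion rate beats the original (A),
    using Beta(1, 1) priors and Monte Carlo samples of each posterior."""
    rng = np.random.default_rng(seed=0)
    posterior_a = rng.beta(1 + sales_a, 1 + sessions_a - sales_a, draws)
    posterior_b = rng.beta(1 + sales_b, 1 + sessions_b - sales_b, draws)
    return (posterior_b > posterior_a).mean()

# Made-up counts: 600 sessions / 48 sales for the original, 580 sessions / 70 sales for B.
significance = prob_variant_beats_original(600, 48, 580, 70)
print(f"Probability that variant B outperforms the original: {significance:.1%}")
# A result above 0.90 would clear the >90% threshold mentioned above.
```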
Finding a winner
In the example above the goal was to increase profit. Variant B (21.88) was the clear winner in this test, almost doubling the profit for this product! We saw better metrics across the board: more sessions, better conversions, increased sales and a larger profit. In this case, the product was underpriced, which is a common mistake sellers make. Many sellers believe that to get more sessions and sales, the price should be dropped to encourage more customers to buy. However, customers are sometimes willing to pay more for a product because a higher price signals higher perceived quality, so raising the price can actually lead to more sales. That is probably what happened in this case. The only way to know for sure is by running an Amazon A/B test.
When a test is unsuccessful
Results will not always be so dazzling! Your original listing might perform better than any variant, and you may feel you have wasted your time, but that is far from the case: you have learned what does not work.
As Thomas A. Edison once said, "I have not failed. I've just found 10,000 ways that won't work."
Knowing what didn't work is valuable because you can apply that knowledge to other products and tests. After running several tests and finding winners, you will begin to spot patterns in which kinds of variants increase or decrease your profits. Overall, an unsuccessful test will help you become a better seller in the long run.
My test is taking too long!
Sometimes an A/B test can drag on for several weeks or months before the results are significant enough to declare a winner. This usually happens for one of two reasons:
- Your sales volume is too small
- The variants are too similar to each other
If you are only making a handful of sales each month (roughly fewer than 20), A/B testing isn't worth focusing on just yet. It works best once your product has gained some initial traction and you have enough sales volume to tell how different variants perform, as the rough sketch below illustrates.
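To give a feel for why low sales volume is a problem, here is a back-of-the-envelope sketch using a standard two-proportion sample-size formula. The numbers and the choice of formula are assumptions for illustration only, not how Splitly decides when a test is significant.

```python
import math

def days_per_variant(daily_sessions, base_cr, relative_lift,
                     z_alpha=1.645, z_beta=0.84):
    """Approximate days of data needed per variant for a two-proportion z-test
    (~90% confidence, ~80% power). Purely illustrative assumptions."""
    p1 = base_cr
    p2 = base_cr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    sessions_needed = (
        (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        / (p1 - p2) ** 2
    )
    return sessions_needed / daily_sessions

# ~20 sales a month at a 10% conversion rate is roughly 200 sessions a month, or about 7 a day.
print(round(days_per_variant(daily_sessions=7, base_cr=0.10, relative_lift=0.20)))
# Prints roughly 430 -- over a year of data per variant, far too long to be practical.
```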
In most cases, though, a test drags on because the variants are too similar to each other. If that is the case, we will most likely recommend that you abort the test - our recommendation will be shown on the test results page. You can then re-run the test on the same product, making sure the differences between variants are big enough to affect how customers behave on Amazon - for example, by raising or lowering the price of your listing.