Any website usability study usually turns up a number of usability problems. There is often debate within an organisation about the best solution to each problem, with nobody really knowing which is optimal. Rather than letting whoever shouts the loudest get their own way, a better approach can be to test 2 solutions in a live environment: whichever performs better is clearly the superior solution. Welcome to split A/B testing!
Buridan's ass and A/B testing
Have you heard the story of the donkey that stood midway between 2 equally appealing stacks of hay? It spent so long trying to decide which one to start eating first that the poor animal starved to death without moving a single inch. This paradox, first discussed by Aristotle, is known as 'Buridan's ass' and is an often-discussed psychological phenomenon.
If you're managing a website you might face similar situations when you need to decide which of 2 different designs to opt for. Nowadays, you need to continuously improve and evolve your site by making small frequent adjustments. But how do you know which change will have the highest impact on the customer experience?
Split A/B testing is a way of finding out which changes actually improve your users' experience. It provides a controlled method of measuring the effectiveness (or not) of alterations to your site. It's often used for small tweaks (e.g. 'Is this style of heading clearer than the original?') but can also be used to test bigger, wholesale changes (e.g. 'Is this new 1-click checkout process better?'). In essence, it involves running 2 different versions side by side to see which is more effective.
A typical A/B scenario
You suspect your users aren't finding the 'Proceed to checkout' button, which is hidden below the fold of the page, and that this is causing people to leave your site before making a purchase. You've created a new design that you feel is more effective, but you'd like to know for sure.
To A/B test your new page, you serve the regular page to, say, 90% of your visitors as usual, while a randomly selected 10% are shown your new design. Then you sit back, pour yourself an ice cold gin and tonic, wait and watch your web statistics.
If your new page has the positive effect you suspect, you should see an improvement in the 10% group, as measured by conversion rates: proof that you should publish the more successful page to all visitors.
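The traffic split and conversion tracking described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular tool's API; the function names and the 90/10 split are assumptions for the example.

```python
import random

# Running tally of visits and conversions for each variant
counts = {"A": {"visits": 0, "conversions": 0},
          "B": {"visits": 0, "conversions": 0}}

def assign_variant(split_b=0.10):
    """Randomly assign a visitor: 10% see the new design (B), the rest see A."""
    return "B" if random.random() < split_b else "A"

def record_visit(variant, converted):
    """Log one visit, and whether it ended in a purchase."""
    counts[variant]["visits"] += 1
    if converted:
        counts[variant]["conversions"] += 1

def conversion_rate(variant):
    """Conversions as a fraction of visits for a variant."""
    c = counts[variant]
    return c["conversions"] / c["visits"] if c["visits"] else 0.0
```

In practice you'd persist the assignment (e.g. in a cookie) so returning visitors keep seeing the same variant, rather than re-rolling on every page view.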
Advantages of split A/B testing
There are a number of benefits to A/B testing:
- Low risk approach
- Cheaper than other methods such as focus groups
- Provides proof
- Invisible to most of your users
- Great way to 'test run' new designs before full roll-out (to avoid negative surprises on launch day)
- Can solve internal disputes
Disadvantages of split A/B testing
A/B testing isn't always suitable. Some of its disadvantages include:
- New designs might have to 'wear in' before you can measure their real performance (visitors' initial response might be negative because they're used to the old solution)
- You can only compare 2 versions with a single factor that differentiates both designs (for more factors / variations you need to deploy multivariate testing which is more difficult to analyse)
- It takes technical know-how to set up and analyse the results
- You're testing in a live environment so external factors might have an impact on the outcome
- If your site is really in bad shape, you'll still need to do a proper overhaul
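As a rough illustration of the analysis know-how mentioned above, here is a sketch of a two-proportion z-test, one common way (among others) to check whether a difference in conversion rates is statistically meaningful rather than random noise. The figures in the usage note are made up for the example.

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is B's conversion rate significantly higher than A's?

    conv_a/conv_b: number of conversions; n_a/n_b: number of visitors shown each page.
    Returns (z, one-sided p-value); a small p-value (e.g. < 0.05) suggests B really is better.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))        # 1 - standard normal CDF
    return z, p_value
```

For example, 90 conversions from 9,000 visitors on A against 15 from 1,000 on B gives a p-value of about 0.07: suggestive, but not yet conclusive, so you'd keep the test running.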
In a nutshell
A/B testing is powerful stuff and a useful method for quickly comparing 2 different designs. However, bear in mind that it's no substitute for getting proper feedback from your users. Only that will give you the whole picture, straight from the ass's mouth.