If you have run a website or worked in web design, you might be familiar with A/B testing, or split testing. To optimise your site (perhaps to get more people to buy your products), you serve two slightly different versions of it to different users.
Perhaps your existing site has a blue "Buy Now!" button, and you think a green "Buy Now!" button would work better. You could simply change the button for all of your users, and if conversions increase you might credit the new colour. But the increase could just as easily be caused by a new marketing campaign or a product that launched at the same time.
With A/B testing you instead change the button for only a small percentage of your users (perhaps 5 or 10%) and measure the difference in performance between the blue and green buttons.
Because you know the only variable is the colour of the button, if the conversion ratio in your experimental group (green button) is higher than in your control group (blue button), you should change the button for all users. If it isn't, leave the button blue and try a new test.
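The comparison above boils down to a few lines of arithmetic. Here is a minimal sketch; the visitor and conversion numbers are made up for illustration:

```python
def conversion_rate(conversions, visitors):
    """Fraction of visitors who converted."""
    return conversions / visitors

# Hypothetical day of traffic, split between the two button colours.
control = conversion_rate(120, 4000)       # blue button: 3.0%
experimental = conversion_rate(150, 4000)  # green button: 3.75%

if experimental > control:
    print("green wins: roll out the change to all users")
else:
    print("blue holds: keep the button and try a new test")
```

In practice you would also want enough traffic in each group for the difference to be more than noise, which is why the size of the experimental group matters.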
Continuously running tests one after another allows you to optimise your site to its full potential.
A/B testing can be just as effective for improving your customer experience. Here are five easy ways to implement A/B testing in your contact centre so that you give the optimal experience to your callers and agents.
1. Implement Percentage Based Routing
In a previous Friday Feature post we looked at the Percentage Based Routing applet which enables you to route a percentage of calls in different directions. Decide what percentage of callers should be in your experimental group (perhaps 5 or 10%) and implement the applet.
The call plan on either side of the applet should be identical as you only want to test one change at a time.
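The routing logic in step 1 can be sketched in a few lines. This is a hypothetical illustration of how percentage-based routing works in principle, not the actual applet's implementation:

```python
import random

def route_call(experiment_share=0.10):
    """Send a fixed share of incoming calls down the experimental
    call plan; everyone else follows the unchanged control plan.

    `experiment_share` is the fraction of callers in the experimental
    group (10% here).
    """
    if random.random() < experiment_share:
        return "experimental"
    return "control"

# Over many calls, roughly 10% land in the experimental group.
groups = [route_call() for _ in range(100_000)]
share = groups.count("experimental") / len(groups)
print(f"experimental share: {share:.2%}")
```

Random assignment like this is what makes the comparison fair: each caller has the same chance of seeing either call plan, so the groups differ only in the one change you are testing.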
2. Create a falsifiable hypothesis
In the example above the falsifiable hypothesis was "The green button will increase the conversion ratio". It can be shown to be true or false. Examples in your call plan might be "Fewer IVR options will reduce the abandonment rate" or "Asking for customer IDs will improve customer satisfaction survey results".
3. Build the right reports
To prove or falsify your hypothesis you'll need to record data on both your control and experimental groups. Whatever technology you are using, you should be able to build a report focused on a specific call queue. In our example we'll be looking at the abandonment rate for calls in each of the two groups. Because the only difference between them is the number of IVR options, we will be able to prove or falsify our hypothesis.
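The report itself only needs one figure per group. As a sketch, assuming your reporting tool can export call records tagged with their group and whether the caller abandoned (the records below are invented):

```python
# Hypothetical export: (group, abandoned) pairs for each call.
calls = [
    ("control", True), ("control", True),
    ("control", False), ("control", False),
    ("experimental", True), ("experimental", False),
    ("experimental", False), ("experimental", False),
]

def abandonment_rate(calls, group):
    """Share of calls in `group` that were abandoned."""
    in_group = [abandoned for g, abandoned in calls if g == group]
    return sum(in_group) / len(in_group)

for group in ("control", "experimental"):
    print(f"{group}: {abandonment_rate(calls, group):.0%} abandoned")
```

With real volumes you would run this per hour or per day, but the shape of the report is the same: one abandonment rate for the control queue and one for the experimental queue.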
4. Amend your Experimental Call Plan
Now you can change your experimental call plan to test your hypothesis. It is essential to test only one hypothesis at a time. Reducing IVR options and asking for customer IDs might both improve your statistics, but you will never know the relative benefit of each.
5. Study your reports. Quickly.
Depending on the volume of calls coming into your contact centre, you should get meaningful results very quickly. Run your report every hour to see if you can observe a trend. Once you are confident that you have proved or falsified your hypothesis, you can either roll out your change to the control group and run a new test, or simply run a new test.
If by the end of the first day you cannot detect a difference between the control and experimental groups, you should treat the hypothesis as falsified: you have not shown that the experimental group achieved the change you predicted.
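"Confident that you have proved or falsified your hypothesis" can be made concrete with a standard two-proportion z-test: it asks whether the gap between two rates is larger than chance alone would explain. A minimal sketch, with hypothetical day-one numbers:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test on rates success_a/n_a vs success_b/n_b.

    Returns the z statistic and a two-sided p-value; a small p-value
    means the difference is unlikely to be chance.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures: abandoned calls out of total calls per group.
z, p = two_proportion_z(90, 1000, 60, 1000)  # 9% vs 6% abandonment
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("the difference is unlikely to be chance")
else:
    print("treat the hypothesis as falsified for now")
```

Low-volume contact centres will need to run the test for longer before the p-value settles, which is the statistical version of the "end of the first day" rule of thumb above.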
Web designers for busy sites will hammer through many tests in a single day, perhaps hundreds. You should aim for a programme of continual testing that optimises your call plan, resulting in better customer and agent satisfaction.
Do you run A/B testing on your call plan? What are the best and worst tests you have run? Who is responsible for A/B testing in your business?
We hope this post has been useful. If so, please subscribe to the blog and share it with your network. For more information on how you can manage A/B testing from your internet browser, why not request a demonstration?