Take the guesswork out of website optimisation with A/B testing
Digital marketing revolves around data: gathering it, interpreting it, and using it to enhance websites and marketing campaigns. A/B testing is a great way to test versions of – or specific changes on – a website, landing page, newsletter or other type of campaign.
What is A/B testing?
A/B testing works by creating two versions of a webpage, app or campaign and testing them against each other. The versions are identical except for one element, and each is shown to 50 percent of users. The goal is to determine, from the resulting data, which version performs better.
The data is analysed to see which variant performs best against the intended goal, allowing you to make data-driven decisions on how to optimise your website.
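One common way to implement the 50/50 split is to assign each user a variant deterministically, so the same person always sees the same version. Below is a minimal sketch of this bucketing approach; the function name, experiment label and user IDs are illustrative, not from any particular tool.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-test") -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing the experiment name together with the user ID means the
    same user always lands in the same variant, and the split across
    many users comes out close to 50/50.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always gets the same answer on repeat visits.
variant = assign_variant("user-42")
```

Because the assignment is a pure function of the IDs, no per-user state needs to be stored to keep the experience consistent.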
For example, let’s say you’re about to send out a newsletter. You want to test which call to action works better: “Visit Our Website” or “View Our Products”. You send half your recipients a version with “Visit Our Website” and the other half a version with “View Our Products”. Once you’ve sent out the two versions of the newsletter and allowed enough time to gather data, you can determine which call to action works better.
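“Determining which works better” ultimately means checking whether the difference in click rates is statistically significant rather than random noise. Here is a minimal sketch using a standard two-proportion z-test; the recipient and click counts are hypothetical numbers for illustration.

```python
from statistics import NormalDist

def ab_test_result(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is the difference between version A's
    and version B's conversion rates statistically significant?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis (no real difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value, p_value < alpha

# Hypothetical: 5,000 recipients per version.
# "Visit Our Website" got 400 clicks, "View Our Products" got 475.
p_a, p_b, p_value, significant = ab_test_result(400, 5000, 475, 5000)
```

If `significant` comes back `False`, the honest conclusion is “no detectable difference yet”, not a winner.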
How do you go about A/B testing?
- Test one variable at a time. There are many elements you can test, such as design, layout, wording, headings and, for emails, subject lines, sender names, etc. But be patient and only test one element at a time. If you test more than one, you won’t know which change is responsible for any difference in performance. A single change may seem insignificant, but even the smallest tweak can have a big impact.
- Pick a dependent variable. Before you run the test, choose one goal that will serve as your primary metric, such as click-through rate or conversions. That way, you can easily judge whether the change succeeded.
- Make the alternative version. This version of the email, webpage, Facebook Ad, Google Ad or whatever you’re testing can be called the “challenger”. It goes head-to-head with the original version (the “control”).
- Split your sample groups. This can be done with audiences you have control over, such as email recipients. Split them randomly into groups of equal size so the results are as accurate as possible.
- Test both variations at the same time. Timing can affect a campaign, whether it’s the time of day, day of the week, month or year. So to keep the test fair, run both versions simultaneously. The exception is when you’re testing timing itself – for example, what’s the best time of day to send out emails?
- Let the test run long enough to produce enough data. Depending on the test and the amount of traffic, results can reach statistical significance within hours; on the other hand, it can take weeks. Generally, the less traffic your site gets, the longer you’ll have to run the test to gather enough data.
- Get quality feedback. Don’t rely on quantitative data alone. Find out the real reasons behind users’ actions, for instance through a poll or survey. Audience feedback may even spark your next A/B testing variable!
- Use the winning result. Once you have gathered enough data, you can choose a winner based on your main goals. If the “challenger” is better, then go with that version. However, stick to your original if it drives better results.
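A rough way to answer “how long is long enough” before launching is to estimate the required sample size up front. The sketch below uses the standard sample-size formula for comparing two proportions; the 3 percent baseline conversion rate, the 20 percent relative lift you hope to detect, and the traffic figure are all assumed values for illustration.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect a relative `lift`
    over the `baseline` conversion rate (two-sided test)."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2
    return int(n) + 1

# Hypothetical: 3% baseline conversion, aiming to detect a 20% relative lift.
n = sample_size_per_variant(0.03, 0.20)
# At, say, 500 visitors/day split 50/50, the test needs roughly n / 250 days.
```

Note how the required sample size shrinks sharply as the lift you want to detect grows: small expected improvements demand much longer tests.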
Should you use A/B testing?
Yes. Guesswork is dangerous. Once you’ve made change after change to your webpage, or any other campaign for that matter, it becomes extremely difficult to track which change caused a drop in performance.
A/B testing helps you or your digital marketing company make careful, results-based changes to the user experience. Improve your user experience, conversion rate and sales by testing the content, form fields, calls to action, layout and imagery on your website.