Website optimisation | 13/02/2019

Our top three takeaways from Optimizely’s webinar series

Posted by Lawrence Greenlee

Since it’s been much colder, I’ve been spending a few lunchtimes tucked up at my desk with a hot chocolate, plugged into a webinar. A few weeks ago we had a listen to DeepCrawl’s conversation with John Mueller, and this month it was Lisa Rohlf’s webinar with Optimizely (you can check out the slides here).

A lot of what I took away was about keeping track: of ideas, of learnings, and of individual experiments themselves.

Idea jar

The webinar focused on the importance of creating a ‘culture of experimentation’: developing a collaborative process that everyone has a say in. Just because one person sets up the tests doesn’t mean they have to think of every test idea. Building up a bank of test ideas, whether it’s specific to clients or sectors, is a great way to get more experimentation underway and continually improve your offering.

As soon as the webinar finished, I got to work on creating an ideas list for website experimentation. This list isn’t just for the website optimisation team either; it’s one I’m hoping will be full of ideas from everyone at Cobb Digital.

Losses and learnings

Cobb Digital offers A/B testing and CRO as part of our website optimisation service; as a result, we’ve got a lot of learnings under our belt. Our only problem? We’ve not been putting them all in one place.

The idea for a Learnings doc has been floated before, at Brighton CRO last year for example, but it’s taken a little while to stick. After I created the ideas sheet, I added a second tab: “LEARNINGS”. Not every test is going to be successful or have a positive impact and, whilst the reason for this might sometimes be obvious, it’s still beneficial to have a structured way of viewing all of this data at once.

The whole picture

This topic is a bit more involved, but brings us back to business KPIs. When it comes to measuring the success of a test, it’s important to see its overall impact on the user journey.

Example: You test a 10% discount on a product. More users buy it, but they’re not buying anything else alongside it. The test is therefore a success in that your original hypothesis was correct, but a loss in terms of your objective to increase overall revenue.
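To put some numbers on that (the figures below are entirely invented, purely for illustration), here’s a minimal sketch of the sums: the product-level conversion lift looks healthy, but revenue per visitor across the whole order tells a different story.

visitors = 10_000  # visitors per variation (illustrative only)

# Control: full price, plus revenue from other items bought alongside it
control = {"units": 800, "unit_price": 50.00, "other_revenue": 12_000.00}
# Variation: 10% discount, more units sold, but less spent on anything else
variation = {"units": 1_000, "unit_price": 45.00, "other_revenue": 5_000.00}

def revenue_per_visitor(v):
    return (v["units"] * v["unit_price"] + v["other_revenue"]) / visitors

print(f"Product conversion lift: {variation['units'] / control['units'] - 1:+.0%}")  # +25%
print(f"Revenue per visitor (control):   £{revenue_per_visitor(control):.2f}")       # £5.20
print(f"Revenue per visitor (variation): £{revenue_per_visitor(variation):.2f}")     # £5.00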

Example: You decide to test some copy on your website and receive a huge increase in enquiries on your test variation, but these all turn out to be users asking for clarification on part of the copy. The test is successful because you hypothesised you could increase the number of leads, but it’s a loss because none of them turn into new business.
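The same sort of check applies here: split the enquiries by what they’re actually asking for and the headline lift can vanish. A rough sketch, again with made-up numbers:

# Invented enquiry counts, purely to illustrate the point above
control   = {"enquiries": 40, "qualified": 30}  # qualified = enquiries that could become business
variation = {"enquiries": 90, "qualified": 25}  # most of the extra enquiries just ask for clarification

print(f"Enquiry lift:        {variation['enquiries'] / control['enquiries'] - 1:+.0%}")  # +125%
print(f"Qualified-lead lift: {variation['qualified'] / control['qualified'] - 1:+.0%}")  # -17%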

This isn’t just looking at the big picture: it’s seeing the whole picture and playing spot the difference.

 

At Cobb Digital, we love data and are always looking to test and experiment with new things. If you want to talk to our team about A/B testing, get in touch with us on 01273 208 913.