Hopefully, you were preparing to start shouting at me after reading the title. If you weren’t, you might be making some really wrong decisions when optimizing your webshop and losing a lot of money. Let me explain what is so wrong with the title…
Warning: only read this if you’re serious about investing in optimizing the user experience and conversion of your shop. If you’re happy with suboptimal results, go ahead and try to copy Zappos and Amazon like the rest of the web.
When the interface of a webshop is the discussion topic, I regularly get questions like “Which color should we use for…”, “How big should this button be to…”, “What do users expect when…” or “But on Amazon.com they…”.
Since I am the psychologist among a group of technical people, of course I’m the person who should have an answer to these questions.
Having an answer would be great, but let’s get that illusion out of the door right away: the golden egg does not exist. Or better: the answer is different for every website, and sometimes even for every page. This is actually a good thing, or else all websites would look exactly the same…
1001 variations
Why does some text, image or color increase conversion on one site, while that same text, image or color decreases conversion on another? There are a lot of variables that influence the effectiveness of a user interface.
Let’s take a look at some examples:
These are just a few examples, but I could go on for a while. The point is that every variable requires a different strategy for what does and doesn’t work for your customers in a certain situation. And the worst part for you is that these variables can also influence each other (a large button might work best when it’s blue, while a small button might work better when it’s red).
For a complete site, there are far too many variables for any expert or set of “best practices” to tell you exactly what will and won’t work. Of course there are guidelines, heuristics and best practices, but to really find out what works best, you will have to test it.
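To make “you will have to test it” a bit more concrete, here is a minimal sketch of how you could compare two button variants with a simple two-proportion z-test. The visitor and conversion numbers are made up purely for illustration; the testing tools mentioned further below do this math for you.

```python
import math

# Hypothetical numbers: visitors and conversions for two button variants.
# Replace these with your own test data; the figures are made up for illustration.
control = {"visitors": 4800, "conversions": 192}   # variant A (e.g. large blue button)
variant = {"visitors": 4750, "conversions": 231}   # variant B (e.g. small red button)

def conversion_rate(group):
    return group["conversions"] / group["visitors"]

def two_proportion_z_test(a, b):
    """Return the z-score for the difference between two conversion rates."""
    p_a, p_b = conversion_rate(a), conversion_rate(b)
    # Pooled conversion rate under the null hypothesis (no real difference).
    pooled = (a["conversions"] + b["conversions"]) / (a["visitors"] + b["visitors"])
    se = math.sqrt(pooled * (1 - pooled) * (1 / a["visitors"] + 1 / b["visitors"]))
    return (p_b - p_a) / se

z = two_proportion_z_test(control, variant)
print(f"Control: {conversion_rate(control):.2%}, Variant: {conversion_rate(variant):.2%}")
print(f"z-score: {z:.2f}")  # roughly |z| > 1.96 means significant at the 95% level
```

The exact statistics matter less than the principle: you only declare a winner when your own visitors, in sufficient numbers, say so.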
Setting up tests
Persuasion marketing, interface design and a scientific approach are indispensable when creating a proper test. Every online business owner should have at least 1 FTE on their team working on this, or hire an external company to do it (hey, that’s me ;)).
But please don’t make the mistake of taking advice and “best practices” as the one and only truth and implementing every tip right away. The fact that something works great for Amazon really doesn’t mean it will work well for you (see the variables above)! The same goes for collecting user feedback: if 2, 5, 10 or 50 users in your qualitative research study tell you to change something, you still have to test whether that change actually works better for the majority of your site visitors.
Good online marketers, testers and psychologists spend a lot of time on qualitative research. You can use that experience to save yourself (a lot of) time in generating ideas on what to test, implementing testing procedures, testing properly and interpreting the results. But you still have to verify all assumptions about your (potential) customers to see if those assumptions actually hold.
You can’t just throw some bad ideas into your testing machine and expect brilliant results…
For A/B and Multivariate testing in early design stages, check out Usabilla Survey, or take a look at http://www.whichmvt.com for a wide list of testing tools, such as Visual Website Optimizer.
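If you’re curious what such a split looks like under the hood, below is a minimal sketch of deterministic visitor bucketing, the kind of assignment these tools handle for you. The variant names and the 50/50 split are assumptions for illustration only.

```python
import hashlib

# Hypothetical variant names; a real test would use whatever variations you defined.
VARIANTS = ["control", "red_button"]

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to a variant based on their ID,
    so the same visitor always sees the same version of the page."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

print(assign_variant("visitor-12345"))  # always returns the same variant for this ID
```

Hashing on a stable visitor ID keeps the experience consistent across page views, which is essential if you want clean conversion numbers per variant.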
To get back to the title of the article: Chris Goward of Widerfunnel has done a nice meta-analysis of around 20 A/B tests of CTA buttons and collected all the winning variants.
This was the result…
The complete presentation of Chris can be found at Slideshare.
Most of my content is published on LinkedIn, so make sure to follow me there!