Hopefully, you were preparing to start shouting at me after reading the title. If you weren’t, you might be making some really bad decisions when optimizing your webshop and losing a lot of money. Let me explain what is so wrong with the title…
Warning: only read this if you’re serious about investing in optimizing the user experience and conversion of your shop. If you’re happy with suboptimal results, go ahead and try to copy Zappos and Amazon like the rest of the web.
When the interface of a webshop is the topic of discussion, I regularly get questions like “Which color should we use for…”, “How big should this button be to…”, “What do users expect when…” or “But on Amazon.com…”. Since I am the psychologist among a group of technology people, I am of course thé person to have an answer to these questions. Having an answer would be great, but let me get that illusion out of the door right away: the golden egg does not exist.
Or better: the answer is different for every website, and sometimes
even different for every page. This is actually a good thing, or else
all websites would look completely the same…
Why does some text, image or color increase conversion on one site, and why does that same text, image or color decrease conversion on another? There are a lot of variables that influence the effectiveness of a user interface.
Let’s take a look at some examples:
What kind of products or services do you provide?
What kind of (brand) image do you have (business, cheap, durable, fast, traditional, …)?
How does your site work in different browsers and on different devices (mobile, tablet, PC; Chrome, Firefox, Safari, …)?
Who are your visitors (age, sex, businesses, consumers, …)?
How much money do they have or are they willing to spend?
When are they visiting your site (day, evening, weekends)?
Where do your visitors come from before they reach your site (search engines, comparison websites, ads, newsletters, …)?
The weather (especially when you have a travel agency)
Pricing at competitor websites (for example the interest rate if you’re a bank)
Advertising campaigns of your competitors.
Whether the national team won an important match the day before (yes, I’m still very serious)
These are just some examples; I could continue for a while. The point is that every variable requires a different strategy for what works and what doesn’t work for your customers in a given situation. And the worst part for you is that these variables can also influence each other (a large button might work best when it’s blue, while a small button might work better when it’s red).
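To give an idea of how such context variables show up in practice: before testing anything, you can simply segment the conversion data you already have by a few of them and see how much the rates differ. Here is a minimal sketch in Python with made-up visit records and hypothetical segment fields (device and traffic source); your analytics tool likely does the same thing for you.

```python
from collections import defaultdict

# Hypothetical visit records: (device, traffic source, converted?).
visits = [
    ("mobile", "search", True),
    ("mobile", "newsletter", False),
    ("desktop", "search", True),
    ("desktop", "search", False),
    ("mobile", "search", False),
    ("desktop", "newsletter", True),
]

# Count conversions and visits per (device, source) segment.
stats = defaultdict(lambda: [0, 0])  # segment -> [conversions, visits]
for device, source, converted in visits:
    segment = stats[(device, source)]
    segment[1] += 1
    if converted:
        segment[0] += 1

for (device, source), (conv, n) in sorted(stats.items()):
    print(f"{device}/{source}: {conv}/{n} = {conv / n:.0%}")
```

If the segments behave very differently, a single site-wide “best practice” is unlikely to serve all of them equally well.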
For a complete site, there are far too many variables for any expert or “best practice” to tell you exactly what will and won’t work. Of course there are guidelines, heuristics and best practices, but to really find out what works best, you will have to test it.
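To make “test it” concrete: the standard approach is an A/B test, where you split traffic between two variants and check whether the observed difference in conversion rate is larger than chance alone would explain. A minimal sketch, using a plain two-proportion z-test with made-up numbers (my own illustration, not a prescription for your shop):

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF: Phi(x) = (1 + erf(x/sqrt(2))) / 2.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: variant A converts 200 of 5000 visitors, variant B 260 of 5000.
z, p = conversion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference is real rather than noise; decide your sample size and significance threshold before you start, not after peeking at the results.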
Setting up tests
Persuasion marketing, interface design and a scientific approach are indispensable when creating a proper test. Every online business owner should have at least 1 FTE on their team working on this, or hire an external company to do it (hey, that’s me ;)).
But please don’t make the mistake of taking advice and “best practices” as the one and only truth and implementing every tip right away. That something works great for Amazon really doesn’t mean it will work well for you (see the variables above)! The same goes for collecting user feedback: if 2, 5, 10 or 50 users in your qualitative research study tell you to change something, you still have to test whether that change actually works better for the majority of your visitors.
Good online marketers, testers and psychologists spend a lot of time on qualitative research. You can use that experience to save (a lot of) time coming up with ideas on what to test, implementing testing procedures, testing properly and interpreting the results. But you still have to verify all assumptions about your (potential) customers to see if those assumptions actually hold.
You can’t just throw some bad ideas into your testing machine and expect brilliant results…
A hierarchy of evidence (or levels of evidence) is a heuristic used to rank the relative strength of results obtained from scientific research. I’ve created a version of this chart/pyramid applied to CRO, which you can see below. It contains the options we have as optimizers and the tools and methods we often use to gather data.
This is a bonus episode with Emily Robinson (Senior Data Scientist at Warby Parker) and Lukas Vermeer (Director of Experimentation at Booking.com).
In her earlier session that day, Emily said that real progress starts when you put your work online for others to see and comment on, which in this case meant GitHub. Someone from the audience wondered how that works in larger companies, where a manager or even a legal department might not be overly joyous about that, to say the least, so I asked Emily for her thoughts.
Recorded live with an audience, pre-COVID-19, at the Conversion Hotel conference in November 2019 on the island of Texel in the Netherlands.
(originally published at https://www.cro.cafe/)