In rapidly changing online environments, continuous business experimentation is a great way of constantly learning what works and what doesn't. But then the question arises: how far do you go? What do you test? And is it sometimes OK NOT to test something?
Our brains are naturally trained against taking risk, a bias psychologists call loss aversion: the feeling that giving something up is much worse than the joy of acquiring something new. So often when we make choices, our brain estimates whether the potential loss is an affordable loss. In other words: can you afford to be wrong?
Now I think you should test as many things as possible within the resources you have. Some companies (like Microsoft or Booking.com) even have the capacity to A/B test all bugfix releases. And as CRO specialists we are trained to be courageous risk-takers and run experiments. We might actually be biased the other way around and want to experiment on everything. Which is good, but I do think there are still exceptions where it's OK NOT to test something. And here's a story to illustrate that.
Two statisticians are hiking in the woods. Big glasses, cargo pants, big wellies: the nerds were well prepared. At some point during their hike, however, it starts to rain, which is still a bit uncomfortable, so they start looking for shelter. After a couple of minutes they find a cave, and the first statistician says he will go inside to check if it's safe. So while the second statistician waits outside, the first one moves in, and about 5 seconds after he rounds the first corner in the cave, there is a loud roar and the rustling noise of a big hairy beast. And then… silence.
So the statistician outside goes: "That was only one person, so not a large sample size. We also didn't have a control group, he wasn't a representative sample of statisticians, and there might be lots of confounders going on… BUT I'm not going inside. Because if there is a bear in there, I don't want to know."
On your website, you usually do want to know. If you're wrong and spend a few hundred bucks testing a new hypothesis, there is no bear coming out to eat you. And if you run experiments systematically, over time you'll learn a lot and can greatly improve the way you communicate with your website visitors.
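To make "running experiments systematically" concrete: most A/B test evaluations boil down to comparing two conversion rates. Below is a minimal sketch of a two-sided two-proportion z-test using only Python's standard library. The function name and the example numbers are my own illustration, not from any specific tool.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    conv_a / n_a: conversions and visitors in the control (A)
    conv_b / n_b: conversions and visitors in the variant (B)
    Returns (z, p_value).
    """
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se            # standardized difference
    # two-sided p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 100/1000 conversions (A) vs 130/1000 (B)
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the difference comes out significant at the usual 5% level; with smaller samples the same observed lift often wouldn't, which is exactly why the statistician's "one person is not a large sample" joke lands.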
So when you ask: are there decisions that are too risky to test? Yes, there are. But then you consciously weigh knowing against risk, and sometimes you might decide that the risk is so great that you don't want to know. I'll wait outside the cave and get wet; that's OK. As long as you know that's what you did.
The "bear in the cave" story came from Lukas Vermeer!
Recently I've seen some (often absolute) statements going around, generally along the lines of "open source commerce platforms are a terrible idea". Now of course different solutions always have different pros and cons.
A hierarchy of evidence (or levels of evidence) is a heuristic used to rank the relative strength of results obtained from scientific research. I've created a version of this pyramid applied to CRO, which you can see below. It contains the options we have as optimizers, and the tools and methods we often use to gather data.