Possible experiment outcomes

Written by Guido X Jansen,
September 2017
Written by a human, not by AI

Question: what are the possible outcomes of an A/B test experiment, and what do you do in each of those scenarios?

Answer:

  1. Findings support the hypothesis - Action: implement the winner and/or re-test (with more subtle variants or on other sites/platforms) to further validate the hypothesis.
  2. Findings refute the hypothesis - Action: create other variants to re-test the hypothesis, or abandon it.
  3. No change found (but the sample size requirement was reached) - Action: check whether we can come up with more extreme variants to test the hypothesis.
  4. Experiment error (e.g. the test didn't reach the required sample size) - Action: redo the test with new measures in place to prevent those errors.
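
To make these four branches concrete, here is a minimal Python sketch of the decision logic above. It is not from the original post: it assumes a simple two-proportion z-test, a hypothesis that predicts an uplift for the variant, and a pre-registered per-variant sample size, and all function and parameter names are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def classify_outcome(control_conversions, control_visitors,
                     variant_conversions, variant_visitors,
                     required_sample_size, alpha=0.05):
    """Map a finished A/B test to one of the four outcomes above."""
    # Outcome 4: the experiment never reached the planned sample size.
    if min(control_visitors, variant_visitors) < required_sample_size:
        return "4: experiment error / underpowered - redo the test"

    # Two-proportion z-test with a pooled standard error.
    p_c = control_conversions / control_visitors
    p_v = variant_conversions / variant_visitors
    p_pool = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / variant_visitors))
    z = (p_v - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

    if p_value >= alpha:
        return "3: no change found - try more extreme variants"
    if p_v > p_c:
        return "1: findings support the hypothesis - implement and/or re-test"
    return "2: findings refute the hypothesis - re-test other variants or abandon"


# Example: 50,000 visitors per arm against a planned 40,000 per arm.
print(classify_outcome(2000, 50000, 2200, 50000, required_sample_size=40000))
```

In practice you would plug in whatever test and power calculation your experimentation tool uses; the point is simply that every experiment ends in exactly one of these four branches, each with a pre-agreed action.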

It’s important to regularly share these findings (at least outcomes 1, 2 and 3) with everyone in the team/company to get feedback and to increase everyone’s knowledge of our customers’ behaviour.

What do you think? Are there more outcome options for your CRO experiments?

More like this? Most of my content is published on LinkedIn, so make sure to follow me there!