An A/B test is an experiment in which the participants are divided into subgroups, and each subgroup is asked to evaluate a different concept in the same way.
The results can then be analyzed to determine whether one of the tested concepts performed significantly better than the other on the success metrics measured in the survey.
It is usually conducted to test different versions of ad campaigns, product designs, pricing strategies, etc. You can use all possible media types for concept presentation, e.g. images, videos, text, and audio.
Downloadable A/B Testing Template
Below you can find a downloadable A/B Testing template file that contains both Survey and Report templates. You will find intuitive instructions inside. To use it, select “upload .json” while creating a new survey and then upload this file:
Solution template screenshots:
A/B Testing further explained
The A/B test is a special case of the monadic test design, whose implementation in Survalyzer is described in detail on this education center page.
Participants are dynamically allocated to groups at the start of the interview. They are assigned alternately to one group or the other, in the order in which the interviews were started, so that even if some participants drop out, the completed interviews remain approximately evenly distributed. The same principle can also be used for a division into more than two groups, if more than two versions are to be tested.
How do you implement an A/B test in a Survalyzer survey?
For the alternating allocation to the groups you need a “counter” variable that tracks how many interviews have already been started (“in progress” and “completed” interview states). This counter is then divided by the number of groups (the number of versions you want to test); the remainder of the division determines the group assigned to this interview.
Value assignment code that performs the logic above:
counter = survey.count_started + survey.count_completed - 1
SplitGroup = counter%2 + 1
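Outside of Survalyzer's own value-assignment syntax, the same counter-and-modulus logic can be sketched in Python. The function and parameter names below are illustrative, not part of the Survalyzer API; the sketch also generalizes to more than two groups:

```python
def assign_group(interviews_started: int, num_groups: int = 2) -> int:
    """Alternately assign the newly started interview to a group (1..num_groups).

    interviews_started counts interviews in the "in progress" or "completed"
    state, including the current one, mirroring the counter described above.
    """
    counter = interviews_started - 1   # zero-based index of the current interview
    return counter % num_groups + 1    # remainder of the division picks the group

# The first four interviews of a two-group (A/B) test alternate 1, 2, 1, 2:
groups = [assign_group(n) for n in (1, 2, 3, 4)]
print(groups)  # -> [1, 2, 1, 2]
```

With `num_groups = 3` the same function cycles 1, 2, 3, 1, 2, 3, …, which is why the counter approach scales to A/B/n tests without extra logic.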
The SplitGroup variable is then used to filter the product or concept variant to be shown.
If the participants in your survey are allocated according to certain criteria (e.g. age and gender), you can also allocate the groups according to these quotas, so that within each quota there is an approximately equal distribution across the groups.
For that purpose you need to use the “If then else” function, which checks which quota group the respondent belongs to, and apply the counter with the modulus function only to that subset of interviews. More on the “if then else” function here.
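Conceptually, quota-balanced allocation keeps a separate counter per quota cell, and the “if then else” check routes each interview to its cell's counter. A minimal Python sketch of this idea, with hypothetical quota-cell names (not Survalyzer syntax):

```python
from collections import defaultdict

# Hypothetical per-quota counters: one started-interview count per quota cell.
started_per_quota = defaultdict(int)

def assign_group_in_quota(quota_cell: str, num_groups: int = 2) -> int:
    """Alternate group assignment independently within each quota cell."""
    started_per_quota[quota_cell] += 1           # this interview has started
    counter = started_per_quota[quota_cell] - 1  # zero-based within the cell
    return counter % num_groups + 1

# Respondents in two different quota cells alternate independently:
print([assign_group_in_quota("f_18_34") for _ in range(3)])  # -> [1, 2, 1]
print([assign_group_in_quota("m_18_34") for _ in range(2)])  # -> [1, 2]
```

Because each quota cell has its own counter, a surge of respondents in one cell cannot unbalance the group split in another.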
As all participants are asked the same questions in the A/B test regardless of their group allocation, all answers are stored in the same variables.
Therefore, in order to analyze and compare results between the tested subgroups, the raw data must be properly segmented or filtered, usually based on some grouping variable (the SplitGroup variable in our example above).
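If you export the raw data, the same segmentation can be reproduced outside the platform, for example with pandas. The column names and ratings below are made-up assumptions for illustration:

```python
import pandas as pd

# Hypothetical export: one row per completed interview, with the SplitGroup
# variable and a 5-point purchase-intent rating as the success metric.
raw = pd.DataFrame({
    "SplitGroup": [1, 2, 1, 2, 1, 2],
    "purchase_intent": [4, 5, 3, 5, 4, 4],
})

# Compare the mean success metric between the two tested versions.
means = raw.groupby("SplitGroup")["purchase_intent"].mean()
print(means)
```

Filtering with `raw[raw["SplitGroup"] == 1]` gives the per-version subset directly, which is the same operation the report segmentation performs behind the scenes.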
In the Professional Report (only available to Professional Analytics users), custom segmentation can be set up easily in the table wizard. On the segmentation step of the wizard, just select “Variable based segmentation” and select your grouping variable (e.g. SplitGroup). Separate version-related segments will then be created automatically for all tables built in the next step.
The table wizard in Professional Reports also lets you turn on Significance Testing for each segmented table. This lets you verify whether differences in the observed results are actually statistically significant. Read more about significance testing in Survalyzer here.
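For a binary success metric (e.g. “would buy”), the kind of comparison a significance test performs can be sketched with a classic two-proportion z-test using only Python's standard library. The counts below are made up for illustration; this is not the exact method the report uses, just the standard statistical idea behind it:

```python
from math import sqrt, erfc

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test for comparing conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)       # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal CDF
    return z, p_value

# Made-up counts: "would buy" out of 500 respondents per version.
z, p = two_proportion_z_test(120, 500, 165, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would indicate a significant difference
```

Here version B converts at 33% versus 24% for version A with 500 completes each, so the test reports a small p-value; with much smaller samples the same observed difference could easily fail to reach significance.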