A/B Testing Your Digital Campaigns? Ask These 3 Questions First

A culture of constant testing is crucial to digital mobilisation. Without tests, without results, and without data, strategy becomes guesswork.

Running A/B tests is one way to quickly gain insight into what drives action. We have countless examples where decisions based on test results have increased both supporter recruitment and donations for our partners.

One striking example is our test results for opt-in copy – we found that emotive language can increase opt-in rates on an action page by up to 71%. Armed with this data, we can make small optimisations that win thousands more opted-in supporters for our partners.

There’s a lot to consider to make sure your tests are robust, reliable and effective – here’s where we’d start.

1. What are you going to test?

This seems like an obvious question, but establishing a clear hypothesis is the foundation of a strong test.

For example, if you decide to test varying donation amounts in a fundraising email, you shouldn't also test subject lines or images in that same email. That way, you can trace any difference in results back to the one element you changed – in this case, the donation prompts.
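To make this concrete, here's a minimal sketch (in Python, with purely illustrative content and amounts) of what a single-variable setup might look like:

```python
# A minimal single-variable test setup: the two variants are identical
# except for the donation prompts, so any difference in results can be
# traced back to that one element. All values here are illustrative.
control = {
    "subject": "Your gift can change lives",
    "image": "hero.jpg",
    "donation_prompts": [10, 25, 50],
}
variant = {
    **control,                          # copy everything from control...
    "donation_prompts": [25, 50, 100],  # ...and change ONLY this element
}
```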

It’s essential to ask yourself this question because the more focused your hypothesis, the more meaningfully you’ll be able to interpret your results.

2. Is this test fair & reliable?

The key to making an A/B test fair and reliable is to change one variable at a time. This reduces the likelihood of your results being influenced by anything other than the chosen variable.

The larger the sample size, the more reliable your results, so use the largest sample size possible.
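If it helps to see the arithmetic, here's a rough sketch of the standard normal-approximation formula for estimating how many supporters you'd need in each variant to detect a given uplift. The baseline and uplift rates below are illustrative, not real campaign figures.

```python
# Rough sample-size estimate per variant, using the standard
# normal-approximation formula for comparing two proportions.
import math
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate supporters needed in EACH variant to reliably
    detect a change in conversion rate from p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting a lift from a 3% to a 4% opt-in rate:
print(sample_size_per_variant(0.03, 0.04))  # ~5,300 supporters per variant
```

The takeaway: small uplifts on low baseline rates need surprisingly large samples, which is why testing on your full available audience usually beats testing on a small segment.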

Of course, there are always uncontrolled variables that affect results, so you'll need to check whether your results are statistically significant (i.e. that they can't be explained by chance alone). That means calculating a p-value – we recommend this calculator to help.
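If you're curious what such a calculator is doing under the hood, here's a minimal sketch of a two-sided, two-proportion z-test – the calculation behind most A/B significance calculators for conversion rates. The counts below are illustrative.

```python
# A two-sided, two-proportion z-test: the calculation behind most
# A/B significance calculators. Counts below are illustrative.
import math
from scipy.stats import norm

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * norm.sf(abs(z))  # two-sided p-value

# e.g. 150 opt-ins from 5,000 emails (control) vs 200 from 5,000 (variant):
print(round(ab_test_p_value(150, 5000, 200, 5000), 4))  # 0.0065
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference is unlikely to be down to chance alone.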

It’s vital to question the fairness and reliability of your test when designing it – misleading test results could lead to poor decision-making, negatively impacting your donation or supporter recruitment rates.

3. What happens after you finish the test?

Will you run the test again to establish a trend? Will you run a different test to learn about another part of your supporter journey? Will you optimise the campaign by implementing the winning variant across your digital work?

Ask yourself this question so that your tests are purpose-driven – you're not just testing for testing's sake. Every test should help shape your wider strategy.

For example, our tests revealed that adding upsells to donation pages can boost income by up to 17%. Because of this consistent finding, we often suggest this tactic to partners when optimising donation pages – our partners gain extra income at no extra cost.

Well-designed tests help you understand what’s gone wrong in past campaigns, optimise present ones, and gain learnings for the future.


If you’d like to know more about how testing and optimisation can give your digital fundraising and campaigns a boost, feel free to get in touch here.