The Digital Mobilisation Glossary You Didn’t Know You Needed – Part 2

If you missed part 1 of our digital mobilisation glossary, be sure to read up on the key concepts and tools here.

Now we’ll dive right into what you need to know when it comes to:

  1. Measuring success (if we want to have real-world impact, it’s vital we set measurable goals and constantly evaluate them).
  2. Running tests (testing and optimising equips us to be data-driven in all we do – this is how we can make sure our campaigns are as effective as possible).


Measuring Success

These are key ways we measure success in digital mobilisation. It’s important to create your own benchmarks to measure against, so you know what is and isn’t going well.

Click-through rate:

The percentage of users who receive an email and click a link within it.

Calculation: clicks / sends

Click-to-open rate (CTO):

The percentage of users who open an email and then click a link within it.

Calculation: clicks / opens

Conversion rate:

The percentage of users who visit the page and complete the form.

Calculation: form completions / unique pageviews

Cost per subscriber:

You may know this as cost per lead / opt-in / acquisition. It’s the average amount it costs to acquire one new subscriber.

Calculation: ad spend / leads
Benchmark: a good benchmark for this metric is £0.35-£0.70
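The four metrics above are all simple ratios, so they’re easy to sanity-check in a few lines of code. A quick sketch in Python, using made-up figures purely for illustration:

```python
# Hypothetical email and ad figures, for illustration only
sends = 10_000
opens = 3_000
clicks = 450
pageviews = 400          # unique pageviews of the action page
form_completions = 120
ad_spend = 60.0          # £
leads = 120

click_through_rate = clicks / sends              # 0.045 → 4.5%
click_to_open_rate = clicks / opens              # 0.15  → 15%
conversion_rate = form_completions / pageviews   # 0.3   → 30%
cost_per_subscriber = ad_spend / leads           # £0.50, within £0.35-£0.70
```

Multiply any of the ratios by 100 to express them as percentages.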

Key Performance Indicator (KPI)

Your KPIs could be any one of the above metrics, or a combination of several. They are the most important measures of success for your campaign, and should be based on your goals.

Example: If your campaign goal is supporter recruitment, then one of your KPIs would inevitably be the cost per subscriber.

Opt-in rate:

The percentage of users who say yes to being added to your mailing list.
For us, this pool of users is often those who’ve added their name to a handraiser and completed the form.

Calculation: opt-ins / form completions
Benchmark: a good benchmark for this metric is 50-70%
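As a quick sketch with hypothetical handraiser figures:

```python
# Hypothetical handraiser figures, for illustration only
form_completions = 120
opt_ins = 72

opt_in_rate = opt_ins / form_completions   # 0.6 → 60%, within the 50-70% benchmark
```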

Projected lifetime income

When we measure income from regular givers, we calculate it on a four-year model, with benchmark attrition baked in. You should look to calculate your lifetime income using benchmarks from your existing online regular givers.

Calculation: average monthly regular gift x 48 (48 months in 4 years) x number of active donors
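The calculation above can be sketched like this. The figures are hypothetical, and this version applies the raw formula without an explicit attrition adjustment, so treat it as an upper bound to refine with your own benchmarks:

```python
# Sketch of the four-year projected lifetime income model.
# Figures are hypothetical; swap in benchmarks from your own regular givers.
average_monthly_gift = 10.0   # £ per donor per month
active_donors = 250

# 48 months in 4 years; attrition not yet applied
projected_lifetime_income = average_monthly_gift * 48 * active_donors  # £120,000
```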

Return on ad spend (ROAS)

Income raised as a percentage of ad spend (this figure can’t be a negative percentage).

At Forward Action, we often pilot programmes that we design to eventually be ‘always-on’, so ROAS is the metric that best indicates long-term expected results and is a good data point to secure further investment.

Example: your organisation spends £2,000 on ads in a single month. In this month, the campaign raises £10,000. Therefore, the ROAS is a ratio of 5 to 1 (or 500%) – for every £1 you spend, you’ll get £5 back.

Calculation: income / ad spend
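The worked example above, expressed as code:

```python
# ROAS for the example month: £2,000 ad spend, £10,000 raised
ad_spend = 2_000.0
income = 10_000.0

roas = income / ad_spend    # 5.0, i.e. a 5:1 ratio
roas_percent = roas * 100   # 500%
```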

Return on investment (ROI)

Profit on total investment, including people, tools and other expenses such as agency fees, in addition to ad spend (this figure can be a negative percentage).

Calculation: (income – total spend) / total spend
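Unlike ROAS, ROI can go negative when total costs exceed income. A small sketch with hypothetical figures:

```python
# Hypothetical month where total costs exceed income
income = 10_000.0
total_spend = 12_000.0   # ad spend + staff time + agency fees, etc.

roi = (income - total_spend) / total_spend   # ≈ -0.167, i.e. roughly -17%
```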


Tests & Experiments

Testing and optimisation are integral to digital mobilisation: they keep everything we do data-driven. These are some key testing terms.

A/B test:

A way to compare two versions of something to figure out which performs better. To make sure your test is effective, change just one thing at a time.


Control:

The default you test other variants against.

Example: The previously run ads with images of animals

Fair test:

A fair test is conducted by making sure that you change only one factor at a time, while keeping all other conditions the same. This means the result will not be biased.


Hypothesis:

The theory you’re testing.

Example: Images of people will perform better (and have a higher click-through rate) than the images of animals on our current Facebook ads.

Reliable test:

The result can be repeated, and can be generalised to other scenarios.

Statistical significance:

There’s always a chance that a test result is due solely to error and not the thing we’re testing (error is noise in the data – uncontrolled variables that influence the results).

Statistical significance calculations tell us how likely it is our test result is solely due to error. You can use an online calculator to calculate the p-value. A p-value less than 0.05 is statistically significant – there’s less than a 5% probability that your results are solely due to error.
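One common way online calculators arrive at that p-value for a test like the ads example is a two-proportion z-test. A minimal sketch, where the figures and the normal-approximation approach are illustrative assumptions rather than a prescribed method:

```python
from statistics import NormalDist

# Hypothetical A/B test on click-through rates
clicks_a, sends_a = 200, 5_000   # control: animal images
clicks_b, sends_b = 260, 5_000   # variant: people images

p_a = clicks_a / sends_a
p_b = clicks_b / sends_b

# Pooled proportion and standard error for the two-proportion z-test
p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
se = (p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b)) ** 0.5

z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed

significant = p_value < 0.05
```

With these made-up numbers the difference comes out as statistically significant; in practice you can simply plug your own figures into an online calculator.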


Variable:

The thing you’re changing.

Example: The ad images.


Variant:

A version of something you’re testing.

Example: New ads with images of people, but with the same copy as the animal ads.


We know that as the world of digital mobilisation grows and expands, so will our glossary of terms. This is only a springboard, but it’s important to us that the way we work remains accessible to all. So if this was helpful to you, then we’ve done our job.

If you’ve got any suggestions to include in our digital mobilisation glossary, or any questions on any of the terms mentioned, we’d love to hear from you. Feel free to get in touch here.