Simulating Retention

As part of our consideration of metrics, we needed to do a deep dive into how retention is calculated in the sector and better understand if current approaches have any unintended side effects. Below is the outcome of our investigations.

The most common method of measuring retention for a given year, say 2021, is to take all the donors who are active at the beginning of 2021 and see how many are still active at the end of the year. In practice, this becomes the number of donors who gave in 2020 and go on to donate in 2021: the only way to be active on January 1st 2021 is to have given some time in 2020, and the only way to be active on December 31st 2021 is to have given some time in 2021. This is often called snapshot retention because you effectively take a “snapshot” of your donors at the beginning of 2021 and see how their giving plays out over the year.
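To make the definition concrete, here is a minimal sketch of how snapshot retention might be computed from raw gift records, assuming a simple list of (donor, gift date) pairs; the donor names and dates are invented purely for illustration.

```python
from datetime import date

# Invented gift records for illustration: (donor_id, gift_date)
gifts = [
    ("alice", date(2020, 6, 15)), ("alice", date(2021, 12, 1)),
    ("bob", date(2020, 12, 10)),
    ("carol", date(2019, 6, 20)), ("carol", date(2021, 6, 5)),
]

def snapshot_retention(gifts, year):
    """Share of donors who gave in `year - 1` (so active at the start of `year`)
    who give again at some point during `year`."""
    gave_last_year = {donor for donor, d in gifts if d.year == year - 1}
    gave_this_year = {donor for donor, d in gifts if d.year == year}
    return len(gave_last_year & gave_this_year) / len(gave_last_year)

print(snapshot_retention(gifts, 2021))  # alice retained out of {alice, bob} -> 0.5
```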

This measure is easy to understand, straightforward to use when forecasting, and simple to calculate, so it is no surprise that analysts use it to understand churn in their donor base. Unfortunately, snapshot retention has a fundamental issue in comparative analysis: it assumes that snapshots are consistent between charities (and between years for the same charity).

We can illustrate this with a straightforward example. Suppose we have two recruitment and giving times each year, June and December, and that a donor's chance of giving depends on how long ago their last gift was (we encode these assumptions in a short sketch after the list):

  1. 50% chance of giving if their last gift was six months ago
  2. 20% chance of giving if their last gift was 12 months ago
  3. 10% chance of giving if their last gift was 18 months ago
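One way to write these assumptions down, so the later arithmetic is explicit, is as a small lookup. This is a sketch only; the names and the 0% default for any other gap are our own choices.

```python
# Chance of giving at a solicitation, keyed by months since the donor's last gift.
# The 6/12/18-month values come from the example above; treating any other gap
# as 0% is an assumption made for this illustration.
GIVE_PROB = {6: 0.50, 12: 0.20, 18: 0.10}

def prob_of_giving(months_since_last_gift: int) -> float:
    return GIVE_PROB.get(months_since_last_gift, 0.0)
```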

For a charity (we’ll call them Charity A) whose recruitment schedule is focussed on June, with 70% of recruitment taking place then and 30% in December, the overall retention rate will be 48.8%.

How did we work this out?

We need to look at each of the giving scenarios to calculate their likelihood and whether they correspond to a retained donor or not:

| Scenario | Recruit Date | June 2020 | December 2020 | June 2021 | December 2021 | Probability | Retained |
| -------- | ------------ | --------- | ------------- | --------- | ------------- | ----------- | -------- |
| 1  | June     | 🎁 |    |    |    | 36.0% | No  |
| 2  | December |    | 🎁 |    |    | 40.0% | No  |
| 3  | June     | 🎁 | 🎁 |    |    | 20.0% | No  |
| 4  | June     | 🎁 |    | 🎁 |    | 5.0%  | Yes |
| 5  | December |    | 🎁 | 🎁 |    | 25.0% | Yes |
| 6  | June     | 🎁 | 🎁 | 🎁 |    | 12.5% | Yes |
| 7  | June     | 🎁 |    |    | 🎁 | 4.0%  | Yes |
| 8  | December |    | 🎁 |    | 🎁 | 10.0% | Yes |
| 9  | June     | 🎁 | 🎁 |    | 🎁 | 5.0%  | Yes |
| 10 | June     | 🎁 |    | 🎁 | 🎁 | 5.0%  | Yes |
| 11 | December |    | 🎁 | 🎁 | 🎁 | 25.0% | Yes |
| 12 | June     | 🎁 | 🎁 | 🎁 | 🎁 | 12.5% | Yes |

Consider scenario 2. The recruitment is in December 2020:

  • Coming into June 2021, the donor last donated six months ago, so their probability of not donating is 100% - 50% = 50%
  • Coming into December 2021, they haven’t donated for 12 months, so the probability of not donating is now 100% - 20% = 80%
  • Therefore, to get the probability of this scenario occurring, you multiply the two non-donation event probabilities together: 50% x 80% = 40%
  • Repeat this for all scenarios, and you have the probability of each scenario, given its recruitment period.

To get the overall retention rate, we weight each scenario's probability by the proportion of donors recruited in June or December, sum the weighted probabilities for the retained scenarios, and divide by the sum for all scenarios.
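As a check on the arithmetic, here is a small enumeration sketch (the structure and names are ours) that walks every give/skip pattern across the four solicitation dates, accumulates each scenario's probability, and weights by the recruitment mix. It reproduces the per-cohort rates (44% for June recruits, 60% for December recruits) and Charity A's 48.8%; swapping the weights gives the figure for the second charity discussed below.

```python
from itertools import product

# Chance of giving, by months since last gift (from the example above).
GIVE_PROB = {6: 0.50, 12: 0.20, 18: 0.10}

# Solicitation dates in months after June 2020: June 2020, Dec 2020, June 2021, Dec 2021.
DATES = [0, 6, 12, 18]

def retention_by_recruit_month(recruit_month):
    """Probability that a donor recruited at `recruit_month` gives again in 2021
    (i.e. at month 12 or 18), under snapshot retention."""
    later_dates = [d for d in DATES if d > recruit_month]
    retained = 0.0
    # Enumerate every give (True) / skip (False) pattern at the remaining dates.
    for pattern in product([True, False], repeat=len(later_dates)):
        prob, last_gift = 1.0, recruit_month
        for d, gave in zip(later_dates, pattern):
            p = GIVE_PROB.get(d - last_gift, 0.0)
            prob *= p if gave else 1 - p
            if gave:
                last_gift = d
        if any(gave and d >= 12 for d, gave in zip(later_dates, pattern)):
            retained += prob
    return retained

june, december = retention_by_recruit_month(0), retention_by_recruit_month(6)
print(round(june, 3), round(december, 3))     # 0.44 and 0.6
print(round(0.7 * june + 0.3 * december, 3))  # Charity A (70% June, 30% December): 0.488
print(round(0.3 * june + 0.7 * december, 3))  # the December-focussed charity below: 0.552
```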

But what about another charity (Charity B) that focuses on December, with 70% of recruitment in December and 30% in June? Charity B would have a retention rate of 55.2%: a 6.4 percentage point jump just for recruiting in December rather than June. So does this mean we should recruit donors in December rather than June?

Nope! Both charities retain donors equally well; it is the recruitment schedule that shifts the measured retention rate. Things become clearer when we consider the donor snapshots at the beginning of 2021. Of the donors recruited in June, 50% gave their second gift in December 2020 (so they have just given at the snapshot date) and 50% last gave six months earlier; the donors recruited in December have all just given. So in Charity A's snapshot, 35% last gave six months ago and 65% just gave, while in Charity B's snapshot, 15% last gave six months ago and 85% just gave. Their snapshots are different!
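A quick check of that snapshot arithmetic, restating the numbers above (the function is our own framing):

```python
def snapshot_mix(june_share, december_share):
    # June 2020 recruits split 50/50: half gave again in December 2020 ("just gave"),
    # half last gave six months before the snapshot. December recruits all just gave.
    gave_six_months_ago = 0.5 * june_share
    just_gave = 0.5 * june_share + december_share
    return round(gave_six_months_ago, 2), round(just_gave, 2)

print(snapshot_mix(0.7, 0.3))  # Charity A: (0.35, 0.65)
print(snapshot_mix(0.3, 0.7))  # Charity B: (0.15, 0.85)
```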

So, how do we fix this?

The central issue is that the snapshots are different, so we can either:

  1. Adjust the estimate to correct for the difference in the snapshots.
  2. Change the metric to treat donors more equally, regardless of their initial gift date.

Seasonal Adjustment

The idea behind seasonal adjustment is to reweight the component parts to match a predetermined distribution of those components. For the example above, we could weight the June retentions and December retentions equally. The retention rate is 44% for June 2020 recruits and 60% for December 2020 recruits, and this is true for both charities, since both are based on the same giving probabilities. The resulting adjusted rate for both charities is 52%.
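A minimal sketch of that reweighting, using the equal weights chosen above (the function name and dictionary layout are ours):

```python
def seasonally_adjusted_retention(rates, weights):
    """Reweight per-cohort retention rates to a predetermined set of weights."""
    return sum(rates[k] * weights[k] for k in rates) / sum(weights.values())

rates = {"june": 0.44, "december": 0.60}         # per-cohort rates from the example
equal_weights = {"june": 0.5, "december": 0.5}   # predetermined distribution
print(round(seasonally_adjusted_retention(rates, equal_weights), 3))  # 0.52 for both charities
```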

Adjusting works, but choosing a reasonable set of weights is important. Otherwise, the resulting adjusted values can be unrepresentative of the charity (or any charity).

Forward-looking retention

Another option is to replace snapshot retention with something that doesn’t require us to set a snapshot date. In forward retention, we select a cohort of donors, e.g. donors from 2020, and measure how long each one takes to give a second gift after their first gift in the year. If a donor takes longer than the retention period, or never gives again, they count as lapsed and are not retained. Every donor has the same retention period in which to give another gift, and every donor starts from the same recency (i.e. zero months since their last gift).

The forward retention rate is 60% for both charities A and B.
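Here is a sketch of that forward calculation under the same giving probabilities; a 12-month retention period and a solicitation every six months are assumed here, which is what gives the 60% figure.

```python
# Chance of giving, by months since last gift (from the example above).
GIVE_PROB = {6: 0.50, 12: 0.20, 18: 0.10}

def forward_retention(window_months=12):
    """Probability that a newly recruited donor gives a second gift within
    `window_months` of their first, with a solicitation every six months."""
    retained, p_no_second_gift_yet = 0.0, 1.0
    for gap in range(6, window_months + 1, 6):
        p_give = GIVE_PROB.get(gap, 0.0)
        retained += p_no_second_gift_yet * p_give   # second gift arrives at this gap
        p_no_second_gift_yet *= 1 - p_give          # still waiting on a second gift
    return retained

print(forward_retention())  # 0.6, whether the donor was recruited in June or December
```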

Last thoughts

Whichever approach you use, seasonal adjustment or forward retention, donors differ in more than just when they last gave. For example, donation value and giving history (number of gifts and years of consecutive giving) affect how likely a donor is to be retained. To fully understand how well an organisation retains donors year after year, retention needs to be studied for granular cohorts of donors with similar history and value.

Fundraising Insights is working to provide a range of well-considered metrics just like this for all participating charities. If you need metrics like this to develop your strategy, register to participate in 2022.